2309.15803
ANNCRIPS: Artificial Neural Networks for Cancer Research In Prediction & Survival
Prostate cancer is a prevalent malignancy among men aged 50 and older. Current diagnostic methods primarily rely on blood tests measuring prostate-specific antigen (PSA) levels and on digital rectal examinations (DRE). However, these methods suffer from a significant rate of false positive results. This study focuses on the development and validation of an intelligent mathematical model utilizing Artificial Neural Networks (ANNs) to enhance the early detection of prostate cancer. The primary objective of this research paper is to present a novel mathematical model designed to aid in the early detection of prostate cancer, facilitating prompt intervention by healthcare professionals. The model's implementation demonstrates promising potential in reducing the incidence of false positives, thereby improving patient outcomes. Furthermore, we envision that, with further refinement, extensive testing, and validation, this model can evolve into a robust, marketable solution for prostate cancer detection. The long-term goal is to make this solution readily available for deployment in various screening centers, hospitals, and research institutions, ultimately contributing to more effective cancer screening and patient care.
Amit Mathapati
2023-09-26T08:11:35Z
http://arxiv.org/abs/2309.15803v1
# A.N.N.C.R.I.P.S - Artificial Neural Networks for Cancer Research In Prediction & Survival

###### Abstract

Prostate cancer stands as the most frequently diagnosed cancer among men aged 50 and older. Contemporary diagnostic and screening procedures predominantly rely on blood tests to assess prostate-specific antigen (PSA) levels and on Digital Rectal Examinations (DRE). Regrettably, these methods are plagued by a substantial occurrence of false positive test results (FPTRs), which can engender unwarranted anxiety and invasive follow-up procedures for patients. To address these pressing issues, this research project seeks to harness the potential of intelligent Artificial Neural Networks (ANNs). The study's overarching objective is to develop an advanced mathematical model tailored to enhance the early detection of prostate cancer, thus facilitating prompt medical intervention and ultimately improving patient outcomes. By integrating ANNs into the diagnostic process, we aim to enhance both the accuracy and the reliability of prostate cancer screening, thereby drastically reducing the incidence of FPTRs. The model offers healthcare practitioners a promising means of furnishing more precise and timely assessments of their patients' conditions. In pursuit of these objectives, we execute a series of rigorous testing and validation procedures, coupled with comprehensive training of the ANN model on meticulously curated and diverse datasets. The ultimate aspiration is to create a deployable and marketable solution grounded in this mathematical model, adaptable to various healthcare settings, including screening centers, hospitals, and research institutions. This approach has the potential to improve prostate cancer screening substantially, contributing to elevated standards of patient care and early intervention, with the goal of saving lives and mitigating the burden imposed by this prevalent disease.

_Keywords:_ machine learning, artificial intelligence, cancer, artificial neural networks, prostate cancer

## I Introduction

### _The Genesis & Emphasis on Prostate Cancer_

Prostate cancer represents the most prevalent form of non-cutaneous malignancy among the male population in the United States [1]. The disease affects an alarming one in every six men, underscoring its substantial public health impact. Notably, a non-smoking male individual is at a markedly higher risk of receiving a prostate cancer diagnosis than of being diagnosed with all other cancer types combined, a fact that underscores the importance of addressing prostate cancer as a top-tier healthcare concern. Moreover, the relative incidence of prostate cancer diagnosis eclipses that of breast cancer among women, further underscoring the gravity of the issue. A conservative estimate suggests that over two million men in the United States are currently grappling with the challenges posed by prostate cancer. All men are vulnerable to its onset; however, a notable correlation exists between the incidence of this disease and advancing age, as well as a pertinent familial predisposition.
The passage of time itself acts as a precipitating factor, increasing an individual's susceptibility to prostate cancer; hence the nexus between age and diagnosis merits substantial attention. The year 2009 stands as a pivotal milestone in the epidemiology of the disease: statistical projections estimated that 192,000 men would receive a prostate cancer diagnosis that year, and more than 27,000 men succumbed to its progression. These statistics serve as a stark reminder of the pressing need to channel resources and research efforts toward combating prostate cancer's profound public health implications.

### _Current Screening Methods: A Comprehensive Overview_

At present, prostate cancer diagnosis relies predominantly on two methodologies: the assessment of prostate-specific antigen (PSA) levels in the bloodstream and the Digital Rectal Examination (DRE). These modalities are pervasive across medical institutions and research establishments, serving as the primary means to detect and evaluate prostate cancer. However, they are not without inherent shortcomings, chief among them a propensity to yield a substantial number of false positive test results (FPTRs). PSA, a protein produced by the prostate gland in nominal quantities, is a linchpin of prostate cancer diagnosis. When prostate-related issues arise, the production and release of PSA can escalate rapidly, and the protein propagates to various parts of the body via the circulatory system. PSA levels are categorized into three diagnostic ranges: levels below 4 nanograms per milliliter (ng/mL) are generally deemed within the normal range; a reading between 4 and 10 ng/mL is classified as intermediate; and PSA levels above the 10 ng/mL threshold are associated with a heightened risk of prostate cancer. In parallel, the Digital Rectal Examination (DRE) constitutes a tangible approach to appraising the prostate's physical state. This assessment often invokes skepticism and apprehension among patients, as it entails a palpation-based examination of the prostate to detect irregularities in its shape, texture, or overall formation. At the DRE stage, the attending physician can, to a certain extent, discern the likelihood of prostate cancer or other male-specific malignancies. The DRE, though informative, predominantly serves as a preliminary indicator rather than a definitive diagnostic tool; it frequently points medical practitioners toward further, more invasive procedures such as biopsies. Biopsies, though indispensable for securing a conclusive diagnosis, present their own challenges. The procedure entails the insertion of a needle into the prostate gland, guided by ultrasound imaging, to procure tissue samples. Regrettably, the biopsy is notably painful and discomforting, often dissuading patients from undergoing it. Consequently, a substantial cohort of potential prostate cancer cases remains undiagnosed owing to this aversion to invasive testing, exacerbating the diagnostic conundrum.
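The three PSA ranges above amount to a simple categorization rule. A minimal sketch in Python, where the ng/mL thresholds follow the values just quoted but the function itself is purely illustrative and not part of the original study:

```python
def psa_risk_category(psa_ng_per_ml: float) -> str:
    """Map a PSA reading (ng/mL) to the three diagnostic ranges quoted above."""
    if psa_ng_per_ml < 4.0:
        return "normal"        # below 4 ng/mL: generally within the normal range
    elif psa_ng_per_ml <= 10.0:
        return "intermediate"  # 4-10 ng/mL: intermediate level
    else:
        return "elevated"      # above 10 ng/mL: heightened prostate cancer risk
```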
It is crucial to underscore that, despite the widespread use of the aforementioned methodologies, the specter of FPTRs looms large. False positive results not only trigger undue psychological distress for patients but also generate a cascade of unnecessary follow-up procedures. To address this vexing issue, the present research harnesses intelligent Artificial Neural Networks (ANNs) to construct an adept, readily deployable, and marketable mathematical model. This model aims to circumvent a protracted series of trial-and-error methods, expediting diagnosis and allowing treatment to begin at an earlier juncture in the disease's progression. Through the strategic implementation of ANNs, this research aspires to improve prostate cancer detection, mitigating the deleterious consequences of FPTRs and ushering in more precise, timely, and patient-friendly diagnostics and therapeutics.

## II ANNCRIPS

### _Artificial Neural Networks_

Artificial Neural Networks (ANNs) represent an intriguing and innovative paradigm for information processing, drawing inspiration from biological nervous systems, most notably the human brain's remarkable capacity to process and interpret complex data. Figuratively speaking, ANNs emulate the neural network within the human brain, as depicted in Figure 1, to undertake intricate computational tasks with efficiency and adaptability. In essence, ANNs replicate the fundamental structure of biological neural networks: neurons interconnected by synapses, mirroring the communication pathways within the human brain. In this framework, synapses serve as sensors, adept at capturing inputs from the surrounding environment, while the soma, the central body of the neuron, is the fundamental processing unit, orchestrating the computations that define the network's functionality. ANNs leverage this biologically inspired architecture to facilitate complex data analysis and pattern recognition. For a conceptual illustration, refer to the representation in Figure 2.

Fig. 1: Neuron model in the human brain

Fig. 2: Basic neural network model

This simplified model elucidates the core structure that underpins ANNs' functionality, offering a visual framework for understanding their modus operandi. Through the deployment of ANNs, we endeavor to harness the inherent capabilities of neural networks to accelerate knowledge extraction and data interpretation across diverse domains, including medical diagnostics, where precise and rapid decision-making is of paramount importance. In ANNs, information processing begins as neurons receive a set of inputs. These inputs are not processed in isolation; rather, they are transformed by the activation function to yield corresponding outputs. Central to this process are the weights associated with each connection, which determine both the strength and the sign of the interactions within the network. The number of output layers is a pivotal design consideration, and the choice of activation function intricately shapes the nature of the produced outputs.
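To make the weighted-sum-and-activation picture concrete, the following minimal Python sketch implements a single artificial neuron; the sigmoid activation and the toy numbers are illustrative assumptions, not values from the paper:

```python
import numpy as np

def sigmoid(z):
    """Log-sigmoid activation, squashing any real input into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def neuron(inputs, weights, activation=sigmoid):
    """Aggregate the weighted inputs and pass the sum through the activation function."""
    return activation(np.dot(weights, inputs))

x = np.array([0.5, -1.2, 3.0])   # inputs x1, x2, x3 (toy values)
w = np.array([0.8, 0.1, -0.4])   # connection weights w1, w2, w3 (toy values)
print(neuron(x, w))              # neuron output after the transfer function
```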
Within ANNs, two primary categories of network structure emerge: acyclic or feed-forward networks and cyclic or recurrent networks. The former, exemplified by the feed-forward network, computes a function of its current inputs alone, with no internal state beyond the weight coefficients themselves. In stark contrast, recurrent networks loop their outputs back into their own inputs, endowing the system with memory-like capabilities. This internal feedback allows recurrent networks to exhibit dynamic behaviors such as convergence to a stable state, oscillations, and, in some instances, chaotic patterns. Furthermore, the network's response to a given input depends on its initial state, often shaped by previous inputs, which lets it support short-term memory, a feature of significant utility in various computational contexts. The architecture of an ANN can range from a single hidden layer to networks with multiple hidden layers. These hidden layers operate in tandem to process the incoming inputs, unraveling the patterns and relationships embedded in the data. The collective activation levels across the network form a dynamical system that may converge to a stable equilibrium, oscillate rhythmically, or behave chaotically, depending on the network's structure and the inputs it encounters. In the following sections we delve deeper into the operational dynamics of these structures, exploring their capacity to model complex systems, adapt to varying data distributions, and, most importantly, advance our understanding of how ANNs can be harnessed to improve prostate cancer detection and diagnosis. Consider the simple network shown in Fig. 3, which has two inputs, two hidden units, and one output unit. Given an input vector \(\mathrm{X}=(x_{1},x_{2})\), the activation of the input units is set to \((a_{1},a_{2})=(x_{1},x_{2})\), and the network computes

\[a_{5}=g\left(W_{3,5}\,a_{3}+W_{4,5}\,a_{4}\right)=g\left(W_{3,5}\,g\left(W_{1,3}\,a_{1}+W_{2,3}\,a_{2}\right)+W_{4,5}\,g\left(W_{1,4}\,a_{1}+W_{2,4}\,a_{2}\right)\right)\quad\ldots\ \text{Eqn. 1}\]

Fig. 3: A simple neural network with two inputs, two hidden units, and a single output.

Thus, by expressing the output of each hidden unit as a function of its inputs, we have represented \(a_{5}\) as a function of the network inputs along with the weights. There exist simple single-layer feed-forward neural networks, but we have used the multi-layered feed-forward net.

### _Multilayer Feed-Forward Neural Networks: Unlocking Complexity & Versatility_

When we delve into the domain of multilayer feed-forward neural networks, we uncover models characterized by the presence of multiple hidden layers.
This multi-layered architectural configuration bestows upon the network a host of advantages, chief among them an augmented capacity for complex computation and the ability to accommodate a broader space of hypotheses, thereby enhancing its representational power. Each hidden layer within this model embodies a threshold function that evaluates and processes the inputs received from the preceding layer. At the heart of this framework lies the aggregation of inputs, a critical precursor to their transformation through the transfer function denoted "f". It is through this transfer function that the neural network refines and structures the incoming information, ultimately producing meaningful outputs. A diverse array of threshold functions finds application in neural networks, each tailored to specific computational requirements and analytical contexts. One pivotal threshold function is the "hard-limit transfer function". Aptly named, this function exerts stringent control over the neuron's output, constraining it to one of two discrete values contingent upon the aggregate input supplied by the preceding layer. This binary nature makes it particularly well suited to scenarios where decisions or classifications are dichotomous. The sum of the inputs acts as the argument of the transfer function f. The threshold functions used in neural networks [2] are:

**i) Threshold Function.** The threshold function, also called the hard-limit transfer function, limits the output of the neuron to 0 if the total input from the preceding layer has a value less than 0, and to 1 if the net value is greater than or equal to 0.

**ii) Linear Transfer Function.** The linear transfer function discriminates according to which side of a linear boundary through the origin the net input value lies on.

Fig. 4: Linear transfer function

The log-sigmoid transfer function is the one most used in backpropagation models, as it is differentiable. With more hidden units in the model, and a correspondingly larger hypothesis space, the backpropagation model lets us train the model more efficiently. In the backpropagation model the output obtained from training is compared with the actual output and the error is calculated.

## III Linking ANNs to Prostate Cancer Analysis

### _Data Analysis and Model Description_

The existing diagnostic methods for prostate cancer frequently yield a considerable number of false positive test results (FPTRs), posing significant challenges to precision and accuracy. To address this pressing concern, we turn to the capabilities of Artificial Neural Networks (ANNs) to construct models that curtail the incidence of FPTRs while raising overall diagnostic accuracy. Our foundational dataset comprises a cohort of 1983 patients [3], hailing from diverse backgrounds and medical histories. Within this cohort, 1551 patients were conclusively diagnosed with prostate cancer, while 432 individuals, following a battery of initial tests and biopsies, were subsequently deemed free from prostate cancer. This invaluable dataset was sourced from the Weill Medical College, Department of Urology, and has been meticulously curated under the guidance of dedicated medical professionals, including doctors and physicians. To safeguard patient privacy and confidentiality, all personally identifiable information, including names, was rigorously omitted from our analysis. The dataset was subdivided into four smaller datasets, each a microcosm of the broader patient population. The first dataset encompasses 400 patients diagnosed with prostate cancer and 100 who were conclusively free from the disease; this division is mirrored in the subsequent two datasets. In the final dataset, 351 patients received a prostate cancer diagnosis, while 132 individuals emerged free of cancer. This strategic segmentation allowed us to train the neural network model individually on each dataset, facilitating a granular, contextually nuanced understanding of the model's performance across diverse patient groups. The neural network model adheres to a structured architecture, commencing with an input layer that accepts a multitude of inputs denoted x1, x2, x3, and so forth. Each input is assigned a specific weight, w1, w2, w3, and so forth, reflecting the nuanced importance and impact of the individual input features. The hallmark of this model lies in its ability to consolidate these inputs, summing the products of inputs and their respective weights over all variables, thus generating an aggregate signal that is conveyed to the subsequent neural layers for further processing. For a visual representation of this data flow, refer to Figure 2, which provides a schematic elucidation of this process:

\[u=\sum_{j=1}^{m}w_{j}x_{j}\qquad\ldots\ \text{Eqn. 2}\]

We have used the multi-layered feed-forward model, which backpropagates the outputs to the hidden units; that is, we backpropagate the errors through the model. Input vectors and the corresponding target (output) vectors are used to train the network until it can approximate a function, associate input vectors with specific output vectors, or classify the input vectors in the way specified by the model. In standard backpropagation models the network weights are moved along the negative of the gradient of the performance function; the term backpropagation refers to the way this gradient is computed for nonlinear multilayer networks. Well-trained backpropagation models tend to provide results close to the expected targets: a new input presented to such a model leads to an output similar to the correct output for training inputs that resemble the new input [4]. The dataset consists of four variables which were selected for their relevance in diagnosing cancer among men [5]:

* Age of the patient
* Prostate size in grams (measured by ultrasound imaging)
* Most recent PSA level
* Most recent free PSA level

These four variables, obtained from the patients diagnosed at Weill Medical College, were distributed into the data sets and are fed to the network in the form of input vectors.
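As an illustration of how the four clinical variables form input vectors (one column per patient, echoing the MATLAB matrices quoted later in the paper), here is a hedged NumPy sketch; the toy values and the helper function are assumptions for illustration, and the helper performs the 60/20/20 division described in the next paragraph:

```python
import numpy as np

# Rows: age, prostate size (g), most recent PSA, most recent free PSA.
# Columns: one patient each (toy values; the real sets hold ~500 patients).
P = np.array([[54.0, 52.0],
              [12.0, 14.0],
              [ 1.2,  2.3],
              [11.0, 18.0]])
T = np.array([1, 0])  # 1 = diagnosed with prostate cancer, 0 = cancer-free

def split_60_20_20(n_patients, rng):
    """Shuffle patient indices and split them 60/20/20 into train/validate/test."""
    idx = rng.permutation(n_patients)
    n_tr, n_val = int(0.6 * n_patients), int(0.2 * n_patients)
    return idx[:n_tr], idx[n_tr:n_tr + n_val], idx[n_tr + n_val:]

train_idx, val_idx, test_idx = split_60_20_20(500, np.random.default_rng(0))
```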
We distribute each data set into three subsets: 60% of the input vectors are used to train the network, 20% are used to validate the model (i.e., the network created in this case), and the remaining 20% are used to test the network for generalization. This combination of training, validation, and testing data is configurable, but after a series of tests we found it to give better results than other combinations. The percentage distribution also depends on the number of input vectors available and on the number of input variables the network has.

### _Styles of Training_

In neural network training, the choice of training method holds paramount significance, as it fundamentally dictates how the model adapts and evolves in response to data. In this research we have primarily embraced the batch training method, an approach with distinct characteristics and advantages. Under batch training, the weights and biases are adjusted only after an entire batch of input vectors, coupled with the corresponding target values, has been presented to the network. This engenders a synchronized framework for weight updates, treating the individual inputs as if they were processed concurrently, despite their sequential arrangement within a data structure such as an array. The training cycle perseveres until pre-specified conditions have been met or the model has attained its predefined objectives. Central to this process are the target vectors, judiciously defined during the network's configuration phase, against which the model's generated outputs are meticulously compared and assessed. The process commences with the initialization of the model's weights over all input vectors, laying the groundwork for the training iterations. Learning is formulated as an optimization search in weight space: we use the classical measure of error between the outputs obtained from the configured network and the actual target values, and the magnitude of this error serves as the lodestar for subsequent weight adjustments.
Depending on these error values, the weights are changed and the network is trained again, as the error is backpropagated to the hidden layers of the model. The squared error for a single presentation of an input vector is

\[E=\tfrac{1}{2}\,\mathrm{Err}^{2}=\tfrac{1}{2}\left(\text{target output}-\text{network output}\right)^{2}\qquad\ldots\ \text{Eqn. 3}\]

We can use gradient descent to reduce the squared error by calculating the partial derivative of E with respect to each weight:

\[\frac{\partial E}{\partial W_{j}}=-\,\mathrm{Err}\cdot\frac{\partial}{\partial W_{j}}\Big(\text{target output}-g\Big(\sum_{j=0}^{m}W_{j}x_{j}\Big)\Big)\qquad\ldots\ \text{Eqn. 4}\]

For the sigmoid function we have the derivative g' = g(1-g), so Equation 4 simplifies to

\[\frac{\partial E}{\partial W_{j}}=-\,\mathrm{Err}\cdot g'(in)\cdot x_{j}\qquad\ldots\ \text{Eqn. 5}\]

and the corresponding weight is updated as

\[W_{j}\leftarrow W_{j}+\alpha\cdot\mathrm{Err}\cdot g'(in)\cdot x_{j}\qquad\ldots\ \text{Eqn. 6}\]

where \(\alpha\) is the learning rate.

## IV Building the Network and Training

### _A. Dataset Distribution_

From the four data sets of 1983 patients we build the network model using the MATLAB tool for neural networks. We trained the network with different training functions and different learning rates over multiple runs. We present the input vectors with four variables each, in sets of around 500 values, as:

P = [54 52; 12 14; 1.2 2.3; 11 18]; T = [1 0]

Here the input matrix contains two input vectors, each consisting of four input variables:

P1 = 54 12 1.2 11
P2 = 52 14 2.3 18

with corresponding target values 1 and 0 respectively, indicating that the first patient p1 in the data set was diagnosed with prostate cancer and the patient p2 was not. Next the network is built: we started with a single hidden layer, and after many tests we found the accuracy gained to be less than with two hidden layers.

### _B. Training Functions_

We trained the model over a list of training functions; the results and observations are as follows:

i. Batch Gradient Descent
ii. Variable Learning Rate
iii. Resilient Backpropagation
iv. Quasi-Newton Algorithm
v. Levenberg-Marquardt Algorithm

**i. Batch Gradient Descent**

The batch steepest descent training function is traingd. The weights and biases are updated in the direction of the negative gradient of the performance function. Training stops if the performance function falls below the specified goal, if the magnitude of the gradient is less than the specified value, or if the training time exceeds the specified number of seconds. After training the network we simulate it to check the output values once the model has been trained n times. For the first data set, the graph plots the number of inputs on the X-axis and the squared error (performance) on the Y-axis. Across four different runs with learning rate 0.07, each run takes a different number of epochs, and the squared error decreases until it reaches 0. We then simulate the network response to the original set of inputs to compare against the target values:

Fig. 8: traingd, input vectors p vs. neural network values

We check the accuracy on the input vectors and see that most of the values approach the upper surface at 1, with some exceptions approaching 0.5, depicting the patients not diagnosed with prostate cancer. We then tested the network's accuracy on two further input vectors, q and r, given below after the following sketch.
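Equations 3-6 amount to the classical delta rule. A minimal Python sketch of one weight update for a single sigmoid unit; the function and its toy learning rate of 0.07 (matching the traingd runs above) are illustrative assumptions, not the paper's MATLAB code:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def update_weights(W, x, target, alpha=0.07):
    """One delta-rule step: W_j <- W_j + alpha * Err * g'(in) * x_j (Eqn. 6)."""
    net_in = np.dot(W, x)                  # summed weighted input (Eqn. 2)
    out = sigmoid(net_in)                  # network output g(in)
    err = target - out                     # Err = target output - network output
    g_prime = out * (1.0 - out)            # sigmoid derivative g' = g(1 - g)
    return W + alpha * err * g_prime * x   # weight update of Eqn. 6
```

Repeating this update over a whole batch of input vectors before applying it corresponds to the batch training style described above.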
q = [41 62 72 60 75 70 79 71 52 54; 21 0 44 32 61 32.4 0 0 72.7 65.5; 3.3 0 4.2 7.3 10 5.2 0 0 5 6.7; 11 0 26 8 17 8 0 0 20 11];
b = sim(net, q)
r = [66 68 36 65 53 55 65 72 62 70 56; 59.1 76.6 14.4 49 22 40 0 69 117 67.4 39; 1.8 1.8 0.2 7.5 0 4.2 0 8.9 17.9 54.6 6.3; 51 31 0 11 0 20 0 19 22.3 26 10];
c = sim(net, r)

We check the output values for the input vectors q and r:

Fig. 9: traingd, input vectors q vs. neural network values

Fig. 10: traingd, input vectors r vs. neural network values

The training algorithm was too slow, taking many epochs to reach the goal state or to satisfy the stopping condition.

**ii. Variable Learning Rate**

We have two variable learning rate algorithms [4]. The performance of the basic algorithm is very sensitive to the proper setting of the learning rate: if the learning rate is set too high, the algorithm can oscillate and become unstable; if it is too small, the algorithm takes too long to converge. It is not practical to determine the optimal learning rate before training and, in fact, the optimal learning rate changes during the training process as the algorithm moves across the performance surface. We therefore allow the learning rate to change during training. An adaptive learning rate attempts to keep the learning step as large as possible while keeping learning stable; the learning rate is made responsive to the complexity of the local error surface. This requires some changes in the training procedure used by the previous method. First, the initial network output and error are calculated. At each epoch, new weights and biases are calculated using the current learning rate, and new outputs and errors are then computed. The procedure increases the learning rate, but only to the extent that the network can learn without large error increases; thus a near-optimal learning rate is obtained for the local terrain. When a larger learning rate could result in stable learning, the learning rate is increased; when it is too high to guarantee a decrease in error, it is decreased until stable learning resumes. These heuristic techniques were developed from an analysis of the performance of the standard steepest descent algorithm.

Fig. 11: traingda, number of epochs vs. squared error performance

Fig. 12: traingda, input vectors p vs. neural network values

Fig. 13: traingda, input vectors q vs. neural network values

Fig. 14: traingda, input vectors r vs. neural network values

For the function traingdx we have:

Fig. 15: traingdx, number of epochs vs. squared error performance

Fig. 17: traingdx, input vectors q vs. neural network values

Fig. 18: traingdx, input vectors r vs. neural network values

The function traingdx combines an adaptive learning rate with momentum training. It is invoked in the same way as traingda, except that it has the momentum coefficient mc as an additional training parameter.

**iii. Resilient Backpropagation**

Multilayer networks typically use sigmoid transfer functions in the hidden layers.
These functions are often called "squashing" functions, because they compress an infinite input range into a finite output range. Sigmoid functions are characterized by the fact that their slopes must approach zero as the input gets large. This causes a problem when using steepest descent to train a multilayer network with sigmoid functions, because the gradient can have a very small magnitude and therefore produce only small changes in the weights and biases, even when the weights and biases are far from their optimal values. In resilient backpropagation, only the sign of the derivative is used to determine the direction of the weight update; the magnitude of the derivative has no effect on it. The size of the weight change is determined by a separate update value. The update value for each weight and bias is increased by a factor whenever the derivative of the performance function with respect to that weight has the same sign for two successive iterations, and decreased by a factor whenever the derivative changes sign from the previous iteration. If the derivative is zero, the update value remains the same. Whenever the weights oscillate, the weight change is reduced; if the weight continues to change in the same direction for several iterations, the magnitude of the weight change increases. This training function is generally much faster than the standard steepest descent algorithm, and it requires only a modest increase in memory: the update values for each weight and bias must be stored, which is equivalent to storing the gradient.

**iv. Quasi-Newton Algorithm**

There is a class of algorithms based on Newton's method that does not require the calculation of second derivatives. These are called quasi-Newton (or secant) methods. They update an approximate Hessian matrix at each iteration of the algorithm, with the update computed as a function of the gradient. The quasi-Newton method that has been most successful in published studies is the Broyden-Fletcher-Goldfarb-Shanno (BFGS) update. This algorithm requires more computation and more storage per iteration than the conjugate gradient methods, although it generally converges in fewer iterations. The approximate Hessian must be stored, and its dimension is n x n, where n is the number of weights and biases in the network. For very large networks it might be better to use Rprop or one of the conjugate gradient algorithms; for smaller networks, however, trainbfg can be an efficient training function.

**v. Levenberg-Marquardt Algorithm**

The Levenberg-Marquardt algorithm was designed to approach second-order training speed without having to compute the Hessian matrix. It is faster than the other methods considered, and its accuracy compared to the other models is high. Because the full Hessian matrix is not stored, processing time is reduced and we can obtain the results much faster. Instead, a Jacobian matrix J containing the first derivatives of the network errors with respect to the weights and biases is used. We found that for smaller sets of input vectors, in the hundreds, the training function performs very well, but if the number of input vectors increases drastically, its performance decreases and the time taken to complete training, in terms of the number of epochs, becomes huge.
We can check from the graph that the number of epochs taken in this case is less, and the mean squared error is driven almost to zero, giving a better training function with which to build the model. For the input vectors p and the test vectors q and r we have the output values:

Fig. 23: trainlm, number of epochs vs. squared error performance

Fig. 24: trainlm, input vectors p vs. neural network values

Fig. 25: trainlm, input vectors q vs. neural network values

Fig. 26: trainlm, input vectors r vs. neural network values

**Comparisons among the training algorithms.** The various training algorithms depend on the training data and the model. The relevant factors include the complexity of the problem, the number of input vectors, the number of hidden layers, the error goal, and whether the network is used for pattern recognition or function approximation. Across the runs on all four data sets, the accuracy percentage differs and the number of epochs taken is small. The comparison of ANNCRIPS with non-neural-network models is shown in Table 1.1, and the comparison between ANNCRIPS and other neural network models in Table 1.2.

Table 1.1: ANNCRIPS compared with non-neural-network models

| Reference | Study cohort | Predictive model | Precision (%) |
|---|---|---|---|
| ANNCRIPS | Pre-screening of prostate cancer | MLP backpropagation model | 81 |
| Optenberg et al. [8] | Suspicion of prostate cancer | Multiple logistic regression | 81 |
| Benecchi [9] | Urologic symptoms, abnormal PSA or DRE | Neuro-fuzzy inference model | 80 |
| Herman et al. [10] | Screening cohort | Look-up table | 62-68 |

Table 1.2: ANNCRIPS compared with other neural network models

| Reference | Training cohort | Input variables | Precision (%) |
|---|---|---|---|
| ANNCRIPS | Screening population | Age, PSA, tPSA | 81 |
| Porter et al. [11] | Screening population | Age, PSA, prostate volume, PSAD, DRE | 77 |
| Stephan et al. [12] | Mixed screened & non-screened | Age, %fPSA, prostate volume | 65-93 |
| Stephan et al. [13] | Mixed screened & non-screened | Age, %fPSA | - |

## VI Further Improvements

We could test the model over a larger data set and train it many more times with all the training functions. The inclusion of more variables would enhance the accuracy level and help predict the occurrence of prostate cancer among patients. As the number of variables increases, we obtain more input vectors over which to train the model. Also, by changing the number of hidden layers we can accommodate a larger space of hypotheses with which to train the model.

## VII Conclusion

We have seen that Artificial Neural Networks can be used efficiently to diagnose cancer at an early stage, enabling us to reduce the number of false positive test results. This learning technique also takes into consideration a large number of factors, such as the different input arguments from the patients and the number of hidden layers in the network. After training the model over a large data set and validating it on the held-out input data, we could check the remaining test set for accuracy.

## VIII Acknowledgements

* Prof. Bart Selman, Cornell University
* Dr. Ashutosh Tewari, Cornell Weill Medical College, Dept. of Urology
* Douglas S. Scherr, M.D., Cornell Weill Medical College, Dept. of Urology
* Michael Herman, Cornell Weill Medical College, Dept. of Urology
* Robert Leung, Cornell Weill Medical College, Dept. of Urology
* Karan Kamdar, University of Mumbai
* Late Prof. K.G. Balakrishnan, University of Mumbai
2307.16695
A theory of data variability in Neural Network Bayesian inference
Bayesian inference and kernel methods are well established in machine learning. The neural network Gaussian process in particular provides a concept to investigate neural networks in the limit of infinitely wide hidden layers by using kernel and inference methods. Here we build upon this limit and provide a field-theoretic formalism which covers the generalization properties of infinitely wide networks. We systematically compute generalization properties of linear, non-linear, and deep non-linear networks for kernel matrices with heterogeneous entries. In contrast to currently employed spectral methods we derive the generalization properties from the statistical properties of the input, elucidating the interplay of input dimensionality, size of the training data set, and variability of the data. We show that data variability leads to a non-Gaussian action reminiscent of a ($\varphi^3+\varphi^4$)-theory. Using our formalism on a synthetic task and on MNIST we obtain a homogeneous kernel matrix approximation for the learning curve as well as corrections due to data variability which allow the estimation of the generalization properties and exact results for the bounds of the learning curves in the case of infinitely many training data points.
Javed Lindner, David Dahmen, Michael Krämer, Moritz Helias
2023-07-31T14:11:32Z
http://arxiv.org/abs/2307.16695v2
# A theory of data variability in Neural Network Bayesian inference

###### Abstract

Bayesian inference and kernel methods are well established in machine learning. The neural network Gaussian process in particular provides a concept to investigate neural networks in the limit of infinitely wide hidden layers by using kernel and inference methods. Here we build upon this limit and provide a field-theoretic formalism which covers the generalization properties of infinitely wide networks. We systematically compute generalization properties of linear, non-linear, and deep non-linear networks for kernel matrices with heterogeneous entries. In contrast to currently employed spectral methods we derive the generalization properties from the statistical properties of the input, elucidating the interplay of input dimensionality, size of the training data set, and variability of the data. We show that data variability leads to a non-Gaussian action reminiscent of a \(\varphi^{3}+\varphi^{4}\)-theory. Using our formalism on a synthetic task and on MNIST we obtain a homogeneous kernel matrix approximation for the learning curve as well as corrections due to data variability which allow the estimation of the generalization properties and exact results for the bounds of the learning curves in the case of infinitely many training data points.

## I Introduction

Machine learning, and in particular deep learning, continues to influence all areas of science. When it is employed as a scientific method, however, explainability, a defining feature of any scientific method, is still largely missing. Explainability is also important to provide guarantees and to guide educated design choices to reach a desired level of accuracy. The reason is that the underlying principles by which artificial neural networks reach their unprecedented performance are largely unknown. There is, to date, no complete theoretical framework which fully describes the behavior of artificial neural networks and so explains the mechanisms by which they operate. Such a framework would also be useful to support architecture search and network training. Investigating the theoretical foundations of artificial neural networks on the basis of statistical physics dates back to the 1980s. Early approaches to investigate neural information processing were mainly rooted in the spin-glass literature and included the computation of the memory capacity of the perceptron, path integral formulations of the network dynamics [36], and investigations of the energy landscape of attractor networks [1; 10; 11]. As in the thermodynamic limit in solid state physics, modern approaches deal with artificial neural networks (ANNs) with an infinite number of hidden neurons to simplify calculations. This leads to a relation between ANNs and Bayesian inference on Gaussian processes [29; 39], known as the Neural Network Gaussian Process (NNGP) limit: the prior distribution of network outputs across realizations of network parameters here becomes a Gaussian process that is uniquely described by its covariance function or kernel. This approach has been used to obtain insights into the relation of network architecture and trainability [30; 34]. Other works have investigated training by gradient descent as a means to shape the corresponding kernel [15]. A series of recent studies also captures networks at finite width, including adaptation of the kernel due to feature-learning effects [4; 20; 44; 27; 33; 45].
Even though training networks with gradient descent is the most abundant setup, different schemes such as Bayesian deep learning [26] provide an alternative perspective on training neural networks. Rather than finding the single best parameter realization to solve a given task, the Bayesian approach aims to find the optimal parameter distribution. In this work we adopt the Bayesian approach and investigate the effect of variability in the training data on the generalization properties of wide neural networks. We do so in the limit of infinitely wide linear and non-linear networks. To obtain analytical insights, we apply tools from statistical field theory to derive approximate expressions for the predictive distribution in the NNGP limit. The remainder of this work is structured in the following way: In Section II we describe the setup of supervised learning in shallow and deep networks in the framework of Bayesian inference and we introduce a synthetic data set that allows us to control the degree of pattern separability, dimensionality, and variability of the resulting overlap matrix. In Section III we develop the field-theoretical approach to learning curves and its application to the synthetic data set as well as to MNIST [17]: Section III.1 presents the general formalism and shows that data variability in general leads to a non-Gaussian process. Here we also derive perturbative expressions to characterize the posterior distribution of the network output. We first illustrate these ideas on the simplest but non-trivial example of linear Bayesian regression and then generalize them, first to linear and then to non-linear deep networks. We show results for the synthetic data set to obtain interpretable expressions that allow us to identify how data variability affects generalization; we then illustrate the identified mechanisms on MNIST. In Section IV we summarize our findings, discuss them in the light of the literature, and provide an outlook.

## II Setup

In this background section we outline the relation between neural networks, Gaussian processes, and Bayesian inference. We further present an artificial binary classification task which allows us to control the degree of pattern separation and variability and to test the predictive power of the theoretical results for the network generalization properties.

### Neural networks, Gaussian processes and Bayesian inference

The advent of parametric methods such as neural networks was preceded by non-parametric approaches such as Gaussian processes. There are, however, clear connections between the two concepts which allow us to borrow from the theory of Gaussian processes and Bayesian inference to describe the seemingly different neural networks. We give a short recap on neural networks, Bayesian inference, Gaussian processes, and their mutual relations below.

Figure 1: **Field theory of generalization in Bayesian inference.** **a)** A binary classification task, such as distinguishing pairs of digits in MNIST, can be described with help of an overlap matrix \(K^{x}\) that represents similarity across the \(c=c_{1}+c_{2}\) images of the training set of two classes, \(1\) and \(2\), with \(D_{1}\) and \(D_{2}\) samples respectively. Entries of the overlap matrix are heterogeneous. Different drawings of \(c\) example patterns each lead to different realizations of the overlap matrix; the matrix is stochastic. We here describe the matrix elements by a correlated multivariate Gaussian. **b)** The data is fed through a feed-forward neural network to produce an output \(y\). In the case of infinitely wide hidden layers and under Gaussian priors on the network weights, the output of the network is a Gaussian process with the kernel \(K^{y}\), which depends on the network architecture and the input kernel \(K^{x}\). **c)** To obtain statistical properties of the posterior distribution, we compute its disorder-averaged moment generating function \(\overline{Z}(J,l_{*})\) diagrammatically. **d)** The leading-order contribution from the homogeneous kernel \(\langle y_{*}\rangle_{0}\) is corrected by \(\langle y_{*}\rangle_{1}\) due to the variability of the overlaps; both follow as derivatives of \(\overline{Z}(J,l_{*})\). **e)** Comparing the mean network output on a test point \(\langle y_{*}\rangle\), the zeroth-order theory \(\langle y_{*}\rangle_{0}\) (blue dashed), the first-order approximation in the data variability \(\langle y_{*}\rangle_{0+1}\) (blue-red dashed), and empirical results (black crosses) as a function of the amount of training data (learning curve) shows how variability in the data set limits the network performance and validates the theory.

#### II.1.1 Background: Neural Networks

In general a feed-forward neural network maps inputs \(x_{\alpha}\in\mathbb{R}^{N_{\mathrm{dim}}}\) to outputs \(y_{\alpha}\in\mathbb{R}^{N_{\mathrm{out}}}\) via the transformations

\[h_{\alpha}^{(l)} =\mathbf{W}^{(l)}\phi^{(l)}\left(h_{\alpha}^{(l-1)}\right)\quad\text{with}\quad h_{\alpha}^{0}=\mathbf{V}x_{\alpha}\,,\]
\[y_{\alpha} =\mathbf{U}\phi^{(L+1)}\left(h_{\alpha}^{(L)}\right)\,, \tag{1}\]

where \(\phi^{(l)}(x)\) are activation functions, \(\mathbf{V}\in\mathbb{R}^{N_{h}\times N_{\mathrm{dim}}}\) are the read-in weights, \(N_{\mathrm{dim}}\) is the dimension of the input, \(\mathbf{W}^{(l)}\in\mathbb{R}^{N_{h}\times N_{h}}\) are the hidden weights, \(N_{h}\) denotes the number of hidden neurons, and \(\mathbf{U}\in\mathbb{R}^{N_{\mathrm{out}}\times N_{h}}\) are the read-out weights. Here \(l\) is the layer index, \(1\leq l\leq L\), and \(L\) is the number of layers of the network; we here assume layer-independent activation functions \(\phi^{(l)}=\phi\). The collection of all weights constitutes the model parameters \(\Theta=\left\{\mathbf{V},\mathbf{W}^{(1)},\ldots,\mathbf{W}^{(L)},\mathbf{U}\right\}\). The goal of training a neural network in a supervised manner is to find a set of parameters \(\hat{\Theta}\) which reproduces the input-output relation \(\left(x_{\mathrm{tr},\alpha},y_{\mathrm{tr},\alpha}\right)_{1\leq\alpha\leq D}\) for a set of \(D\) pairs of inputs and outputs as accurately as possible, while also maintaining the ability to generalize. Hence one partitions the data into a training set \(\mathcal{D}_{\mathrm{tr}}\), \(\left|\mathcal{D}_{\mathrm{tr}}\right|=D\), and a test set \(\mathcal{D}_{\mathrm{test}}\), \(\left|\mathcal{D}_{\mathrm{test}}\right|=D_{\mathrm{test}}\). The training data is given in the form of the matrices \(\mathbf{x}_{\mathrm{tr}}\in\mathbb{R}^{N_{\mathrm{dim}}\times N_{\mathrm{tr}}}\) and \(\mathbf{y}_{\mathrm{tr}}\in\mathbb{R}^{N_{\mathrm{out}}\times N_{\mathrm{tr}}}\). The quality of how well a neural network is able to model the relation between inputs and outputs is quantified by a task-dependent loss function \(\mathcal{L}\left(\Theta,x_{\alpha},y_{\alpha}\right)\).
Starting with a random initialization of the parameters \(\Theta\), one tries to find an optimal set of parameters \(\hat{\Theta}\) that minimizes the loss \(\sum_{\alpha=1}^{D}\mathcal{L}\left(\Theta,x_{\mathrm{tr},\alpha},y_{\mathrm{tr},\alpha}\right)\) on the training set \(\mathcal{D}_{\mathrm{tr}}\). The parameters \(\hat{\Theta}\) are usually obtained through methods such as stochastic gradient descent. The generalization properties of the network are quantified after training by computing the loss \(\mathcal{L}\left(\hat{\Theta},x_{\alpha},y_{\alpha}\right)\) on the test set \(\left(x_{\mathrm{test},\alpha},y_{\mathrm{test},\alpha}\right)\in\mathcal{D}_{\mathrm{test}}\), i.e., on data samples that have not been used during the training process. Neural networks hence provide, by definition, a parametric modeling approach, as the goal is to find an optimal set of parameters \(\hat{\Theta}\).

#### II.1.2 Background: Bayesian inference and Gaussian processes

The parametric viewpoint in Section II.1.1, which yields a point estimate \(\hat{\Theta}\) for the optimal set of parameters, can be complemented by a Bayesian perspective [26, 21, 25]: For each network input \(x_{\alpha}\), the network equations (1) yield a single output \(y\left(x_{\alpha}|\Theta\right)\). One typically considers a stochastic output \(y\left(x_{\alpha}|\Theta\right)+\xi_{\alpha}\), where the \(\xi_{\alpha}\) are Gaussian independently and identically distributed (i.i.d.) with variance \(\sigma_{\mathrm{reg}}^{2}\) [38]. This regularization allows us to define the probability distribution \(p\left(y|x_{\alpha},\Theta\right)=\left\langle\delta\left[y_{\alpha}-y(x_{\mathrm{tr},\alpha}|\Theta)-\xi_{\alpha}\right]\right\rangle_{\xi_{\alpha}}=\mathcal{N}\left(y_{\alpha};\,y(x_{\alpha}|\Theta),\sigma_{\mathrm{reg}}^{2}\right)\). An alternative interpretation of \(\xi_{\alpha}\) is a Gaussian noise on the labels. Given a particular set of the network parameters \(\Theta\), this implies a joint distribution \(p\left(\mathbf{y}|\mathbf{x}_{\mathrm{tr}},\Theta\right):=\prod_{\alpha=1}^{D}\left\langle\delta\left[y_{\alpha}-y(x_{\mathrm{tr},\alpha}|\Theta)-\xi_{\alpha}\right]\right\rangle_{\xi_{\alpha}}=\prod_{\alpha=1}^{D}p(y_{\alpha}|x_{\alpha},\Theta)\) of network outputs \(\left\{y_{\alpha}\right\}_{1\leq\alpha\leq D}\), each corresponding to one network input \(\left\{x_{\mathrm{tr},\alpha}\right\}_{1\leq\alpha\leq D}\). One aims to use the training data \(\mathcal{D}_{\mathrm{tr}}\) to compute the posterior distribution for the weights \(\mathbf{V},\mathbf{W}^{(1)},\ldots,\mathbf{W}^{(L)},\mathbf{U}\) by conditioning on the network outputs to agree with the desired training values. Concretely, we here assume as a prior for the model parameters that the parameter elements \(V_{ij},W_{ij}^{(l)},U_{ij}\) are i.i.d. according to centered Gaussian distributions \(V_{ij}\sim\mathcal{N}\left(0,\sigma_{v}^{2}/N_{\mathrm{dim}}\right)\), \(W_{ij}^{(l)}\sim\mathcal{N}\left(0,\sigma_{w}^{2}/N_{h}\right)\), and \(U_{ij}\sim\mathcal{N}\left(0,\sigma_{u}^{2}/N_{h}\right)\).
The posterior distribution of the parameters \(p\left(\Theta|\mathbf{x}_{\mathrm{tr}},\mathbf{y}_{\mathrm{tr}}\right)\) then follows from Bayes' theorem as

\[p\left(\Theta|\mathbf{x}_{\mathrm{tr}},\mathbf{y}_{\mathrm{tr}}\right)=\frac{p\left(\mathbf{y}_{\mathrm{tr}}|\mathbf{x}_{\mathrm{tr}},\Theta\right)\,p\left(\Theta\right)}{p\left(\mathbf{y}_{\mathrm{tr}}|\mathbf{x}_{\mathrm{tr}}\right)}\,, \tag{2}\]

with the likelihood \(p\left(\mathbf{y}_{\mathrm{tr}}|\mathbf{x}_{\mathrm{tr}},\Theta\right)\), the weight prior \(p\left(\Theta\right)\), and the model evidence \(p\left(\mathbf{y}_{\mathrm{tr}}|\mathbf{x}_{\mathrm{tr}}\right)=\int d\Theta\,p\left(\mathbf{y}_{\mathrm{tr}}|\mathbf{x}_{\mathrm{tr}},\Theta\right)p\left(\Theta\right)\), which provides the proper normalization. The posterior parameter distribution \(p\left(\Theta|\mathbf{x}_{\mathrm{tr}},\mathbf{y}_{\mathrm{tr}}\right)\) also determines the distribution of the network output \(y_{\ast}\) corresponding to a test point \(x_{\ast}\) by marginalizing over the parameters \(\Theta\)

\[p\left(y_{\ast}|x_{\ast},\mathbf{x}_{\mathrm{tr}},\mathbf{y}_{\mathrm{tr}}\right) =\int d\Theta\,p\left(y_{\ast}|x_{\ast},\Theta\right)p\left(\Theta|\mathbf{x}_{\mathrm{tr}},\mathbf{y}_{\mathrm{tr}}\right)\,, \tag{3}\]
\[=\frac{p\left(y_{\ast},\mathbf{y}_{\mathrm{tr}}|x_{\ast},\mathbf{x}_{\mathrm{tr}}\right)}{p\left(\mathbf{y}_{\mathrm{tr}}|\mathbf{x}_{\mathrm{tr}}\right)}\,. \tag{4}\]

One can understand this intuitively: The distribution in (2) provides a set of viable parameters \(\Theta\) based on the training data. An initial guess for the correct choice of parameters via the prior \(p\left(\Theta\right)\) is refined, based on whether the choice of parameters accurately models the relation of the training data, which is encapsulated in the likelihood \(p\left(\mathbf{y}_{\mathrm{tr}}|\mathbf{x}_{\mathrm{tr}},\Theta\right)\). This viewpoint of Bayesian parameter selection is also equivalent to what is known as Bayesian deep learning [26]. The distribution \(p\left(y_{\ast},\mathbf{y}_{\mathrm{tr}}|x_{\ast},\mathbf{x}_{\mathrm{tr}}\right)\) describes the joint network outputs for all training points and the test point. In the case of wide networks, where \(N_{h}\rightarrow\infty\), [29, 39] showed that the distribution of network outputs \(p\left(y_{\ast},\mathbf{y}_{\mathrm{tr}}|x_{\ast},\mathbf{x}_{\mathrm{tr}}\right)\) approaches a Gaussian process \(y\sim\mathcal{N}\left(0,K^{y}\right)\), where the covariance \(\left\langle y_{\alpha}y_{\beta}\right\rangle=K_{\alpha\beta}^{y}\) is also denoted as the kernel. This is beneficial, as the inference for the network output \(y_{\ast}\) for a test point \(x_{\ast}\) then also follows a Gaussian distribution with mean and covariance given by [32]

\[\left\langle y_{\ast}\right\rangle =K_{\ast\alpha}^{y}\left(K^{y}\right)_{\alpha\beta}^{-1}\,y_{\mathrm{tr},\beta}\,, \tag{5}\]
\[\left\langle\left(y_{\ast}-\left\langle y_{\ast}\right\rangle\right)^{2}\right\rangle =K_{\ast\ast}^{y}-K_{\ast\alpha}^{y}\left(K^{y}\right)_{\alpha\beta}^{-1}\,K_{\beta\ast}^{y}\,, \tag{6}\]

where summation over repeated indices is implied. There has been extensive research relating the outputs of wide neural networks to Gaussian processes [3, 19, 29], including recent work on corrections due to finite-width effects \(N_{h}\gg 1\) [27, 20, 2, 44, 35, 45].
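To make Eqs. (5) and (6) concrete, here is a minimal NumPy sketch of the Gaussian-process posterior for a single test point. Adding \(\sigma_{\mathrm{reg}}^{2}\) to the diagonal follows the regularization noise introduced above; all names and the default value are illustrative assumptions:

```python
import numpy as np

def gp_posterior(K_train, K_star, K_star_star, y_train, sigma_reg=1e-3):
    """Posterior mean and variance of y_* per Eqs. (5)-(6).

    K_train: (D, D) kernel on training points; K_star: (D,) kernel between
    test point and training points; K_star_star: scalar kernel K_{**}.
    """
    K_reg = K_train + sigma_reg**2 * np.eye(K_train.shape[0])  # noise-regularized kernel
    alpha = np.linalg.solve(K_reg, y_train)   # (K^y)^{-1} y_tr
    mean = K_star @ alpha                     # Eq. (5): K_{*a} (K^y)^{-1} y_tr
    v = np.linalg.solve(K_reg, K_star)        # (K^y)^{-1} K_{b*}
    var = K_star_star - K_star @ v            # Eq. (6)
    return mean, var
```

Solving the linear system rather than inverting \(K^{y}\) explicitly is the standard numerically stable choice.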
### Our contribution

A fundamental assumption of supervised learning is the existence of a joint distribution \(p(x_{\mathrm{tr}},y_{\mathrm{tr}})\) from which the set of training data as well as the set of test data are drawn. In this work we follow the Bayesian approach and investigate the effect of variability in the training data on the generalization properties of wide neural networks. We do so in the kernel limit of infinitely wide linear and non-linear networks. Variability here has two meanings: First, for each drawing of \(D\) pairs of training samples \((x_{\mathrm{tr},\alpha},y_{\mathrm{tr},\alpha})_{1\leq\alpha\leq D}\) one obtains a \(D\times D\) kernel matrix \(K^{y}\) with heterogeneous entries; so in a single instance of Bayesian inference, the entries of the kernel matrix vary from one entry to the next. Second, each such drawing of \(D\) training data points and one test data point \((x_{*},y_{*})\) leads to a different kernel \(\{K^{y}_{\alpha\beta}\}_{1\leq\alpha,\beta\leq D+1}\), which follows some probabilistic law \(K^{y}\sim p(K^{y})\). Our work builds upon previous results for the NNGP limit to formalize the influence of such stochastic kernels. We here develop a field theoretic approach to systematically investigate the influence of the underlying kernel stochasticity on the generalization properties of the network, namely the learning curve: the dependence of \(\langle y_{*}\rangle\) on the number of training samples \(D=|\mathcal{D}_{\mathrm{tr}}|\). As we assume Gaussian i.i.d. priors on the network parameters, the output kernel \(K^{y}_{\alpha\beta}\) solely depends on the network architecture and the input overlap matrix \[K^{x}_{\alpha\beta}=\sum_{i=1}^{N_{\mathrm{dim}}}x_{\alpha i}x_{\beta i}\quad x_{\alpha},x_{\beta}\in\mathcal{D}_{\mathrm{tr}}\cup\mathcal{D}_{\mathrm{test}}\,, \tag{7}\] with \(\alpha,\beta=1,\ldots,D+1\). We next define a data model which allows us to approximate the probability measure for the data variability.

### Definition of a synthetic data set

To investigate the generalization properties in a binary classification task, we introduce a synthetic stochastic binary classification task. This task allows us to control the statistical properties of the data with regard to the dimensionality of the patterns, the degree of separation between patterns belonging to different classes, and the variability in the kernel. Moreover, it allows us to construct training-data sets \(\mathcal{D}_{\mathrm{tr}}\) of arbitrary size, and we will show that the statistics of the resulting kernels are indeed representative of more realistic data sets such as MNIST. The data set consists of pattern realizations \(x_{\alpha}\in\{-1,1\}^{N_{\mathrm{dim}}}\) with even dimension \(N_{\mathrm{dim}}\).
We denote the entries \(x_{\alpha,i}\) of this \(N_{\mathrm{dim}}\)-dimensional vector for data point \(\alpha\) as pixels that randomly take either of two values \(x_{\alpha,i}\in\{-1,1\}\), with respective probabilities \(p(x_{\alpha,i}=1)\) and \(p(x_{\alpha,i}=-1)\) that depend on the class \(c(\alpha)\in\{1,2\}\) of the pattern realization and on whether the index \(i\) is in the left half (\(i\leq N_{\mathrm{dim}}/2\)) or the right half (\(i>N_{\mathrm{dim}}/2\)) of the pattern: For class \(c(\alpha)=1\) each pixel \(x_{\alpha,1\leq i\leq N_{\mathrm{dim}}}\) is realized independently as a binary variable as \[x_{\alpha,i} =\begin{cases}1&\text{with }p\\ -1&\text{with }(1-p)\end{cases}\quad\text{for }i\leq\frac{N_{\mathrm{dim}}}{2}\,, \tag{8}\] \[x_{\alpha,i} =\begin{cases}1&\text{with }(1-p)\\ -1&\text{with }p\end{cases}\quad\text{for }i>\frac{N_{\mathrm{dim}}}{2}\,. \tag{9}\] For a pattern \(x_{\alpha}\) in the second class \(c(\alpha)=2\) the pixel values are distributed independently of those in the first class, with statistics that equal those of the negated pixel values of the first class, i.e., \(P\left(x_{\alpha i}\right)=P\left(-x_{\beta i}\right)\) with \(c(\beta)=1\). The training set contains an equal number of samples from the two classes \(c(\alpha)=1\) and \(c(\alpha)=2\). There are two limiting cases for \(p\) which illustrate the construction of the patterns: In the limit \(p=1\), each pattern \(x_{\alpha}\) in \(c=1\) consists of a vector where the first \(N_{\text{dim}}/2\) pixels have the value \(x_{\alpha i}=1\), whereas the second half consists of pixels with the value \(x_{\alpha,i}=-1\). The opposite holds for patterns in the second class \(c=2\). This limiting case is shown in Figure 2 (right column). In the limiting case \(p=0.5\) each pixel assumes the value \(x_{\alpha,i}=\pm 1\) with equal probability, regardless of the pattern's class membership or the pixel position. Hence one cannot distinguish the class membership of any of the training instances. This limiting case is shown in Figure 2 (left column). If \(c(\alpha)=1\) we set \(y_{\text{tr},\alpha}=-1\) and for \(c(\alpha)=2\) we set \(y_{\text{tr},\alpha}=1\). We now investigate the description of this task in the framework of Bayesian inference. The hidden variables \(h_{\alpha}^{0}\) (1) in the input layer under a Gaussian prior on \(V_{ij}\overset{\text{i.i.d.}}{\sim}\mathcal{N}\left(0,\sigma_{v}^{2}/N_{\text{dim}}\right)\) follow a Gaussian process with kernel \(K^{0}\) given by \[K_{\alpha\beta}^{0} =\langle h_{\alpha}^{0}h_{\beta}^{0}\rangle_{V\sim\mathcal{N}\left(0,\frac{\sigma_{v}^{2}}{N_{\text{dim}}}\right)}\,, \tag{10}\] \[=\frac{\sigma_{v}^{2}}{N_{\text{dim}}}\,\sum_{i=1}^{N_{\text{dim}}}x_{\alpha i}x_{\beta i}\,. \tag{11}\] Separability of the two classes is reflected in the structure of this input kernel \(K^{0}\), as shown in Figure 2: In the cases with \(p=0.8\) and \(p=1\) one can clearly distinguish blocks; the diagonal blocks represent intra-class overlaps, the off-diagonal blocks inter-class overlaps. This is not the case for \(p=0.5\), where no clear block structure is visible. In the case of \(p=0.8\) one can further observe that the blocks are not as clear-cut as in the case \(p=1\), but rather noisy, similar to \(p=0.5\). This is due to the probabilistic realization of patterns, which induces stochasticity in the blocks of the input kernel \(K^{0}\) (10).
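The construction (8)-(9) is easy to reproduce numerically. The following minimal sketch (our own, not reference code; the random rather than strictly balanced class draw and all sizes are illustrative) generates patterns, their labels, and the input kernel (10)-(11):

```python
import numpy as np

def sample_patterns(D, N_dim, p, rng):
    """Draw D patterns per eqs. (8)-(9): class 1 has P(x_i=1)=p on the left
    half and 1-p on the right half; class 2 has the mirrored statistics.
    Returns X in {-1,1}^{D x N_dim} and labels y (class 1 -> -1, class 2 -> +1)."""
    c = rng.integers(1, 3, size=D)                 # class labels in {1, 2}
    half = N_dim // 2
    prob1 = np.concatenate([np.full(half, p), np.full(N_dim - half, 1 - p)])
    probs = np.where(c[:, None] == 1, prob1, 1 - prob1)   # class 2: mirrored
    X = np.where(rng.random((D, N_dim)) < probs, 1.0, -1.0)
    y = np.where(c == 1, -1.0, 1.0)
    return X, y

rng = np.random.default_rng(0)
X, y = sample_patterns(D=100, N_dim=50, p=0.8, rng=rng)
sigma_v2 = 1.0
K0 = sigma_v2 / X.shape[1] * X @ X.T               # input kernel, eqs. (10)-(11)
```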
To quantify this effect, based on the distribution of the pixel values (9) we compute the distribution of the entries of \(K^{0}\) for the binary classification task. The mean of the overlap elements \(\mu_{\alpha\beta}\) and their covariances \(\Sigma_{(\alpha\beta)(\gamma\delta)}\) are defined via \[\mu_{\alpha\beta} =\left\langle K_{\alpha\beta}^{0}\right\rangle\,, \tag{12}\] \[\Sigma_{(\alpha\beta)(\gamma\delta)} =\left\langle\delta K_{\alpha\beta}^{0}\,\delta K_{\gamma\delta}^{0}\right\rangle\,, \tag{13}\] \[\delta K_{\alpha\beta}^{0} =K_{\alpha\beta}^{0}-\mu_{\alpha\beta}\,, \tag{14}\] where the expectation value \(\left\langle\cdot\right\rangle\) is taken over drawings of \(D\) training samples each. By construction we have \(\mu_{\alpha\beta}=\mu_{\beta\alpha}\). The covariance is further invariant under the exchange \((\alpha,\beta)\leftrightarrow(\gamma,\delta)\) and, due to the symmetry \(K_{\alpha\beta}^{0}=K_{\beta\alpha}^{0}\), also under swapping \(\alpha\leftrightarrow\beta\) and \(\gamma\leftrightarrow\delta\) separately. In the artificial task setting, the parameter \(p\), the pattern dimensionality \(N_{\text{dim}}\), and the variance \(\sigma_{v}^{2}/N_{\text{dim}}\) of each read-in weight \(V_{ij}\) define the elements of \(\mu_{\alpha\beta}\) and \(\Sigma_{(\alpha\beta)(\gamma\delta)}\), which read \[\mu_{\alpha\beta} =\sigma_{v}^{2}\begin{cases}1&\alpha=\beta\\ u&c_{\alpha}=c_{\beta},\ \alpha\neq\beta\\ -u&c_{\alpha}\neq c_{\beta}\end{cases}\,,\] \[\Sigma_{(\alpha\beta)(\alpha\beta)} =\frac{\sigma_{v}^{4}}{N_{\text{dim}}}\,\kappa\,,\] \[\Sigma_{(\alpha\beta)(\alpha\delta)} =\frac{\sigma_{v}^{4}}{N_{\text{dim}}}\begin{cases}\nu&c_{\alpha}=c_{\beta}=c_{\delta}\ \text{or}\ c_{\alpha}\neq c_{\beta}=c_{\delta}\\ -\nu&c_{\alpha}=c_{\beta}\neq c_{\delta}\ \text{or}\ c_{\alpha}=c_{\delta}\neq c_{\beta}\end{cases}\,,\] \[\text{with}\quad\kappa:=1-u^{2}\,,\quad\nu:=u\left(1-u\right),\quad u:=4p(p-1)+1\,. \tag{15}\] In addition to this, the tensor elements of \(\Sigma_{(\alpha\beta)(\gamma\delta)}\) are zero for the following index combinations, because we fixed the value of \(K_{\alpha\alpha}^{0}\) by construction: \[\Sigma_{(\alpha\beta)(\gamma\delta)} =0\quad\text{with}\quad\alpha\neq\beta\neq\gamma\neq\delta\,,\] \[\Sigma_{(\alpha\alpha)(\beta\gamma)} =0\quad\text{with}\quad\alpha\neq\beta\neq\gamma\,,\] \[\Sigma_{(\alpha\alpha)(\beta\beta)} =0\quad\text{with}\quad\alpha\neq\beta\,,\] \[\Sigma_{(\alpha\alpha)(\alpha\beta)} =0\quad\text{with}\quad\alpha\neq\beta\,,\] \[\Sigma_{(\alpha\alpha)(\alpha\alpha)} =0\,. \tag{16}\] The expressions for \(\Sigma_{(\alpha\beta)(\alpha\beta)}\) and \(\Sigma_{(\alpha\beta)(\alpha\delta)}\) in (15) show that the magnitude of the fluctuations is controlled through the parameter \(p\) and the pattern dimensionality \(N_{\text{dim}}\): The covariance \(\Sigma\) is suppressed by a factor of \(1/N_{\text{dim}}\) compared to the mean values \(\mu\). Hence we can use the pattern dimensionality \(N_{\text{dim}}\) to investigate the influence of the strength of fluctuations. As illustrated in Figure 1a, the elements \(\Sigma_{(\alpha\beta)(\alpha\beta)}\) denote the variance of individual entries of the kernel, while \(\Sigma_{(\alpha\beta)(\alpha\gamma)}\) are covariances of entries across elements of a given row \(\alpha\), visible as horizontal or vertical stripes in the color plot of the kernel.
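As a sanity check, the moments (15) can be verified by Monte-Carlo sampling. This sketch (our own, reusing `sample_patterns` from the previous snippet) compares the analytic mean and variance of a same-class off-diagonal entry of \(K^{0}\) with empirical estimates:

```python
import numpy as np

p, N_dim, sigma_v2 = 0.8, 50, 1.0
u = 4 * p * (p - 1) + 1                          # as defined in eq. (15)
mu_same = sigma_v2 * u                           # mean same-class overlap
var_offdiag = sigma_v2**2 / N_dim * (1 - u**2)   # Sigma_{(ab)(ab)}, kappa term

rng = np.random.default_rng(0)
overlaps = []
for _ in range(5000):
    X, y = sample_patterns(4, N_dim, p, rng)
    same = np.where(y == y[0])[0]                # indices in the class of pattern 0
    if len(same) >= 2:
        a, b = same[:2]
        overlaps.append(sigma_v2 / N_dim * X[a] @ X[b])
overlaps = np.array(overlaps)
print(mu_same, overlaps.mean())                  # analytic vs empirical mean
print(var_offdiag, overlaps.var())               # analytic vs empirical variance
```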
Equation (15) implies, by construction, a Gaussian distribution of the elements \(K_{\alpha\beta}^{0}\), as it only provides the first two cumulants. One can show that the higher-order cumulants of \(K_{\alpha\beta}^{0}\) scale sub-leading in the pattern dimension and are hence suppressed by a factor \(\mathcal{O}\left(1/N_{\text{dim}}\right)\) compared to \(\Sigma_{(\alpha\beta)(\gamma\delta)}\).

## III Results

In this section we derive the field theoretic formalism which allows us to compute the statistical properties of the inferred network output in Bayesian inference with a stochastic kernel. We show that the resulting process is non-Gaussian and reminiscent of a \(\varphi^{3}+\varphi^{4}\)-theory. Specifically, we compute the mean of the predictive distribution of this process conditioned on the training data. This is achieved by employing systematic approximations with the help of Feynman diagrams. Subsequently we show that our results provide an accurate bound on the generalization capabilities of the network. We further discuss the implications of our analytic results for neural architecture search.

### Field theoretic description of Bayesian inference

#### iii.1.1 Bayesian inference with stochastic kernels

In general, a network implements a map from the inputs \(x_{\alpha}\) to corresponding outputs \(y_{\alpha}\). In particular, a model of the form (1) implements a non-linear map \(\psi:\mathbb{R}^{N_{\text{dim}}}\to\mathbb{R}^{N_{h}}\) of the input \(x_{\alpha}\in\mathbb{R}^{N_{\text{dim}}}\) to a hidden state \(h_{\alpha}\in\mathbb{R}^{N_{h}}\). This map may also involve multiple hidden layers, biases, and non-linear transformations. The read-out weight \(\mathbf{U}\in\mathbb{R}^{1\times N_{h}}\) links the scalar network output \(y_{\alpha}\in\mathbb{R}\) and the transformed inputs \(\psi\left(x_{\alpha}\right)\) with \(1\leq\alpha\leq D_{\text{tot}}=D+D_{\text{test}}\), which yields \[y_{\alpha}=\mathbf{U}\,\psi\left(x_{\alpha}\right)+\xi_{\alpha}\,, \tag{17}\] where \(\xi_{\alpha}\overset{\text{i.i.d.}}{\sim}\mathcal{N}(0,\sigma_{\text{reg}}^{2})\) is a regularization noise in the same spirit as in [38]. We assume that the prior on the read-out vector elements is a Gaussian \(\mathbf{U}_{i}\overset{\text{i.i.d.}}{\sim}\mathcal{N}\left(0,\sigma_{u}^{2}/N_{h}\right)\). The distribution of the set of network outputs \(y_{1\leq\alpha\leq D_{\text{tot}}}\) is then, in the limit \(N_{h}\to\infty\), a multivariate Gaussian [29]. The kernel matrix of this Gaussian is obtained by taking the expectation value with respect to the read-out vector, which yields \[\left\langle y_{\alpha}\,y_{\beta}\right\rangle_{\mathbf{U}} =:K_{\alpha\beta}^{y}=\sigma_{u}^{2}\,K_{\alpha\beta}^{\psi}+\delta_{\alpha\beta}\,\sigma_{\text{reg}}^{2}\,, \tag{18}\] \[K_{\alpha\beta}^{\psi} =\frac{1}{N_{h}}\,\sum_{i=1}^{N_{h}}\psi_{i}\left(x_{\alpha}\right)\,\psi_{i}\left(x_{\beta}\right)\,. \tag{19}\] The kernel matrix \(K_{\alpha\beta}^{y}\) describes the covariance of the network's output and hence depends on the kernel matrix \(K_{\alpha\beta}^{\psi}\). The additional term \(\delta_{\alpha\beta}\,\sigma_{\text{reg}}^{2}\) acts as a regularization term, also known as ridge regression [14] or Tikhonov regularization [41]. In the context of neural networks one can motivate the regularizer \(\sigma_{\text{reg}}^{2}\) by using \(L^{2}\)-regularization in the readout layer. This is also known as weight decay [12].
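Equations (18)-(19) can be checked empirically: for wide hidden layers, the feature-based kernel approaches the NNGP kernel. A short sketch (our own; the one-hidden-layer erf architecture and all sizes are illustrative assumptions):

```python
import numpy as np
from scipy.special import erf

rng = np.random.default_rng(0)
N_dim, N_h, D = 50, 4000, 10
sigma_v, sigma_u, sigma_reg2 = 1.0, 1.0, 0.5

X = rng.choice([-1.0, 1.0], size=(D, N_dim))
V = rng.normal(0.0, sigma_v / np.sqrt(N_dim), (N_h, N_dim))
Psi = erf(V @ X.T)                          # features psi(x_alpha), shape (N_h, D)
K_psi = Psi.T @ Psi / N_h                   # eq. (19)
K_y = sigma_u**2 * K_psi + sigma_reg2 * np.eye(D)   # eq. (18)
print(K_y[:3, :3])
```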
Introducing the regularizer \(\sigma_{\text{reg}}^{2}\) is necessary to ensure that one can properly invert the matrix \(K_{\alpha\beta}^{y}\). Different drawings of sets of training data \(\mathcal{D}_{\text{tr}}\) lead to different realizations of the kernel matrices \(K^{\psi}\) and \(K^{y}\). The network output \(y_{\alpha}\) hence follows a multivariate Gaussian with a stochastic kernel matrix \(K^{y}\). A more formal derivation of the Gaussian statistics, including an argument for its validity in deep neural networks, can be found in [19]. A consistent derivation using field theoretical methods, including corrections in terms of the width of the hidden layer \(N_{h}\) for deep and recurrent networks, has been presented in [35]. In general, the input kernel matrix \(K^{0}\) (10) and the output kernel matrix \(K^{y}\) are related in a non-trivial fashion, which depends on the specific network architecture at hand. From now on we assume that the input kernel matrix \(K^{0}\) is distributed according to a multivariate Gaussian \[K^{0}\sim\mathcal{N}\left(\mu,\,\Sigma\right)\,, \tag{20}\] where \(\mu\) and \(\Sigma\) are given by (12) and (13), respectively. In the limit of large pattern dimensions \(N_{\text{dim}}\gg 1\) this assumption is warranted for the kernel matrix \(K^{0}\). This structure further assumes that the overlap statistics are unimodal, which is indeed mostly the case for data such as MNIST. Furthermore, we assume that this property holds for the output kernel matrix \(K^{y}\) as well, and that we can find a mapping from the mean \(\mu\) and covariance \(\Sigma\) of the input kernel to the mean \(m\) and covariance \(C\) of the output kernel, \(\left(\mu_{\alpha\beta},\,\Sigma_{\left(\alpha\beta\right)\left(\gamma\delta\right)}\right)\to\left(m_{\alpha\beta},\,C_{\left(\alpha\beta\right)\left(\gamma\delta\right)}\right)\), so that \(K^{y}\) is also distributed according to a multivariate Gaussian \[K^{y}\sim\mathcal{N}(m,\,C)\,. \tag{21}\] For each realization \(K_{\alpha\beta}^{y}\), the joint distribution of the network outputs \(y_{1\leq\alpha\leq D_{\text{tot}}}\) corresponding to the training and test data points \(\mathbf{x}\) follows a multivariate Gaussian \[p\left(\mathbf{y}|\mathbf{x}\right)=\mathcal{N}\!\left(0,K^{y}\right). \tag{22}\] The kernel allows us to compute the conditional probability \(p\left(y_{*}|\mathbf{x}_{\text{tr}},\mathbf{y}_{\text{tr}},x_{*}\right)\) (3) for a test point \(\left(x_{*},y_{*}\right)\in\mathcal{D}_{\text{test}}\) conditioned on the data from the training set \(\left(\mathbf{x}_{\text{tr}},\mathbf{y}_{\text{tr}}\right)\in\mathcal{D}_{\text{tr}}\). This distribution is Gaussian with mean and variance given by (5) and (6), respectively. It is our goal to take into account that \(K^{0}\) is a stochastic quantity, which depends on the particular draw of the training and test data sets \(\left(\mathbf{x}_{\text{tr}},\mathbf{y}_{\text{tr}}\right)\in\mathcal{D}_{\text{tr}}\) and \(\left(x_{*},y_{*}\right)\in\mathcal{D}_{\text{test}}\). The labels \(\mathbf{y}_{\text{tr}},y_{*}\) are, by construction, deterministic and take either one of the values \(\pm 1\). In the following we investigate the dependence of the mean of the predictive distribution on the number of training samples, which we call the learning curve. A common assumption is that this learning curve is rather insensitive to the very realization of the chosen training points. Thus we assume that the learning curve is self-averaging.
The mean computed for a single draw of the training data is hence expected to agree well with the average over many such drawings. Under this assumption it is sufficient to compute the data-averaged mean inferred network output, which reduces to computing the disorder average of the following quantity \[\left\langle y_{*}\right\rangle_{K^{y}}=\left\langle K_{*\alpha}^{y}\left[K^{y}\right]_{\alpha\beta}^{-1}\right\rangle_{K^{y}}y_{\beta}\,. \tag{23}\] To perform the disorder average and to compute perturbative corrections, we will follow these steps:

* construct a suitable dynamic moment-generating function \(Z_{K^{y}}(l_{*})\) with the source term \(l_{*}\),
* propagate the input stochasticity to the network output \(K^{0}_{\alpha\beta}\to K^{y}_{\alpha\beta}\),
* disorder-average the functional using the model \(K^{y}_{\alpha\beta}\sim\mathcal{N}(m_{\alpha\beta},\,C_{(\alpha\beta)(\gamma\delta)})\),
* and finally perform the computation of perturbative corrections using diagrammatic techniques.

#### ii.2.2 Constructing the dynamic moment generating function \(Z_{K^{y}}(l_{*})\)

Our ultimate goal is to compute learning curves. Therefore we want to evaluate the disorder-averaged mean inferred network output (23). Both the presence of two correlated random matrices and the fact that one of the matrices appears as an inverse complicate this process. An alternative route is to define the moment-generating function \[Z(l_{*}) =\int dy_{*}\exp(l_{*}y_{*})p\left(y_{*}|x_{*},\mathbf{x}_{\text{tr}},\mathbf{y}_{\text{tr}}\right)\,, \tag{24}\] \[=\frac{\int dy_{*}\exp(l_{*}y_{*})p\left(y_{*},\mathbf{y}_{\text{tr}}|x_{*},\mathbf{x}_{\text{tr}}\right)}{p\left(\mathbf{y}_{\text{tr}}|\mathbf{x}_{\text{tr}}\right)}\,, \tag{25}\] \[=:\frac{\mathcal{Z}(l_{*})}{\mathcal{Z}(0)}\,, \tag{26}\] with joint Gaussian distributions \(p\left(y_{*},\mathbf{y}_{\text{tr}}|x_{*},\mathbf{x}_{\text{tr}}\right)\) and \(p\left(\mathbf{y}_{\text{tr}}|\mathbf{x}_{\text{tr}}\right)\), each of which can be readily averaged over \(K^{y}\). Equation (23) is then obtained as \[\left\langle y_{*}\right\rangle_{K^{y}}=\frac{\partial}{\partial l_{*}}\left\langle\frac{\mathcal{Z}(l_{*})}{\mathcal{Z}(0)}\right\rangle_{K^{y}}\Bigg|_{l_{*}=0}\,. \tag{27}\] A complication of this approach is that the numerator and denominator co-fluctuate. The common route around this problem is to consider the cumulant-generating function \(W(l_{*})=\ln\mathcal{Z}(l_{*})\) and to obtain \(\left\langle y_{*}\right\rangle_{K^{y}}=\frac{\partial}{\partial l_{*}}\left\langle W(l_{*})\right\rangle_{K^{y}}\), which, however, requires averaging the logarithm. This is commonly done with the replica trick [8, 23]. We here follow a different route to ensure that the disorder-dependent normalization \(\mathcal{Z}(0)\) drops out, and construct a dynamic moment-generating function [5]. Our goal is hence to design a dynamic process where a time-dependent observable is related to our mean inferred network output \(y_{*}\). We hence define the linear process in the auxiliary variables \(q_{\alpha}\) \[\frac{\partial q_{\alpha}(t)}{\partial t}=-K^{y}_{\alpha\beta}\,q_{\beta}(t)+y_{\alpha}\,, \tag{28}\] for \((x_{\alpha},y_{\alpha})\in\mathcal{D}_{\text{tr}}\). From this we see directly that \(q_{\alpha}(t\to\infty)=\left[K^{y}\right]^{-1}_{\alpha\beta}y_{\beta}\) is a fixpoint. The fact that \(K^{y}_{\alpha\beta}\) is a covariance matrix ensures that it is positive semi-definite and hence implies the convergence to a fixpoint.
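The linear dynamics (28) can be checked directly. A short sketch (our own; the step size, iteration count, and the positive-definite stand-in for \(K^{y}\) are illustrative assumptions) Euler-integrates the process and confirms convergence to the fixpoint \((K^{y})^{-1}y\):

```python
import numpy as np

rng = np.random.default_rng(0)
D = 20
A = rng.normal(size=(D, D))
Ky = A @ A.T / D + 0.5 * np.eye(D)        # positive definite stand-in for K^y
y = rng.choice([-1.0, 1.0], size=D)

q = np.zeros(D)
dt = 0.1 / np.linalg.eigvalsh(Ky).max()   # stable Euler step size
for _ in range(20000):
    q += dt * (-Ky @ q + y)               # eq. (28)

print(np.allclose(q, np.linalg.solve(Ky, y), atol=1e-6))  # True: fixpoint reached
```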
We can obtain (5), \(\left\langle y_{*}\right\rangle=K^{y}_{*\alpha}\left[K^{y}\right]^{-1}_{\alpha\beta}y_{\beta}\), from (28) as a linear readout of \(q_{\alpha}(t\to\infty)\) with the matrix \(K^{y}_{*\alpha}\). Using the Martin-Siggia-Rose-deDominicis-Janssen formalism [13, 16, 22, 37] one can express this as the first derivative of the moment generating function \(Z_{K^{y}}(l_{*})\) in frequency space \[Z_{K^{y}}(l_{*}) =\int\mathcal{D}Q\,\mathcal{D}\tilde{Q}\exp\left(S(Q,\tilde{Q},l_{*})\right)\,, \tag{29}\] \[S(Q,\tilde{Q},l_{*}) =\tilde{Q}^{\top}_{\alpha}\big(-i\omega\mathbb{I}+K^{y}\big)_{\alpha\beta}\,Q_{\beta} \tag{30}\] \[-\tilde{Q}(\omega=0)_{\alpha}y_{\alpha}\] \[+l_{*}K^{y}_{*\alpha}Q_{\alpha}(\omega=0)\,, \tag{31}\] where \(\tilde{Q}^{\top}_{\alpha}\left(\cdots\right)Q_{\beta}=\frac{1}{2\pi}\int d\omega\,\tilde{Q}_{\alpha}(\omega)\left(\cdots\right)Q_{\beta}(-\omega)\). As \(Z_{K^{y}}(l_{*})\) is normalized such that \(Z_{K^{y}}(0)=1\quad\forall\,K^{y}\), we can compute (23) by evaluating the derivative of the disorder-averaged moment-generating function \(\overline{Z}(l_{*})\) \[\overline{Z}(l_{*}) =\left\langle\int\mathcal{D}\{Q,\tilde{Q}\}\exp\left(S(Q,\tilde{Q},l_{*})\right)\right\rangle_{K^{y}}\,, \tag{32}\] \[\left\langle y_{*}\right\rangle_{K^{y}} =\frac{\partial\overline{Z}(l_{*})}{\partial l_{*}}\bigg|_{l_{*}=0}\,. \tag{33}\] By construction the distribution of the kernel matrix entries \(K^{y}_{\alpha\beta}\) is a multivariate Gaussian (20). In the following we will treat the stochasticity of \(K^{y}_{\alpha\beta}\) perturbatively to gain insights into the influence of input stochasticity.

#### ii.2.3 Perturbative treatment of the disorder averaged moment generating function \(\overline{Z}(l_{*})\)

To compute the disorder-averaged mean inferred network output (23) we need to compute the disorder average of the dynamic moment generating function \(\overline{Z}(l_{*})\) and its derivative at \(l_{*}=0\). Due to the linear appearance of \(K^{y}\) in the action (30) and the Gaussian distribution for \(K^{y}\) (21), we can do this directly and obtain the action \[\overline{Z}(l_{*}) =\int\mathcal{D}Q\,\mathcal{D}\tilde{Q}\,\left\langle\exp\left(S\right)\right\rangle_{K^{y}}=\int\mathcal{D}Q\,\mathcal{D}\tilde{Q}\,\exp\left(\overline{S}\right)\,, \tag{34}\] \[\overline{S}(Q,\tilde{Q},l_{*}) =\tilde{Q}^{\top}\left(-i\omega\mathbb{I}+m\right)Q-\tilde{Q}^{0}_{\eta}\,y_{\eta}+l_{*}m_{*\epsilon}Q^{0}_{\epsilon}+\frac{1}{2}\tilde{Q}^{\top}_{\alpha}Q_{\beta}\,C_{(\alpha\beta)(\gamma\delta)}\,\tilde{Q}^{\top}_{\gamma}Q_{\delta}+l_{*}C_{(*\alpha)(\beta\gamma)}\,Q^{0}_{\alpha}\tilde{Q}^{\top}_{\beta}Q_{\gamma}\,, \tag{35}\] with \(Q^{0}:=Q(\omega=0)\) and \(\tilde{Q}^{0}:=\tilde{Q}(\omega=0)\). As we ultimately aim to obtain corrections for the mean inferred network output \(\langle y_{*}\rangle\), we utilize the action (35) and established results from field theory to derive the leading-order terms as well as perturbative corrections diagrammatically. The presence of the variance and covariance terms in (35) introduces corrective factors that are absent in the zeroth-order approximation, which corresponds to a homogeneous kernel neglecting fluctuations in \(K^{y}\) by setting \(C_{(\alpha\beta)(\gamma\delta)}=0\). This will provide us with the tools to derive an asymptotic bound for the mean inferred network output \(\langle y_{*}\rangle\) in the case of an infinitely large training data set.
This bound is directly controlled by the variability in the data. We provide empirical evidence for our theoretical results for linear, non-linear, and deep-kernel settings and show how the results could serve as indications to aid neural architecture search based on the statistical properties of the underlying data set.

#### iii.2.4 Field theoretic elements to compute the mean inferred network output \(\langle y_{*}\rangle\)

The field theoretic description of the inference problem in the form of an action (35) allows us to derive perturbative expressions for the statistics of the inferred network output \(\langle y_{*}\rangle_{K^{y}}\) in a diagrammatic manner. This diagrammatic treatment for perturbative calculations is a powerful tool and is standard practice in statistical physics [46], data analysis and signal reconstruction [7], and more recently in the investigation of artificial neural networks [6]. Comparing the action (35) to prototypical expressions from classical statistical field theory, such as the \(\varphi^{3}+\varphi^{4}\) theory [13, 46], one can associate the elements of a field theory:

* \(-\tilde{Q}^{0}_{\alpha}y_{\alpha}\) is a monopole term,
* \(l_{*}m_{*\epsilon}Q^{0}_{\epsilon}\) is a source term,
* \(\Delta_{\alpha\beta}:=(i\omega\mathbb{I}-m)^{-1}_{\alpha\beta}\) is a propagator that connects the fields \(Q_{\alpha}(\omega)\) and \(\tilde{Q}_{\beta}(-\omega)\),
* \(l_{*}C_{(*\alpha)(\beta\gamma)}Q^{0}_{\alpha}\tilde{Q}^{\top}_{\beta}Q_{\gamma}\) is a three-point vertex,
* \(\frac{1}{2}\tilde{Q}^{\top}_{\alpha}Q_{\beta}\,C_{(\alpha\beta)(\gamma\delta)}\,\tilde{Q}^{\top}_{\gamma}Q_{\delta}\) is a four-point vertex.

The following rules for Feynman diagrams simplify calculations:

1. To obtain corrections to first order in \(C\sim\mathcal{O}\left(1/N_{\text{dim}}\right)\), one has to compute all diagrams with a single vertex (three-point or four-point) [13]. This approach assumes that the interaction terms \(C_{(\alpha\beta)(\gamma\delta)}\) that stem from the variability of the data are small compared to the mean \(m_{\alpha\beta}\). In the case of strong heterogeneity one cannot use a conventional expansion in the number of vertices \(C_{(\alpha\beta)(\gamma\delta)}\) and would have to resort to other methods.
2. Vertices, source terms, and monopoles have to be connected with one another using the propagator \(\Delta_{\alpha\beta}=(i\omega\mathbb{I}-m)^{-1}_{\alpha\beta}\), which couples \(Q_{\alpha}(\omega)\) and \(\tilde{Q}_{\beta}(-\omega)\) with each other.
3. We only need diagrams with a single external source term \(l_{*}\), because we seek corrections to the mean inferred network output. Because the source \(l_{*}\) couples to the \(\omega=0\) component \(Q^{0}\) of the field \(Q\), propagators to these external legs are evaluated at \(\omega=0\), thus replacing \((i\omega\mathbb{I}-m)^{-1}_{\alpha\beta}\rightarrow-(m^{-1})_{\alpha\beta}\).
4. Integrals appearing in the four-point and three-point vertices containing \(C_{(\alpha\beta)(\gamma\delta)}\) with contractions by \(\Delta_{\alpha\beta}\) or \(\Delta_{\gamma\delta}\) within a pair of indices \((\alpha\beta)\) or \((\gamma\delta)\) yield vanishing contributions; such diagrams are known as closed response loops [13]. This is because the propagator \(\Delta_{\alpha\beta}(t-s)\) in the time domain vanishes for \(t=s\), which corresponds to the integral \(\int d\omega\,\Delta_{\alpha\beta}(\omega)\) over all frequencies \(\omega\).
5. As we have frequency conservation at the vertices in the form \(\frac{1}{2}\tilde{Q}^{\top}_{\alpha}Q_{\beta}\,C_{(\alpha\beta)(\gamma\delta)}\,\tilde{Q}^{\top}_{\gamma}Q_{\delta}\), and since by point 4 above we only need to consider contractions by \(\Delta_{\beta\gamma}\) or \(\Delta_{\delta\alpha}\), attaching the external legs constrains all frequencies to \(\omega=0\), so propagators within a loop are likewise replaced by \(\Delta_{\alpha\beta}=(i\omega\mathbb{I}-m)^{-1}_{\alpha\beta}\rightarrow-(m^{-1})_{\alpha\beta}\).

These rules directly yield that the corrections to the disorder-averaged mean inferred network output to first order in \(C_{(\alpha\beta)(\gamma\delta)}\) can only include the diagrams \[\langle y_{*}\rangle\doteq\text{[diagrams with a single external leg and at most one three- or four-point vertex]}+\mathcal{O}\left(C^{2}\right)\,, \tag{36}\] which translate to our main result \[\langle y_{*}\rangle_{0+1} =m_{*\alpha}m^{-1}_{\alpha\beta}\,y_{\beta}+m_{*\epsilon}m^{-1}_{\epsilon\alpha}C_{(\alpha\beta)(\gamma\delta)}m^{-1}_{\beta\gamma}\,m^{-1}_{\delta\rho}\,y_{\rho}-C_{(*\alpha)(\beta\gamma)}m^{-1}_{\alpha\beta}m^{-1}_{\gamma\delta}\,y_{\delta}+\mathcal{O}\left(C^{2}\right)\,. \tag{37}\] We here define the first term as the zeroth-order approximation \(\langle y_{*}\rangle_{0}:=m_{*\alpha}m^{-1}_{\alpha\beta}\,y_{\beta}\), which has the same form as (5), and the latter two terms as perturbative corrections \(\langle y_{*}\rangle_{1}=\mathcal{O}\left(C\right)\), which are of linear order in \(C\).

#### Evaluation of expressions for block-structured overlap matrices

To evaluate the first-order correction \(\left\langle y_{*}\right\rangle_{1}\) in (37) we make use of the fact that Bayesian inference is insensitive to the order in which the training data are presented. We are hence free to assume that all training samples of one class are presented en bloc. Moreover, supervised learning assumes that all training samples are drawn from the same distribution. As a result, the statistics are homogeneous across blocks of indices that belong to the same class. The propagators \(-m_{\alpha\beta}^{-1}\) and interaction vertices \(C_{(\alpha\beta)(\gamma\delta)}\) and \(C_{(*\alpha)(\beta\gamma)}\), correspondingly, have a block structure. To obtain an understanding of how variability of the data, and hence heterogeneous kernels, affect the ability to make predictions, we consider the simplest yet non-trivial case of binary classification, where we have two such blocks. In this section we focus on the overlap statistics given by the artificial data set described in Section II.3. This data set entails certain symmetries; generalizing the expressions to a less symmetric task is straightforward. For the classification task, with two classes \(c_{\alpha}\in\{1,2\}\), the structure of the mean overlaps \(\mu_{\alpha\beta}\) and their covariance \(\Sigma_{(\alpha\beta)(\gamma\delta)}\) at the read-in layer of the network given by (15) is inherited by the mean \(m_{\alpha\beta}\) and the covariance \(C_{(\alpha\beta)(\gamma\delta)}\) of the overlap matrix at the output of the network. In particular, all quantities can be expressed in terms of only four parameters \(a\), \(b\), \(K\), \(v\), whose values, however, depend on the network architecture and will be given for linear and non-linear networks below.
For four indices \(\alpha,\beta,\gamma,\delta\) that are all different, \[m_{\alpha\alpha} =a\,,\] \[m_{\alpha\beta} =\begin{cases}b&c_{\alpha}=c_{\beta}\\ -b&c_{\alpha}\neq c_{\beta}\end{cases}\,,\] \[C_{(\alpha\alpha)(\gamma\delta)} =0\,,\] \[C_{(\alpha\beta)(\alpha\beta)} =K\,,\] \[C_{(\alpha\beta)(\alpha\delta)} =\begin{cases}v&c_{\alpha}=c_{\beta}=c_{\delta}\ \text{or}\ c_{\alpha}\neq c_{\beta}=c_{\delta}\\ -v&c_{\alpha}=c_{\beta}\neq c_{\delta}\ \text{or}\ c_{\alpha}=c_{\delta}\neq c_{\beta}\end{cases}\,. \tag{38}\] This symmetry further requires that the network has no biases and uses point-symmetric activation functions \(\phi(x)\), such as \(\phi(x)=\mathrm{erf}(x)\). In general, all tensors are symmetric with regard to swapping \(\alpha\leftrightarrow\beta\) as well as \(\gamma\leftrightarrow\delta\), and the tensor \(C_{(\alpha\beta)(\gamma\delta)}\) is invariant under swaps of the index pairs \((\alpha\beta)\leftrightarrow(\gamma\delta)\). We further assume that the class label for class 1 is \(y\) and that the class label for class 2 is \(-y\). In subsequent calculations and experiments we consider the prediction for the class \(y=-1\). This setting is quite natural, as it captures the presence of differing mean intra- and inter-class overlaps. Further, \(K\) and \(v\) capture two different sources of variability: whereas \(K\) is associated with i.i.d. variability on each entry of the overlap matrix separately, \(v\) corresponds to variability stemming from correlations between different patterns. Using the properties in (38) one can evaluate (37) for the inference of test points \(*\) within class \(c_{1}\) on a balanced training set with \(D\) samples explicitly as \[\left\langle y_{*}\right\rangle_{0} =Dgy\,, \tag{39}\] \[\left\langle y_{*}\right\rangle_{1} =vg\hat{y}\left(q_{1}+3q_{2}\right)\left(D^{3}-3D^{2}+2D\right)+Kg\hat{y}\left(q_{1}+q_{2}\right)\left(D^{2}-D\right)-v\hat{y}\left(q_{1}+q_{2}\right)\left(D^{2}-D\right)+\mathcal{O}\left(C_{(\alpha\beta)(\gamma\delta)}^{2}\right)\quad\text{for}\quad*\in c_{1}\,, \tag{40}\] with the additional variables \[g =\frac{b}{(a-b)+bD}\,,\qquad q_{2} =-\frac{1}{(a-b)+bD}\,,\qquad q_{1} =\frac{1}{a-b}+q_{2}\,,\qquad\hat{y} =\frac{y}{(a-b)+bD}\,, \tag{41}\] which stem from the analytic inversion of block matrices. Carefully treating the dependencies of the parameters in (41) and (40), one can compute the limit \(D\gg 1\) and show that the \(\mathcal{O}(1)\)-behavior of (40) for test points \(*\in c_{1}\) for the zeroth-order approximation, \(\lim_{D\to\infty}\left\langle y_{*}\right\rangle_{0}:=\left\langle y_{*}\right\rangle_{0}^{(\infty)}\), and the first-order correction, \(\lim_{D\to\infty}\left\langle y_{*}\right\rangle_{1}:=\left\langle y_{*}\right\rangle_{1}^{(\infty)}\), is given by \[\left\langle y_{*}\right\rangle_{0}^{(\infty)} =y\;, \tag{42}\] \[\left\langle y_{*}\right\rangle_{1}^{(\infty)} =\frac{y}{(a-b)b}\left((K-4v)-v\frac{a-b}{b}\right)\;. \tag{43}\] This result implies that regardless of the amount of training data \(D\), the limiting value of the prediction is controlled by the data variability, represented by \(v\) and \(K\). Neither the limiting behavior (43) nor the original expression (40) explicitly shows the dependence on the relative number of training samples in the two respective classes \(c_{1,2}\); this is due to the symmetry of the task setup in (38).
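Equations (39)-(41) can be transcribed directly into code. The following sketch (our own) evaluates the zeroth-order learning curve and its first-order correction for a test point of class 1:

```python
import numpy as np

def learning_curve(D, a, b, K, v, y=-1.0):
    """Zeroth-order mean, eq. (39), and first-order-corrected mean, eq. (40),
    for a test point of class 1 on a balanced training set of size D,
    with auxiliary variables from eq. (41)."""
    g = b / ((a - b) + b * D)
    q2 = -1.0 / ((a - b) + b * D)
    q1 = 1.0 / (a - b) + q2
    y_hat = y / ((a - b) + b * D)
    y0 = D * g * y                                            # eq. (39)
    y1 = (v * g * y_hat * (q1 + 3 * q2) * (D**3 - 3 * D**2 + 2 * D)
          + K * g * y_hat * (q1 + q2) * (D**2 - D)
          - v * y_hat * (q1 + q2) * (D**2 - D))               # eq. (40)
    return y0, y0 + y1
```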
In the case of asymmetric statistics this behavior changes. Moreover, the difference between the variance \(a\) and the covariance \(b\) enters the expression in a non-trivial manner. Using these results, we will investigate the implications for linear, non-linear, and deep kernels using the artificial data set of Section II.3, as well as real-world data.

### Applications to linear, non-linear and deep non-linear NNGP kernels

#### iii.2.1 Linear Kernel

Before going to the non-linear case, let us investigate the implications of (40) and (43) for a simple one-layer linear network. We assume that our network consists of a read-in weight \(\mathbf{V}\in\mathbb{R}^{1\times N_{\mathrm{dim}}}\), \(\mathbf{V}_{i}\sim\mathcal{N}\left(0,\sigma_{v}^{2}/N_{\mathrm{dim}}\right)\), which maps the \(N_{\mathrm{dim}}\)-dimensional input vector to a one-dimensional output space. Including a regularization noise, the output hence reads \[y_{\alpha}=\mathbf{V}x_{\alpha}+\xi_{\alpha}\,. \tag{44}\] In this particular case the read-in, read-out, and hidden weights in the general setup (1) coincide with each other. Computing the average with respect to the weights \(\mathbf{V}\) yields the kernel \[K_{\alpha\beta}^{y}=\left\langle y_{\alpha}y_{\beta}\right\rangle_{\mathbf{V}}=K_{\alpha\beta}^{0}+\delta_{\alpha\beta}\,\sigma_{\mathrm{reg}}^{2}\,, \tag{45}\] where \(K_{\alpha\beta}^{0}\) is given by (10); it is hence a rescaled version of the overlap of the input vectors plus the variance of the regularization noise. We now assume that the matrix elements of the input-data overlap (45) are distributed according to a multivariate Gaussian (20). As the mean and the covariance of the entries \(K_{\alpha\beta}^{y}\) are given by the statistics (15), we evaluate (40) and (43) with \[a^{(\mathrm{Lin})} =\sigma_{v}^{2}+\sigma_{\mathrm{reg}}^{2}\,,\] \[b^{(\mathrm{Lin})} =\sigma_{v}^{2}\,u\,,\] \[K^{(\mathrm{Lin})} =\frac{\sigma_{v}^{4}}{N_{\mathrm{dim}}}\left(1-u^{2}\right),\] \[v^{(\mathrm{Lin})} =\frac{\sigma_{v}^{4}}{N_{\mathrm{dim}}}\,u\left(1-u\right),\] \[u :=4p(p-1)+1\,. \tag{46}\] The asymptotic result for the first-order correction can hence be evaluated, assuming \(\sigma_{v}^{2}\neq 0\) and \(p\neq 0.5\), as \[\left\langle y_{*}\right\rangle_{1}^{(\infty)}=\frac{y_{1}\sigma_{v}^{2}\frac{\left(1-u\right)}{N_{\mathrm{dim}}}}{\left(\sigma_{v}^{2}\left(1-u\right)+\sigma_{\mathrm{reg}}^{2}\right)u}\left(-2u-\frac{\sigma_{\mathrm{reg}}^{2}}{\sigma_{v}^{2}}\right)\,. \tag{47}\] Using this explicit form of \(\left\langle y_{*}\right\rangle_{1}^{(\infty)}\) one can see that

* as \(u\in[0,1]\), the corrections are always negative and hence provide a less optimistic estimate of the generalization compared to the zeroth-order approximation;
* in the limit \(\sigma_{v}^{2}\rightarrow\infty\) the regularizer in (47) becomes irrelevant and the matrix inversion becomes unstable;
* taking \(\sigma_{v}^{2}\to 0\) yields a setting where constructing the limiting formula (47) is not useful, as all relevant quantities in (40), such as \(g,v,K\), vanish; the inference then yields zero, which is consistent with our intuition: \(\sigma_{v}^{2}\to 0\) implies that only the regularization noise remains, which is unbiased with regard to the class membership of the data. Hence the kernel cannot make any prediction that is substantially informed by the data.
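Combined with the `learning_curve` function from the previous snippet, one can trace the predicted learning curve for the linear kernel. The parameter mapping follows (46) (our own sketch; the numerical values are illustrative):

```python
import numpy as np

def linear_kernel_params(p, N_dim, sigma_v2, sigma_reg2):
    """Map task parameters to (a, b, K, v) of eq. (46) for the linear kernel."""
    u = 4 * p * (p - 1) + 1
    a = sigma_v2 + sigma_reg2
    b = sigma_v2 * u
    K = sigma_v2**2 / N_dim * (1 - u**2)
    v = sigma_v2**2 / N_dim * u * (1 - u)
    return a, b, K, v

a, b, K, v = linear_kernel_params(p=0.8, N_dim=50, sigma_v2=1.0, sigma_reg2=1.0)
for D in (10, 100, 1000):
    y0, y01 = learning_curve(D, a, b, K, v)
    print(D, y0, y01)     # zeroth-order vs first-order-corrected prediction
```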
Figure 3 shows that the zeroth-order approximation \(\left\langle y_{*}\right\rangle_{0}\), even though it is able to capture some dependence on the amount of training data, is indeed too optimistic and predicts a mean inferred network output closer to its negative target value \(y=-1\) than numerically obtained. The first-order correction, on the other hand, is able to reliably predict the results. Furthermore, the limiting results for \(D\rightarrow\infty\) match the numerical results for different task settings \(p\). These limiting results are consistently higher than the zeroth-order approximation \(\left\langle y_{*}\right\rangle_{0}\) and depend on the level of data variability. Deviations of the empirical results from the theory in the case \(p=0.6\) could stem from the fact that close to \(p=0.5\) the fluctuations are maximal, whereas our theory assumes small fluctuations.

#### iii.2.2 Non-Linear Kernel

We will now investigate how the non-linearities \(\phi\) present in typical network architectures (1) influence our results for the learning curve, (40) and (43). As the ansatz in Section III.1 does not make any assumption, apart from Gaussianity, on the overlap matrix \(K^{y}\), the results presented in Section III.1.5 are general. One can use the knowledge of the statistics of the overlap matrix in the read-in layer, \(K^{0}\) in (15), to extend the result (40) to both non-linear and deep feed-forward neural networks. As in Section III.2.1 we start with the assumption that the input kernel matrix is distributed according to a multivariate Gaussian: \(K_{\alpha\beta}^{0}\sim\mathcal{N}(\mu_{\alpha\beta},\Sigma_{(\alpha\beta)(\gamma\delta)})\). In the non-linear case, we consider a read-in layer \(\mathbf{V}\in\mathbb{R}^{N_{h}\times N_{\mathrm{dim}}}\), \(\mathbf{V}_{ij}\sim\mathcal{N}\left(0,\sigma_{v}^{2}/N_{\mathrm{dim}}\right)\), which maps the inputs to the hidden-state space, and a separate read-out layer \(\mathbf{W}\in\mathbb{R}^{1\times N_{h}}\), \(\mathbf{W}_{i}\sim\mathcal{N}\left(0,\sigma_{w}^{2}/N_{h}\right)\), obtaining a neural network with a single hidden layer \[h_{\alpha}^{(0)} =\mathbf{V}x_{\alpha}\,,\] \[y_{\alpha} =\mathbf{W}\phi\left(h_{\alpha}^{(0)}\right)+\xi_{\alpha}\,, \tag{48}\] and network kernel \[\left\langle y_{\alpha}y_{\beta}\right\rangle_{\mathbf{V},\mathbf{W}} =\frac{\sigma_{w}^{2}}{N_{h}}\sum_{i=1}^{N_{h}}\left\langle\phi\left(h_{\alpha i}^{(0)}\right)\phi\left(h_{\beta i}^{(0)}\right)\right\rangle_{\mathbf{V}}+\delta_{\alpha\beta}\,\sigma_{\text{reg}}^{2}\,. \tag{49}\] As we consider the limit \(N_{h}\to\infty\), one can replace the empirical average \(\frac{1}{N_{h}}\sum_{i=1}^{N_{h}}\dots\) with a distributional average \(\frac{1}{N_{h}}\sum_{i=1}^{N_{h}}\dots\to\left\langle\dots\right\rangle_{\mathbf{h}^{(0)}}\) [18, 30]. This yields the following result for the kernel matrix \(K_{\alpha\beta}^{y}\) of the multivariate Gaussian \[K_{\alpha\beta}^{y}\underset{N_{h}\to\infty}{\rightarrow}\sigma_{w}^{2}\Big\langle\phi\left(h_{\alpha}^{(0)}\right)\phi\left(h_{\beta}^{(0)}\right)\Big\rangle_{\mathbf{h}^{(0)},\mathbf{V}}+\delta_{\alpha\beta}\,\sigma_{\text{reg}}^{2}\,.
\tag{50}\] The expectation over the hidden states \(h_{\alpha}^{(0)},h_{\beta}^{(0)}\) is with regard to the Gaussian \[\left(\begin{array}{c}h_{\alpha}^{(0)}\\ h_{\beta}^{(0)}\end{array}\right)\sim\mathcal{N}\left(\left(\begin{array}{c}0\\ 0\end{array}\right),\left(\begin{array}{cc}K_{\alpha\alpha}^{0}&K_{\alpha\beta}^{0}\\ K_{\beta\alpha}^{0}&K_{\beta\beta}^{0}\end{array}\right)\right)\,, \tag{51}\] with the variance \(K_{\alpha\alpha}^{0}\) and the covariance \(K_{\alpha\beta}^{0}\) given by (10). Evaluating the Gaussian integrals in (50) is analytically possible in certain limiting cases [3, 40]. For an erf activation function, as a prototype of a saturating activation function, this average yields \[\left\langle\phi^{2}\left(h_{\alpha}^{(0)}\right)\right\rangle_{\mathbf{h}^{(0)}}=\frac{4}{\pi}\arctan\left(\sqrt{1+4K_{\alpha\alpha}^{0}}\right)-1\,, \tag{52}\] \[\left\langle\phi\left(h_{\alpha}^{(0)}\right)\phi\left(h_{\beta}^{(0)}\right)\right\rangle_{\mathbf{h}^{(0)}}=\frac{2}{\pi}\arcsin\left(\frac{2K_{\alpha\beta}^{0}}{1+2K_{\alpha\alpha}^{0}}\right)\,. \tag{53}\] We use that the input kernel matrix \(K^{0}\) is distributed as \(K_{\alpha\beta}^{0}\sim\mathcal{N}(\mu_{\alpha\beta},\Sigma_{(\alpha\beta)(\gamma\delta)})\). Equation (50) hence provides information on how the mean overlap \(m_{\alpha\beta}\) changes due to the application of the non-linearity \(\phi(\cdot)\), fixing the parameters \(a\), \(b\), \(K\), \(v\) of the general form (38) as \[a^{\text{(Non-lin)}} =K_{\alpha\alpha}^{y}=\sigma_{w}^{2}\Big\langle\phi^{2}\left(h_{\alpha}^{(0)}\right)\Big\rangle_{\mathbf{h}^{(0)}}+\sigma_{\text{reg}}^{2}\,, \tag{54}\] \[b^{\text{(Non-lin)}} =K_{\alpha\beta}^{y}=\sigma_{w}^{2}\Big\langle\phi\left(h_{\alpha}^{(0)}\right)\phi\left(h_{\beta}^{(0)}\right)\Big\rangle_{\mathbf{h}^{(0)}}\,, \tag{55}\] where the averages over \(h^{(0)}\) are evaluated with regard to the Gaussian (51) for \(\phi(x)=\mathrm{erf}(x)\), and where we require in (55) that \(\alpha\neq\beta\) and \(c(\alpha)=c(\beta)\). To evaluate the corrections in (40), we also need to understand how the presence of the non-linearity \(\phi(x)\) shapes the parameters \(K,v\) that control the variability. Under the assumption of a small covariance \(\Sigma_{(\alpha\beta)(\gamma\delta)}\), one can use (53) to compute \(C_{(\alpha\beta)(\gamma\delta)}\) using linear response theory. As \(K_{\alpha\beta}^{0}\) is stochastic and given by (20), we decompose \(K_{\alpha\beta}^{0}\) into a deterministic kernel \(\mu_{\alpha\beta}\) and a stochastic perturbation \(\eta_{\alpha\beta}\sim\mathcal{N}\left(0,\Sigma_{(\alpha\beta)(\gamma\delta)}\right)\). Linearizing (55) around \(\mu_{\alpha\beta}\) via Price's theorem [31], the stochasticity at the read-out layer yields \[C_{(\alpha\beta)(\gamma\delta)} =\sigma_{w}^{4}\,K_{\alpha\beta}^{(\phi^{\prime})}\,K_{\gamma\delta}^{(\phi^{\prime})}\,\Sigma_{(\alpha\beta)(\gamma\delta)}\,, \tag{56}\] \[K_{\alpha\beta}^{(\phi^{\prime})} =\left\langle\phi^{\prime}\left(h_{\alpha}^{(0)}\right)\phi^{\prime}\left(h_{\beta}^{(0)}\right)\right\rangle, \tag{57}\] where \(h^{(0)}\) is distributed as in (51). This clearly shows that the variability simply transforms with a prefactor \[K^{\text{(Non-lin)}} =\sigma_{w}^{4}\,K_{\alpha\beta}^{(\phi^{\prime})}\,K_{\alpha\beta}^{(\phi^{\prime})}\,\kappa\,,\] \[v^{\text{(Non-lin)}} =\sigma_{w}^{4}\,K_{\alpha\beta}^{(\phi^{\prime})}\,K_{\alpha\delta}^{(\phi^{\prime})}\,\nu\,, \tag{58}\] with \(\kappa,\nu\) defined as in (15).
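The erf-kernel averages (52)-(53) and the derivative kernel (59) given below are closed-form and easy to evaluate. A direct transcription (our own sketch; `K_aa` and `K_ab` denote \(K^{0}_{\alpha\alpha}\) and \(K^{0}_{\alpha\beta}\)):

```python
import numpy as np

def erf_kernel_moments(K_aa, K_ab):
    """Variance eq. (52) and covariance eq. (53) of erf(h) under the Gaussian
    (51); eq. (53) as printed normalizes by the common variance K_aa."""
    var = 4 / np.pi * np.arctan(np.sqrt(1 + 4 * K_aa)) - 1      # eq. (52)
    cov = 2 / np.pi * np.arcsin(2 * K_ab / (1 + 2 * K_aa))      # eq. (53)
    return var, cov

def K_phi_prime(a0, b0):
    """Derivative kernel <phi'(h_a) phi'(h_b)> for erf, eq. (59) below."""
    return (4 / (np.pi * (1 + 2 * a0))
            * (1 - (2 * b0 / (1 + 2 * a0)) ** 2) ** -0.5)
```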
Evaluating the integral in \(\left\langle\phi^{\prime}\left(h_{\alpha}^{(0)}\right)\phi^{\prime}\left(h_{\beta}^{(0)}\right)\right\rangle\) is hard in general. In fact, the integral which occurs here is equivalent to the one in [24] for the Lyapunov exponent and to the one in [34, 30] for the susceptibility in the propagation of information in deep feed-forward neural networks. This is consistent with the fact that our treatment of the non-linearity follows a linear response approach as in [24]. For the erf activation we can evaluate the kernel \(K^{(\phi^{\prime})}_{\alpha\beta}\) as \[K^{(\phi^{\prime})}_{\alpha\beta} =\frac{4}{\pi\left(1+2a^{(0)}\right)}\left(1-\left(\frac{2b^{(0)}}{1+2a^{(0)}}\right)^{2}\right)^{-\frac{1}{2}}\,, \tag{59}\] \[a^{(0)} =\sigma_{v}^{2}\,,\quad b^{(0)}=\sigma_{v}^{2}u\,, \tag{60}\] \[u =4p(p-1)+1\,, \tag{61}\] which allows us to evaluate (58). Already in the one-hidden-layer setting we can see that the behavior is qualitatively different from the linear setting: \(K^{(\mathrm{Non-lin})}\) and \(v^{(\mathrm{Non-lin})}\) scale with a factor which now also involves the parameter \(\sigma_{v}^{2}\) in a non-linear manner.

#### iii.2.3 Multilayer Kernel

So far we considered single-layer networks. In practice, however, the application of multi-layer networks is often necessary. One can straightforwardly extend the results from the non-linear case (Section III.2.2) to the deep non-linear case. We consider the architecture introduced in (1) in Section II.1.1, where the variable \(L\) denotes the number of hidden layers and \(1\leq l\leq L\) is the layer index. Similar to the computations in Section III.2.2, one can derive a set of relations to obtain \(K^{y}_{\alpha\beta}\): \[K^{0}_{\alpha\beta} =\frac{\sigma_{v}^{2}}{N_{\mathrm{dim}}}\;K^{x}_{\alpha\beta}\,,\] \[K^{(\phi)l}_{\alpha\beta} =\sigma_{w}^{2}\;\left\langle\phi\left(h_{\alpha}^{(l-1)}\right)\phi\left(h_{\beta}^{(l-1)}\right)\right\rangle\,,\] \[K^{y}_{\alpha\beta} =\sigma_{u}^{2}\;\left\langle\phi\left(h_{\alpha}^{(L)}\right)\phi\left(h_{\beta}^{(L)}\right)\right\rangle+\delta_{\alpha\beta}\,\sigma_{\mathrm{reg}}^{2}\,. \tag{62}\] As [30, 34, 42] showed for feed-forward networks, deep non-linear networks strongly alter both the variance and the covariance, so we expect them to influence the generalization properties. In order to understand how the input fluctuations \(\Sigma_{(\alpha\beta)(\gamma\delta)}\) propagate to the output, one can employ the chain rule to linearize (62) and obtain \[C^{yy}_{(\alpha\beta)(\gamma\delta)}=\sigma_{u}^{4}\,\prod_{l=1}^{L}\left[K^{(\phi^{\prime})l}_{\alpha\beta}K^{(\phi^{\prime})l}_{\gamma\delta}\right]\,\Sigma_{(\alpha\beta)(\gamma\delta)}\,. \tag{63}\] A systematic derivation of this result as the leading-order fluctuation correction in \(N_{h}^{-1}\) is found in the appendix of [35]. Equations (62) and (63) show that the kernel performance will depend on the non-linearity \(\phi\), the variances \(\sigma_{v}^{2}\), \(\sigma_{w}^{2}\), \(\sigma_{u}^{2}\), and the network depth \(L\). Figure 4(a) shows the comparison of the mean inferred network output \(\langle y_{*}\rangle\) for the true test label \(y=-1\) between empirical results and the first-order corrections. The regime \(\sigma_{w}^{2}<1\), in which the kernel vanishes, leads to poor performance. The marginal regime \(\sigma_{w}^{2}\simeq 1\) provides a better choice for the overall network performance.
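To make the recursion (62) and the propagated covariance (63) concrete before turning to the depth dependence, here is a sketch (our own, reusing `erf_kernel_moments` and `K_phi_prime` from the previous snippet). We transcribe eq. (63) as printed; how the \(\sigma_{w}^{2}\) chain-rule factors are absorbed into the per-layer derivative kernels is an assumption of this sketch:

```python
import numpy as np

def deep_kernel(a0, b0, Sigma, L, sigma_w2, sigma_u2, sigma_reg2):
    """Propagate the same-class mean overlaps (a, b) through L erf layers,
    eq. (62), accumulating the derivative-kernel product of eq. (63)."""
    a, b = a0, b0                    # overlap statistics at the read-in layer
    prod_kprime2 = 1.0
    for _ in range(L):
        prod_kprime2 *= K_phi_prime(a, b) ** 2          # factors in eq. (63)
        var, cov = erf_kernel_moments(a, b)
        a, b = sigma_w2 * var, sigma_w2 * cov           # layer map, eq. (62)
    var, cov = erf_kernel_moments(a, b)
    a_y = sigma_u2 * var + sigma_reg2                   # K^y diagonal
    b_y = sigma_u2 * cov                                # K^y same-class entry
    C_yy = sigma_u2 ** 2 * prod_kprime2 * Sigma         # eq. (63) as printed
    return a_y, b_y, C_yy
```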
Figure 4(b) shows that the maximum absolute value of the predictive mean is achieved slightly in the supercritical regime \(\sigma_{w}^{2}>1\). With a larger number of layers, the optimum becomes more and more pronounced and approaches the critical value \(\sigma_{w}^{2}=1\) from above. That the optimum for the predictive mean occurs slightly in the supercritical regime may be surprising given the expectation that network trainability peaks precisely at \(\sigma_{w}^{2}=1\) [30]. In particular, at shallow depths the optimum becomes very wide and shifts to \(\sigma_{w}^{2}>1\). For few layers, even at \(\sigma_{w}^{2}>1\) the increase of the variance \(K^{y}_{\alpha\alpha}\) per layer remains moderate and stays within the dynamical range of the activation function. Thus differences in covariance are faithfully transmitted by the kernel and hence allow for accurate predictions. The theory, including corrections to linear order, matches the empirical results throughout and hence provides good estimates for choosing the kernel architecture.

Figure 4: **Predictive mean in a deep non-linear feed-forward network with heterogeneous kernel.** **(a)** Comparison of the mean inferred network output for a non-linear network with \(\phi(x)=\mathrm{erf}(x)\) and five layers for different values of the gain \(\sigma_{w}\). The figure displays numerical results (bars), the zeroth-order approximation (dashed), and first-order corrections (solid). **(b)** Similar comparison as in (a) for different network depths \(L=5,10,20,50\). In all settings we used \(N_{\mathrm{dim}}=50\), \(D=100\), \(p=0.8\), \(\sigma_{v}^{2}=1\), \(\sigma_{\mathrm{reg}}^{2}=1\). Empirical results display mean and standard deviation over 1000 trials with 1000 test points per trial.

#### iii.3.4 Experiments on Non-Symmetric Task Settings and MNIST

In contrast to the symmetric setting of the previous subsections, real data sets such as MNIST exhibit asymmetric statistics, so that the different blocks in \(m_{\alpha\beta}\) and \(C_{(\alpha\beta)(\gamma\delta)}\) assume different values in general. All theoretical results from Section III.1 still hold. However, as the tensor elements of \(m_{\alpha\beta}\) and \(C_{(\alpha\beta)(\gamma\delta)}\) change, one needs to redo the evaluation of Section III.1.5 in its most general form, which yields a more general version of the result.

**Finite MNIST dataset.** First we consider a setting where we work with the pure MNIST dataset for two distinct labels, \(0\) and \(4\). In this setting we estimate the class-dependent tensor elements \(m_{\alpha\beta}\) and \(C_{(\alpha\beta)(\gamma\delta)}\) directly from the data. We define the data-set size per class, from which we sample, as \(D_{\text{base}}\). The training points are also drawn from a subset of these \(D_{\text{base}}\) data points. To compare the analytical learning curve for \(\langle y_{*}\rangle\) at \(D\) training data points to the empirical results, we need to draw multiple samples of training datasets of size \(D<D_{\text{base}}\). As the amount of data in MNIST is limited, these samples will generally not be independent and therefore violate our independence assumption. Nevertheless, we can see in Figure 5 that if \(D\) is sufficiently small compared to \(D_{\text{base}}\), the empirical and theoretical results match well.

**Gaussianized MNIST dataset.** To test whether the deviations in Figure 5 at large \(D\) stem from correlations in the samples of the dataset, we construct a generative scheme for MNIST data.
This allows for the generation of infinitely many training points, so that the assumption of i.i.d. training data is fulfilled. We construct a pixel-wise Gaussian distribution for MNIST images from the samples. We use this model to sample as many MNIST images as necessary for the evaluation of the empirical learning curves. Based on the class-dependent statistics of the pixel means and the pixel covariances in the input data, one can directly compute the elements of the mean \(\mu_{\alpha\beta}\) and the covariance \(\Sigma_{(\alpha\beta)(\gamma\delta)}\) for the distribution of the input kernel matrix \(K^{0}_{\alpha\beta}\). We see in Figure 6 that our theory describes the results well for this data set, also for large numbers of training samples. Furthermore, we can see that in the case of an asymmetric data set the learning curves depend on the balance ratio of the training data, \(\rho=D_{c_{1}}/D_{c_{2}}\). The bias towards class one in Figure 6(b) is evident from the curves with \(\rho>0.5\) predicting a lower mean inferred network output, closer to the target label \(y=-1\) of class \(1\).

Figure 5: **Predictive mean for a linear network with MNIST data**: Comparison of the mean inferred network output for a linear network with one layer for different training set sizes \(D\). The figure displays numerical results (bars), the zeroth-order prediction (dashed), and first-order corrections (solid). Settings: \(N_{\text{dim}}=784\), \(\sigma^{2}_{\text{reg}}=2\), \(D_{\text{base}}=4000\); MNIST classes \(c_{1}=0\), \(c_{2}=4\), \(y_{c_{1}}=-1\), \(y_{c_{2}}=1\); balanced data set in \(D_{\text{base}}\) and at each \(D\). Empirical results display mean and standard deviation over 1000 trials with 1000 test points per trial.

Figure 6: **Predictive mean for an erf-network with Gaussianized MNIST data**: **a)** Mean inferred network output for MNIST classification with \(\phi(x)=\text{erf}(x)\). The figure shows the zeroth order (dashed line), first order (solid line), and empirical results (bars). **b)** Mean inferred network output in first-order approximation (solid lines) and empirical results (bars) for MNIST classification with different ratios \(\rho=D_{c_{1}}/D_{c_{2}}\) between the numbers of training samples per class, \(D_{c_{1}}\) and \(D_{c_{2}}\); \(\rho=0.5\) (yellow), \(\rho=0.6\) (blue), \(\rho=0.7\) (red). Empirical results display mean and standard deviation over 50 trials with 1000 test points per trial.

## IV Discussion

In this work we investigate the influence of data variability on the performance of Bayesian inference. The probabilistic nature of the data manifests itself in a heterogeneity of the entries in the block-structured kernel matrix of the corresponding Gaussian process. We show that this heterogeneity, for a sufficiently large number \(D\) of data samples, can be treated as an effective non-Gaussian theory. By employing a time-dependent formulation for the mean of the predictive distribution, this heterogeneity can be treated as a disorder average that circumvents the use of the replica trick. A perturbative treatment of the variability yields corrections to the mean that are of first order in the variance of the heterogeneity and that always push the mean of the predictive distribution towards zero.
In particular, we obtain limiting expressions that accurately describe the mean in the limit of infinite training data, qualitatively correcting the zeroth-order approximation, which corresponds to homogeneous kernel matrices and is overconfident in predicting the mean to perfectly match the training data in this limit. This finding shows how variability fundamentally limits predictive performance and constitutes not only a quantitative but also a qualitative difference. Moreover, at a finite number of training samples, the theory explains the empirically observed performance accurately. We show that our framework captures predictions in linear, non-linear shallow, and deep networks. In non-linear networks, we show that the optimal value for the variance of the prior weight distribution is achieved in the super-critical regime. The optimal range for this parameter is broad in shallow networks and becomes progressively narrower in deep networks. These findings support the view that the optimal initialization is not at the critical point where the variance is unity, as previously thought [30], but that super-critical initialization may have an advantage when considering input variability. An artificial dataset illustrates the origin and the typical statistical structure that arises in heterogeneous kernels, while the application of the formalism to MNIST [17] demonstrates its potential use for predicting the expected performance in real-world applications. The field theoretical formalism can be combined with approaches that study the effect of fluctuations due to the finite width of the layers [28, 35, 43, 45]. In fact, in the large-\(N_{h}\) limit the NNGP kernel is inert to the training data, the so-called lazy regime. At finite network width, the kernel itself receives corrections which are commonly associated with the adaptation of the network to the training data, thus representing what is known as feature learning. The interplay of the heterogeneity of the kernel with such finite-size adaptations is a fruitful future direction. Another approach to study learning in the limit of large width is offered by the neural tangent kernel (NTK) [15], which considers the effect of gradient descent on the network output up to linear order in the change of the weights. A combination of the approach presented here with the NTK instead of the NNGP kernel seems possible and would provide insights into how data heterogeneity affects training dynamics. The analytical results presented here are based on the assumption that the variability of the data is small and can hence be treated perturbatively. In the regime of large data variability, it is conceivable to employ self-consistent methods instead, which would technically correspond to the computation of saddle points of certain order parameters; this typically leads to an infinite resummation of the perturbative terms that dominate in the large-\(N_{h}\) limit. Such approaches may be useful to study and predict the performance of kernel methods for data that show little or no linear separability and are thus dominated by variability. Another direction of extension is the computation of the variance of the Bayesian predictor, which in principle can be treated with the same set of methods as presented here.
Finally, since the large-width limit as well as finite-size corrections, which in particular yield the kernel response function that we employed here, can be obtained for recurrent and deep networks in the same formalism [35], as well as for residual networks (ResNets) [9], the theory of generalization presented here can be straightforwardly extended to recurrent networks and to ResNets. ###### Acknowledgements. We thank Claudia Merger, Bastian Epping, Kai Segadlo and Alexander van Meegen for helpful discussions. This work was partly supported by the German Federal Ministry for Education and Research (BMBF Grant 01IS19077A to Jülich and BMBF Grant 01IS19077B to Aachen) and funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - 368482240/GRK2416, the Excellence Initiative of the German federal and state governments (ERS PFJARA-SDS005), and the Helmholtz Association Initiative and Networking Fund under project number SO-092 (Advanced Computing Architectures, ACA). Open access publication funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - 491111487.
2309.05345
Empirical study on the efficiency of Spiking Neural Networks with axonal delays, and algorithm-hardware benchmarking
The role of axonal synaptic delays in the efficacy and performance of artificial neural networks has been largely unexplored. In step-based analog-valued neural network models (ANNs), the concept is almost absent. In their spiking neuroscience-inspired counterparts, there is hardly a systematic account of their effects on model performance in terms of accuracy and number of synaptic operations. This paper proposes a methodology for accounting for axonal delays in the training loop of deep Spiking Neural Networks (SNNs), intending to efficiently solve machine learning tasks on data with rich temporal dependencies. We then conduct an empirical study of the effects of axonal delays on model performance during inference for the Adding task, a benchmark for sequential regression, and for the Spiking Heidelberg Digits dataset (SHD), commonly used for evaluating event-driven models. Quantitative results on the SHD show that SNNs incorporating axonal delays instead of explicit recurrent synapses achieve state-of-the-art, over 90% test accuracy while needing less than half the trainable synapses. Additionally, we estimate the required memory in terms of total parameters and the energy consumption of accommodating such delay-trained models on a modern neuromorphic accelerator. These estimations are based on the number of synaptic operations and the reference GF-22nm FDX CMOS technology. As a result, we demonstrate that a reduced parameterization, which incorporates axonal delays, leads to approximately 90% energy and memory reduction in digital hardware implementations for a similar performance in the aforementioned task.
Alberto Patiño-Saucedo, Amirreza Yousefzadeh, Guangzhi Tang, Federico Corradi, Bernabé Linares-Barranco, Manolis Sifalakis
2023-09-11T09:45:11Z
http://arxiv.org/abs/2309.05345v1
Empirical study on the efficiency of Spiking Neural Networks with axonal delays, and algorithm-hardware benchmarking ###### Abstract The role of axonal synaptic delays in the efficacy and performance of artificial neural networks has been largely unexplored. In step-based analog-valued neural network models (ANNs), the concept is almost absent. In their spiking neuroscience-inspired counterparts, there is hardly a systematic account of their effects on model performance in terms of accuracy and number of synaptic operations. This paper proposes a methodology for accounting for axonal delays in the training loop of deep Spiking Neural Networks (SNNs), intending to efficiently solve machine learning tasks on data with rich temporal dependencies. We then conduct an empirical study of the effects of axonal delays on model performance during inference for the Adding task [1, 2, 3], a benchmark for sequential regression, and for the Spiking Heidelberg Digits dataset (SHD) [4], commonly used for evaluating event-driven models. Quantitative results on the SHD show that SNNs incorporating axonal delays instead of explicit recurrent synapses achieve state-of-the-art, over 90% test accuracy while needing less than half the trainable synapses. Additionally, we estimate the required memory in terms of total parameters and the energy consumption of accommodating such delay-trained models on a modern neuromorphic accelerator [5, 6]. These estimations are based on the number of synaptic operations and the reference GF-22nm FDX CMOS technology. As a result, we demonstrate that a reduced parameterization, which incorporates axonal delays, leads to approximately 90% energy and memory reduction in digital hardware implementations for a similar performance in the aforementioned task. Spiking Neural Networks Synaptic Delays Axonal Delays Temporal Signal Analysis Spiking Heidelberg Digits ## 1 Introduction Spiking Neural Networks (SNNs) are models more closely resembling biology than Analog Neural Networks (ANNs) due to their statefulness and binary event-driven encoding of information, which, on novel neuromorphic processors, render them highly efficient in temporal processing applications. Lending themselves to more compact parameterization, SNNs demonstrate performance competitive with deep ANNs (DNNs) [7], while potentially using fewer MAC operations in digital hardware implementations. Furthermore, the statefulness of SNNs, embodied in the (decaying) membrane potential of neurons, allows them to be mapped to RNNs [8] effectively, even without recurrent synaptic connections. However, for temporal tasks, the best-performing SNN models almost universally include explicit recurrent connections [4, 7, 9, 10, 11], which quadratically increases the number of required synaptic weights as a function of the number of neurons, adding a burden to neuromorphic hardware development. Meanwhile, the role of axonal delays, i.e., the delay for a spike (action potential) to travel from the soma to the axon terminals, which is a critical element of parameterization in biological neural networks, has remained largely unexplored and uncharacterized in the study of the efficacy, model size, and performance of SNNs. This paper attempts an initial characterization of the effects of synaptic delays on SNN model performance and of the impact of accounting for them in neuromorphic processor architectures.
The first contribution of the work in this paper is a simple strategy for training SNN models with axonal delays, which is compatible with back-propagation (BP) frameworks commonly used for SNN/DNN training (BP through-time (BPTT) for DNNs and its extension spatio-temporal BP (STBP) for SNNs). The second contribution regards an assessment and quantification of the effects of synaptic delay parameterization on model performance (accuracy), model complexity (network structure), and model size (number of parameters). The third contribution is a quantification of the energy and memory cost of deploying models with synaptic delays on a modern neuromorphic processor, based on two different design strategies. ## 2 Related Work Perhaps one of the reasons that delay model training has not been mainstream in artificial neural network research until now is that ANN accelerators do not specifically account and optimize for delays at the hardware level. By contrast, many digital neuromorphic accelerators provide explicit hardware support for delay structures (dendritic/axonal); either per neuron [12, 13, 14], or shared across neurons [15, 5]. This makes delay model training an attractive exploration in relation to compute and power efficiency. Recurrency in neural networks offers a constrained way of compensating for synaptic delay parameterization, limited to a single time step. Despite this limitation, only a handful of works have explored the explicit use of synaptic delays independently of recurrences. One common formalization in the literature of TDNNs [16, 17, 18] and delay-aware SNNs [19, 20, 21, 22] is to parameterize synapses with an additional learnable delay variable, trainable with back-propagation [23, 20], local Hebbian-like learning rules [24], or annealing algorithms [25]. An alternative approach in TDNNs involves mapping delays in the spatial domain and training them with autoregressive models and so-called temporal convolutions (TCNs) [26, 27, 28, 29, 30]. This approach enables structurally simpler models, which are easier/faster to train, but not very compact, as their breadth/depth must scale linearly with the number of timesteps needed to capture temporal dependencies. Our approach is akin to this latter strategy, but because of the incremental delay quantization and pruning, our models neither narrow the aperture of the temporal window nor make it homogeneous for all neurons (and do not become deep). ## 3 Methods ### Delay Model Description We use multilayer Leaky Integrate-and-Fire (LIF) Spiking Neural Networks (SNNs). LIF neurons are stateful and represent a compromise between biological plausibility and computational efficiency for hardware implementation. Their excitation depends both on their time-dependent input \(I\) from other neurons and on their internal state, known as the membrane potential \(u\), which is subject to leaky integration with a time constant \(\tau\). The equations of the membrane potential update in a discrete-time implementation of a LIF spiking neuron are: \[u_{k}=u_{k-1}e^{-\frac{1}{\tau}}(1-\theta_{k-1})+I_{k-1} \tag{1}\] \[\theta_{k}=\begin{cases}1&u_{k}\geq u_{th}\\ 0&\text{otherwise}\end{cases} \tag{2}\] where \(\theta\) denotes a function that generates activations or spikes whenever the membrane potential reaches the threshold associated with the neuron, \(u_{th}\). Multilayer SNNs can be organized as feedforward or recurrent networks. In a recurrent SNN layer, neurons exhibit lateral connectivity, as in Fig.
1(a), and their input is computed by adding the weighted contributions from the \(N\) neurons of the previous (pre-synaptic) layer and from the \(M\) neighboring neurons in their own layer, as shown in the next equation: \[I_{k}[recurrent]=\sum_{i=1}^{N}w_{i}\theta_{i,k}+\sum_{j=1}^{M}w_{j}\theta_{j,k} \tag{3}\] To incorporate axonal delays in networks of spiking neurons, we create multiple time-delayed projections or synapses for every pre-synaptic/post-synaptic neuron pair. This way, the activation of a neuron at a given time depends on both its current state and a subset of past activations from neurons in the pre-synaptic layer with direct projections. The input of a neuron incorporating the proposed model for axonal delays is: \[I_{k}[delays]=\sum_{d\in D}\sum_{i=1}^{N}w_{i,d}\theta_{i,k-d} \tag{4}\] where \(D\subseteq[0,T]\) is the set of delays chosen for a given task. Control over the temporal stride of the delays and the depth of the temporal receptive field is included in the model (see Fig. 1(b) for a visualization of the concept). This increases flexibility in the total number of parameters. ### Model Training We train models that incorporate axonal delays using the following approach, which is compatible with vanilla back-propagation frameworks used to train SNNs and RNNs today (i.e., no special framework extensions). The idea is to express the (temporal) parameterization of delays as a spatial parameterization of synaptic weights, such that delay training is effected by merely optimizing synaptic weights. We start with a set of parallel synapses per pair of pre- and post-synaptic neurons, each associated with a delayed output from the pre-synaptic neuron (using a predetermined range of delays and stride). We optimize the model as usual, and prune all delay-synapses that end up with small weights. We then fine-tune the model with only the remaining synapses. We may introduce new synapses to replace the pruned ones, with incrementally higher delay resolution in localized sub-regions of the initial delay range, and repeat the process. As a result, different neurons end up with different fan-in delay inputs. The resulting models are topologically feed-forward, consistently shallower and with fewer parameters than their recurrent-connectivity counterparts, and exhibit state-of-the-art performance (confirmed in all experiments). Their simpler structure renders them attractive for resource-efficient deployment on neuromorphic accelerators. We trained SNN models with back-propagation (STBP specifically [31]). This method accounts for the past influence on current neurons' states by unrolling the network in time, and the errors are computed along the reverse paths of the unrolled network. To account for the discontinuity of the membrane potential, we employed a surrogate gradient function [8] with a fast sigmoid function as in [10]. During training, apart from the synaptic weights, we also optimized the membrane time constants, as in [7]. Finally, we did not consider extra delays for the input at the first layer, as the input layer usually has more neurons and is responsible for a large portion of the synaptic parameters.
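To make the delayed-input update concrete, the following minimal NumPy sketch simulates a feed-forward LIF layer according to Eqs. (1), (2) and (4). It is an illustration under our own naming conventions, not the authors' training code (which uses PyTorch with surrogate gradients); the toy dimensions at the bottom are arbitrary.

```python
import numpy as np

def run_delayed_lif_layer(pre_spikes, weights, delays, tau=10.0, u_th=1.0):
    """Simulate one feed-forward LIF layer with axonal delays.

    pre_spikes: (T, N) binary spike trains of the pre-synaptic layer.
    weights:    (len(delays), N, M) one weight matrix per delay d in D.
    delays:     list of integer delays (in time steps), the set D.
    Returns the (T, M) binary spike trains of the M post-synaptic neurons.
    """
    T, _ = pre_spikes.shape
    M = weights.shape[2]
    decay = np.exp(-1.0 / tau)
    u = np.zeros(M)       # membrane potentials u_{k-1}
    theta = np.zeros(M)   # previous-step spikes theta_{k-1}
    out = np.zeros((T, M))
    for k in range(T):
        # Eq. (4) evaluated at the previous step: I_{k-1}
        I = np.zeros(M)
        for di, d in enumerate(delays):
            t = k - 1 - d
            if t >= 0:
                I += pre_spikes[t] @ weights[di]
        u = u * decay * (1.0 - theta) + I    # Eq. (1): leak, reset, integrate
        theta = (u >= u_th).astype(float)    # Eq. (2): threshold crossing
        out[k] = theta
    return out

# Toy usage: 5 input neurons, 3 output neurons, delays D = {0, 2, 4}
rng = np.random.default_rng(0)
spikes = (rng.random((100, 5)) < 0.2).astype(float)
w = rng.normal(0.0, 0.5, size=(3, 5, 3))
out = run_delayed_lif_layer(spikes, w, delays=[0, 2, 4])
```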
### Implementation cost in neuromorphic processors Neuromorphic hardware architectures implement stateful nodes with scalable event-driven communication, reducing communication and processing costs and, by extension, the required energy. Spiking Neural Networks are some of the most well-suited algorithms for these kinds of processors, and as such, the delay mechanism is supported by most neuromorphic chips. In this paper, we used a simple yet accurate methodology to compare the energy and memory overhead of the delay mechanism. The energy consumption is calculated by counting the memory accesses (spike packets and neuron states read/write and weights read) and arithmetic operations (accumulation, comparison with threshold, etc.) using a netlist-level simulation tool (Cadence JOULES) for an advanced technology node (GlobalFoundries 22nm FDX). Memory cost is calculated from the total number of parameters (as shown in Fig. 3), the neuron states, and the number of delayed spike packets required to perform inference. We explored two methods commonly used by digital neuromorphic platforms to implement delays: the Ring Buffer [12; 13; 14] and the Delay Queue [5; 15]. Figure 1: Projections over time in a pre/post-synaptic pair for a) a recurrently connected SNN (R-SNN) and b) a delayed SNN (D-SNN) using receptive fields of stride 2 and depth 4. Weight values are color-consistent. #### 3.3.1 Ring Buffer A ring buffer is a special type of circular queue where currents with different delays accumulate in separate elements of the queue. When using the ring buffer, the maximum possible delay in the system is limited by the size of the buffer, and the set of possible delays is linearly distributed, i.e., the temporal stride is constant (see Fig. 2(a)). In this method, there is one ring buffer per neuron; therefore, the memory overhead scales with the number of neurons. The estimated memory overhead for the ring buffer (the total sum of the ring buffer sizes) is calculated as "number of postsynaptic neurons with synaptic delay \(\times\) maximum synaptic delay". The energy overhead is equal to one extra neural accumulation per time step (to accumulate the value of the ring buffer into the membrane potential). #### 3.3.2 Delay Queue In a delay queue, the axon delay is encoded directly in the spikes. Therefore, each spike packet contains a few bits to indicate the amount of delay. In the destination neuro-synaptic core, instead of having a single queue for all spikes, several queues, each corresponding to a specific delay amount, are implemented. This method is more efficient to implement when spike activity is sparse. Fig. 2(b) depicts an implementation of four delay queues. These delay queues are cascaded, are shared by many neurons, and encode an arbitrary amount of delay (the distribution does not need to be linear). In this scheme, unlike the ring buffer, the number of queues is defined by the number of possible delays and not by the maximum delay amount. However, the size of each queue increases if the queue applies more delay to the spikes (which means the queue needs to keep the spikes for a longer period). Additionally, this method implements the axon delay, which is more coarse-grained compared to the dendritic delay implemented by the ring buffer. To calculate the memory overhead of delay queues, we need to know the number and size of each queue. We assumed that the delay queues are shared between the neurons of a layer. The number of queues is equal to the number of possible delays. Also, since the proposed algorithm assumes that all input spikes are delayed evenly, the total size of all delay queues is equal to the "maximum number of input spikes of the layer in all time-steps \(\times\) the maximum amount of delay". In this way, there is enough space in the queue to keep the delayed spikes for each time step. We estimate the energy overhead from the total number of reads and writes to the delay queues.
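As an illustration of the ring-buffer mechanism of Sec. 3.3.1, the sketch below schedules weighted spikes into per-neuron circular slots and drains the current slot each time step. This is our own simplified rendering of the general idea, not the hardware design; the memory-overhead comments restate the formulas quoted above.

```python
import numpy as np

class RingBufferDelays:
    """Per-neuron ring buffer for synaptic delays (simplified sketch).

    A contribution scheduled with delay d is accumulated into slot
    (head + d) % size; each time step, the slot at `head` is drained
    into the membrane potentials and the head advances by one.
    """

    def __init__(self, n_neurons, max_delay):
        self.buf = np.zeros((n_neurons, max_delay))
        self.head = 0
        self.max_delay = max_delay

    def schedule(self, neuron, delay, weighted_spike):
        # Accumulate a current that must arrive `delay` steps from now.
        assert 0 <= delay < self.max_delay
        slot = (self.head + delay) % self.max_delay
        self.buf[neuron, slot] += weighted_spike

    def tick(self):
        # Drain the current slot: these are this step's delayed currents.
        current = self.buf[:, self.head].copy()
        self.buf[:, self.head] = 0.0
        self.head = (self.head + 1) % self.max_delay
        return current  # to be added to each neuron's membrane potential

# Memory-overhead estimates quoted in the text:
#   ring buffer : n_postsynaptic_neurons_with_delay * max_delay words
#   delay queue : max_input_spikes_per_time_step * max_delay words (shared per layer)
```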
Figure 2: (a) Using a ring buffer per neuron to implement synaptic delays. The unit of delay is the system time-step. (b) Implementation of axon delay by sorting spikes based on the encoded delays into separate delay queues. The queues are shared across neurons in a neuro-synaptic core (not shown in the figure). ## 4 Results We report experiments that demonstrate qualitatively the advantages of training SNN models with axonal delays, and quantitatively the benefits of deploying them in digital neuromorphic processors. The first experiment illustrates that models with axonal delays encode long-term dependencies more effectively than networks with recurrent connections. The second experiment reveals that models with synaptic delays achieve state-of-the-art performance in tasks rich with temporal features while requiring fewer parameters than recurrent models (similar observations were confirmed on other datasets). This points to more compact models that require fewer resources for execution on hardware accelerators. A third experiment quantifies this intuition by means of estimates of energy and memory cost, showing a reduction by an order of magnitude when such models are employed on neuromorphic accelerators, in comparison to _equi-performing_ models with recurrent connections. All models were trained with the deep learning framework PyTorch on an Nvidia GeForce RTX GPU. ### Adding task The _adding task_ is a known benchmark used to evaluate the performance of neural network models on sequence data, such as LSTMs [1] or TCNs [29; 2]. The input data is a stream of random values chosen uniformly in [0,1], together with two randomly selected indices, one in each half of the sequence. The target, which should be computed at the end of the sequence, is the sum of the two values of the stream at the selected indices (a data-generation sketch is given below). To use this task to evaluate generic SNNs, we feed the network through two input channels, one for the number stream and the other for the binary-encoded markers, and then compute the Mean Squared Error (MSE) between the target and the membrane potential of a readout neuron with an infinite threshold. Fig. 4 (top) shows that while both recurrent connectivity and delay synapses enable an SNN to remember the indexed numbers and compute the result, the latter exhibits a more "sensible", interpretable evolution towards the answer. The bottom of the figure, on the other hand, reveals that models with synaptic delays typically converge much faster than traditional ones with recurrent connectivity.
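The adding-task inputs described above can be generated in a few lines; this is our own minimal sketch of the data layout (values channel, markers channel, scalar target), not the authors' data pipeline.

```python
import numpy as np

def make_adding_task(seq_len, batch, rng=None):
    """Generate one batch of the Adding task: two input channels and targets.

    One marker index is drawn in each half of the sequence; the target is
    the sum of the two marked stream values.
    """
    rng = np.random.default_rng(rng)
    values = rng.random((batch, seq_len))                 # uniform in [0, 1]
    markers = np.zeros((batch, seq_len))
    i1 = rng.integers(0, seq_len // 2, size=batch)        # index in first half
    i2 = rng.integers(seq_len // 2, seq_len, size=batch)  # index in second half
    rows = np.arange(batch)
    markers[rows, i1] = 1.0
    markers[rows, i2] = 1.0
    targets = values[rows, i1] + values[rows, i2]
    return values, markers, targets

values, markers, targets = make_adding_task(seq_len=50, batch=8, rng=0)
```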
### SHD task Fig. 5 compares, for different models, the accuracy on the SHD dataset [4] as a function of the number of model parameters and of the number of spikes generated by the model at inference (the number of parameters is a proxy metric for model size/complexity, and the number of spikes is a proxy metric for energy consumption on any hardware accelerator). The comparison includes various models generated with our method while bounding the maximum number of delay synapses per neuron pair retained after pruning ("no pruning" refers to retaining all delay synapses). It also includes, as baselines, two recurrent SoA models from the literature [7] that use the adaptive LIF (ALIF) and LIF neuron models. The observation is that with the training method proposed herein we can generate models that are exceptionally compact and energy-efficient, yet achieve SoA accuracy. These results are further quantified and distilled in Table 1, where a comparison is made with different feed-forward and recurrent SNN architectures found in the literature for the same dataset. ### Energy estimations of hardware implementation Table 2 reports the proposed algorithm's estimated energy consumption and memory footprint for both of the commonplace implementations of delay synapses in existing neuromorphic processors discussed in Section 3.3. The main take-away observation is that the energy and memory overhead from utilizing synaptic delay hardware structures is substantially offset by the far more compact synaptic delay models with sparser activity. The energy estimations are provided only for comparison purposes and are extracted from simulations of digital circuits (SRAM memory accesses and arithmetic operations in float 16b data type). For memory overhead, we assumed that all parameters, neuron states, and spike packets use the same data types and only report the total number of memory words. Simulations are for the CMOS digital technology node GF-22nm FDX, through Cadence software tools. Figure 3: Equations used to calculate the max number of parameters for (a) the delay-based architecture and (b) the recurrent SNN proposed here. Figure 4: Top: An example of the Adding Task for a sequence length T=50, solved by an R-SNN (orange) and a D-SNN (green). Notice how the D-SNN "remembers" both values relevant to the task in a more natural way. Bottom: MSE per training epoch for R-SNN (with recurrent synapses) and D-SNN (with delay synapses) in the Adding task, for two sequence lengths: T=50 and T=500. D-SNNs converge faster and to a smaller error! Figure 5: Effect of synaptic delays on performance (SHD task). Left: number of parameters vs. accuracy. Right: number of spikes vs. accuracy. Red and orange points are recurrently connected SNNs. Colors ranging from green to black are SNNs with axonal delays and different pruning configurations. ## 5 Conclusion We introduced a method for training SNN models with synaptic delays, and we report the benefits of deploying such models in neuromorphic accelerators. The important observation from the resulting trained models is that even a small set of synaptic delays, together with trainable time constants, supersedes the need for complex lateral connectivity and reduces the number of layers and the total number of parameters needed for good performance. This also reduces the memory footprint of these models in neuromorphic accelerators (compared to commonplace RNNs). Future work will focus on _hardware-aware_ training of synaptic delay models for compact mappings on neuromorphic accelerators.
\begin{table} \begin{tabular}{|l|l|c|c|c|c|} \hline **Paper** & **Neuron Type** & **Architecture\({}^{*}\)** & **T** & **Params.** & **Acc.** \\ \hline Eshraghian, 2022 & LIF\({}^{\mathrm{a}}\) & 3000r & 100 & 11160000 & 83.2 \\ \hline Eshraghian, 2022 & LIF\({}^{\mathrm{a}}\) & 3000 & 100 & 2160000 & 66.3 \\ \hline Bauer, 2022 & SRM & 100+1281+1281+201 & 250 & 2100562 & 78.1 \\ \hline Zenke, 2022 & LIF & 1024r & 2000 & 1785356 & 83.2 \\ \hline Fang, 2021 & SRM & 400+400 & 2000 & 448000 & 85.7 \\ \hline Yu, 2022 & LIF\({}^{\mathrm{b}}\) & 400+400 & 1000 & 448000 & 87.0 \\ \hline Zenke, 2021 & LIF & 256r+256r & 500 & 380928 & 82.0 \\ \hline Yin, 2020 & LIF\({}^{\mathrm{a}}\) & 256r & 250 & 249856 & 81.7 \\ \hline Yin, 2021 & LIF\({}^{\mathrm{c}}\) & 128r+128r & 250 & 141312 & **90.7** \\ \hline Zenke, 2022 & LIF & 128r & 2000 & 108544 & 71.4 \\ \hline Perez, 2021 & LIF & 128r & 2000 & 108544 & 82.7 \\ \hline Ours (1) & **LIF** & 64d+64d & 250 & 98560 & **90.4** \\ \hline Ours (2) & **LIF** & 48d+48d & 250 & **66240** & 90.1 \\ \hline \multicolumn{6}{l}{\({}^{*}\) Conventions: r: with lateral recurrency, d: with delay synapses.} \\ \multicolumn{6}{l}{\({}^{\mathrm{a}}\) Binarized. \({}^{\mathrm{b}}\) MAP-SNN. \({}^{\mathrm{c}}\) Adaptive threshold.} \\ \end{tabular} \end{table} Table 1: Comparing accuracy and number of parameters for SHD. \begin{table} \begin{tabular}{|l|c|c|c|c|} \hline **Measurement** & **R1** & **R2** & **D1** & **D2** \\ \hline neurons per hidden layer & 128 & 48 & 8 & 8 \\ \hline number of delays & 1 & 1 & 10 & 5 \\ \hline avg spk/timestep, layer 1 & 8.678 & 6.725 & 1.894 & 1.686 \\ \hline avg spk/timestep, layer 2 & 4.582 & 3.456 & 1.772 & 2.539 \\ \hline max spk/timestep, layer 1 & - & - & 7 & 7 \\ \hline test set accuracy & 81.020 & 80.200 & 82.170 & 80.510 \\ \hline \multicolumn{5}{l}{**Neurosynaptic cost estimation**} \\ \hline energy (uJ) & 20.213 & 7.390 & 2.304 & 1.745 \\ \hline memory (param. count) & 141358 & 41684 & 7876 & 6756 \\ \hline \multicolumn{5}{l}{**Delay queue estimations**} \\ \hline energy overhead (uJ)\({}^{*}\) & - & - & 0.059 & 0.030 \\ \hline mem. overhead (words) & - & - & 1890 & 1800 \\ \hline energy saving factor & 1 & 2.735 & 8.554 & 11.384 \\ \hline memory saving factor & 1 & 3.397 & 14.498 & 16.548 \\ \hline \multicolumn{5}{l}{**Ring buffer estimations**} \\ \hline energy overhead (uJ) & - & - & 0.085 & 0.085 \\ \hline mem. overhead (words) & - & - & 3?80 & 3560 \\ \hline energy saving factor & 1 & 2.735 & 8.463 & 11.046 \\ \hline memory saving factor & 1 & 3.397 & 12.147 & 13.996 \\ \hline \multicolumn{5}{l}{\({}^{*}\)The energy overhead is calculated per inference.} \\ \multicolumn{5}{l}{All networks evaluated for T=250. Columns:} \\ \multicolumn{5}{l}{R1: (Recurrent) LIF 128r+128r.} \\ \multicolumn{5}{l}{R2: (Recurrent) ALIF 48r+48r.} \\ \multicolumn{5}{l}{D1: (Delays) LIF 8d+8d, depth=150, stride=15.} \\ \multicolumn{5}{l}{D2: (Delays) LIF 8d+8d, depth=150, stride=30.} \\ \end{tabular} \end{table} Table 2: Energy and memory estimations for the proposed network, compared to an RSNN of similar accuracy.
2309.06710
Crystal structure prediction using neural network potential and age-fitness Pareto genetic algorithm
While crystal structure prediction (CSP) remains a longstanding challenge, we introduce ParetoCSP, a novel algorithm for CSP, which combines a multi-objective genetic algorithm (MOGA) with a neural network inter-atomic potential (IAP) model to find energetically optimal crystal structures given chemical compositions. We enhance the NSGA-III algorithm by incorporating the genotypic age as an independent optimization criterion and employ the M3GNet universal IAP to guide the GA search. Compared to GN-OA, a state-of-the-art neural potential based CSP algorithm, ParetoCSP demonstrated significantly better predictive capabilities, outperforming by a factor of $2.562$ across $55$ diverse benchmark structures, as evaluated by seven performance metrics. Trajectory analysis of the traversed structures of all algorithms shows that ParetoCSP generated more valid structures than other algorithms, which helped guide the GA to search more effectively for the optimal structures.
Sadman Sadeed Omee, Lai Wei, Jianjun Hu
2023-09-13T04:17:28Z
http://arxiv.org/abs/2309.06710v1
Crystal structure prediction using neural network potential and age-fitness Pareto genetic algorithm ###### Abstract While crystal structure prediction (CSP) remains a longstanding challenge, we introduce ParetoCSP, a novel algorithm for CSP, which combines a multi-objective genetic algorithm (MOGA) with a neural network inter-atomic potential (IAP) model to find energetically optimal crystal structures given chemical compositions. We enhance the NSGA-III algorithm by incorporating the genotypic age as an independent optimization criterion and employ the M3GNet universal IAP to guide the GA search. Compared to GN-OA, a state-of-the-art neural potential based CSP algorithm, ParetoCSP demonstrated significantly better predictive capabilities, outperforming by a factor of \(2.562\) across \(55\) diverse benchmark structures, as evaluated by seven performance metrics. Trajectory analysis of the traversed structures of all algorithms shows that ParetoCSP generated more valid structures than other algorithms, which helped guide the GA to search more effectively for the optimal structures. neural network potential genetic algorithm age-fitness Pareto optimization crystal structure prediction ## 1 Introduction Crystal structure prediction (CSP) is the problem of predicting the most energetically stable structure of a crystal given its chemical composition. Knowing the atomic structure is the most crucial aspect of comprehending crystalline materials. With the structural information of the material, advanced quantum-mechanical methods such as Density Functional Theory (DFT) can be utilized to calculate numerous physical characteristics of the crystal [1]. As the physical and chemical characteristics of a crystal are dictated by the arrangement and composition of its atoms, CSP is critical to finding new materials that possess needed properties such as high thermal conductivity, high compressive strength, high electrical conductivity, or a low refractive index. CSP-based computational materials discovery is significant and has the potential to revolutionize a range of industries, such as those involving electric vehicles, Li-batteries, building construction, energy storage, and quantum computing hardware [2, 3, 4, 5, 6]. For this reason, CSP, along with machine learning (ML)-based inverse design [7, 5, 8, 9, 10], has emerged as one of the most promising methods for finding novel materials. Although there have been notable advancements in the field of CSP, the scientific community has yet to solve this fundamental challenge that has persisted for decades. CSP presents a significant challenge due to the requirement to search through an extensive range of potential configurations to identify the most stable arrangement of atoms of a crystal in a high-dimensional space. The complexity of CSP stems from the combinatorial nature of the optimization challenge, where the number of potential configurations grows exponentially with the number of atoms present in the crystal [1]. Additionally, the prediction of the most stable structure relies on several factors, including temperature, pressure, and chemical composition, further increasing the intricacy of the problem. Historically, the main method for determining crystal structures was through experimental X-ray diffraction (XRD) [11], which is time-consuming, expensive, and sometimes impossible, particularly for materials that are difficult to synthesize.
Computational approaches for CSP provide a faster and more affordable alternative to experimental methods. A typical strategy involves searching for the crystal's lowest-energy atomic arrangement by optimizing its potential energy surface (PES) using different search algorithms. However, in some cases, simpler metrics such as the cohesive energy or the formation energy of the structures can be used instead [4]. The highly non-convex nature of the PES, which can contain a vast number of local minima, reduces the efficiency of the search algorithms. Moreover, finding the global minimum of a PES is categorized as an NP-hard problem [12]. Most research on the CSP problem concentrates on _ab initio_ techniques, which involve exploring the atomic configuration space to locate the most stable structure based on first-principles calculations of the free energy of possible structures [13; 14; 15]. Although these methods are highly accurate, the scalability and the applicability of these ab initio algorithms for predicting crystal structures remain a challenge. These methods are severely constrained because they rely on expensive first-principles density functional theory (DFT) calculations [16; 17] to determine the free energy of candidate structures. Furthermore, these methods are only applicable for predicting structures of comparatively small systems (\(<10-20\) atoms in the unit cell). Although there are inexpensive models available to estimate the free energy, they tend to have a poor correlation with reality, which can result in an inaccurate search [14]. For example, while state-of-the-art (SOTA) graph neural networks (GNNs) have demonstrated the capability to accurately predict the formation energy of candidate structures [18; 19; 20; 21; 22; 23], their performance in predicting non-stable or meta-stable structures is significantly lower, as they are usually trained on stable crystals. Several search algorithms have been applied to the CSP problem, including random sampling [12], simulated annealing [24; 25; 26], meta-dynamics [27; 28], basin hopping [29; 30], minima hopping [31], genetic algorithms (GA) [32; 33; 14; 34], particle swarm optimization (PSO) [15], Bayesian optimization (BO) [35; 36], and deep learning (DL) [37; 38]. Among them, the USPEX algorithm, developed by Glass et al. [14], is a prominent CSP algorithm based on evolutionary principles, using natural selection and reproduction to generate new crystal structures. It incorporates a combination of three operators (heredity, mutation, and permutation) to explore the configuration space. To evaluate candidate structures, it uses ab initio free energy calculations with tools like VASP [39] and SIESTA [40], which are highly accurate but extremely time-consuming. Another important CSP algorithm named CALYPSO was devised by Wang et al. [15], which employs a PSO algorithm to explore the energy landscape of crystal structures and identify the lowest-energy structures. To accomplish this, they developed a special strategy for removing comparable structures and applied symmetry-breaking restrictions to boost search effectiveness. Both the USPEX and CALYPSO methods have been successfully applied to predicting the crystal structures of diverse materials, including those under high-pressure conditions, complex oxides, alloys, etc. Random sampling-based CSP algorithms have also demonstrated their effectiveness. For example, AIRSS, presented by Pickard et al.
[12], describes a scheme that generates different random crystal structures for different types of crystals and conducts DFT calculations on them to determine the most stable one. Another genre of CSP methods is template-based methods [41; 42; 43], which involve finding an existing crystal structure with a similar chemical formula as the template (using heuristic methods, ML methods, etc.) and then replacing some of its atoms with different elements. However, the accuracy of these models is constrained by the diversity and availability of the templates, as well as the complexity of the target compound. Inspired by the recent success of DL-based methods in protein structure prediction [44; 45; 46], a DL-based algorithm, AlphaCrystal [38], has been designed to predict the contact map of a target crystal and then reconstruct its structure via a GA. However, the effectiveness of this model is constrained because its performance relies on the accuracy of the predicted space group, lattice parameters, and distance matrices. Moreover, it ultimately depends on the optimization algorithm for reconstructing the final structure from the contact map, as it is unable to provide end-to-end prediction like DeepMind's AlphaFold2 [45]. Compared to previous DFT-based CSP algorithms such as USPEX and CALYPSO, a major progress in CSP has been the use of machine-learning potential models to replace costly first-principles energy calculations. Cheng et al. [36] developed a CSP framework named GN-OA, in which a graph neural network (GNN) model was first trained to predict the formation energy, and an optimization algorithm was then used to search for the crystal structure with the minimum formation energy, guided by the GNN energy model. They show that the BO search algorithm produces the best results among all optimization algorithms. However, predicting formation energy using GNNs has its drawbacks, as performance largely depends on the dataset the model is trained on. A structure search trajectory analysis [47] also showed that the current BO and PSO in GN-OA tend to generate too many invalid structures, which deteriorates performance. While both USPEX and CALYPSO have been combined with ML potentials for CSP before GN-OA, they were only applicable to small crystal systems such as carbon structures, sodium under pressure, and boron clusters [48; 49] due to the limitations of their ML potential models. Recently, significant progress has been achieved in ML potentials for crystals [50; 51; 52; 53; 54] that can work with multi-element crystals and larger crystal systems. This brings unprecedented opportunities and promise for modern CSP research and materials discovery. For example, the recent deep neural network-based energy potential M3GNet IAP [53] covers \(89\) elements of the periodic table, while the CHGNet [54] model was pretrained on the energies, forces, stresses, and magnetic moments from the Materials Project Trajectory Dataset, consisting of \(\sim 1.5\) million unstable and stable inorganic structures. It is intriguing to explore how well modern CSP algorithms based on these ML potentials can perform. Inspired by this progress, we propose the ParetoCSP algorithm for CSP, which combines the M3GNet potential with the age-fitness Pareto genetic algorithm for efficient structure search.
In this algorithm, candidate structures in the GA population are compared based on both the genotypic age and the formation energy, predicted by a neural network potential such as M3GNet or CHGNet. Compared to the previous GN-OA, we show that the significant global search capability of our ParetoCSP allows it to achieve much better prediction performance. Our contributions in this paper can be summarized as follows: * We develop ParetoCSP, an efficient CSP algorithm that combines a multi-objective GA (NSGA-III), updated with the age-fitness Pareto optimization criterion, with a neural network potential (M3GNet IAP) used to map crystal structures to their final energy. * Our systematic evaluations on \(55\) benchmark crystals show that ParetoCSP outperforms GN-OA by a factor of \(2.562\) in terms of prediction accuracy. * We reinforce GN-OA by replacing its formation energy predictor MEGNet with the M3GNet IAP final energy model and show that it improves the default GN-OA by a factor of \(1.5\) in terms of prediction accuracy. We further demonstrate the significant improvement in the search capability of ParetoCSP by showing that ParetoCSP outperforms the updated GN-OA by a factor of \(1.71\) in terms of prediction accuracy. * We provide a quantitative analysis of the structures generated by ParetoCSP using seven performance metrics, and empirically show that ParetoCSP finds better-quality structures for the test formulas than GN-OA does. * We perform a trajectory analysis of the structures generated by all evaluated CSP algorithms and show that ParetoCSP generates far more valid structures than the GN-OA algorithm, which may have contributed to ParetoCSP's better performance in predicting crystal structures. ## 2 Method ### ParetoCSP: algorithm description The input of our algorithm (ParetoCSP) is the elemental composition of a crystal \(\{c_{i}\}\), where \(i\) is the index of an atom and \(c_{i}\) is the element of the \(i\)-th atom in the unit cell. A periodic crystal structure can be described by its lattice parameters (\(L\)) \(a,b,c\) (representing the unit cell size) and \(\alpha,\beta,\gamma\) (representing angles in the unit cell), the space group, and the atomic coordinates at unique Wyckoff positions. Our algorithm is based on the idea of the GN-OA algorithm [36] with two major upgrades: the multi-objective GA search algorithm and the use of the M3GNet potential for energy calculation. Previous research has shown that incorporating symmetry constraints expedites CSP [36; 55]. Similar to the GN-OA approach, our method also considers crystal structure prediction with symmetry constraints. We incorporate two additional structural features, namely the crystal symmetry \(S\) and the occupancy of the Wyckoff position \(W_{i}\) for each atom \(i\). These features are selected from a collection of \(229\) space groups and associated \(1506\) Wyckoff positions [56]. The method begins by selecting a symmetry \(S\) from the range of \(P2\) to \(P230\), followed by generating lattice parameters \(L\) within the chosen symmetry. Next, a combination of Wyckoff positions \(\{W_{i}\}\) is selected to fulfill the specified number of atoms in the cell. The atomic coordinates \(\{R_{i}\}\) are then determined based on the chosen Wyckoff positions \(\{W_{i}\}\) and lattice parameters \(L\). To generate crystal structures, we need to tune the \(S\), \(\{W_{i}\}\), \(L\), and \(\{R_{i}\}\) variables (an illustrative encoding is sketched below).
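For illustration, the variables just listed can be collected into a simple genotype record like the one below. This is our own sketch of a plausible encoding, not the authors' implementation; all field names are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class CrystalGenotype:
    """One candidate structure as optimized by the GA (illustrative only)."""
    space_group: int            # S, a space group number in 2..230
    lattice: List[float]        # L = (a, b, c, alpha, beta, gamma)
    wyckoff: List[int]          # W_i, one Wyckoff position index per atom
    coords: List[List[float]]   # R_i, fractional coordinates per atom
    age: int = 1                # genotypic age used by AFPO
    energy: float = field(default=float("inf"))  # predicted final energy/atom
```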
By selecting different combinations of \(S\), \(W_{i}\), \(L\), and \(R_{i}\), we can generate a comprehensive array of possible crystal structures for the given \(c_{i}\). In theory, evaluating the energy of all these structures and selecting the one with the lowest energy would yield the optimal crystal arrangement. However, exhaustively enumerating all these structures becomes practically infeasible due to the staggering number of potential combinations. To address this complexity, a more practical approach involves iteratively sampling candidate structures from the design space, under the assumption that one of the sampled structures will emerge as the most stable and optimal solution. Consequently, we adopt an optimization strategy to guide this search process towards identifying the structure with the lowest energy. In particular, we utilize a genetic algorithm, NSGA-III [57; 58], improved by incorporating AFPO [59] to enhance its performance and robustness. First, we generate \(n\) initial random structures. We then assign them an age of \(1\) and convert them into crystal graphs. There are multiple approaches to encoding crystals as graphs [60; 18; 19; 61; 62]. In short, we can consider each atom of the crystal as a node of the graph, and interactions between atoms (e.g., bonds) can be encoded as edges. Interactions can be limited to a certain cutoff range to define more realistic graphs. Each node and edge needs to be assigned a feature vector for the DNN to learn the specific property. After generating the initial structures, we predict their final energy/atom using the M3GNet universal IAP [53]. Next, we calculate fitness considering both the energy and the age of the generated crystals (two independent dimensions in the Pareto front). After that, we check whether the total number of generations is less than a certain threshold \(\mathcal{G}\). If so, we increase the age of all individuals by \(1\). This is followed by Pareto tournament selection, which selects the parents among the individual structures for the next generation. We usually set the tournament size to \(2\), which selects half of the population as parents. Next, we perform the genetic operations: crossover and mutation. After crossover, we update the age of each individual by inheriting the maximum age of the corresponding parents. Similarly, after mutation, individual ages are updated by inheriting the age of their respective parents. These operations result in a new population of \(n\) individuals for the next generation. The concept of age ensures a diverse population containing both old and young individuals, and effectively prevents convergence to local optima [59]. We then increase the generation number and repeat the whole process, calculating the final energy/atom of each structure, while the generation number is \(\leq\) the threshold \(\mathcal{G}\). Figure 1: **The flowchart of the ParetoCSP algorithm.** It starts by generating \(n\) random crystals and assigning them an age of \(1\), where \(n\) denotes the population size. One complete generation then goes through the following steps: calculating the energy of the structures and the fitness, selecting parents, performing genetic operations, and updating the age. After a certain threshold of \(\mathcal{G}\) generations, the lowest-energy structure from the multi-dimensional Pareto front is chosen and further relaxed and symmetrized to obtain the final optimal structure. The genetic encoding is shown in the lower right corner of the flowchart. It contains the lattice parameters \(a\), \(b\), \(c\), \(\alpha\), \(\beta\), and \(\gamma\), the space group \(S\), the Wyckoff position combination \(W_{i}\), and the atomic coordinates \(R_{i}\) of the atom indexed by \(i\).
After finishing \(\mathcal{G}\) generations, we obtain a set of \(\mathcal{F}\) non-dominated solutions on the Pareto front. We select the solution with the lowest final energy per atom as the optimal solution. We further relax the structure using the structure relaxation method of the M3GNet IAP, which produces a more refined structure with a lower final energy per atom. Finally, we apply a symmetrization operation to the structure and output the final symmetrized structure. Figure 1 shows the flowchart of our ParetoCSP algorithm. ### AFPO: Age-fitness Pareto optimization One of the key requirements for a GA to achieve robust global search is to maintain the diversity of the population. Here, we employ the multi-objective genetic algorithm AFPO by Schmidt and Lipson [59] to achieve this goal. The AFPO algorithm is inspired by the idea of the age-layered population structure (ALPS) [63; 64], which divides the evolving population into layers based on how long the genetic material has been present in the population, so that competitions happen at different fitness levels, avoiding the occurrence of premature convergence. The _age_ of an individual is defined as how long the oldest part of its genotype has been present in the population [65]. Instead of partitioning the population into layers as done in the HFC algorithm [63], AFPO uses age as an explicit optimization criterion (an independent dimension in a multi-objective Pareto front). A solution is considered optimal if it has both higher fitness and lower age compared to other solutions. This enables the algorithm to maintain diversity in the population and avoid premature convergence to local optima, and to find better solutions at a faster convergence speed [59]. The AFPO algorithm starts by randomly initializing a population of \(N\) individuals, all assigned an age of one. The fitness of an individual is evaluated by calculating its performance for all objectives. The fitness values are then used to rank the individuals based on their Pareto dominance. The algorithm then updates and assigns the age of each individual. The age of an individual is increased by one with each generation. When crossover or mutation occurs, the individual's age is set to the maximum age of its parents. The algorithm uses a parameter called the tournament size \(K\), which determines the number of individuals that compete for selection. Specifically, \(K\) individuals are selected at random; the Pareto front among them is formed, and any dominated individuals are eliminated. After that, crossovers and mutations are applied to the parents to generate offspring. The objective function values for each offspring are evaluated and the updated ages are assigned to each offspring. The newly generated offspring replace some of the older individuals in the population based on their age and fitness values. To avoid premature convergence towards sub-optimal solutions, a few new random individuals are added to the population in each generation to maintain diversity. The algorithm continues to iterate through the above steps until a stopping criterion is met, such as a maximum number of generations or a desired level of convergence. For more details, readers are referred to reference [65].
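The selection logic described above can be sketched as follows, reusing the illustrative `CrystalGenotype` fields `age` and `energy` (lower is better for both). This is a simplified, single-parent rendition of one AFPO generation, not the authors' code; `make_offspring` is a hypothetical placeholder for the crossover/mutation operators.

```python
import random

def dominates(a, b):
    """a dominates b if it is no worse in both objectives (energy, age)
    and strictly better in at least one."""
    return (a.energy <= b.energy and a.age <= b.age and
            (a.energy < b.energy or a.age < b.age))

def afpo_generation(population, make_offspring, n_random=2):
    """One illustrative AFPO step: age everyone, keep the Pareto set of
    (fitness, age), refill with offspring plus a few random newcomers."""
    for ind in population:
        ind.age += 1
    # Non-dominated set over the two objectives.
    front = [p for p in population
             if not any(dominates(q, p) for q in population if q is not p)]
    children = []
    while len(front) + len(children) + n_random < len(population):
        parent = random.choice(front)
        child = make_offspring(parent)   # crossover/mutation happen here
        child.age = parent.age           # offspring inherit the parental age
        children.append(child)
    # Fresh random individuals (age 1) keep the population diverse.
    newcomers = [make_offspring(None) for _ in range(n_random)]
    return front + children + newcomers
```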
### NSGA-III: multi-objective GA We use the NSGA-III [57] algorithm to implement the age-fitness based genetic algorithm AFPO. NSGA-III is an improved version of the popular multi-objective evolutionary algorithm NSGA-II [66]. Here we describe the NSGA-III framework as defined in references [57; 58]. The NSGA-III algorithm begins with defining a group of reference points. To create an offspring population \(Q_{i}\) at generation \(i\), the current parent population \(P_{i}\) undergoes genetic operations. The resulting population \(P_{i}\cup Q_{i}\) is then sorted based on nondomination levels (\(F_{1},F_{2}\), and so on). The algorithm saves all members up to the last fully accommodated level \(F_{k}\) (all solutions from level \(k+1\) onward are rejected) in a set called \(\delta_{i}\). The individuals from \(\delta_{i}\setminus F_{k}\) have already been chosen for the next set of candidates, while the remaining spots are filled by individuals from \(F_{k}\). The selection process of NSGA-III is substantially altered from the approach used in NSGA-II. First, the objective values and reference points are normalized. Second, each member in \(\delta_{i}\) is associated with a reference point based on its distance to the reference line formed by connecting the ideal point to that reference point. This method enables the determination of the number and positions of population members linked to each supplied reference point in \(\delta_{i}\setminus F_{k}\). Next, a niching technique is applied to pick individuals from \(F_{k}\) that are underrepresented in \(\delta_{i}\setminus F_{k}\), based on the results of the association process explained earlier. Reference points with the fewest associations in the \(\delta_{i}\setminus F_{k}\) population are identified, and corresponding points in the \(F_{k}\) set are searched. These selected members from \(F_{k}\) are then added to the population, one by one, until the required population size is achieved. Thus, in contrast to NSGA-II, NSGA-III sustains diversity among population members by incorporating a set of well-distributed reference points, provided initially and updated adaptively during the algorithm's execution [58]. More implementation details can be found in reference [67]. ### M3GNet Inter-atomic Potential (IAP) The energy potential is one of the key components of modern CSP algorithms. Here we use M3GNet [53], which is a GNN-based ML potential model that explicitly incorporates \(3\)-body interactions. This model combines the graph-based DL inter-atomic potential and the many-body features found in traditional IAPs with flexible graph material representations. One notable distinction of M3GNet from previous material graph implementations is the inclusion of atom coordinates and the \(3\times 3\) lattice matrix in crystals. These additions are essential for obtaining tensorial quantities like forces and stresses through the use of auto-differentiation. In the M3GNet model, position-included graphs serve as inputs. Graph features include embedded atomic numbers of elements and pair bond distances. As in traditional GNNs, the node and the edge features are updated via graph convolution operations. Our M3GNet potential was trained using both stable and unstable structures so that it can capture the difference between the two well.
The precise and efficient relaxation of diverse crystal structures and the accurate energy prediction achieved by the M3GNet-based relaxation algorithm make it well-suited for large-scale and fast crystal structure prediction. ### Evaluation criteria Many earlier studies [15; 14; 12] have depended on manual structural examination and ab initio formation energy comparison to assess the performance of a Crystal Structure Prediction (CSP) algorithm. However, these metrics do not address the situation in which an algorithm does not find the exact solution for a crystal, and they do not quantify how much the generated structure deviates from the ground truth structure. Previous works usually did not quantitatively report how good or bad a solution is. Also, if two algorithms fail to generate the exact crystal structure, these metrics do not describe which one is closer to finding the optimal solution. Recently, Wei et al. [47] proposed a set of performance metrics to measure CSP performance, which greatly alleviates this issue. We used seven performance metrics from that work to measure the performance of our CSP algorithm and the baselines. The required data are the crystallographic information files (CIF) of both the optimized and relaxed final structure generated by the CSP algorithm and its corresponding ground truth stable structure. Details about these performance metrics can be found in [47]. They are briefly listed below:

1. Energy distance (ED)
2. Wyckoff position fraction coordinate root mean squared error (W\({}_{rmse}\))
3. Wyckoff position fraction coordinate root mean absolute error (W\({}_{mae}\))
4. Sinkhorn distance (SD)
5. Chamfer distance (CD)
6. Hausdorff distance (HD)
7. Crystal fingerprint distance (FP)

## 3 Results Our objective is to demonstrate the effectiveness of ParetoCSP for crystal structure prediction by showing that the multi-objective AFPO GA enables a much more effective structure search method than BO and PSO, and that M3GNet IAP is a more powerful crystal energy predictor than the previous MEGNet model. ### Benchmark set description We selected a diverse set of \(55\) stable structures available in the Materials Project database [68] with no more than \(20\) atoms. Among them, \(20\) are binary crystals, \(20\) are ternary crystals, and \(15\) are quaternary crystals. We chose the benchmark set based on multiple factors such as diversity of elements, diversity of space groups, special types of materials (e.g., perovskites), and usage in previous CSP literature. Supplemental Fig. S1a shows the diversity of the elements used in the benchmark set. Table 1 shows the detailed information about the \(55\) chosen test crystals used in this work. \begin{table} \begin{tabular}{c c c c c c} \hline \hline **Composition** & **No. of** & **Space group** & **Formation energy** & **Final energy** & **M3GNet final energy** \\ & **atoms** & & **(eV/atom)** & **(eV/atom)** & **(eV/atom)** \\ \hline \hline \end{tabular} \end{table} Table 1: **Details of the \(55\) benchmark crystals used in this work.** The first \(20\) crystals are binary, the second \(20\) are ternary, and the last \(15\) are quaternary; each of these types of crystals is separated by single horizontal lines. We can see that the ground truth final energies and the predicted final energies by M3GNet IAP are very close, demonstrating M3GNet’s effectiveness as an energy predictor.
\begin{tabular}{l l l l l l} \hline TiCo & 2 & \(Pm-3m\) & \(-0.401\) & \(-7.9003\) & \(-7.8986\) \\ CrPd\({}_{3}\) & 4 & \(Pm-3m\) & \(-0.074\) & \(-6.3722\) & \(-6.4341\) \\ GaNi\({}_{3}\) & 4 & \(Pm-3m\) & \(-0.291\) & \(-5.3813\) & \(-5.3806\) \\ ZrSe\({}_{2}\) & 3 & \(P-3m1\) & \(-1.581\) & \(-6.5087\) & \(-6.5077\) \\ MnAl & 2 & \(Pm-3m\) & \(-0.225\) & \(-6.6784\) & \(-6.7503\) \\ NiS\({}_{2}\) & 6 & \(P6_{3}/mmc\) & \(-0.4\) & \(-4.7493\) & \(-4.9189\) \\ TiO\({}_{2}\) & 6 & \(P4_{2}/mnm\) & \(-3.312\) & \(-8.9369\) & \(-8.9290\) \\ NiCl & 4 & \(P6_{3}mc\) & \(-0.362\) & \(-3.8391\) & \(-3.8899\) \\ AlNi\({}_{3}\) & 4 & \(Pm-3m\) & \(-0.426\) & \(-5.7047\) & \(-5.6909\) \\ CuBr & 4 & \(P6_{3}/mmc\) & \(-0.519\) & \(-3.0777\) & \(-3.0908\) \\ VPt\({}_{3}\) & 8 & \(I4/mmm\) & \(-0.443\) & \(-7.2678\) & \(-7.2638\) \\ MnCo & 2 & \(Pm-3m\) & \(-0.0259\) & \(-7.6954\) & \(-7.6963\) \\ BN & 4 & \(P6_{3}/mmc\) & \(-1.411\) & \(-8.7853\) & \(-8.7551\) \\ GeMo\({}_{3}\) & 8 & \(Pm-3n\) & \(-0.15\) & \(-9.4398\) & \(-9.3588\) \\ Ca\({}_{3}\)V & 8 & \(I4/mmm\) & \(0.481\) & \(-3.2942\) & \(-3.1638\) \\ Ga\({}_{2}\)Te\({}_{3}\) & 20 & \(Cc\) & \(-0.575\) & \(-3.4181\) & \(-3.4160\) \\ CoAs\({}_{2}\) & 12 & \(P2_{1}/c\) & \(-0.29\) & \(-5.8013\) & \(-5.7964\) \\ Li\({}_{2}\)Al & 12 & \(Cmcm\) & \(-0.163\) & \(-2.6841\) & \(-2.6623\) \\ VS & 4 & \(P6_{3}/mmc\) & \(-0.797\) & \(-7.1557\) & \(-7.3701\) \\ Ba\({}_{2}\)Hg & 6 & \(I4/mmm\) & \(-0.384\) & \(-1.7645\) & \(-1.7582\) \\ \hline SrTiO\({}_{3}\) & 5 & \(Pm-3m\) & \(-3.552\) & \(-8.0249\) & \(-8.0168\) \\ Al\({}_{2}\)FeCo & 4 & \(P4/mmm\) & \(-0.472\) & \(-6.2398\) & \(-6.2462\) \\ GaBN\({}_{2}\) & 4 & \(P-4m2\) & \(-0.675\) & \(-7.0893\) & \(-7.0918\) \\ AcMnO\({}_{3}\) & 5 & \(Pm-3m\) & \(-2.971\) & \(-7.1651\) & \(-7.8733\) \\ BaTiO\({}_{3}\) & 5 & \(Pm-3m\) & \(-2.995\) & \(-8.1070\) & \(-8.1012\) \\ CdCuN & 3 & \(P-6m2\) & \(0.249\) & \(-4.0807\) & \(-4.0228\) \\ HoHSe & 3 & \(P-6m2\) & \(-1.65\) & \(-5.2538\) & \(-5.2245\) \\ Li\({}_{2}\)ZnSi & 8 & \(P6_{3}/mmc\) & \(0.0512\) & \(-2.5923\) & \(-2.6308\) \\ Cd\({}_{2}\)AgPt & 16 & \(Fm-3m\) & \(-0.195\) & \(-2.8829\) & \(-2.8415\) \\ AlCrFe\({}_{2}\) & 4 & \(P4/mmm\) & \(-0.157\) & \(-7.7417\) & \(-7.6908\) \\ ZnCdPt\({}_{2}\) & 4 & \(P4/mmm\) & \(-0.444\) & \(-4.0253\) & \(-4.0164\) \\ EuAlSi & 3 & \(P-6m2\) & \(-0.475\) & \(-6.9741\) & \(-6.9345\) \\ Sc\({}_{3}\)TiC & 5 & \(Pm-3m\) & \(-0.622\) & \(-6.7381\) & \(-6.7419\) \\ GaSeCl & 12 & \(Pnnm\) & \(-1.216\) & \(-3.6174\) & \(-3.6262\) \\ CaAgN & 3 & \(P-6m2\) & \(-0.278\) & \(-4.5501\) & \(-4.7050\) \\ BaAlGe & 3 & \(P-6m2\) & \(-0.476\) & \(-3.9051\) & \(-3.9051\) \\ K\({}_{2}\)PdS\({}_{2}\) & 10 & \(Immm\) & \(-1.103\) & \(-4.0349\) & \(-4.0066\) \\ KCrO\({}_{2}\) & 8 & \(P6_{3}/mmc\) & \(-2.117\) & \(-6.4452\) & \(-6.4248\) \\ TiZnCu\({}_{2}\) & 4 & \(P4/mmm\) & \(-0.0774\) & \(-4.4119\) & \(-4.4876\) \\ Ta\({}_{2}\)N\({}_{3}\)O & 6 & \(P6/mmm\) & \(-0.723\) & \(-9.3783\) & \(-9.3848\) \\ \hline AgBiSeS & 4 & \(P4/mmm\) & \(-0.404\) & \(-3.7363\) & \(-3.8289\) \\ ZrTaNO & 4 & \(P-6m2\) & \(-1.381\) & \(-9.5450\) & \(-9.5429\) \\ \hline \end{tabular} ### Performance analysis of ParetoCSP The default version of ParetoCSP uses the M3GNet universal IAP as the final energy evaluator for the candidate structures to guide the AFPO-based GA to identify the most stable structure with the minimum energy.
Our algorithm ParetoCSP predicted the exact structures for \(17\) out of \(20\) binary crystals (\(85\%\)), \(16\) out of \(20\) ternary crystals (\(80\%\)), and \(8\) out of \(15\) quaternary crystals (\(53.333\%\)) (see Table 2). Overall, ParetoCSP achieved an accuracy of \(74.55\%\) over all \(55\) test crystals, which is the highest among all evaluated algorithms (\(\approx 1.71\times\) the accuracy of the next best algorithm). Details on the comparison with other algorithms and energy methods are discussed in Subsections 3.3 and 3.4. The exact accuracy results for all algorithms are presented in Table 2. All the structures were labeled ✓ (exact) or ✗ (non-exact) based on manual inspection, as was predominantly done in past literature [36; 15].

We observed that ParetoCSP successfully found the most stable structures of all cubic and hexagonal binary crystals and most tetragonal binary crystals in the benchmark dataset. The three binary crystals whose exact structures ParetoCSP failed to identify are Ga\({}_{2}\)Te\({}_{3}\) (monoclinic), Li\({}_{2}\)Al (orthorhombic), and Ba\({}_{2}\)Hg (tetragonal). For ternary crystals, ParetoCSP successfully determined the exact stable structures for all tetragonal crystals and most cubic and hexagonal crystals. However, there were four instances where the prediction failed, namely Li\({}_{2}\)ZnSi (hexagonal), Cd\({}_{2}\)AgPt (cubic), GaSeCl (orthorhombic), and K\({}_{2}\)PdS\({}_{2}\) (orthorhombic). In the case of quaternary crystals, ParetoCSP was most successful on hexagonal and tetragonal structures. Li\({}_{2}\)MgCdP\({}_{2}\) (tetragonal), Sr\({}_{2}\)BBrN\({}_{2}\) (trigonal), ZrCuSiAs (tetragonal), NdNiSnH\({}_{2}\) (hexagonal), MnCoSnRh (cubic), Mg\({}_{2}\)ZnB\({}_{2}\)Ir\({}_{5}\) (tetragonal), and Ba\({}_{2}\)CeTaO\({}_{6}\) (monoclinic) are the seven quaternary failure cases of ParetoCSP in terms of finding exact structures. Based on these observations, we can claim that ParetoCSP combined with the M3GNet IAP demonstrated notable efficacy in predicting cubic, hexagonal, and tetragonal crystalline materials, whereas its performance on monoclinic and orthorhombic crystals is comparatively less successful. This can be attributed to the higher number of degrees of freedom of the monoclinic and orthorhombic crystal systems compared to simpler crystal systems such as cubic or hexagonal. Monoclinic and orthorhombic crystals also exhibit a varied range of complex structural motifs, which makes it difficult for CSP algorithms to predict their exact structures. However, this does not diminish the claim that our algorithm is the best among the four ML potential based CSP algorithms evaluated here; as we show later, the other CSP algorithms faced similar challenges. Ground truth and predicted structures of sample crystals, containing examples of both successful and unsuccessful predictions, are shown in Fig. 2 using the VESTA tool.

Now, we analyze the performance of ParetoCSP in terms of the quantitative performance metrics. As mentioned before, we used a set of seven performance metrics to evaluate the prediction performance of the different CSP algorithms. The values of each performance metric for all \(55\) chosen crystals are shown in Table 3. Ideally, all the performance metric values should be zero if the predicted structure and the ground truth structure are exactly the same.
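To illustrate the flavor of the site-based metrics, below is a minimal sketch of symmetric Hausdorff and Chamfer distances between two sets of Cartesian atomic coordinates. It is simplified relative to the exact definitions of Wei et al. [47], and the coordinates shown are placeholders.

```python
# Sketch of two site-based distance metrics (HD and CD) between predicted and
# ground-truth atomic coordinates; simplified relative to the definitions in [47].
import numpy as np
from scipy.spatial.distance import cdist, directed_hausdorff

def hausdorff_distance(p, q):
    """Symmetric Hausdorff distance between two (N x 3) point sets."""
    return max(directed_hausdorff(p, q)[0], directed_hausdorff(q, p)[0])

def chamfer_distance(p, q):
    """Average nearest-neighbour distance, taken in both directions."""
    d = cdist(p, q)                              # pairwise Euclidean distances
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

# Toy example with a 2-atom cell (placeholder coordinates):
predicted = np.array([[0.00, 0.00, 0.00], [1.46, 1.46, 1.46]])
ground_truth = np.array([[0.00, 0.00, 0.00], [1.45, 1.45, 1.45]])
print(hausdorff_distance(predicted, ground_truth))
print(chamfer_distance(predicted, ground_truth))
```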
For the failed predictions, we identified the metric values that indicate the _poor quality_ of the predictions. The process for determining them involved identifying the highest value of each performance metric among all successful predictions (we name these _satisfactory_ values) and then selecting, among the failed predictions, the values that exceeded them. We have highlighted these values in bold letters in Table 3. We noticed that, with the exception of K\({}_{2}\)PdS\({}_{2}\) and ZrCuSiAs, the remaining \(12\) failed cases demonstrated higher energy distance values compared to the satisfactory energy distance value (\(0.7301\) eV/atom), indicating non-optimal predicted structures. Similarly, for the Sinkhorn distance (SD), apart from ZrCuSiAs, the remaining \(13\) unsuccessful predictions exhibited significantly higher values than the satisfactory SD value (\(5.6727\) Å), suggesting poor prediction quality. For W\({}_{rmse}\) and W\({}_{mae}\), we assigned a cross (\(\times\)) to indicate that the predicted structure and the target structure do not have similar Wyckoff position configurations in the symmetrized structures and thus these metrics cannot be calculated. We observed that \(11\) out of \(14\) failed predictions (symmetrized) do not have Wyckoff positions similar to those of the ground truth symmetrized structure, indicating unsuccessful predictions. However, for the Chamfer distance (CD) metric, only \(6\) out of \(14\) failed predictions displayed higher quantities than the satisfactory CD value (\(3.8432\) Å), indicating that CD was not the most suitable metric for measuring the prediction quality of crystal structures for our algorithm. In contrast, the Hausdorff distance (HD) showed that \(10\) out of \(14\) failed predictions had higher values than the satisfactory HD value (\(3.7665\) Å). Notably, the only performance metric that consistently distinguished between optimal and non-optimal structures across all failed predictions is the crystal fingerprint (FP) metric (satisfactory value: \(0.9943\)), demonstrating its effectiveness in capturing the differences between these structures. Taken together, the metrics provided strong evidence of the non-optimal nature of the \(14\) failed structures.

\begin{table} \begin{tabular}{l c c c c} \hline \hline
\multirow{2}{*}{**Composition**} & **ParetoCSP** & **ParetoCSP** & **GN-OA** & **GN-OA** \\
 & **with M3GNet (Default)** & **with MEGNet** & **with M3GNet** & **with MEGNet (Default)** \\ \hline \hline
TiCo & ✓ & ✓ & ✓ & ✗ \\
CrPd\({}_{3}\) & ✓ & ✓ & ✗ & ✗ \\
GaNNi\({}_{3}\) & ✓ & ✓ & ✓ & ✓ \\
ZrSe\({}_{2}\) & ✓ & ✓ & ✓ & ✓ \\
MnAl & ✓ & ✓ & ✓ & ✓ \\
NiS\({}_{2}\) & ✓ & ✗ & ✓ & ✓ \\
TiO\({}_{2}\) & ✓ & ✓ & ✓ & ✓ \\
NiCl & ✓ & ✗ & ✗ & ✗ \\
AlNi\({}_{3}\) & ✓ & ✓ & ✓ & ✓ \\
CuBr & ✓ & ✗ & ✗ & ✗ \\
VPt\({}_{3}\) & ✓ & ✓ & ✓ & ✓ \\
MnCo & ✓ & ✓ & ✓ & ✓ \\
BN & ✓ & ✓ & ✓ & ✓ \\
GeMo\({}_{3}\) & ✓ & ✓ & ✓ & ✓ \\
Ca\({}_{3}\)V & ✓ & ✓ & ✗ & ✗ \\
Ga\({}_{2}\)Te\({}_{3}\) & ✗ & ✗ & ✗ & ✗ \\
CoAs\({}_{2}\) & ✓ & ✗ & ✗ & ✗ \\
Li\({}_{2}\)Al & ✗ & ✗ & ✗ & ✗ \\
VS & ✓ & ✗ & ✓ & ✗ \\
Ba\({}_{2}\)Hg & ✗ & ✗ & ✗ & ✗ \\ \hline
SrTiO\({}_{3}\) & ✓ & ✓ & ✓ & ✓ \\
Al\({}_{2}\)FeCo & ✓ & ✓ & ✗ & ✗ \\
GaBN\({}_{2}\) & ✓ & ✓ & ✗ & ✗ \\
AcMnO\({}_{3}\) & ✓ & ✓ & ✓ & ✓ \\
BaTiO\({}_{3}\) & ✓ & ✓ & ✓ & ✓ \\
CdCuN & ✓ & ✗ & ✗ & ✗ \\
HoHSe & ✓ & ✗ & ✓ & ✗ \\
Li\({}_{2}\)ZnSi & ✗ & ✗ & ✗ & ✗ \\
Cd\({}_{2}\)AgPt & ✗ & ✗ & ✗ & ✗ \\
AlCrFe\({}_{2}\) & ✓ & ✗ & ✗ & ✗ \\
ZnCdPt\({}_{2}\) & ✓ & ✗ & ✗ & ✗ \\
EuAlSi & ✓ & ✗ & ✓ & ✓ \\
Sc\({}_{3}\)TiC & ✓ & ✓ & ✓ & ✓ \\
GaSeCl & ✗ & ✗ & ✗ & ✗ \\
CaAgN & ✓ & ✗ & ✓ & ✗ \\
BaAlGe & ✓ & ✓ & ✓ & ✗ \\
K\({}_{2}\)PdS\({}_{2}\) & ✗ & ✗ & ✗ & ✗ \\
KCrO\({}_{2}\) & ✓ & ✗ & ✗ & ✗ \\
TiZnCu\({}_{2}\) & ✓ & ✓ & ✓ & ✓ \\
Ta\({}_{2}\)N\({}_{3}\)O & ✓ & ✗ & ✗ & ✗ \\ \hline
AgBiSeS & ✓ & ✓ & ✗ & ✗ \\
ZrTaNO & ✓ & ✗ & ✓ & ✗ \\
MnAlCuPd & ✓ & ✗ & ✗ & ✗ \\
CsNaICl & ✓ & ✗ & ✓ & ✗ \\
DyThCN & ✓ & ✗ & ✓ & ✗ \\
Li\({}_{2}\)MgCdP\({}_{2}\) & ✗ & ✗ & ✗ & ✗ \\
SrWNO\({}_{2}\) & ✓ & ✗ & ✗ & ✗ \\
Sr\({}_{2}\)BBrN\({}_{2}\) & ✗ & ✗ & ✗ & ✗ \\
ZrCuSiAs & ✗ & ✗ & ✗ & ✗ \\
NdNiSnH\({}_{2}\) & ✗ & ✗ & ✗ & ✗ \\
MnCoSnRh & ✗ & ✗ & ✗ & ✗ \\
Mg\({}_{2}\)ZnB\({}_{2}\)Ir\({}_{5}\) & ✗ & ✗ & ✗ & ✗ \\
AlCr\({}_{4}\)GaC\({}_{2}\) & ✓ & ✓ & ✗ & ✗ \\
Y\({}_{3}\)Al\({}_{3}\)NiGe\({}_{2}\) & ✓ & ✗ & ✗ & ✗ \\
Ba\({}_{2}\)CeTaO\({}_{6}\) & ✗ & ✗ & ✗ & ✗ \\ \hline \hline
\multirow{4}{*}{Accuracy} & Overall: **74.55\%** & Overall: **40\%** & Overall: **43.636\%** & Overall: **29.091\%** \\
 & Binary: **85\%** & Binary: **60\%** & Binary: **60\%** & Binary: **50\%** \\
 & Ternary: **80\%** & Ternary: **40\%** & Ternary: **45\%** & Ternary: **30\%** \\
 & Quaternary: **53.333\%** & Quaternary: **13.333\%** & Quaternary: **13.333\%** & Quaternary: **0\%** \\ \hline \hline
\end{tabular} \end{table} Table 2: **Performance comparison of ParetoCSP with baseline algorithms.** Successful and failed predictions via manual inspection are denoted by ✓ and ✗, respectively. ParetoCSP with M3GNet achieved the highest success rate in finding the exact structures of these crystals; GN-OA with M3GNet achieved the second-best success rate. ParetoCSP with MEGNet performed third-best, while GN-OA with MEGNet performed the poorest. These results highlight the significant impact of using the M3GNet IAP as crystal final energy predictor and structure relaxer, and the effectiveness of the AFPO-based GA as a structure search function.

Figure 2: **Sample structure predictions by ParetoCSP.** Every ground truth structure is followed by the predicted structure. (a) - (p) show that the structures of MnAl, ZrSe\({}_{2}\), GeMo\({}_{3}\), SrTiO\({}_{3}\), Ta\({}_{2}\)N\({}_{3}\)O, and GaBN\({}_{2}\) were successfully predicted, while (q) - (t) show that ParetoCSP was unable to predict the structures of GaSeCl and NdNiSnH\({}_{2}\). All structures were visualized using VESTA. For better visualization, we set the fractional coordinate ranges of all axes to a maximum of \(3\) for Ta\({}_{2}\)N\({}_{3}\)O, GaBN\({}_{2}\), and GaSeCl, and we used the space-filling style for Ta\({}_{2}\)N\({}_{3}\)O and GaSeCl. For all other structures, we set the fractional coordinate ranges of all axes to a maximum of \(1\) and used the ball-and-stick style.

\begin{table} \begin{tabular}{l c c c c c c c} \hline \hline
**Crystal** & **ED** & \(W_{\mathbf{rmse}}\) & \(W_{\mathbf{mae}}\) & **SD** & **CD** & **HD** & **FP** \\ \hline \hline
TiCo & 0.0009 & 0.0 & 0.0 & 0.007 & 0.007 & 0.007 & 0.0 \\
CrPd\({}_{3}\) & 0.0071 & 0.0 & 0.0 & 0.0408 & 0.0204 & 0.0136 & 0.0 \\
GaNNi\({}_{3}\) & 0.0355 & 0.0 & 0.0 & 0.0839 & 0.042 & 0.028 & 0.0 \\
ZrSe\({}_{2}\) & 0.0206 & 0.0062 & 0.0025 & 0.6353 & 0.4235 & 0.5848 & 0.3243 \\
MnAl & 0.0 & 0.0 & 0.0 & 0.0002 & 0.0002 & 0.0002 & 0.0 \\
NiS\({}_{2}\) & 0.2016 & 0.2889 & 0.2303 & 5.6727 & 3.8432 & 3.7665 & 0.269 \\
TiO\({}_{2}\) & 0.6931 & 0.2304 & 0.1431 & 4.209 & 2.8535 & 1.8551 & 0.9793 \\
NiCl & 0.3284 & 0.2562 & 0.1723 & 1.3811 & 2.3407 & 1.1495 & 0.6431 \\
AlNi\({}_{3}\) & 0.0234 & 0.0 & 0.0 & 0.0727 & 0.0363 & 0.0242 & 0.0 \\
CuBr & 0.3225 & 0.2521 & 0.1784 & 1.8724 & 2.5043 & 1.0065 & 0.3054 \\
VPt\({}_{3}\) & 0.2415 & 0.3235 & 0.2411 & 1.3424 & 0.2395 & 0.2805 & 0.1772 \\
MnCo & 0.0 & 0.0 & 0.0 & 0.0001 & 0.0001 & 0.0001 & 0.0 \\
BN & 0.3643 & 0.4026 & 0.2454 & 2.513 & 1.947 & 2.608 & 0.8948 \\
GeMo\({}_{3}\) & 0.0401 & 0.0 & 0.0 & 0.1894 & 0.0473 & 0.0325 & 0.0 \\
Ca\({}_{3}\)V & 0.4592 & 0.2048 & 0.1149 & 3.3111 & 2.8356 & 3.6542 & 0.019 \\
Ga\({}_{2}\)Te\({}_{3}\) & \(\mathbf{2.0112}\) & \(\mathbf{\times}\) & \(\mathbf{\times}\) & \(\mathbf{53.3896}\) & \(\mathbf{4.6825}\) & \(\mathbf{4.8998}\) & \(\mathbf{1.7875}\) \\
CoAs\({}_{2}\) & 0.4629 & 0.4389 & 0.2684 & 5.3617 & 2.8407 & 2.9208 & 0.9943 \\
Li\({}_{2}\)Al & \(\mathbf{30.7051}\) & \(\mathbf{\times}\) & \(\mathbf{\times}\) & \(\mathbf{61.9154}\) & \(\mathbf{3.9575}\) & \(\mathbf{4.8314}\) & \(\mathbf{2.1345}\) \\
VS & 0.4204 & 0.2477 & 0.1806 & 1.9372 & 1.3665 & 1.8303 & 0.9189 \\
Ba\({}_{2}\)Hg & \(\mathbf{5.206}\) & \(\mathbf{\times}\) & \(\mathbf{\times}\) & \(\mathbf{8.7511}\) & \(\mathbf{4.9936}\) & \(\mathbf{7.3342}\) & \(\mathbf{1.2468}\) \\ \hline
SrTiO\({}_{3}\) & 0.0185 & 0.0 & 0.0 & 0.0934 & 0.0374 & 0.0271 & 0.0 \\
Al\({}_{2}\)FeCo & 0.0098 & 0.2357 & 0.112 & 0.137 & 0.0685 & 0.0658 & 0.1755 \\
GaBN\({}_{2}\) & 0.0041 & 0.3889 & 0.289 & 2.1663 & 1.5589 & 1.9171 & 0.0455 \\
AcMnO\({}_{3}\) & 0.0385 & 0.0 & 0.0 & 0.116 & 0.0464 & 0.0336 & 0.0 \\
BaTiO\({}_{3}\) & 0.0136 & 0.0 & 0.0 & 0.0924 & 0.037 & 0.0268 & 0.0 \\
CdCuN & 0.0031 & 0.441 & 0.4259 & 2.7337 & 2.9172 & 2.2949 & 0.0397 \\
HoHSe & 0.0033 & 0.3643 & 0.3148 & 2.859 & 1.906 & 1.9716 & 0.0575 \\
Li\({}_{2}\)ZnSi & \(\mathbf{25.3593}\) & \(\mathbf{\times}\) & \(\mathbf{\times}\) & \(\mathbf{34.3079}\) & 2.9587 & \(\mathbf{4.104}\) & \(\mathbf{1.8731}\) \\
Cd\({}_{2}\)AgPt & \(\mathbf{22.5447}\) & \(\mathbf{\times}\) & \(\mathbf{\times}\) & \(\mathbf{16.9997}\) & 3.5895 & \(\mathbf{4.2417}\) & \(\mathbf{2.4137}\) \\
AlCrFe\({}_{2}\) & 0.6621 & 0.2486 & 0.1507 & 3.6931 & 2.2245 & 2.2518 & 0.7886 \\
ZnCdPt\({}_{2}\) & 0.0384 & 0.4717 & 0.4503 & 3.2733 & 3.5537 & 2.0384 & 0.0643 \\
EuAlSi & 0.0495 & 0.3849 & 0.2963 & 4.5051 & 3.0034 & 2.2451 & 0.3419 \\
Sc\({}_{3}\)TiC & 0.0026 & 0.0 & 0.0 & 0.0431 & 0.0173 & 0.0125 & 0.0 \\
GaSeCl & \(\mathbf{23.3337}\) & \(\mathbf{\times}\) & \(\mathbf{\times}\) & \(\mathbf{38.0257}\) & \(\mathbf{8.615}\) & \(\mathbf{11.7449}\) & \(\mathbf{2.0172}\) \\
CaAgN & 0.0064 & 0.441 & 0.4259 & 3.6479 & 3.1055 & 2.4023 & 0.0483 \\
BaAlGe & 0.002 & 0.4547 & 0.3889 & 3.0476 & 1.6942 & 2.5291 & 0.0326 \\
K\({}_{2}\)PdS\({}_{2}\) & 0.5466 & 0.2467 & 0.1377 & \(\mathbf{22.0109}\) & 3.7687 & 3.5226 & \(\mathbf{1.3316}\) \\
KCrO\({}_{2}\) & 0.0342 & 0.2740 & 0.1934 & 2.5233 & 1.9562 & 1.8946 & 0.6105 \\
TiZnCu\({}_{2}\) & 0.0188 & 0.4083 & 0.3344 & 3.8363 & 2.83 & 1.609 & 0.6861 \\
Ta\({}_{2}\)N\({}_{3}\)O & 0.4603 & 0.2357 & 0.1111 & 3.144 & 2.3813 & 1.4458 & 0.7499 \\ \hline
AgBiSeS & 0.0154 & 0.0 & 0.0 & 0.1914 & 0.0957 & 0.0808 & 0.1298 \\
ZrTaNO & 0.0935 & 0.5182 & 0.5 & 0.4704 & 0.2352 & 0.2191 & 0.4131 \\
MnAlCuPd & 0.0187 & 0.1719 & 0.0865 & 3.3567 & 2.3023 & 2.219 & 0.7371 \\
CsNaICl & 0.0046 & 0.5 & 0.5 & 0.1822 & 0.0911 & 0.0848 & 0.1639 \\
DyThCN & 0.0322 & 0.4082 & 0.3333 & 0.1057 & 0.0529 & 0.0451 & 0.0216 \\
Li\({}_{2}\)MgCdP\({}_{2}\) & \(\mathbf{39.8356}\) & \(\mathbf{\times}\) & \(\mathbf{\times}\) & & & & \\ \hline \hline
\end{tabular} \end{table} Table 3: **Quantitative performance metrics of ParetoCSP with M3GNet for the \(55\) benchmark crystals evaluated in this work.** For each metric and each failure case, values greater than the range of the exact predictions are denoted in bold to quantitatively mark their non-optimality. Binary, ternary, and quaternary crystals are separated by single horizontal lines.

### Performance comparison with GN-OA

As reported in [36], the GN-OA algorithm achieved its best performance when utilizing Bayesian optimization (BO) [69] as the optimization algorithm and the MEGNet neural network model as the formation energy predictor to guide the optimization process (default GN-OA). Based on the data presented in Table 2, we observed that GN-OA showed a significantly lower success rate than ParetoCSP. In comparison to ParetoCSP, GN-OA achieved an accuracy of only \(50\%\) (\(10\) out of \(20\) crystals) in predicting the structures of binary crystals, whereas ParetoCSP achieved \(85\%\) accuracy. For ternary crystals, GN-OA achieved a success rate of \(30\%\) (\(6\) out of \(20\) crystals) compared to ParetoCSP's \(80\%\). In the case of quaternary crystals, GN-OA did not achieve a single success, whereas ParetoCSP achieved a success rate of \(53.333\%\). Overall, the success rate of GN-OA was only \(29.091\%\); ParetoCSP's overall accuracy is approximately \(2.562\) times higher. Moreover, GN-OA could not predict any structure that ParetoCSP failed to predict. These results clearly establish the dominance of ParetoCSP over GN-OA, highlighting the higher quality of structure search provided by the AFPO-based GA compared to BO, and the effectiveness of M3GNet IAP-based final energy prediction compared to MEGNet's formation energy prediction.

To understand the deteriorated performance of GN-OA in our benchmark study, we first found that the CSP experiments conducted in the original study of GN-OA [36] primarily focused on small binary crystals, particularly those with a \(1\):\(1\) atom ratio. Secondly, a majority of these binary crystals belonged to four groups, namely oxides, sulfides, chlorides, and fluorides, which demonstrates the lack of diversity in GN-OA's benchmark set (see Supplementary Fig. S1b). Moreover, most of the crystals examined had the cubic crystal system (mostly belonging to the \(Fm-3m\) space group); other crystal systems and space groups were barely explored. This choice of test structures is insufficient for evaluating CSP, as only a few crystals possess all these specific properties.
A more thorough exploration of diverse crystal systems and space groups was necessary to demonstrate GN-OA's CSP performance. Our study effectively demonstrated that the optimization algorithms used in GN-OA are inadequate for predicting more complex crystals (such as quaternary crystals). Furthermore, our empirical findings highlighted the shortcomings of using MEGNet as the formation energy predictor for guiding the optimization algorithm towards the optimal crystal structures. In summary, we established that ParetoCSP outperformed GN-OA, achieving a success rate approximately \(2.562\) times that of GN-OA, and the AFPO-based multi-objective GA proved to be a much better structure search algorithm than BO. Additionally, the M3GNet IAP provided more accurate energy estimations for effective CSP compared to the MEGNet used in GN-OA. ParetoCSP also performs a further structure refinement using the M3GNet IAP after obtaining the final optimized structure from the GA, which contributed to its higher accuracy; this refinement step is entirely absent in GN-OA. Fig. 3 shows a performance metric value comparison for some sample crystals. For better visualization, we limited the \(y\)-axis values to \(20\) for Fig. 3a and 3b, and to \(10\) for Fig. 3c and 3d. We found that the default ParetoCSP with M3GNet achieved lower (better) performance metric values than the default GN-OA for all the chosen sample crystals in terms of ED, HD, and FP, and for the majority of the cases in terms of SD and CD. For some crystals (e.g., Ta\({}_{2}\)N\({}_{3}\)O, AgBiSeS, MnAlCuPd, SrWNO\({}_{2}\)), the differences in the performance metric quantities are huge, indicating ParetoCSP's strong dominance over the default GN-OA.

### Performance comparison of CSP algorithms with different energy models

As discussed in the previous section, the M3GNet universal IAP proved to be a better energy predictor than MEGNet. To fairly and objectively evaluate and compare our algorithm's performance, we replaced ParetoCSP's final energy calculator (M3GNet) with the MEGNet GNN for formation energy evaluation. Subsequently, we also replaced MEGNet with M3GNet in GN-OA to show that the M3GNet IAP performs better than MEGNet for predicting the most stable energy for CSP. As a result, we ran experiments on four algorithms: ParetoCSP with M3GNet (default ParetoCSP), ParetoCSP with MEGNet, GN-OA with MEGNet (default GN-OA), and GN-OA with M3GNet. The results of ParetoCSP with M3GNet have been discussed in detail in Section 3.2. ParetoCSP with MEGNet outperformed the default GN-OA by a factor of \(\approx 1.31\) in terms of exact structure prediction accuracy. Individually, ParetoCSP with MEGNet achieved \(60\%\) (\(12\) out of \(20\)), \(40\%\) (\(8\) out of \(20\)), and \(13.333\%\) (\(2\) out of \(15\)) accuracy in predicting the structures of binary, ternary, and quaternary crystals, respectively. In comparison, GN-OA with MEGNet achieved accuracies of \(50\%\), \(30\%\), and \(0\%\) for binary, ternary, and quaternary crystals, respectively. This comparison clearly demonstrates that the AFPO-based GA is a more effective structure search method than BO. NiS\({}_{2}\) and EuAlSi are the only two crystals (both hexagonal) whose exact structures GN-OA with MEGNet could predict but ParetoCSP with MEGNet could not. The opposite is true for \(8\) crystals, including GaNNi\({}_{3}\), GaBN\({}_{2}\), BaAlGe, and AgBiSeS, predominantly belonging to the tetragonal crystal system.
Additionally, ParetoCSP with MEGNet was not successful in predicting any structure that ParetoCSP with M3GNet could not predict, strongly indicating the necessity of M3GNet as the energy prediction function (ParetoCSP with M3GNet outperformed ParetoCSP with MEGNet by a factor of \(\approx 1.86\)). From Fig. 3, we can see that ParetoCSP with M3GNet achieved much lower performance metric values than ParetoCSP with MEGNet for the majority of the cases, indicating its better prediction quality.

Based on the analysis conducted so far, two hypotheses were formulated: firstly, that GN-OA with M3GNet would outperform the default GN-OA, and secondly, that ParetoCSP with M3GNet would outperform GN-OA with M3GNet. As anticipated, GN-OA with M3GNet outperformed the default GN-OA (by a factor of \(\approx 1.5\)), again demonstrating that the M3GNet IAP is a much better energy model than MEGNet. For binary, ternary, and quaternary crystals, GN-OA with M3GNet (GN-OA with MEGNet) achieved \(60\%\) (\(50\%\)), \(35\%\) (\(30\%\)), and \(13.333\%\) (\(0\%\)), respectively. Moreover, the default GN-OA did not achieve superiority over GN-OA with M3GNet on any chosen crystal, but the opposite is true for \(8\) crystals, including TiCo, VS, HoHSe, and CsNaICl, a majority of which belong to the hexagonal crystal system. However, despite the improved performance of GN-OA with M3GNet, its efficiency still fell short in comparison to ParetoCSP with M3GNet due to the more effective structure search function of the latter, proving both hypotheses true. ParetoCSP with M3GNet outperformed GN-OA with M3GNet by a factor of \(\approx 1.71\). Furthermore, the default ParetoCSP accurately predicted every structure that GN-OA with M3GNet successfully predicted. Again from Fig. 3, we can see that ParetoCSP with M3GNet achieved smaller performance metric values than GN-OA with M3GNet for the majority of the crystals. In fact, for some crystals such as Al\({}_{2}\)FeCo, Ta\({}_{2}\)N\({}_{3}\)O, AgBiSeS, and SrWNO\({}_{2}\), the differences in metric values are enormous. To summarize, ParetoCSP with M3GNet outperformed all other algorithms (\(\approx 1.71\times\) the second best, and \(\approx 1.86\times\) the third best). GN-OA with M3GNet ranked second best, exceeding the performance of the third-best ParetoCSP with MEGNet by a small margin (a factor of \(\approx 1.09\)). The default GN-OA demonstrated the lowest performance among all algorithms.

### Parametric study of ParetoCSP

As a multi-objective GA, ParetoCSP has several hyper-parameters that must be set before running it for CSP. Here, we conducted experiments with ParetoCSP under different parameter settings to evaluate their effect. We selected \(8\) crystals for this study, containing both successful and unsuccessful predictions, namely TiCo, Ba\({}_{2}\)Hg, HoHSe, Cd\({}_{2}\)AgPt, SrTiO\({}_{3}\), GaBN\({}_{2}\), MnAlCuPd, and AgBiSeS. The hyper-parameters chosen for the study include population size, crossover probability, mutation probability, and the total number of generations. The default parameter set is given in Supplementary Note S1. All the performance results are presented in Table 4.

Figure 3: **Performance metric comparison of different CSP algorithms evaluated over the sample benchmark crystals.** The metric values of ParetoCSP with M3GNet are much smaller (better) than those of the other baseline algorithms, which quantitatively shows its superiority.
In most cases, the metric values of GN-OA with MEGNet are the highest (worst), which aligns with the observation that it demonstrated the poorest performance among all evaluated CSP algorithms.

First, we examined the effect of different population sizes on the selected crystals. We ran the experiments with five different population sizes. The results in Table 4 show that our algorithm performed best with a population size of \(100\). Conversely, with a population size of \(30\), it could not accurately predict the structure of any crystal except SrTiO\({}_{3}\). ParetoCSP consistently performed poorly for Ba\({}_{2}\)Hg and Cd\({}_{2}\)AgPt with every population size, while the results for SrTiO\({}_{3}\) showed the opposite trend. Second, we analyzed the performance of our algorithm with varying crossover probabilities. The results indicate that the best performance was achieved with a probability of \(0.8\), and this was the only probability for which ParetoCSP identified the exact structure of MnAlCuPd. Except for GaBN\({}_{2}\) and AgBiSeS, ParetoCSP showed consistent performance across the other crossover probabilities for the five other crystals. We observed that our algorithm performed well for GaBN\({}_{2}\) with higher crossover probabilities, and poorly for AgBiSeS with probabilities \(<0.2\). Next, we evaluated ParetoCSP's performance with different mutation probabilities and observed that it performed best with a mutation probability of \(0.01\). MnAlCuPd and AgBiSeS had their exact structures successfully predicted only with mutation probabilities of \(0.01\) and \(0.1\), while for the other crystals except GaBN\({}_{2}\), ParetoCSP performed consistently across the other probabilities. Our algorithm successfully predicted the structure of GaBN\({}_{2}\) for mutation probabilities \(\geq 0.01\). Finally, we ran experiments with different numbers of generations to investigate the impact on algorithm performance. In [36], all experiments were run for \(5000\) steps of the BO. However, our results in Table 4 show that \(1000\) generations were sufficient for ParetoCSP to achieve its best results for all \(8\) crystals. Except for GaBN\({}_{2}\) and AgBiSeS, ParetoCSP achieved its best results within \(250\) generations for all other crystals. We note that we did not evaluate fewer than \(250\) generations, so it is possible that ParetoCSP could perform optimally for these crystals even with a smaller number of generations. None of the evaluated hyper-parameter settings enabled accurate prediction of the ground truth structures of Ba\({}_{2}\)Hg and Cd\({}_{2}\)AgPt.
\begin{table} \begin{tabular}{||l|c c c c c c c c||} \cline{2-9}
 & **TiCo** & **Ba\({}_{2}\)Hg** & **HoHSe** & **Cd\({}_{2}\)AgPt** & **SrTiO\({}_{3}\)** & **GaBN\({}_{2}\)** & **MnAlCuPd** & **AgBiSeS** \\ \hline \hline
Pop \(30\) & ✗ & ✗ & ✗ & ✗ & ✓ & ✗ & ✗ & ✗ \\
Pop \(60\) & ✓ & ✗ & ✓ & ✗ & ✓ & ✗ & ✗ & ✓ \\
Pop \(100\) & ✓ & ✗ & ✓ & ✗ & ✓ & ✓ & ✓ & ✓ \\
Pop \(200\) & ✓ & ✗ & ✓ & ✗ & ✓ & ✗ & ✗ & ✓ \\
Pop \(300\) & ✓ & ✗ & ✓ & ✗ & ✓ & ✗ & ✓ & ✗ \\ \hline
CP \(0.1\) & ✓ & ✗ & ✓ & ✗ & ✓ & ✗ & ✗ & ✗ \\
CP \(0.2\) & ✓ & ✗ & ✓ & ✗ & ✓ & ✗ & ✗ & ✓ \\
CP \(0.4\) & ✓ & ✗ & ✓ & ✗ & ✓ & ✗ & ✗ & ✓ \\
CP \(0.6\) & ✓ & ✗ & ✓ & ✗ & ✓ & ✓ & ✗ & ✓ \\
CP \(0.8\) & ✓ & ✗ & ✓ & ✗ & ✓ & ✓ & ✓ & ✓ \\ \hline
MP \(0.0001\) & ✓ & ✗ & ✓ & ✗ & ✓ & ✗ & ✗ & ✗ \\
MP \(0.001\) & ✓ & ✗ & ✓ & ✗ & ✓ & ✗ & ✗ & ✗ \\
MP \(0.01\) & ✓ & ✗ & ✓ & ✗ & ✓ & ✓ & ✓ & ✓ \\
MP \(0.1\) & ✓ & ✗ & ✓ & ✗ & ✓ & ✓ & ✓ & ✓ \\
MP \(0.5\) & ✓ & ✗ & ✓ & ✗ & ✓ & ✓ & ✗ & ✗ \\ \hline
Gen \(250\) & ✓ & ✗ & ✓ & ✗ & ✓ & ✗ & ✓ & ✗ \\
Gen \(500\) & ✓ & ✗ & ✓ & ✗ & ✓ & ✓ & ✓ & ✗ \\
Gen \(1000\) & ✓ & ✗ & ✓ & ✗ & ✓ & ✓ & ✓ & ✓ \\
Gen \(2000\) & ✓ & ✗ & ✓ & ✗ & ✓ & ✓ & ✓ & ✓ \\
Gen \(5000\) & ✓ & ✗ & ✓ & ✗ & ✓ & ✓ & ✓ & ✓ \\ \hline \hline
\end{tabular} \end{table} Table 4: Performance results with different hyper-parameters of ParetoCSP with M3GNet.

### Failure case study

ParetoCSP successfully predicted the structures of \(41\) out of the \(55\) benchmark crystals in this research. Here, we conducted a further, thorough investigation of the \(14\) unsuccessful predictions. For this, we calculated the performance metric values of these \(14\) structures for all four algorithms discussed in this paper and then empirically compared the quality of each algorithm's output. We excluded W\({}_{rmse}\) and W\({}_{mae}\) from this study, as all four algorithms failed to predict these structures accurately. The results are presented in Fig. 4 (only two of the metrics are shown here in the main text; the rest are shown in the Supplementary File).

The comparison results for the energy distance (ED) metric are presented in Supplementary Fig. S2a. We limited the \(y\)-axis value to \(80\) for better visualization. ParetoCSP with M3GNet dominated all other algorithms for ED, achieving the lowest errors for \(9\) out of \(14\) crystals. ED is related to the final energy difference between the ground truth and the predicted structure, indicating that the structures predicted by ParetoCSP are energetically closer to the target structures than those of the other algorithms. The only failure case where ParetoCSP had the highest ED value among all algorithms was Li\({}_{2}\)Al. The three performance metrics SD, CD, and HD are all related to the atomic sites of the ground truth and predicted crystals. ParetoCSP with M3GNet again outperformed all other algorithms, achieving the lowest distance scores for a majority of the failure cases, suggesting that the structures predicted by ParetoCSP have the atomic site configurations closest to the target structures among all algorithms. We present the results in Supplementary Fig. S2b, Supplementary Fig. S2c, and Fig. 4a, respectively, with the \(y\)-axis of Supplementary Fig. S2b limited to \(200\) for visualization purposes. Finally, for the crystal fingerprint (FP) metric, which is related to the crystal atomic site fingerprint, ParetoCSP with M3GNet achieved the lowest distance errors for \(11\) out of \(14\) crystals among all algorithms, demonstrating better atomic site prediction quality. The results are shown in Fig. 4b.
Li\({}_{2}\)Al is again the only crystal for which the default ParetoCSP's FP value is the highest among all algorithms. The observation that Li\({}_{2}\)Al had the highest ED and FP values for ParetoCSP suggests that the combination of the AFPO-based GA and M3GNet might not be the optimal choice for predicting this crystal. On the contrary, ParetoCSP with M3GNet achieved \(4\) out of \(5\) or \(5\) out of \(5\) of the lowest performance metric values for Ga\({}_{2}\)Te\({}_{3}\), K\({}_{2}\)PdS\({}_{2}\), Sr\({}_{2}\)BBrN\({}_{2}\), ZrCuSiAs, MnCoSnRh, and Ba\({}_{2}\)CeTaO\({}_{6}\), indicating that we are on the right track to predict the structures of these crystals. In summary, each of the performance metrics is related to specific features of the ground truth crystals, and ParetoCSP with M3GNet outperforms all other algorithms, which indicates that it predicts structures of better quality (closer to the ground truth structures) than the other algorithms, even though none of them are exact solutions.

Figure 4: **Performance metric comparison of structure predictions of different algorithms for the \(14\) failure cases of ParetoCSP with M3GNet.** Although not exact, the structures generated by ParetoCSP with M3GNet are closer to the corresponding ground truth structures than those of any other algorithm.

### Trajectory study

To further understand why ParetoCSP works better than the GN-OA algorithm, we utilized the multi-dimensional performance metrics of CSP [47] to examine the search patterns of the optimization algorithms employed in ParetoCSP and GN-OA. For most of the crystals, the number of valid structures generated by ParetoCSP is enormous. For better visualization, we selected six crystals for this study which had a comparatively smaller number of valid structures: SrTiO\({}_{3}\), MnAlCuPd, GaNNi\({}_{3}\), Al\({}_{2}\)FeCo, Sc\({}_{3}\)TiC, and SrWNO\({}_{2}\). ParetoCSP predicted the exact structures of all these crystals, whereas GN-OA failed to predict the structures of MnAlCuPd, Al\({}_{2}\)FeCo, and SrWNO\({}_{2}\). We used a population size of \(100\) and a total of \(250\) generations for ParetoCSP. For a fair comparison, we ran a total of \(15000\) steps with both
GN-OA variants, i.e., GN-OA with MEGNet and GN-OA with M3GNet (GN-OA stopped making progress after \(5000\) steps for all of our targets). To analyze the structure search process, we computed the distance metrics between the valid structures and the ground truth structure. These distance features were then mapped into two-dimensional points using t-distributed stochastic neighbor embedding (t-SNE) [70]. The purpose of t-SNE is to map data points from a higher-dimensional space to a lower-dimensional space, typically 2D or 3D, while preserving the pairwise distances between the points. The intuition is that data points that are close to each other in the higher dimension will remain close to each other after the mapping to the lower dimension. Subsequently, we visualized the trajectories of the structures during the search by connecting consecutive points if the latter structure had a lower energy than the former one. We present the trajectories for SrTiO\({}_{3}\) and MnAlCuPd in Fig. 5; the rest are shown in Supplemental Fig. S3 (see Supplementary Figs. S4 and S5 for trajectory figures without arrows, for better visualization of the structure mapping). The initial points are represented by green triangles, while the ground truth structures are denoted by red stars.

First, the distributions of the generated valid structures over the search by ParetoCSP and GN-OA are very different (Figs. 5a and 5d versus Figs. 5b, 5c, 5e, and 5f). ParetoCSP's distributions are much more diverse, while GN-OA's generated structures tend to be located in a shallow region (Fig. 5g), indicating that the algorithm can only generate valid structures along a focused path. This is presumably due to the single-point search characteristic of the BO algorithm. While a focused search is good when the direction is correct, it runs a high risk of getting trapped in the channeled path and thus losing its structure search capability. These observations become clearer when closely examining Figs. 5g and 5h, where the t-SNE embeddings for all three algorithms are drawn in the same figure (see Supplementary Fig. S6 for combined t-SNE plots of the other chosen crystals). We can see that the points generated by ParetoCSP are more spread out and have more diverse search directions than those of the other algorithms, which helps explain its higher structure search performance. This may explain ParetoCSP's success and GN-OA's failure in predicting the structures of MnAlCuPd, Al\({}_{2}\)FeCo, and SrWNO\({}_{2}\).

Another way to understand the structure search efficiency of ParetoCSP and GN-OA is to check the number of valid structures generated during the search process. ParetoCSP generated \(2492\), \(1518\), \(2248\), \(2873\), \(1843\), and \(1633\) valid structures in predicting SrTiO\({}_{3}\), MnAlCuPd, GaNNi\({}_{3}\), Al\({}_{2}\)FeCo, Sc\({}_{3}\)TiC, and SrWNO\({}_{2}\), respectively, while the original GN-OA with MEGNet generated only \(1003\), \(681\), \(1701\), \(1350\), \(1499\), and \(1066\) valid structures for the same six targets. GN-OA with M3GNet instead generated slightly more valid structures for SrTiO\({}_{3}\) (\(1049\)), GaNNi\({}_{3}\) (\(2044\)), and Al\({}_{2}\)FeCo (\(1475\)), but fewer for MnAlCuPd (\(569\)), Sc\({}_{3}\)TiC (\(1165\)), and SrWNO\({}_{2}\) (\(955\)). The numbers of valid structures generated by both GN-OA variants are significantly smaller than those of ParetoCSP, indicating that the superiority of ParetoCSP may lie in its capability to conduct an effective search by generating more valid structures. Consistent with the findings of [36], this shows that ParetoCSP's AFPO-based GA search function performed much better than BO. Overall, GN-OA struggled to generate valid structures during the search process and wasted a majority of the search dealing with invalid structures. Moreover, the higher percentage of valid structures generated and the more diverse search of ParetoCSP may have contributed to its higher probability of finding the exact structures.

Figure 5: Trajectories of the traversed structures during the search of different CSP algorithms. (a) - (c) show the trajectories for SrTiO\({}_{3}\), and (d) - (f) show the trajectories for MnAlCuPd. The trajectories were drawn by calculating the distance metrics for the valid structures during the search and mapping them into 2D space using t-SNE. Two consecutive points were connected if the latter structure had a lower energy than the former one. (g) and (h) show the t-SNE embeddings for all three algorithms in the same figure for SrTiO\({}_{3}\) and MnAlCuPd, respectively. The initial and optimal structures for all algorithms are marked with different colors and shapes. The points in ParetoCSP's trajectory are more spread out and have more diverse search directions than those of the other algorithms.
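As a sketch of the embedding procedure behind these trajectory plots, the following minimal example maps per-structure metric vectors to 2D with scikit-learn's t-SNE and links energy-improving consecutive structures. The random arrays are placeholders standing in for the distance metrics and energies recorded during a real search.

```python
# Sketch of the trajectory visualization: embed the per-structure distance
# metrics into 2D with t-SNE and connect consecutive energy-improving steps.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
metrics = rng.random((500, 5))            # one metric vector per valid structure
energies = rng.random(500)                # predicted energy per atom per structure

points = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(metrics)

plt.scatter(points[:, 0], points[:, 1], s=5)
for i in range(len(points) - 1):
    if energies[i + 1] < energies[i]:     # connect only energy-lowering moves
        plt.plot(points[i:i + 2, 0], points[i:i + 2, 1], linewidth=0.5)
plt.show()
```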
## 4 Discussion

We present ParetoCSP, a CSP algorithm which combines an AFPO-enhanced multi-objective GA as an effective structure search function with the M3GNet universal IAP as an efficient final energy predictor, with the aim of effectively capturing the complex relationships between atomic configurations and their corresponding energies.

ParetoCSP uses the genotypic age of each individual as a separate optimization criterion. The algorithm thus treats age as a separate dimension of the multi-objective Pareto front, on which the GA aims to generate structures that minimize the final energy per atom while maintaining a low genotypic age. According to the findings of [59], this provides a more extensive search process, which enables NSGA-III to perform better, as shown in the trajectory results in Section 3.7, where we see that ParetoCSP generated many more valid structures during the search process than the other evaluated CSP algorithms. This demonstrates ParetoCSP's effective exploration of the crystal structure space and its efficient identification of the most stable structures. Overall, we found that ParetoCSP remarkably outperforms the GN-OA algorithm by a factor of \(2.562\), achieving an overall accuracy of \(74.55\%\). The comprehensive experimentation was carried out on a benchmark set of \(55\) crystals spanning diverse space groups, which shows that the algorithm can efficiently handle a wide range of crystal systems, including complex ternary and quaternary compounds, whereas GN-OA performed poorly on the quaternary crystals and most of the ternary crystals. Moreover, a majority of GN-OA's successful predictions belong to the cubic crystal system, revealing its limited capability to explore the structure space of diverse crystal systems. However, all the algorithms show poor performance for crystals belonging to the orthorhombic and monoclinic crystal systems.

These performance limits of ParetoCSP can be attributed to either the optimization algorithm or the ML potential. First, we found that for both ParetoCSP and GN-OA, the search process tends to generate a majority of invalid structures, even though ParetoCSP works much better than GN-OA in this respect. These invalid structures are a waste of search time. Better algorithms that consider crystal symmetry, or data-driven generative models, may be developed to improve the percentage of valid structures and increase the search efficiency. In ParetoCSP, the M3GNet IAP is used as the final energy predictor during the search process and as the structure relaxer after the search finishes. Compared to MEGNet, the M3GNet IAP proved to be the better choice, since replacing GN-OA's MEGNet with the M3GNet IAP improved its performance by a factor of \(1.5\). Overall, our results suggest the importance of developing stronger universal ML potentials for modern CSP algorithm development. Other IAP models such as TeaNet [50] could be tested to check whether better performance can be achieved with ParetoCSP, and the results compared to those obtained with M3GNet. Unlike GN-OA, ParetoCSP performs a further refinement of the output structure, which helped generate exact structures. We used the M3GNet IAP for this structure relaxation; more advanced structure relaxation methods could be tested to obtain better performance.

For the first time, we have used a set of seven quantitative performance metrics to compare and investigate the performance of ParetoCSP and the baseline algorithms.
We can see from Table 3 that each of the unsuccessful predictions had at least one performance metric value larger than the corresponding satisfactory value obtained from the exact predictions. Additionally, Fig. 3 shows that ParetoCSP with M3GNet generated better solutions than the other baseline CSP algorithms, as its predictions had much lower performance metric distances (errors). Furthermore, the performance metrics also show that even though ParetoCSP was unable to predict \(14\) crystal structures, it still produced structures of better quality than the other CSP algorithms. The metrics can also be used to show, for a specific crystal, whether an algorithm is on the right track to predict its structure.

Inspired by the great success of AlphaFold2 [45] for protein structure prediction, which does not rely on first-principles calculations, we believe that data-driven CSP algorithms based on ML deep neural network energy models have great potential and can reach the same level as AlphaFold2. For this reason, we have focused on the performance comparison with the state-of-the-art GN-OA, an ML potential based CSP algorithm, and did not compare our results with CALYPSO [15] and USPEX [14], even though USPEX also utilizes evolutionary algorithms, as ours does. These algorithms are extremely slow and not scalable to complex crystals, as they depend on ab initio energy calculations, which are computationally very expensive. Currently, they can only deal with simple chemical systems or relatively small crystals (\(<10\) atoms in the unit cell), which is a major disadvantage.

## 5 Conclusion

We have introduced an innovative CSP algorithm named ParetoCSP, which synergizes two key components for predicting the most stable crystalline material structures: a multi-objective GA employing age-fitness Pareto optimization and the M3GNet IAP. The AFPO-based GA effectively functions as a structure search algorithm, complemented by the M3GNet IAP's role as an efficient final energy predictor that guides the search process. Through comprehensive experimentation involving \(55\) benchmark crystals, our algorithm's potency has been demonstrated, notably surpassing GN-OA with MEGNet and GN-OA with M3GNet by substantial factors of \(2.562\) and \(1.71\), respectively. Using benchmark performance metrics, we have provided an in-depth analysis of the quality of the structures generated by our algorithm. Furthermore, we have quantitatively depicted the deviations from the ground truth structures for the failure cases of all algorithms, highlighting ParetoCSP's superior performance in this aspect as well. By means of a trajectory analysis of the generated structures, we have established that ParetoCSP produces a greater percentage of valid structures than GN-OA during the search process due to its enhanced search algorithm. Given this significant progress, we believe that ML potential based CSP algorithms such as ParetoCSP hold immense promise for advancing CSP's boundaries and facilitating the discovery of novel materials with desired properties.

## Contribution

Conceptualization, J.H.; methodology, S.O., J.H., L.W.; software, S.O., J.H.; resources, J.H.; writing - original draft preparation, S.O., J.H., L.W.; writing - review and editing, J.H. and L.W.; visualization, S.O.; supervision, J.H.; funding acquisition, J.H.

## Acknowledgement

The research reported in this work was supported in part by the National Science Foundation under grants 10013216 and 2311202.
The views, perspectives, and content do not necessarily represent the official views of the NSF.
2306.07937
Gibbs-Duhem-Informed Neural Networks for Binary Activity Coefficient Prediction
We propose Gibbs-Duhem-informed neural networks for the prediction of binary activity coefficients at varying compositions. That is, we include the Gibbs-Duhem equation explicitly in the loss function for training neural networks, which is straightforward in standard machine learning (ML) frameworks enabling automatic differentiation. In contrast to recent hybrid ML approaches, our approach does not rely on embedding a specific thermodynamic model inside the neural network and corresponding prediction limitations. Rather, Gibbs-Duhem consistency serves as regularization, with the flexibility of ML models being preserved. Our results show increased thermodynamic consistency and generalization capabilities for activity coefficient predictions by Gibbs-Duhem-informed graph neural networks and matrix completion methods. We also find that the model architecture, particularly the activation function, can have a strong influence on the prediction quality. The approach can be easily extended to account for other thermodynamic consistency conditions.
Jan G. Rittig, Kobi C. Felton, Alexei A. Lapkin, Alexander Mitsos
2023-05-31T07:36:45Z
http://arxiv.org/abs/2306.07937v2
**Gibbs-Duhem-Informed Neural Networks for Binary Activity Coefficient Prediction**

## Abstract

We propose Gibbs-Duhem-informed neural networks for the prediction of binary activity coefficients at varying compositions. That is, we include the Gibbs-Duhem equation explicitly in the loss function for training neural networks, which is straightforward in standard machine learning (ML) frameworks enabling automatic differentiation. In contrast to recent hybrid ML approaches, our approach does not rely on embedding a specific thermodynamic model inside the neural network and corresponding prediction limitations. Rather, Gibbs-Duhem consistency serves as regularization, with the flexibility of ML models being preserved. Our results show increased thermodynamic consistency and generalization capabilities for activity coefficient predictions by Gibbs-Duhem-informed graph neural networks and matrix completion methods. We also find that the model architecture, particularly the activation function, can have a strong influence on the prediction quality. The approach can be easily extended to account for other thermodynamic consistency conditions.

## 1 Introduction

Predicting activity coefficients of mixtures with machine learning (ML) has recently attracted great attention, outperforming well-established thermodynamic models. Several ML methods such as graph neural networks (GNNs), matrix completion methods (MCMs), and transformers have shown great potential for predicting a wide variety of thermophysical properties with high accuracy. This includes both pure component and mixture properties such as solvation free energies (Vermeire and Green, 2021), liquid densities (Felton et al., 2023) and viscosities (Bilodeau et al., 2023), vapor pressures (Felton et al., 2023; Lansford et al., 2023), solubilities (Vermeire et al., 2022), and fuel ignition indicators (Schweidtmann et al., 2020). A particular focus has recently been placed on using ML for predicting activity coefficients of mixtures due to their high relevance for chemical separation processes. Here, activity coefficients at infinite dilution (Jirasek et al., 2020; Jirasek and Hasse, 2021; Sanchez Medina et al., 2022), varying temperature (Damay et al., 2021; Chen et al., 2021; Winter et al., 2022; Rittig et al., 2023; Sanchez Medina et al., 2023; Damay et al., 2023), and varying compositions (Felton et al., 2022; Qin et al., 2023; Winter et al., 2023) have been targeted with ML for a wide spectrum of molecules, consistently outperforming well-established models such as UNIFAC (Fredenslund et al., 1975) and COSMO-RS (Klamt, 1995; Klamt et al., 2010). Given the high accuracy achieved, ML will therefore play an increasingly important role in activity coefficient prediction.

To further advance ML for activity coefficients and bring it into practical application, accounting for thermodynamic consistency is of great importance: by enforcing consistency, the amount of required training data is minimized and the quality of the predictions is improved. Putting such prior information into the data-driven model results in a hybrid model. In the context of activity coefficient prediction, several hybrid model forms have recently emerged. These hybrid models connect ML and mechanistic models in a sequential or a parallel fashion, or integrate ML into mechanistic models and vice versa (see, e.g., the reviews in (Carranza-Abaid et al., 2023; Jirasek and Hasse, 2023)). For example, Focke (Focke, 2006) proposed a hybrid neural network structure that embeds the Wilson model (Wilson, 1964).
Developing hybrid ML structures following thermodynamic models such as Wilson (Wilson, 1964) or nonrandom two-liquid (NRTL) (Renon and Prausnitz, 1968) was further investigated in (Argatov and Kocherbitov, 2019; Toikka et al., 2021; Carranza-Abaid et al., 2023; Di Caprio et al., 2023). A recent prominent example covering a diverse mixture spectrum is the sequential hybrid ML model by Winter et al. (Winter et al., 2023), called SPT-NRTL, which combines a transformer with the NRTL model (Renon and Prausnitz, 1968) (i.e., the transformer predicts NRTL parameters). As the NRTL model fulfills the Gibbs-Duhem equation, the hybrid SPT-NRTL model by design exhibits thermodynamic consistency for the composition dependency of the activity coefficients. However, using a specific thermodynamic model also introduces predictive limitations. For example, the NRTL model suffers from high correlation of the pure-component liquid interaction parameters (Gebreyohannes et al., 2014), which results in poor modeling of highly interactive systems (Hanks et al., 1978). In general, approaches imposing a thermodynamic model are restricted by the theoretical assumptions and corresponding limitations. Therefore, we herein focus on a physics-informed ML approach that does not rely on a specific thermodynamic model; rather, thermodynamic consistency is imposed in the training.

Physics-informed ML provides a hybrid approach that integrates mechanistic knowledge as a regularization term into the loss function for training an ML model (Karniadakis et al., 2021; von Rueden et al., 2021). A prominent example is physics-informed neural networks (PINNs) (Raissi et al., 2019), which are typically employed to predict solutions of partial differential equations (PDEs). In PINNs, gradient information of the network's output with respect to the input(s) is obtained via automatic differentiation and added as a regularization term to the loss function accounting for the PDE. In this way, PINNs learn to predict solutions that are consistent with the governing PDE. Note that, in contrast to hybrid models that embed mechanistic equations, PINNs do not necessarily yield exact mechanistic consistency, as consistency needs to be learned and may be in a trade-off with fitting the provided data. On the other hand, the flexibility of neural networks is preserved, and no modeling assumptions are imposed, as in the aforementioned hybrid thermodynamic models. Utilizing differential thermodynamic relationships, the concept of PINNs has been applied to molecular and material property prediction (Teichert et al., 2019; Masi et al., 2021; Hernandez et al., 2022; Rosenberger et al., 2022; Monroe et al., 2023). For instance, Masi et al. (Masi et al., 2021) proposed thermodynamics-based artificial neural networks building on the idea that material properties can be expressed as differential relationships of the Helmholtz free energy and the dissipation rate, which can be directly integrated into the network structure and allow for training with automatic differentiation. Similarly, Rosenberger et al. (Rosenberger et al., 2022) utilized differential relationships of thermophysical properties to the Helmholtz free energy to fit equations of state with thermodynamic consistency.
They showed that predicting properties such as pressure or chemical potential by training neural networks to model the Helmholtz free energy and using its differential relationships to the target properties is advantageous over learning these properties directly, in terms of both accuracy and consistency. However, using PINN-based models for predicting thermodynamic mixture properties for a wide molecular spectrum, particularly activity coefficients, has not been investigated so far.

We introduce Gibbs-Duhem-informed neural networks, which are inspired by PINNs and learn thermodynamic consistency of activity coefficient predictions. We add a regularization term related to the Gibbs-Duhem equation to the loss function during the training of a neural network, herein GNNs and MCMs. Specifically, we use automatic differentiation to calculate the gradients of the respective binary activity coefficient predictions of a neural network with respect to the mixture's input composition. We can then evaluate the Gibbs-Duhem consistency and add the deviation to the loss function. The loss function, which typically contains only the prediction error on the activity coefficient value, is thus extended by thermodynamic insights, inducing the neural network to consider and utilize known thermodynamic relations in the learning process. We emphasize that our approach allows for the integration of further thermodynamic insights that can be described by (differential or algebraic) relations of the activity coefficient; herein, we use the Gibbs-Duhem equation as a prime example. Our results show that Gibbs-Duhem-informed neural networks can effectively increase Gibbs-Duhem consistency at high prediction accuracy.

The manuscript is structured as follows: First, we present the concept of Gibbs-Duhem-informed neural network training, including a data augmentation strategy, in Section 2. In Section 3, we then test our approach on two neural network architectures, GNNs and MCMs, using a database of 280,000 binary activity coefficients that consists of 40,000 mixtures covering pair-wise combinations of 700 molecules at 7 different compositions and was calculated with COSMO-RS (Klamt, 1995; Klamt et al., 2010) by Qin et al. (Qin et al., 2023). We analyze and compare the prediction accuracy and thermodynamic consistency of GNNs and MCMs trained without (Section 3.1) and with the Gibbs-Duhem loss (Section 3.2). This also includes studying corresponding vapor-liquid equilibrium predictions (Section 3.2.2). We further analyze generalization capabilities to new compositions (Section 3.2.3) and mixtures (Section 3.2.4). Section 4 concludes our work.

## 2 Methods & Modeling

In this section, we introduce Gibbs-Duhem-informed neural networks, propose a data augmentation strategy to facilitate training, and then describe the GNNs and MCMs to which we apply our training approach. A schematic overview of the Gibbs-Duhem-informed GNNs and MCMs is provided in Figure 1. We further provide insights on the data set used for training/testing and the implementation with corresponding model hyperparameters.

### Gibbs-Duhem-informed training

Our approach for Gibbs-Duhem-informed training combines prediction accuracy with thermodynamic consistency in one loss function. The approach is inspired by PINNs (Raissi et al., 2019; Karniadakis et al., 2021), that is, utilizing physical knowledge as a regularization term in the loss.
For the application to composition-dependent activity coefficients, we can calculate the gradients of the predicted logarithmic activity coefficients, denoted by \(\ln(\hat{\gamma_{i}})\), with respect to the compositions of the mixture, \(x_{i}\), as illustrated in Figure 1. We can then use this gradient information to evaluate the consistency of the Gibbs-Duhem differential constraint, which has the following form for binary mixtures at constant temperature \(T\) and pressure \(p\):

\[x_{1}\cdot\left(\frac{\partial\ln(\hat{\gamma_{1}})}{\partial x_{1}}\right)_{T,p}+x_{2}\cdot\left(\frac{\partial\ln(\hat{\gamma_{2}})}{\partial x_{1}}\right)_{T,p}=0 \tag{1}\]

Figure 1: Schematic model structure and loss function of Gibbs-Duhem-informed GNNs and MCMs for predicting composition-dependent activity coefficients.

Please note that Equ. 1 can equivalently be formulated for the partial derivative with respect to \(x_{2}\) and can also be described analogously by using \(dx_{1}=-dx_{2}\). We propose to add the deviation from the Gibbs-Duhem differential constraint as a term to the loss function. The loss function for training a neural network on activity coefficient prediction typically accounts for the deviation of the predicted value, \(\ln(\hat{\gamma_{i}})\), from the data, \(\ln(\gamma_{i})\); often the mean squared error (MSE) is used. By adding the deviation from the Gibbs-Duhem equation (cf. Equ. 1) in the form of the MSE, the loss function for Gibbs-Duhem-informed training of a mixture's binary activity coefficients at a specific composition \(k\) equals

\[\begin{split}\text{LOSS}^{k}=&\left(\ln(\hat{\gamma_{1}}^{k})-\ln(\gamma_{1}^{k})\right)^{2}+\left(\ln(\hat{\gamma_{2}}^{k})-\ln(\gamma_{2}^{k})\right)^{2}\\ &+\lambda\cdot\left(x_{1}^{k}\cdot\frac{\partial\ln(\hat{\gamma_{1}}^{k})}{\partial x_{1}^{k}}+x_{2}^{k}\cdot\frac{\partial\ln(\hat{\gamma_{2}}^{k})}{\partial x_{1}^{k}}\right)^{2},\end{split} \tag{2}\]

with \(\lambda\) being a weighting factor to balance the prediction loss and the Gibbs-Duhem loss. The logarithmic activity coefficient is typically used in the loss function for normalization purposes. We also include the infinite dilution case, which is formally defined for compositions \(x_{i}\to 0\) and \(x_{j}\to 1\), with the infinite dilution activity coefficient of the solute, \(\gamma_{i}\to\gamma_{i}^{\infty}\), and the activity coefficient of the solvent, \(\gamma_{j}\to 1\). Herein, we use \(x_{i}=0\) and \(x_{j}=1\) to represent infinite dilution, similarly to other recent publications (Qin et al., 2023; Winter et al., 2023). We stress that compositions of \(0\) and \(1\) are only used for the infinite dilution case and that the Gibbs-Duhem consistency also needs to be satisfied in this case. Note that in thermodynamics some properties are problematic for \(x\to 0\), e.g., the infinite derivative of the ideal mixing enthalpy with respect to the mole fraction; however, since we directly predict activity coefficients, we do not run into any numerical issues.

The proposed Gibbs-Duhem-informed loss function can be integrated directly into standard ML frameworks. Since modern neural network frameworks enable automatic differentiation, and \(\ln(\hat{\gamma_{i}})\) is an output and \(x_{i}\) an input of the network, the partial derivatives in Equ. 2 can be calculated directly in the backpropagation pass. Therefore, the practical application of Gibbs-Duhem-informed training is straightforward.
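To make this concrete, the following is a minimal PyTorch sketch of the loss in Equ. 2, assuming a model callable of the form `model(features, x1) -> (ln_g1, ln_g2)` for a batch of binary mixtures; the names `model`, `features`, and `x1` are placeholders rather than the actual implementation used in this work.

```python
# Minimal sketch of the Gibbs-Duhem-informed loss (Equ. 2) in PyTorch,
# assuming model(features, x1) returns the two predicted ln(gamma) values.
import torch

def gibbs_duhem_informed_loss(model, features, x1, targets=None, lam=1.0):
    x1 = x1.clone().requires_grad_(True)          # track d ln(gamma) / d x1
    ln_g1, ln_g2 = model(features, x1)

    # Partial derivatives via automatic differentiation; create_graph=True
    # keeps the graph so the Gibbs-Duhem term itself can be backpropagated.
    dg1_dx1 = torch.autograd.grad(ln_g1.sum(), x1, create_graph=True)[0]
    dg2_dx1 = torch.autograd.grad(ln_g2.sum(), x1, create_graph=True)[0]

    # Gibbs-Duhem residual: x1 * dln(g1)/dx1 + x2 * dln(g2)/dx1, with x2 = 1 - x1.
    residual = x1 * dg1_dx1 + (1.0 - x1) * dg2_dx1
    loss = lam * (residual ** 2).mean()

    # Labeled samples additionally contribute the squared prediction error;
    # augmented samples (Section 2.2) pass targets=None and skip this term.
    if targets is not None:
        y1, y2 = targets
        loss = loss + ((ln_g1 - y1) ** 2 + (ln_g2 - y2) ** 2).mean()
    return loss
```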
When applying the presented Gibbs-Duhem-informed training approach, thermodynamic consistency is only induced for the mixture compositions for which activity coefficient data is readily available. To facilitate learning at compositions for which no data is available, we present a data augmentation strategy in the next section.

### Data augmentation for Gibbs-Duhem-informed training

We propose a data augmentation strategy for training Gibbs-Duhem-informed neural networks by randomly perturbing the mixtures' compositions between \(0\) and \(1\). We create additional data samples that consist of the binary mixtures in the training data set but at other (arbitrary) compositions \(x\in[0,1]\); we use random sampling from a uniform distribution in \(x\). Of course, the activity coefficients for these compositions are not known. Yet, we can evaluate the Gibbs-Duhem consistency of the model predictions at these compositions and add only the Gibbs-Duhem error to the loss during training. That is, for training data samples created with the data augmentation, we only consider the second term of the loss function, the Gibbs-Duhem loss. We can therefore use additional data for training Gibbs-Duhem-informed neural networks on compositions of mixtures for which no experimental data is available. When using data augmentation, it is important to consider that additional training data results in an increased expense of calculating the loss and its derivative, i.e., requires more training resources. Further, adding too many augmented data samples to the training can result in an imbalanced loss focusing too much on the Gibbs-Duhem term and neglecting the prediction accuracy. We therefore set the amount of augmented data to equal the number of data points in the training set for which activity coefficient data are available.

### Machine learning property prediction methods

We investigate the thermodynamic consistency and test the Gibbs-Duhem-informed training approach for two different machine learning methods: GNNs and MCMs. Both methods have recently been investigated in various studies for thermodynamic property prediction of mixtures (Jirasek et al., 2020; Damay et al., 2021; Felton et al., 2022; Sanchez Medina et al., 2022; Rittig et al., 2023a). While a third ML method, namely the transformer, which works on string representations of molecules, has also recently been utilized for predicting mixture properties with very promising results (Winter et al., 2022, 2023), transformers typically require extensive pretraining with millions of data points, which is out of the scope of this work. The structure of Gibbs-Duhem-informed GNNs and MCMs for activity coefficient prediction at different compositions is shown in Figure 1. GNNs utilize a graph representation of molecules and learn to encode the structure of two molecular graphs within a binary mixture to a vector representation that can be mapped to the activity coefficients. In contrast, MCMs learn directly from the property data without further information about the molecular structures. Rather, a matrix representation is used in which the rows and columns each represent a molecule in the binary mixture as a one-hot encoding and the matrix entries correspond to the activity coefficients. With the available activity coefficient data filling some entries of the matrix, MCMs learn to predict the missing entries.
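As a minimal illustration of this matrix-completion setup, the following simplified PyTorch sketch (our own layout with placeholder layer sizes, not the exact architectures used in this work) shows how two one-hot component encodings and the composition map to the two activity coefficients:

```python
import torch
import torch.nn as nn

class MiniMCM(nn.Module):
    # Simplified MCM-style network: one-hot component encodings plus composition x1
    # are mapped to the two logarithmic activity coefficients.
    def __init__(self, n_components: int, emb_dim: int = 32, hidden: int = 64):
        super().__init__()
        self.embed = nn.Sequential(nn.Linear(n_components, emb_dim), nn.Softplus())
        self.head1 = nn.Sequential(nn.Linear(2 * emb_dim + 1, hidden), nn.Softplus(), nn.Linear(hidden, 1))
        self.head2 = nn.Sequential(nn.Linear(2 * emb_dim + 1, hidden), nn.Softplus(), nn.Linear(hidden, 1))

    def forward(self, onehot_1, onehot_2, x1):
        # shared embedding MLP for both components, then concatenate the composition
        h = torch.cat([self.embed(onehot_1), self.embed(onehot_2), x1], dim=-1)
        return self.head1(h), self.head2(h)   # ln(gamma_1), ln(gamma_2)
```

Note that the fixed argument order of the two one-hot encodings in this sketch already exhibits the permutation variance of MCMs discussed below.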
For further details about GNNs and MCMs, we refer to the reviews in (Gilmer et al., 2017; Rittig et al., 2022; Reiser et al., 2022) and (Jirasek and Hasse, 2021, 2023). We herein use a GNN based on the model architecture developed by Qin et al. (Qin et al., 2023) for predicting activity coefficients of binary mixtures at different compositions, referred to as SolvGNN. The GNN first employs graph convolutional layers to encode the molecular graph of each component into a molecular embedding vector - often referred to as molecular fingerprint. Then, a mixture graph is constructed: Each node represents a component and includes the corresponding molecular embedding and composition within the mixture; each edge represents interactions between components using hydrogen-bond information as additional features. The mixture graph then passes through a graph convolutional layer such that each molecular embedding is updated based on the presence of other components in the mixture, thereby accounting for intermolecular interactions. Each updated molecular embedding is then passed through hidden layers of a multilayer perceptron (MLP) which predicts the logarithmic activity coefficient \(\ln(\gamma_{i})\) of the respective component present in the mixture; the same MLP is applied to all components. The GNN's model structure can be trained end-to-end, i.e., from the molecular graphs to the activity coefficients.

For the MCM model, we use a neural network structure that was recently proposed by Chen et al. (Chen et al., 2021) and further investigated in our previous work for prediction of infinite dilution activity coefficients of solutes in ionic liquids (Rittig et al., 2023a). The MCM model employs several hidden layers to map the one-hot encoding of the components to a continuous vector representation - analogous to the molecular embedding/fingerprint in GNNs. The resulting mixture vector is then concatenated with the composition and enters two MLPs to obtain the respective predictions for the logarithmic activity coefficients \(\ln(\gamma_{1})\) and \(\ln(\gamma_{2})\). It is important to note that, in contrast to GNNs, the MCM inherently does not preserve permutation invariance with respect to the representation order of the components in the mixture. For example, the predictions for 90% ethanol - 10% water and 10% water - 90% ethanol are not necessarily identical when using the MCM, whereas the GNN results in the same activity coefficient values. To address the permutation variance of the MCM, future work could consider data augmentation, i.e., training on the same mixture with a different order of the components (cf. (Winter et al., 2023)), or an extension of the model structure by a permutation invariant operator as used in GNNs. We also note that further formulations of MCMs, e.g., based on Bayesian inference, are frequently investigated, cf. (Jirasek et al., 2020; Damay et al., 2021). We herein focus on neural architectures, also referred to as neural collaborative filtering (He et al., 2017; Chen et al., 2021). In future work, it would be interesting to investigate if our Gibbs-Duhem-informed approach is also transferable to other MCM formulations.

### Data set and splitting

We use the data set of binary activity coefficients at different compositions and a constant temperature of 298 K calculated with COSMO-RS (Klamt, 1995; Klamt et al., 2010) for 40,000 different binary mixtures covering 700 different compounds, which was created by Qin et al. (Qin et al., 2023).
The activity coefficients were calculated at seven different compositions: \(\{0,0.1,0.3,0.5,0.7,0.9,1\}\), thus including infinite dilution (cf. Section 2.1). Thus, the total number of data points amounts to 280,000. Since COSMO-RS was used for data generation, all data points are Gibbs-Duhem-consistent, thereby providing a solid basis for testing our approach.

We consider three evaluation scenarios when splitting our data: composition interpolation (comp-inter), composition extrapolation (comp-extra), and mixture extrapolation (mixture-extra).

_Comp-inter_ refers to the case of predicting the activity coefficient of a specific binary mixture at a composition not used in training for this mixture but for other mixtures. This evaluation scenario was also used by Qin et al. (Qin et al., 2023); in fact, we use the same 5-fold stratified split based on the polarity features of individual mixtures (i.e., 5 different splits into 80% training and 20% test data, cf. SI (Qin et al., 2023)). Comp-inter thus allows us to evaluate if the models can learn the composition-dependency of the activity coefficient for a mixture from other mixtures in the data with thermodynamic consistency.

_Comp-extra_ describes the case of predicting the activity coefficient of a specific binary mixture at a composition that was not used in training for any of the mixtures. We specifically exclude the data for the compositions of a respective set of \(x\in\{\{0.0,1.0\},\{0.1,0.9\},\{0.3,0.7\},\{0.5\}\}\) from training and use it as a test set. This results in four different comp-extra splits, one for each excluded set of \(x\). With the comp-extra splits, we can evaluate whether the models can extrapolate to compositions not present in the training data at all, referred to as generalization, thereby capturing the underlying composition-dependency of the activity coefficient.

_Mixture-extra_ aims to test the capability of a prediction model to generalize to binary mixtures not seen during training but consisting of molecules that occurred in other combinations, i.e., in other binary mixtures, during training. We separate the data set into training and test sets of unique binary mixtures by using a 5-fold stratified split based on polarity features (cf. (Qin et al., 2023)). In contrast to comp-inter, where only individual compositions of mixtures were excluded from the training data for testing, mixture-extra excludes all available compositions of a mixture for testing and thus allows testing generalization to new mixtures.

### Evaluation metrics for prediction accuracy and consistency

To evaluate the predictive quality of models, we consider both the prediction accuracy and the thermodynamic consistency. The prediction accuracy is calculated based on the match between predicted values and the data values for the test set. We consider standard metrics for the prediction accuracy, i.e., root mean squared error (RMSE), mean absolute error (MAE), and coefficient of determination (R\({}^{2}\)). Thermodynamic consistency is assessed by calculating the deviation of the Gibbs-Duhem differential equation from zero.
We refer to the Gibbs-Duhem root mean squared error (GD-RMSE) for predictions \(\hat{\gamma_{i}}^{k}\) on the test data by

\[\text{GD-RMSE}_{\text{test}}=\sqrt{\frac{1}{N_{test}}\cdot\sum_{k=1}^{N_{test}}\left(x_{1}^{k}\cdot\frac{\partial\ln(\hat{\gamma_{1}}^{k})}{\partial x_{1}^{k}}+x_{2}^{k}\cdot\frac{\partial\ln(\hat{\gamma_{2}}^{k})}{\partial x_{1}^{k}}\right)^{2}} \tag{3}\]

Since the Gibbs-Duhem equation can be evaluated at any composition in the range between 0 and 1 without requiring activity coefficient data, we further test the thermodynamic consistency for compositions outside the data set (cf. Section 2.4) in 0.05 steps, i.e., \(x_{i,\text{test-ext}}\in\{0.05, 0.15, 0.2, 0.25, 0.35, 0.4, 0.45, 0.55, 0.6, 0.65, 0.75, 0.8, 0.85, 0.95\}\), which we refer to as "test-ext".

### Implementation & Hyperparameters

We implement all models and training and evaluation scripts in Python using PyTorch and provide our code openly accessible at (Rittig et al., 2023b). The GNN implementation is adapted from Qin et al. (Qin et al., 2023) using the Deep Graph Library (DGL) (Wang et al., 2019) and RDKit (Landrum, 2023). We use the same model hyperparameters as in the original implementation, i.e., two shared graph convolutional layers are applied for the molecule embedding, then the compositions are concatenated, followed by a single-layer GNN for the mixture embedding and a prediction MLP with two hidden layers. For the MCM, we use the re-implementation of the architecture by Chen et al. (Chen et al., 2021) from our previous work (Rittig et al., 2023a). We take the hyperparameters from the original model, but we adapt the model structure to allow for composition-dependent prediction. The MCM has a shared molecular embedding MLP with four hidden layers, after which the compositions are concatenated and two subsequent prediction MLPs with two hidden layers each are applied. All training runs are conducted with the ADAM optimizer, an initial learning rate of 0.001, and a learning rate scheduler with a decay factor of 0.8 and a patience of 3 epochs based on the training loss. We train all models for 100 epochs with a batch size of 100, as in Qin et al. (Qin et al., 2023); we could robustly reproduce their results for the GNN. The quality of the final models is then assessed based on the test set. We executed all runs on the High Performance Computing Cluster of RWTH Aachen University using one NVIDIA Tesla V100-SXM2-16GB GPU.

## 3 Results & Discussion

We first investigate the Gibbs-Duhem consistency of GNNs and MCMs trained in a standard manner, i.e., on the prediction loss only, in Section 3.1. Then, in Section 3.2, we present the results with Gibbs-Duhem-informed training. This includes a comparison of different model architectures and activation functions trained with Gibbs-Duhem loss to those trained on the prediction loss only. We also analyze the effects of Gibbs-Duhem-informed training on vapor-liquid equilibrium predictions in Section 3.2.2. Lastly, we test the generalization capabilities of Gibbs-Duhem-informed neural networks to unseen compositions in Section 3.2.3 as well as to unseen mixtures in Section 3.2.4.

### Benchmark: Evaluation of Gibbs-Duhem consistency with standard training

We first evaluate the prediction accuracy and Gibbs-Duhem consistency of GNNs and MCMs for predicting activity coefficients of a binary mixture at a specific composition with the comp-inter split (cf. Section 2.4).
The models are trained with a standard approach, i.e., minimizing the deviation of predicted versus data activity coefficients and not using Gibbs-Duhem loss. Fig. 2 shows the error distribution of the absolute prediction errors and absolute Gibbs-Duhem errors for the GNN (2a) and MCM (2b) model. We also report the errors for specific compositions according to the composition intervals in the data set (cf. Section 2.4) for both prediction accuracy (2c) and Gibbs-Duhem (2d) consistency.

Fig. 2(a) shows high prediction accuracy of the GNN, with the MCM model performing slightly worse but still at a high level. The low MAEs of 0.03 and 0.04 and high \(R^{2}\) values of 0.99 and 0.98 for the GNN and the MCM, respectively, indicate strong prediction capabilities. Please note that the GNN prediction results are a reproduction of the study by Qin et al. (Qin et al., 2023), who reported an MAE of 0.03 and an RMSE of 0.10, which are very similar to our results. The composition-dependent errors shown in Fig. 2(c) highlight that activity coefficient predictions for solvents with lower compositions have higher errors, which is expected. Infinite dilution activity coefficients with \(x_{i}\to 0\) represent the limiting case with MAEs of 0.077 for the GNN and 0.093 for the MCM. In contrast, at high compositions \(x_{i}\to 1\), the activity coefficient converges to 1 for all solvents, which is well captured by the GNN with an MAE of 0.002 and the MCM with an MAE of 0.006. Overall, we find strong prediction quality for both models.

For the Gibbs-Duhem consistency shown in Fig. 2(b), the GNN again performs better than the MCM. Notably, the distribution for the GNN is more left-skewed than the MCM distribution and shows a peak fraction of deviations close to 0, i.e., with high Gibbs-Duhem consistency. However, it can also be observed that both models have many errors significantly greater than 0, with an MAE of about 0.1 for the GNN and 0.14 for the MCM. Considering the composition-dependent Gibbs-Duhem consistency illustrated in Fig. 2(d), we can observe similar behavior for the GNN and the MCM: At the boundary conditions, i.e., infinite dilution, the models yield slightly higher consistencies than at intermediate compositions, with the GNN overall resulting in a slightly favorable consistency.

Figure 2: Absolute prediction error and absolute deviation from Gibbs-Duhem differential equation are illustrated in histograms (a,b) and composition-dependent plots (c,d) for the GNN and the MCM trained with a standard loss function based on the prediction error and MLP activation function: ReLU. The outlier thresholds (a,b) are determined based on the top 1 % of the highest errors for the GNN.

Interestingly, we find the opposite behavior when changing the structure of the prediction MLP to be a single MLP with two outputs, i.e., predicting both activity coefficients with one MLP at the same time (cf. SI). Without any form of regularization, we find that the predictions from both models often exhibit Gibbs-Duhem inconsistencies.

To further analyze the Gibbs-Duhem deviations, we show activity coefficient predictions and composition-dependent gradients with the corresponding thermodynamic consistency for exemplary mixtures in Figure 3(a) for the GNN and Figure 3(b) for the MCM. We selected mixtures that have different activity coefficient curves, contain well-known solvents, and for which Antoine parameters are readily available (cf. Section 3.2.2).
Specifically, we show the predictions and Gibbs-Duhem consistency with the gradient information for three mixtures that were included in the training (1-3) and three mixtures that were not included in the training at all (4-6). Here, the predictions of the five models trained in the cross-validation of comp-inter are averaged, referred to as ensemble model (cf. (Breiman, 1996, 2006; Dietterich, 2000)). Note that we can calculate the Gibbs-Duhem consistency of the ensemble by first averaging the five models' partial derivatives of the logarithmic activity coefficients with respect to the composition and then applying Equ. 1. Further ensemble features like the variance are not considered.

For the exemplary mixtures in Fig. 3, the predictions exhibit a high level of accuracy but also striking thermodynamic inconsistencies. For the first two mixtures, which are part of the training set, the predictions are highly accurate. However, particularly for chloroform-hexane, the prediction curves for each component show some significant changes in their slope at varying compositions, causing high thermodynamic inconsistencies. For example, the \(\ln(\gamma_{2})\)-curve for the GNN at \(x_{1}=0.2\) or for the MCM at \(x_{1}=0.4\) exhibits a step-like behavior, with the \(\ln(\gamma_{1})\)-curve not changing its slope at these compositions, yielding a high Gibbs-Duhem error. This behavior is also reflected in the gradients, which fluctuate strongly and are discontinuous over the composition range. Notably, within some composition ranges, the gradient is a constant value, e.g., for chloroform-hexane for \(\ln(\gamma_{2})\) from \(x_{1}\) between 0 and 0.4 and for \(\ln(\gamma_{1})\) from \(x_{1}\) between 0.7 and 1. For the mixture of 2-thiabutane and butyleneoxide, discontinuities in the gradients causing high Gibbs-Duhem errors are even more prominent. We additionally find compositions at which both prediction curves have gradients of the same sign, i.e., both increasing or both decreasing, which strictly violates thermodynamic principles. For two of the mixtures not used in the training at all, i.e., chloroform-acetone and ethanol-water, both models overall match the data but also show prediction errors at low compositions of the respective component. Especially for the GNN predictions of the chloroform-acetone mixture, the \(\ln(\gamma_{2})\)-curve exhibits a change in the gradient within the composition range from 0.6 to 0.8 which is not reflected in \(\ln(\gamma_{1})\). For the last mixture, ethanol-benzene, which is also not in the training set, the predictions match the data values well, but for both models, Gibbs-Duhem deviations occur at low compositions of the respective component and for the MCM also at intermediate compositions. The gradient curves of the three mixtures not part of the training set are again discontinuous, resulting in further thermodynamic inconsistencies.

Figure 3 further shows that the magnitude of the activity coefficient values for a specific system influences the Gibbs-Duhem consistency metrics. Since mixtures with large absolute activity coefficient values naturally tend to have higher gradients, they often show larger absolute deviations from the Gibbs-Duhem differential equation than mixtures with low absolute activity coefficients.
Future work could consider weighting Gibbs-Duhem deviations for individual mixtures based on the magnitude of the activity coefficients, e.g., dividing the Gibbs-Duhem error by the sum of absolute values of \(\ln(\gamma_{1})\) and \(\ln(\gamma_{2})\), which was out of the scope of our investigations. We additionally show the results of the individual models in the SI, where the thermodynamic inconsistencies become even more prominent and visible. In fact, for the ensemble model results shown in Fig. 3, some inconsistencies partly average out. Using ensembles can thus, in addition to higher prediction accuracy (Sanchez Medina et al., 2022; Rittig et al., 2023), also increase thermodynamic consistency. It would thus be interesting to systematically study ensemble effects in combination with Gibbs-Duhem-informed neural networks, which we leave for future work.

Overall, we find the ML models with standard training on the prediction loss to provide highly accurate activity coefficient predictions, but they also exhibit notable thermodynamic inconsistencies, which can be related to the ML model structure. Particularly, we find the gradient curves of the activity coefficient with respect to the composition to be discontinuous, resulting in high Gibbs-Duhem errors.

Figure 3: Activity coefficient predictions and their corresponding gradients with respect to the composition with the associated Gibbs-Duhem deviations for exemplary mixtures by (a) the GNN ensemble and (b) MCM ensemble trained with a standard loss function based on the prediction error and MLP activation function: ReLU. Results are averaged from the five model runs of the comp-inter split.

The discontinuities of the gradients are inherent to the non-smooth activation functions typically used in ML models, e.g., ReLU. Specifically, the gradient of ReLU changes from 1 for inputs \(>0\) to 0 for inputs \(<0\), which we find to yield non-smooth gradients of the \(\ln(\gamma_{i})\)-curves, thereby promoting violations of the Gibbs-Duhem consistency. This motivates us to investigate the incorporation of the thermodynamic consistency into the training of ML models with different activation functions and an adapted loss function accounting for the Gibbs-Duhem equation, which we refer to as Gibbs-Duhem-informed neural networks.

### Proposal: Gibbs-Duhem-informed training

We apply Gibbs-Duhem-informed training according to Equ. 2 to the GNN and MCM models. Since, in the previous section, we found the non-smoothness of ReLU activation to have an impact on the thermodynamic consistency of the predictions, we investigate two additional activation functions, namely ELU and softplus. In contrast to ReLU, ELU exhibits first-order continuity and softplus is smooth. The smoothness of softplus has already been utilized in models for molecular modeling by Schütt et al. (Schutt et al., 2020). In addition, we investigate an adapted GNN architecture, which we refer to as \(\text{GNN}_{\text{xMLP}}\), where we concatenate the composition to the output of the mixture embedding instead of the input of the mixture embedding, cf. Section 2.3. Using the composition after the mixture embedding and applying a smooth activation function for the prediction MLP results in a smooth relation between the activity coefficient predictions and the compositions. It also has computational benefits since we avoid calculating gradients through the graph convolutional layers used for mapping molecular to mixture embeddings. Furthermore, we investigate the proposed data augmentation strategy (cf. Section 2.2) by adding pseudo gradient data at random compositions to the Gibbs-Duhem-informed training.
#### 3.2.1 Effect on predictive quality and thermodynamic consistency

Table 1 shows the results of Gibbs-Duhem-informed training for the GNN, MCM, and \(\text{GNN}_{\text{xMLP}}\), aggregated over the five comp-inter splits. We compare different activation functions in the MLP and different weighting factors \(\lambda\) of the Gibbs-Duhem loss (cf. Equ. 2), with \(\lambda=0\) representing training without Gibbs-Duhem loss, i.e., standard training on the prediction error from the previous Section 3.1. We also indicate whether data augmentation is applied.

First comparing the prediction accuracy and thermodynamic consistency of the activation functions without Gibbs-Duhem-informed training, i.e., \(\lambda=0\), in Table 1, we find comparable prediction accuracies for the GNN, \(\text{GNN}_{\text{xMLP}}\), and MCM, with softplus being slightly favorable for the MCM. For the thermodynamic consistency calculated by GD-RMSE, we can observe a consistent improvement from ReLU over ELU to softplus across all models for the test data. We thus find the choice of the activation function to highly influence the thermodynamic consistency, with ELU and softplus being favorable over ReLU.

Now, we consider the results of Gibbs-Duhem-informed neural networks using different weighting factors \(\lambda\) in Table 1. We observe that for all cases except the MCM and the \(\text{GNN}_{\text{xMLP}}\) with ReLU activation, Gibbs-Duhem-informed training increases the thermodynamic consistency. Higher \(\lambda\) factors generally lead to lower GD-RMSE. The prediction accuracy mostly stays at a similar level for the Gibbs-Duhem-informed neural networks when using \(\lambda\) factors of 0.1 and 1. For higher \(\lambda\) factors, i.e., 10 and 100, the prediction accuracy starts to decrease consistently, indicating an imbalanced loss with too much focus on thermodynamic consistency. Generally, we observe that \(\lambda=1\) yields a significant increase in thermodynamic consistency compared to training without Gibbs-Duhem loss, e.g., for the GNN with softplus the GD-RMSE\({}_{\text{test}}\) decreases from 0.140 to 0.061. The prediction accuracy stays at a similar level, sometimes even slightly improving: For the example of the GNN with softplus, we observe an \(\text{RMSE}_{\text{test}}\) of 0.089 vs. 0.083 with and without Gibbs-Duhem loss, respectively, thereby indicating a suitable balance between accuracy and consistency. Notably, for the cases of the MCM and the \(\text{GNN}_{\text{xMLP}}\) with ReLU activation and the Gibbs-Duhem loss, we observe high prediction errors. For these cases, we find that the loss does not improve after the first epochs of training and that the gradients are mostly constant for all compositions, being equal to 0 for high \(\lambda\). Interestingly, the GNN, which, in contrast to the MCM and \(\text{GNN}_{\text{xMLP}}\), employs graph convolutions after adding the compositions, does not suffer from these training instabilities. Future work should further investigate this phenomenon, e.g., by considering the dying ReLU problem and second-order vanishing gradients that can occur when using gradient information in the loss function, cf. (Masi et al., 2021). For ELU and softplus, Gibbs-Duhem-informed training results in higher thermodynamic consistency for all models. In fact, Gibbs-Duhem-informed neural networks with softplus lead to the most consistent improvement of thermodynamic consistency with high prediction accuracy across all models.
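For illustration, switching between these activation functions amounts to a one-line change in the prediction MLP; the following sketch uses placeholder layer sizes, not the tuned hyperparameters of Section 2.6:

```python
import torch.nn as nn

# Illustrative prediction-MLP factory; layer sizes are placeholders, not the tuned values.
def make_prediction_mlp(act: nn.Module, in_dim: int = 64, hidden: int = 64) -> nn.Sequential:
    return nn.Sequential(
        nn.Linear(in_dim, hidden), act,
        nn.Linear(hidden, hidden), act,
        nn.Linear(hidden, 1),          # outputs ln(gamma_i)
    )

mlp_relu     = make_prediction_mlp(nn.ReLU())      # piecewise linear: discontinuous gradients
mlp_elu      = make_prediction_mlp(nn.ELU())       # first-order continuous
mlp_softplus = make_prediction_mlp(nn.Softplus())  # smooth: smooth ln(gamma) vs. x1 curves
```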
Lastly, we analyze the effect of data augmentation by considering the GD-RMSE\({}_{\text{test}}^{\text{ext}}\), i.e., the Gibbs-Duhem consistency evaluated at compositions that are not used in training for any mixture at all, which indicates the generalization for thermodynamic consistency. Table 1 shows that without data augmentation the GD-RMSE\({}_{\text{test}}^{\text{ext}}\) is significantly higher than the GD-RMSE\({}_{\text{test}}\), i.e., the thermodynamic consistency on the external test set is significantly lower. We show the errors at specific compositions in the SI, where we find that the largest errors occur at low compositions, which is expected since the corresponding gradients naturally tend to be higher. The model thus learns thermodynamic consistency for compositions present in the training but does not transfer this consistency to other compositions. When using data augmentation, as shown for \(\lambda\) factors of 1 and 10, the GD-RMSE\({}_{\text{test}}^{\text{ext}}\) decreases to the same level as the GD-RMSE\({}_{\text{test}}\). Data augmentation additionally reduces the GD-RMSE\({}_{\text{test}}\) in most cases, thus further increasing thermodynamic consistency in general. Data augmentation without the requirement of further activity coefficient data (cf. Section 2.2) therefore effectively increases the generalization capabilities of Gibbs-Duhem-informed neural networks for thermodynamic consistency.

Overall, Gibbs-Duhem-informed neural networks can significantly increase the thermodynamic consistency of the predictions. Using the softplus activation function, a \(\lambda\) factor of 1, and employing data augmentation leads to the most consistent improvement of thermodynamic consistency with high prediction accuracy across all Gibbs-Duhem-informed neural network models. Hence, we focus on the models with these settings in the following. Comparing the three different models, we find similar prediction accuracies and consistencies for the GNN and the \(\text{GNN}_{\text{xMLP}}\), with the \(\text{GNN}_{\text{xMLP}}\) reaching the highest consistency. The MCM exhibits comparable consistency but a slightly lower prediction accuracy compared to the GNNs. Interestingly, the Gibbs-Duhem-informed MCM shows higher prediction accuracy compared to the standard MCM. The runtimes averaged over the five training runs of the comp-inter split are 231 minutes for the GNN, 108 minutes for the MCM, and 177 minutes for the \(\text{GNN}_{\text{xMLP}}\). Hence, we find the \(\text{GNN}_{\text{xMLP}}\) to be computationally more efficient than the GNN. The MCM, which has the simplest architecture without any graph convolutions, shows the highest computational efficiency.

Figure 4: Activity coefficient predictions and their corresponding gradients with respect to the composition and the associated Gibbs-Duhem deviations for exemplary mixtures by (a) \(\mathrm{GNN_{xMLP}}\) ensemble and (b) MCM ensemble trained with Gibbs-Duhem-informed (GDI) loss function and following hyperparameters: MLP activation function: softplus, weighting factor \(\lambda=1\), data augmentation: true. Results are averaged from the five model runs of the comp-inter split.

In Figure 4, we further show the predictions for the same mixtures as in Figure 3 for the \(\text{GNN}_{\text{xMLP}}\), which exhibits the highest thermodynamic consistency, and the MCM; further results for the GNN and the individual model runs can be found in the SI.
We now observe smooth predictions and gradients of \(\ln(\gamma_{i})\) induced by the softplus activation, which results in significantly reduced GD-deviations from zero in comparison to the standard training shown in Figure 3. We also find notably fewer fluctuations and smaller changes of the gradients, e.g., for 2-thiabutane and butyleneoxide the prediction curves are visibly more consistent. For some mixtures, slight inconsistencies are still noticeable, e.g., for the MCM predicting ethanol-water at high \(x_{1}\) compositions. Regarding accuracy, the match of the predictions and the data remains at a very high level for the presented mixtures. We also find prediction improvements for some mixtures, e.g., the \(\text{GNN}_{\text{xMLP}}\) model now predicts \(\ln(\gamma_{2})\) for the ethanol-water mixture at high accuracy. The exemplary mixtures thus highlight the overall highly increased thermodynamic consistency of the activity coefficient predictions with high accuracy by Gibbs-Duhem-informed neural networks.

#### 3.2.2 Effect on vapor-liquid equilibrium predictions

We further study the effect of Gibbs-Duhem-informed neural networks on estimated vapor-liquid equilibria (VLE). To calculate VLEs, we use modified Raoult's law, with vapor pressures estimated by using Antoine parameters obtained from the National Institute of Standards and Technology (NIST) Chemistry webbook (Linstrom and Mallard, 2001), similar to Qin et al. (Qin et al., 2023; Contreras, 2019). Figure 5 shows the isothermal VLEs at 298 K for the exemplary mixtures investigated in the two previous sections. Specifically, the VLEs for the GNN (a) and MCM (c) trained with ReLU activation and standard loss (cf. Section 3.1) and the Gibbs-Duhem-informed (GDI-) \(\text{GNN}_{\text{xMLP}}\) (b) and MCM (d) with softplus activation, \(\lambda=1\), and data augmentation (cf. Section 3.2) are illustrated.

For the models without Gibbs-Duhem loss, we observe abrupt changes in the slopes of the bubble and dew point curves caused by the non-smooth gradients of the \(\ln(\gamma_{i})\) predictions, cf. Section 3.1. For both the GNN and MCM, these inconsistent slope changes are particularly visible for 2-thiabutane and butyleneoxide and for chloroform and acetone, and can also be observed, for example, for \(x_{1}\) compositions between 0.1 and 0.4 for ethanol-benzene. The thermodynamic inconsistencies in the activity coefficient predictions are therefore reflected in the VLEs. Comparing the \(\text{GDI-GNN}_{\text{xMLP}}\) and \(\text{GDI-MCM}\) to the standard GNN and MCM, we observe that the consistency of the bubble and dew point curves is vastly improved; in fact, we do not find visible inconsistencies. Gibbs-Duhem-informed ML models therefore also show notably increased consistency in VLEs.

Our results so far show that Gibbs-Duhem-informed training of GNNs and MCMs with smooth activation functions such as softplus greatly increases the thermodynamic consistency of activity coefficient predictions compared to standard training on the basis of prediction loss only, while prediction accuracy remains at a similar, very high level. The higher consistency is also reflected in predicted VLEs. Next, we investigate whether the increase of thermodynamic consistency also transfers to higher generalization capability of Gibbs-Duhem-informed neural networks.
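For reference, the VLE construction used in this section can be sketched in a few lines of numpy; the Antoine convention \(\log_{10}p^{\text{sat}}=A-B/(T+C)\) follows NIST, and parameter values and units must be supplied by the user:

```python
import numpy as np

def antoine_psat(T, A, B, C):
    # Antoine equation (NIST convention): log10(p_sat) = A - B / (T + C)
    return 10.0 ** (A - B / (T + C))

def isothermal_vle(x1, ln_g1, ln_g2, psat1, psat2):
    # Modified Raoult's law for a binary mixture at fixed temperature.
    # x1, ln_g1, ln_g2: arrays over the composition grid (e.g., model predictions).
    g1, g2 = np.exp(ln_g1), np.exp(ln_g2)
    p = x1 * g1 * psat1 + (1.0 - x1) * g2 * psat2   # bubble-point (total) pressure
    y1 = x1 * g1 * psat1 / p                        # vapor-phase mole fraction of component 1
    return p, y1
```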
#### 3.2.3 Generalization to unseen compositions

We first test the generalization to unseen compositions, representing an extreme case of predicting the binary activity coefficient at compositions that are not present for any mixture in the training data. Specifically, we use the comp-extra split (cf. Section 2.4), i.e., in each run, the data for the compositions of a respective set of \(x\in\{\{0.0,1.0\},\{0.1,0.9\},\{0.3,0.7\},\{0.5\}\}\) is excluded from training and used for testing. The results for the respective runs of the ML models without and with Gibbs-Duhem loss are shown in Table 2.

The thermodynamic consistency, evaluated by the \(\text{GD-RMSE}_{\text{test}}\), is generally higher for all models trained with Gibbs-Duhem loss. Particularly, if data augmentation is used, the consistency is significantly increased, often by about one order of magnitude in the GD-RMSE. Interestingly, we find for low and high compositions, i.e., excluding \(x_{i}\in\{0.1,0.9\}\) and \(x_{i}\in\{0,1\}\), that models trained with Gibbs-Duhem loss but without data augmentation sometimes do not result in higher consistency, which indicates that the model is not able to transfer consistency learned from other compositions, hence overfits. For these cases, data augmentation is particularly effective.

Figure 5: Isothermal vapor-liquid-equilibrium plots at 298 K based on activity coefficient predictions by (a) GNN and (c) MCM trained with standard loss based on the prediction error and MLP activation function: ReLU; (b) GDI-GNN\({}_{\text{xMLP}}\) ensemble and (d) GDI-MCM ensemble trained with Gibbs-Duhem-informed (GDI) loss function and following hyperparameters: MLP activation function: softplus, weighting factor \(\lambda=1\), data augmentation: true. Results are averaged from the five model runs of the comp-inter split.

For the prediction accuracy, we first observe higher RMSEs for more extreme compositions, which is expected, cf. Section 3.1. Notably, for all runs, the Gibbs-Duhem-informed models achieve a higher accuracy than models trained only on the prediction loss. We find the strongest increase in accuracy for the case of excluding \(x_{i}\in\{0.1,0.9\}\), e.g., the GNN with ReLU activation and without Gibbs-Duhem loss has an RMSE of 0.302, hence failing to predict the activity coefficients with high accuracy, whereas the Gibbs-Duhem-informed GNN with softplus and data augmentation shows an RMSE of 0.075, corresponding to an accuracy increase by a factor of 4. For these compositions, the gradients of the activity coefficient with respect to the compositions tend to be relatively high, and thus accounting for these insights during training seems to be very valuable for prediction. For the boundary conditions, i.e., \(x_{i}\in\{0,1\}\), the accuracy increase is rather minor considering that the overall RMSE of approximately 0.3 is at a high level. Since the Gibbs-Duhem differential constraint is not sensitive to the gradient at \(x_{i}\to 0\), the regularization has less effect on the network predictions at infinite dilution. Predicting the infinite dilution activity coefficient thus benefits less from Gibbs-Duhem information and remains a challenging task. Providing further thermodynamic insights for infinite dilution activity coefficients would thus be interesting for future work. Overall, we find Gibbs-Duhem-informed neural networks to increase generalization capabilities for unseen compositions.
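For concreteness, the comp-extra hold-out evaluated above can be sketched as follows; the data layout (a list of per-sample dicts with an "x1" composition entry) is our own illustration:

```python
# Sketch of the four comp-extra splits: each excludes one composition set from training entirely.
COMP_EXTRA_SETS = [{0.0, 1.0}, {0.1, 0.9}, {0.3, 0.7}, {0.5}]

def comp_extra_split(samples, held_out):
    # Compositions come from the fixed grid {0, 0.1, 0.3, 0.5, 0.7, 0.9, 1},
    # so exact float comparison against the set literals is safe here.
    train = [s for s in samples if s["x1"] not in held_out]
    test = [s for s in samples if s["x1"] in held_out]
    return train, test
```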
#### 3.2.4 Generalization to unseen mixtures

For computer-aided molecular and process design applications, predicting the activity coefficients of new mixtures, i.e., mixtures for which no data is readily available, is highly relevant. We thus systematically investigate the generalization to unseen mixtures by Gibbs-Duhem-informed neural networks, beyond the exemplary mixtures from Figures 3 and 4. Specifically, we now consider the mixture-extra split (cf. Section 2.4), where we exclude all data samples for a set of mixtures from the training set and use them for testing. Table 3 shows the results for different ML models trained without and with Gibbs-Duhem loss, aggregated from the five mixture-extra splits.

\begin{table}
\begin{tabular}{l l l|c c c|c c c|c c c}
\hline \hline
\multicolumn{3}{c|}{model setup} & \multicolumn{3}{c|}{GNN} & \multicolumn{3}{c|}{MCM} & \multicolumn{3}{c}{GNN\({}_{\text{xMLP}}\)} \\
MLP act. & \(\lambda\) & data augm. & RMSE\({}_{\text{test}}\) & GD-RMSE\({}_{\text{test}}\) & GD-RMSE\({}_{\text{test}}^{\text{ext}}\) & RMSE\({}_{\text{test}}\) & GD-RMSE\({}_{\text{test}}\) & GD-RMSE\({}_{\text{test}}^{\text{ext}}\) & RMSE\({}_{\text{test}}\) & GD-RMSE\({}_{\text{test}}\) & GD-RMSE\({}_{\text{test}}^{\text{ext}}\) \\
\hline \hline
ReLU & 0.0 & False & 0.114 & 0.206 & 0.311 & 0.148 & 0.249 & 0.274 & 0.117 & 0.227 & 0.277 \\
softplus & 0.0 & False & 0.114 & 0.214 & 0.210 & 0.125 & 0.140 & 0.142 & 0.117 & 0.146 & 0.125 \\
softplus & 1.0 & False & 0.108 & 0.036 & 0.197 & 0.123 & 0.040 & 0.095 & 0.114 & 0.031 & 0.073 \\
softplus & 1.0 & True & 0.105 & 0.040 & 0.038 & 0.120 & 0.039 & 0.036 & 0.113 & 0.035 & 0.030 \\
\hline \hline
\end{tabular}
\end{table}
Table 3: Prediction accuracies and thermodynamic consistencies measured by root mean squared error (RMSE) for the mixture-extra split, i.e., generalization to unseen mixtures, by the GNN, MCM, and \(\text{GNN}_{\text{xMLP}}\). The models are trained with different hyperparameters: MLP activation function, Gibbs-Duhem loss weighting factor \(\lambda\), and data augmentation.

We observe that Gibbs-Duhem-informed neural networks using data augmentation yield notably higher thermodynamic consistency for all models. The prediction accuracy remains at a mostly similar, in some cases slightly higher, level. In comparison to the comp-inter split (cf. Table 1), the prediction accuracy decreases from about 0.08 RMSE to 0.11 RMSE, which is expected, since predicting activity coefficients for new mixtures is more difficult than predicting the values of a known mixture at a different composition. Overall, the prediction quality remains at a very high level. Therefore, Gibbs-Duhem-informed neural networks also provide high accuracy and greatly increased thermodynamic consistency when predicting activity coefficients for new mixtures.

The generalization studies emphasize that Gibbs-Duhem-informed neural networks enable high prediction accuracies with significantly increased thermodynamic consistency, cf. Section 3.2. Additionally, generalization capabilities for unseen compositions can be enhanced. We therefore demonstrate that using thermodynamic insights for training neural networks for activity coefficient prediction is highly beneficial. Including further thermodynamic relations, next to the Gibbs-Duhem equation, is thus very promising for future work.

## 4 Conclusion

We present Gibbs-Duhem-informed neural networks that learn to predict composition-dependent activity coefficients of binary mixtures with Gibbs-Duhem consistency.
Recently developed hybrid ML models have focused on enforcing thermodynamic consistency by embedding thermodynamic models in ML models. We herein propose an alternative approach: utilizing constraints of thermodynamic consistency as regularization during training. We present results for the Gibbs-Duhem differential constraint, which has particular significance. We also present a data augmentation strategy in which data points are added to the training set for evaluation of the Gibbs-Duhem equation at unmeasured compositions, hence without the need to collect additional activity coefficient data.

Gibbs-Duhem-informed neural networks strongly increase the thermodynamic consistency of activity coefficient predictions compared to models trained on prediction loss only. Our results show that GNNs and MCMs trained with a standard loss, i.e., on the prediction error only, exhibit notable thermodynamic inconsistencies. For instance, \(\gamma_{1}\) and \(\gamma_{2}\) both increase at some compositions, or the derivatives of the activity coefficients with respect to the composition exhibit discontinuities caused by ReLU activation. By using Gibbs-Duhem loss during training with the proposed data augmentation strategy and employing a smooth activation function, herein softplus, the thermodynamic consistency effectively increases for both model types at the same level of prediction accuracy; Gibbs-Duhem-informed training is therefore highly beneficial. The higher consistency is also reflected in predicted vapor-liquid equilibria. Furthermore, we test the generalization capability by respectively excluding specific mixtures and compositions from training and using them for testing. We find that Gibbs-Duhem-informed GNNs and MCMs allow for generalization to new mixtures with high thermodynamic consistency and a similar level of prediction accuracy as standard GNNs and MCMs. They further enable generalization to new compositions with higher consistency, additionally enhancing the prediction accuracy.

Future work could extend Gibbs-Duhem-informed neural networks by including other relations for thermodynamic consistency, e.g., the Gibbs-Helmholtz relation for the temperature-dependency of the activity coefficient, cf. (Damay et al., 2021; Sanchez Medina et al., 2023). Since our investigations are based on activity coefficients obtained from COSMO-RS by (Qin et al., 2023), it would also be interesting to fine-tune our models on experimental databases, e.g., Dortmund Data Bank (Dortmund Data Bank, 2023). Further ML model types such as transformers (Winter et al., 2022) or MCMs based on Bayesian inference (Jirasek et al., 2020) could also be extended by Gibbs-Duhem insights using our approach. Furthermore, additional thermodynamic constraints could be added to the loss function for regularization, which might also enable transferring the concept of Gibbs-Duhem-informed neural networks to predict further thermophysical properties with increased consistency.

## Acknowledgments

This project was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - 466417970 - within the Priority Programme "SPP 2331: Machine Learning in Chemical Engineering". This work was also performed as part of the Helmholtz School for Data Science in Life, Earth and Energy (HDS-LEE). K.C.F. acknowledges funding from BASF SE and the Cambridge-Trust Marshall Scholarship. Simulations were performed with computing resources granted by RWTH Aachen University under project "rwth1232".
We further gratefully acknowledge Victor Zavala's research group at the University of Wisconsin-Madison for making the SolvGNN implementation and the COSMO-RS activity coefficient data openly accessible.

## Author contributions

J.G.R. developed the concept of Gibbs-Duhem-informed neural networks, implemented them, set up and conducted the computational experiments including the formal analysis and visualization, and wrote the original draft of the manuscript. K.C.F. supported the development of the computational experiments and the analysis of the results, provided additional COSMO-RS calculations, and edited the manuscript. A.A.L. and A.M. acquired funding, provided supervision, and edited the manuscript.
2309.10370
Geometric structure of shallow neural networks and constructive ${\mathcal L}^2$ cost minimization
In this paper, we approach the problem of cost (loss) minimization in underparametrized shallow neural networks through the explicit construction of upper bounds, without any use of gradient descent. A key focus is on elucidating the geometric structure of approximate and precise minimizers. We consider shallow neural networks with one hidden layer, a ReLU activation function, an ${\mathcal L}^2$ Schatten class (or Hilbert-Schmidt) cost function, input space ${\mathbb R}^M$, output space ${\mathbb R}^Q$ with $Q\leq M$, and training input sample size $N>QM$ that can be arbitrarily large. We prove an upper bound on the minimum of the cost function of order $O(\delta_P)$ where $\delta_P$ measures the signal to noise ratio of training inputs. In the special case $M=Q$, we explicitly determine an exact degenerate local minimum of the cost function, and show that the sharp value differs from the upper bound obtained for $Q\leq M$ by a relative error $O(\delta_P^2)$. The proof of the upper bound yields a constructively trained network; we show that it metrizes a particular $Q$-dimensional subspace in the input space ${\mathbb R}^M$. We comment on the characterization of the global minimum of the cost function in the given context.
Thomas Chen, Patricia Muñoz Ewald
2023-09-19T07:12:41Z
http://arxiv.org/abs/2309.10370v2
Geometric structure of shallow neural networks and constructive \(\mathcal{L}^{2}\) cost minimization

###### Abstract

In this paper, we provide a geometric interpretation of the structure of shallow neural networks, characterized by one hidden layer, a ramp activation function, an \(\mathcal{L}^{2}\) Schatten class (or Hilbert-Schmidt) cost function, input space \(\mathbb{R}^{M}\), output space \(\mathbb{R}^{Q}\) with \(Q\leq M\), and training input sample size \(N>QM\). We prove an upper bound on the minimum of the cost function of order \(O(\delta_{P})\) where \(\delta_{P}\) measures the signal to noise ratio of training inputs. We obtain an approximate optimizer using projections adapted to the averages \(\overline{x_{0,j}}\) of training input vectors belonging to the same output vector \(y_{j}\), \(j=1,\ldots,Q\). In the special case \(M=Q\), we explicitly determine an exact degenerate local minimum of the cost function; the sharp value differs from the upper bound obtained for \(Q\leq M\) by a relative error \(O(\delta_{P}^{2})\). The proof of the upper bound yields a constructively trained network; we show that it metrizes the \(Q\)-dimensional subspace in the input space \(\mathbb{R}^{M}\) spanned by \(\overline{x_{0,j}}\), \(j=1,\ldots,Q\). We comment on the characterization of the global minimum of the cost function in the given context.

## 1. Introduction

The analysis of neural networks, especially Deep Learning (DL) networks, occupies a central role across a remarkably broad range of research disciplines, and the technological impact of DL algorithms in recent years has been rather monumental. However, despite its successes, the fundamental conceptual reasons underlying their functioning are at present still insufficiently understood, and remain the subject of intense investigation [2, 5]. The current paper is the first in a series of works in which we investigate the geometric structure of DL networks. Here, as a starting point, we analyze shallow neural networks, with a single hidden layer, and address the minimization of the cost function directly, without invoking the gradient descent flow. First, we derive an upper bound on the minimum of the cost function. As a by-product of our proof, we obtain a constructively trained shallow network which allows for a transparent geometric interpretation. In the special case of the dimensions of the input and output spaces being equal, we construct an exact degenerate local minimum of the cost function, and show that it differs from the upper bound by a subleading term. For some thematically related background, see for instance [1, 6, 5, 7, 8, 9] and the references therein. We will address an extension of our analysis to multilayer DL networks in a separate work, [3], which will invoke various results and arguments developed here. In this paper, we will focus on the geometric structure of the system under discussion using standard mathematical terminology, and will make minimal use of specialized nomenclature specific to the neural network literature.

In this introductory section, we summarize the main results of this paper. We consider a shallow network for which \(y_{j}\in\mathbb{R}^{Q}\) denotes the \(j\)-th output vector, and define the output matrix

\[Y:=[y_{1},\dots,y_{Q}]\in\mathbb{R}^{Q\times Q}\,, \tag{1.1}\]

which we assume to be invertible.
We assume the input space to be given by \(\mathbb{R}^{M}\) with \(M\geq Q\), and for \(j=1,\dots,Q\), we let

\[X_{0,j}:=[x_{0,j,1}\cdots x_{0,j,i}\cdots x_{0,j,N_{j}}]\in\mathbb{R}^{M\times N_{j}} \tag{1.2}\]

denote the matrix of training input vectors \(x_{0,j,i}\), \(i=1,\dots,N_{j}\), which belong to the output \(y_{j}\). Letting \(N:=\sum_{j=1}^{Q}N_{j}\), the full matrix of training inputs is given by

\[X_{0}:=[X_{0,1}\cdots X_{0,j}\cdots X_{0,Q}]\in\mathbb{R}^{M\times N}\,. \tag{1.3}\]

Typically, \(Q\leq M\leq MQ<N\). We define the average of all training input vectors belonging to the output \(y_{j}\),

\[\overline{x_{0,j}}:=\frac{1}{N_{j}}\sum_{i=1}^{N_{j}}x_{0,j,i}\in\mathbb{R}^{M} \tag{1.4}\]

for \(j=1,\dots,Q\), and

\[\Delta x_{0,j,i}:=x_{0,j,i}-\overline{x_{0,j}}\,. \tag{1.5}\]

Moreover, we let

\[\delta:=\sup_{i,j}|\Delta x_{0,j,i}|\,. \tag{1.6}\]

We define

\[\Delta X_{0,j}:=[\Delta x_{0,j,1}\cdots\Delta x_{0,j,i}\cdots\Delta x_{0,j,N_{j}}]\in\mathbb{R}^{M\times N_{j}}\,, \tag{1.7}\]

and

\[\Delta X_{0}:=[\Delta X_{0,1}\cdots\Delta X_{0,j}\cdots\Delta X_{0,Q}]\in\mathbb{R}^{M\times N}\,. \tag{1.8}\]

Moreover, we define

\[\overline{X_{0}^{red}}:=[\overline{x_{0,1}}\cdots\overline{x_{0,Q}}]\in\mathbb{R}^{M\times Q} \tag{1.9}\]

where the superscript indicates that this is the column-wise reduction of the matrix \(\overline{X_{0}}\) in (3.4), below. We may then define the orthoprojector \(P=P^{2}=P^{T}\) onto the span of the family \(\{\overline{x_{0,j}}\}_{j=1}^{Q}\), which we assume to be linearly independent. We let \(P^{\perp}:=\mathbf{1}-P\). We define a shallow network with hidden layer

\[x\mapsto\sigma(W_{1}x+b_{1}) \tag{1.10}\]

where \(W_{1}\in\mathbb{R}^{M\times M}\) is a weight matrix, and \(b_{1}\in\mathbb{R}^{M}\) is a shift (or bias). The activation function \(\sigma\) is assumed to act component-wise as a ramp function \((a)_{+}:=\max\{0,a\}\). We note that the requirement of \(\sigma\) acting component-wise singles out the given coordinate system in \(\mathbb{R}^{M}\) as distinct. We define the output layer with the map \(\mathbb{R}^{M}\to\mathbb{R}^{Q}\) given by

\[x^{\prime}\mapsto W_{2}x^{\prime}+b_{2} \tag{1.11}\]

where \(W_{2}\in\mathbb{R}^{Q\times M}\) and \(b_{2}\in\mathbb{R}^{Q}\). Here, we do not include an activation function. Then, the \(\mathcal{L}^{2}\) cost function \(\mathcal{C}[W_{j},b_{j}]\) is defined as

\[\mathcal{C}[W_{j},b_{j}]:=\sqrt{\frac{1}{N}\sum_{j=1}^{Q}\sum_{i=1}^{N_{j}}|W_{2}\sigma(W_{1}x_{0,j,i}+b_{1})+b_{2}-y_{j}|_{\mathbb{R}^{Q}}^{2}} \tag{1.12}\]

where \(|\cdot|_{\mathbb{R}^{Q}}\) denotes the Euclidean norm on \(\mathbb{R}^{Q}\).

In Theorem 3.1, we prove an upper bound on the minimum of the cost function with the following construction. Let \(R\in O(M)\) denote an orthogonal matrix that diagonalizes \(P\). That is, the orthoprojectors

\[P_{R}:=RPR^{T}\quad\text{and}\quad P_{R}^{\perp}:=RP^{\perp}R^{T} \tag{1.13}\]

are diagonal in the given coordinate system; this is important for compatibility with the fact that \(\sigma\) acts component-wise. Namely, \(R\) rotates the input data in such a way that \(\text{range}(P)\) is made to align with the coordinate axes; this decouples the action of \(\sigma\) on the rotated \(\text{range}(P)\) from its action on the rotated \(\text{range}(P^{\perp})\). That is, \(\sigma(v)=\sigma(P_{R}v)+\sigma(P_{R}^{\perp}v)\) for all \(v\in\mathbb{R}^{M}\).
Letting

\[u_{M}:=(1,1,\cdots,1)^{T}\in\mathbb{R}^{M}\,, \tag{1.14}\]

we may choose \(\beta_{1}\geq 0\) sufficiently large (\(\beta_{1}\geq 2\max_{j,i}|x_{0,j,i}|\) is sufficient), so that the projected, translated, and rotated training input vectors \(RPx_{0,j,i}+\beta_{1}u_{M}\) are component-wise non-negative for all \(i=1,\ldots,N_{j}\), and \(j=1,\ldots,Q\). Then, we construct an upper bound on the cost function by use of the following weights and shifts. We choose

\[W_{1}^{*}=R \tag{1.15}\]

and \(b_{1}^{*}=P_{R}b_{1}^{*}+P_{R}^{\perp}b_{1}^{*}\) where

\[P_{R}b_{1}^{*}=\beta_{1}P_{R}u_{M}\,,\quad P_{R}^{\perp}b_{1}^{*}=-\delta P_{R}^{\perp}u_{M}\,, \tag{1.16}\]

with \(\beta_{1}\geq 0\) large enough to ensure component-wise non-negativity of \(RPX_{0}+P_{R}B_{1}^{*}\), where \(B_{1}^{*}=b_{1}^{*}u_{N}^{T}\), so that

\[\sigma(RPX_{0}+P_{R}B_{1}^{*})=RPX_{0}+P_{R}B_{1}^{*}\,. \tag{1.17}\]

On the other hand, the \(M-Q\) rows of \(RP^{\perp}X_{0}+P_{R}^{\perp}B_{1}^{*}\) are eliminated by way of

\[\sigma(RP^{\perp}X_{0}+P_{R}^{\perp}B_{1}^{*})=0\,. \tag{1.18}\]

We thus obtain a reduction of the dimension of the input space from \(M\) to \(Q\). Passing to the output layer, we require \(W_{2}^{*}\) to solve

\[W_{2}^{*}R\overline{X_{0}^{red}}=Y\,, \tag{1.19}\]

which yields

\[W_{2}^{*}=Y\,\text{Pen}[\overline{X_{0}^{red}}]\,PR^{T}\,. \tag{1.20}\]

Here,

\[\text{Pen}[\overline{X_{0}^{red}}]:=((\overline{X_{0}^{red}})^{T}\overline{X_{0}^{red}})^{-1}(\overline{X_{0}^{red}})^{T} \tag{1.21}\]

denotes the Penrose inverse of \(\overline{X_{0}^{red}}\), which satisfies

\[\text{Pen}[\overline{X_{0}^{red}}]\,\overline{X_{0}^{red}}=\mathbf{1}_{Q\times Q}\,,\quad\overline{X_{0}^{red}}\,\text{Pen}[\overline{X_{0}^{red}}]=P\,. \tag{1.22}\]

Finally, we find that

\[b_{2}^{*}=-W_{2}^{*}P_{R}b_{1}^{*}\,, \tag{1.23}\]

that is, it reverts the translation by \(P_{R}b_{1}^{*}\) in the previous layer.

A pictorial way of thinking about this construction is that \(W_{1}^{*}=R\) orients the training input data with respect to the given coordinate system, in order to align it with the component-wise action of \(\sigma\). This allows for a maximal rank reduction via \(\sigma\), whereby the maximal possible amount of insignificant information is eliminated: \(P_{R}b_{1}^{*}\) pulls the significant information (in the range of \(P_{R}\)) out of the kernel of \(\sigma\), while \(P_{R}^{\perp}b_{1}^{*}\) pushes the insignificant information (in the range of \(P_{R}^{\perp}\)) into the kernel of \(\sigma\), whereby the latter is eliminated. Subsequently, \(b_{2}^{*}\) places the significant information back into its original position, and \(W_{2}^{*}\) matches it to the output matrix \(Y\) in the sense of least squares, with respect to the \(\mathcal{L}^{2}\)-norm. We refer to the shallow network defined with these specific weights and shifts \(W_{i}^{*},b_{i}^{*}\), \(i=1,2\), as the _constructively trained network_. The parameter

\[\delta_{P}:=\sup_{j,i}\left|\mathrm{Pen}[\overline{X_{0}^{red}}]P\Delta x_{0,j,i}\right|\,, \tag{1.24}\]

measures the relative size between \(\Delta X_{0}\) and \(\overline{x_{0,j}}\), \(j=1,\ldots,Q\); in particular, \(\left|\mathrm{Pen}[\overline{X_{0}^{red}}]P\Delta x_{0,j,i}\right|\) scales like \(\frac{|\Delta x|}{|x|}\), and is invariant under the scaling \(X_{0}\to\lambda X_{0}\).
We then obtain \[\min_{W_{j},b_{j}}\mathcal{C}[W_{j},b_{j}] \leq \mathcal{C}[W_{i}^{*},b_{i}^{*}] \tag{1.25}\] \[\leq \frac{1}{\sqrt{N}}\|Y\mathrm{Pen}[\overline{X_{0}^{red}}]P\Delta X_{0}\|_{\mathcal{L}^{2}}\] \[\leq \|Y\|_{op}\;\delta_{P}\;\;.\] In order to match an arbitrary non-training input \(x\in\mathbb{R}^{M}\) with one of the output vectors \(y_{j}\), \(j\in\{1,\ldots,Q\}\), we let for \(x\in\mathbb{R}^{M}\), \[\mathcal{C}_{j}[x]:=|W_{2}^{*}\sigma(W_{1}^{*}x+b_{1}^{*})+b_{2}^{*}-y_{j}| \tag{1.26}\] with the weights and shifts of the constructively trained network. Then, \[j^{*}=\mathrm{argmin}_{j}\mathcal{C}_{j}[x] \tag{1.27}\] implies that \(x\) matches the \(j^{*}\)-th output \(y_{j^{*}}\). We define the metric \[d_{\widetilde{W}_{2}}(x,y):=|\widetilde{W}_{2}P(x-y)|_{\mathbb{R}^{Q}}\quad\text{for }x,y\in\mathrm{range}(P)\subset\mathbb{R}^{M} \tag{1.28}\] where \(\widetilde{W}_{2}:=W_{2}^{*}R\), on the \(Q\)-dimensional linear subspace \(\mathrm{range}(P)\subset\mathbb{R}^{M}\). In Theorem 3.3, we prove that \[\mathcal{C}_{j}[x]=d_{\widetilde{W}_{2}}(Px,\overline{x_{0,j}})\,. \tag{1.29}\] Therefore, matching an input \(x\in\mathbb{R}^{M}\) with an output \(y_{j^{*}}\) via the constructively trained network is equivalent to solving the metric minimization problem \[j^{*}=\mathrm{argmin}_{j\in\{1,\ldots,Q\}}(d_{\widetilde{W}_{2}}(Px,\overline{x_{0,j}})) \tag{1.30}\] on the range of \(P\). In other words, \(x\in\mathbb{R}^{M}\) is matched with the output \(y_{j^{*}}\in\mathbb{R}^{Q}\) by determining that \(Px\) is closest to \(\overline{x_{0,j^{*}}}\) among all \(\{\overline{x_{0,j}}\in\mathrm{range}(P)\subset\mathbb{R}^{M}\mid j=1,\ldots,Q\}\) in the \(d_{\widetilde{W}_{2}}\) metric. In Theorem 3.2, we focus on the special case \(M=Q\), and present the explicit construction of a degenerate local minimum of the cost function (more precisely, of a weighted version of the cost function), and obtain an improvement on the upper bound (1.25) by a factor \(1-C_{0}\delta_{P}^{2}\) for some constant \(C_{0}\geq 0\). Here, degeneracy means that for all weights and shifts \(W_{1},b_{1}\) satisfying (1.17), the corresponding minimum of the cost function attains the same value. In particular, this implies that in this range of weights and shifts, (1.25) differs from the sharp value with a relative error of order \(O(\delta_{P}^{2})\), when \(Q=M\). In Theorem 3.5, we prove a result for \(Q=M\) closely related to Theorem 3.2, but for the case where (1.17) is not satisfied. We expect our results for the case \(Q=M\) to be extensible to \(Q<M\), but leave a detailed analysis for future work. This paper is organized as follows. In Section 2, we give a detailed introduction of the mathematical model describing the shallow network. In Section 3, we present the main results, namely Theorem 3.1, which we prove in Section 4, Theorem 3.2, which we prove in Section 5, Theorem 3.3, which we prove in Section 7, and Theorem 3.5. In Section 6, we provide a detailed description of the constructively trained network.

## 2. Definition of the mathematical model

In this section, we introduce a shallow neural network, consisting of the input layer, one hidden layer, the output layer, and an \(\mathcal{L}^{2}\) cost function. Let \(Q\in\mathbb{N}\) denote the number of distinct output values. We define the output matrix \[Y:=[y_{1},\ldots,y_{Q}]\in\mathbb{R}^{Q\times Q} \tag{2.1}\] where \(y_{j}\in\mathbb{R}^{Q}\) is the \(j\)-th output vector. We assume that the family \(\{y_{j}\}\) is linearly independent, so that \(Y\) is invertible.
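Before turning to the formal definitions, we note that the matching rule (1.26)–(1.30) amounts to a nearest-mean classification in the metric (1.28). The sketch below, again with synthetic data and names of our own choosing, checks the identity (1.29) numerically for an input of size comparable to the training data, which is what keeps \(\sigma\) acting as the identity on the rotated \(P\)-component.

```python
import numpy as np

rng = np.random.default_rng(0)
M, Q, n = 8, 3, 25
means = rng.normal(size=(M, Q))
blocks = [means[:, [j]] + 0.05 * rng.normal(size=(M, n)) for j in range(Q)]
X0 = np.hstack(blocks)
Y = np.eye(Q)                                        # one-hot outputs (our choice)

Xbar_red = np.column_stack([B.mean(axis=1) for B in blocks])
Delta_X0 = np.hstack([B - B.mean(axis=1, keepdims=True) for B in blocks])
delta = np.max(np.linalg.norm(Delta_X0, axis=0))
Pen = np.linalg.inv(Xbar_red.T @ Xbar_red) @ Xbar_red.T
P = Xbar_red @ Pen
w, V = np.linalg.eigh(P); R = V.T
P_R = np.diag(np.round(w)); P_R_perp = np.eye(M) - P_R
sigma = lambda A: np.maximum(A, 0.0)
beta1 = 2 * np.max(np.linalg.norm(X0, axis=0))
W1, b1 = R, beta1 * (P_R @ np.ones(M)) - delta * (P_R_perp @ np.ones(M))
W2 = Y @ Pen @ R.T
b2 = -W2 @ (beta1 * (P_R @ np.ones(M)))

def C_j(x, j):                                       # (1.26)
    return np.linalg.norm(W2 @ sigma(W1 @ x + b1) + b2 - Y[:, j])

x = means[:, 0] + 0.1 * rng.normal(size=M)           # a fresh input near class 0
j_star = min(range(Q), key=lambda j: C_j(x, j))      # (1.27); expected: 0

W2_tilde = W2 @ R                                    # \tilde{W}_2 = W_2^* R, cf. (1.28)
for j in range(Q):                                   # the identity (1.29)
    assert np.isclose(C_j(x, j), np.linalg.norm(W2_tilde @ P @ (x - Xbar_red[:, j])))
print(j_star)
```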
We let \(\mathbb{R}^{M}\) denote the input space with coordinate system defined by the orthonormal basis vectors \(\{e_{\ell}=(0,\ldots,0,1,0,\ldots,0)^{T}\in\mathbb{R}^{M}\,|\,\ell=1,\ldots,M\}\). For \(j\in\{1,\ldots,Q\}\), and \(N_{j}\in\mathbb{N}\), \[x_{0,j,i}\in\mathbb{R}^{M}\ \,\ \ i\in\{1,\ldots,N_{j}\} \tag{2.2}\] denotes the \(i\)-th training input vector corresponding to the \(j\)-th output vector \(y_{j}\). We define the matrix of all training inputs belonging to \(y_{j}\) by \[X_{0,j}:=[x_{0,j,1}\cdots x_{0,j,i}\cdots x_{0,j,N_{j}}]\,. \tag{2.3}\] Correspondence of a training input vector to the same output \(y_{j}\) defines an equivalence relation, that is, for each \(j\in\{1,\ldots,Q\}\) we have \(x_{0,j,i}\sim x_{0,j,i^{\prime}}\) for any \(i,i^{\prime}\in\{1,\ldots,N_{j}\}\). Accordingly, \(X_{0,j}\) labels the equivalence class of all inputs belonging to the \(j\)-th output \(y_{j}\). Then, we define the matrix of training inputs \[X_{0}:=[X_{0,1}\cdots X_{0,j}\cdots X_{0,Q}]\in\mathbb{R}^{M\times N} \tag{2.4}\] where \(N:=\sum_{j=1}^{Q}N_{j}\). We introduce weight matrices \[W_{1}\in\mathbb{R}^{M\times M}\ \,\ \ W_{2}\in\mathbb{R}^{Q\times M} \tag{2.5}\] and shift matrices \[B_{1} := b_{1}u_{N}^{T}=[b_{1}\cdots b_{1}]\in\mathbb{R}^{M\times N}\] \[B_{2} := b_{2}u_{N}^{T}=[b_{2}\cdots b_{2}]\in\mathbb{R}^{Q\times N} \tag{2.6}\] where \(b_{1}\in\mathbb{R}^{M}\) and \(b_{2}\in\mathbb{R}^{Q}\) are column vectors, and \[u_{N}=(1,1,\ldots,1)^{T} \in\mathbb{R}^{N}\,. \tag{2.7}\] Moreover, we choose an activation function \[\sigma:\mathbb{R}^{M\times N} \rightarrow \mathbb{R}^{M\times N}_{+}\] \[A=[a_{ij}] \mapsto [(a_{ij})_{+}] \tag{2.8}\] where \[(a)_{+}:=\max\{0,a\} \tag{2.9}\] is the standard ramp function. We define \[X^{(1)}:=\sigma(W_{1}X_{0}+B_{1})\in\mathbb{R}^{M\times N}_{+} \tag{2.10}\] for the hidden layer, and \[X^{(2)}:=W_{2}X^{(1)}+B_{2}\in\mathbb{R}^{Q\times N} \tag{2.11}\] for the terminal layer. We also define the extension of the matrix of outputs, \(Y^{ext}\), by \[Y^{ext}:=[Y_{1}\cdots Y_{Q}]\in\mathbb{R}^{Q\times N} \tag{2.12}\] where \[Y_{j}:=[y_{j}\cdots y_{j}]\in\mathbb{R}^{Q\times N_{j}} \tag{2.13}\] with \(N_{j}\) copies of the same output column vector \(y_{j}\). Clearly, \(Y^{ext}\) has full rank \(Q\). Then, we consider the \(\mathcal{L}^{2}\) Schatten class (or Hilbert-Schmidt) cost function \[\mathcal{C}[W_{j},b_{j}]:=\frac{1}{\sqrt{N}}\|X^{(2)}-Y^{ext}\|_{ \mathcal{L}^{2}} \tag{2.14}\] with \[\|A\|_{\mathcal{L}^{2}}\equiv\sqrt{\operatorname{Tr}(AA^{T})}\,, \tag{2.15}\] where \(A^{T}\) is the transpose of the matrix \(A\). \(\mathcal{C}[W_{j},b_{j}]\) is a function of the weights and shifts, while the training inputs in \(X_{0}\) are considered to be a fixed set of parameters. Clearly, (2.14) is equivalent to (1.12). We note that we are not introducing an activation function in the terminal layer. Training this shallow network amounts to finding the optimizers \(W^{**}_{j},b^{**}_{j}\) for the minimization problem \[\mathcal{C}[W^{**}_{j},b^{**}_{j}]=\min_{W_{j},b_{j}}\mathcal{C}[W_ {j},b_{j}]\,. \tag{2.16}\] Before we present our main results in the next section, we provide a conceptual description of our strategy. 
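For concreteness, the sketch below spells out the hidden layer (2.10), the terminal layer (2.11), and the cost (2.14) in NumPy, and runs a naive finite-difference gradient descent on (2.16) in the spirit of the gradient flow discussed in the next subsection. The tiny dimensions, the step size, and the iteration count are arbitrary illustrative choices of ours, not recommendations.

```python
import numpy as np

rng = np.random.default_rng(1)
M, Q, n = 4, 2, 10
means = rng.normal(size=(M, Q))
X0 = np.hstack([means[:, [j]] + 0.05 * rng.normal(size=(M, n)) for j in range(Q)])
Yext = np.repeat(np.eye(Q), n, axis=1)               # Y^{ext} of (2.12)-(2.13)
N = X0.shape[1]
shapes = [(M, M), (Q, M), (M,), (Q,)]                # W1, W2, b1, b2

def unpack(z):
    out, k = [], 0
    for s in shapes:
        m = int(np.prod(s)); out.append(z[k:k + m].reshape(s)); k += m
    return out

def cost(z):                                         # the L^2 cost (2.14)
    W1, W2, b1, b2 = unpack(z)
    X1 = np.maximum(W1 @ X0 + b1[:, None], 0.0)      # hidden layer (2.10)
    X2 = W2 @ X1 + b2[:, None]                       # terminal layer (2.11)
    return np.sqrt(np.sum((X2 - Yext) ** 2) / N)

def num_grad(f, z, h=1e-6):                          # finite-difference surrogate gradient
    g = np.zeros_like(z)
    for i in range(z.size):
        e = np.zeros_like(z); e[i] = h
        g[i] = (f(z + e) - f(z - e)) / (2 * h)
    return g

z = 0.1 * rng.normal(size=sum(int(np.prod(s)) for s in shapes))
for _ in range(200):                                 # explicit Euler steps along the negative gradient
    z = z - 0.1 * num_grad(cost, z)
print(cost(z))                                       # decreases along the iteration for small steps
```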
### Discussion of the minimization process

Finding the solution to the minimization problem (2.16), \[\mathcal{C}[W_{j}^{**},b_{j}^{**}]=\min_{W_{j},b_{j}}\frac{1}{\sqrt{N}}\|W_{2}\sigma(W_{1}X_{0}+B_{1})+B_{2}-Y^{ext}\|_{\mathcal{L}^{2}}\,, \tag{2.17}\] one would aim at determining the minimizers \(W_{j}^{**},b_{j}^{**}\), in order to arrive at the optimally trained network. Finding the minimum via rigorous theoretical arguments has to be contrasted with numerical minimization via the gradient descent algorithm. The gradient descent method is based on studying the dynamical system \((\mathcal{W},\Phi_{\tau})\) where \[\mathcal{W}:=\mathbb{R}^{M\times M}\times\mathbb{R}^{Q\times M}\times\mathbb{R}^{M}\times\mathbb{R}^{Q}\ \ \text{with}\ \ Z:=(W_{1},W_{2},b_{1},b_{2})\in\mathcal{W} \tag{2.18}\] is the space of weights and shifts, and \(\Phi_{\tau}\) denotes the flow generated by the gradient \(G_{\mathcal{C}}[Z]:=-\nabla_{Z}\mathcal{C}[Z]\) of the cost function (or of its square), that is, \[\partial_{\tau}Z(\tau)=G_{\mathcal{C}}[Z(\tau)]\ \,\ \ Z(\tau):=\Phi_{\tau}[Z_{0}]\ \ \,\ \ Z_{0}\in\mathcal{W}\,, \tag{2.19}\] where \(\tau\in\mathbb{R}_{+}\). Because of \[\partial_{\tau}\mathcal{C}[Z(\tau)]=\nabla_{Z}\mathcal{C}[Z(\tau)]\cdot\partial_{\tau}Z(\tau)=-|G_{\mathcal{C}}[Z(\tau)]|^{2}\leq 0\,, \tag{2.20}\] the cost function \(\mathcal{C}[Z(\tau)]\) decreases monotonically along orbits of \(\Phi_{\tau}\). Hence, the goal is to find its global minimum by numerically simulating orbits of the gradient flow. This method is highly sensitive to the choice of initial data \(Z_{0}\), which is, in practice, often picked at random, and vaguely informed by structural insights into the problem. We make the following observations regarding the algorithmic approach. 1. If \(W_{1},b_{1}\) are given suitable initial values so that throughout the gradient descent recursion, \(W_{1}X_{0}+B_{1}\) has non-negative matrix components, then the activation function \(\sigma\) simply acts as the identity \(\mathbf{1}_{M\times M}\). Accordingly, (2.17) can be determined by solving the least squares problem \[\min_{W_{2},b_{2}}\|W_{2}X_{0}+B_{2}-Y^{ext}\|_{\mathcal{L}^{2}}\,.\] (2.21) This is because we can redefine \(W_{2}W_{1}\to W_{2}\) and \(b_{2}\to b_{2}-W_{2}b_{1}\), as \(W_{1}\) and \(b_{1}\) are unknown. The solution is given by \[W_{2}X_{0}=Y^{ext}\mathcal{P}\ \ \,\ b_{2}=0\] (2.22) where \(\mathcal{P}^{T}\) is the projector onto the range of \(X_{0}^{T}\) (which has rank \(Q\)), and \(\mathcal{P}^{\perp}=\mathbf{1}_{N\times N}-\mathcal{P}\). 2. The opposite extreme case is given when in the gradient descent algorithm, \(W_{1}\) and \(B_{1}\) are given initial values where all matrix components of \(W_{1}X_{0}+B_{1}\in\mathbb{R}^{M\times N}\) are \(\leq 0\), so that \[\sigma(W_{1}X_{0}+B_{1})=0\,.\] (2.23) Then, the cost function becomes independent of \(W_{1},b_{1}\), and its gradient in these variables is zero. One is left with the minimization problem \[\min_{b_{2}}\|B_{2}-Y^{ext}\|_{\mathcal{L}^{2}}\,,\] (2.24) which has the solution \[(b_{2})_{i}=\frac{1}{N}\sum_{j=1}^{N}[Y^{ext}]_{ij}\,,\ \ \text{that is,}\ \ b_{2}=\sum_{j=1}^{Q}\frac{N_{j}}{N}y_{j}\,.\] (2.25) The corresponding trained network is not capable of matching inputs \(x\in\mathbb{R}^{M}\) to outputs \(y_{j}\). 3.
An intermediate variant of the previous two cases is obtained when \(W_{1}\) and \(B_{1}\) attain values where some components \((b_{1})_{i_{\ell}}\), \(\ell\in\mathcal{L}\subset\{1,\ldots,M\}\), render the corresponding rows of \([W_{1}X_{0}+B_{1}]_{i_{\ell}j}\leq 0\) for \(j=1,\ldots,N\). Then, the cost function is independent of \((b_{1})_{i_{\ell}}\) and \([W_{1}]_{i_{\ell}j}\), \(j=1,\ldots,N\), \(\ell\in\mathcal{L}\). Because by assumption, \(\operatorname{rank}(Y^{ext})=Q\), we find that if \[\operatorname{rank}(\sigma(W_{1}X_{0}+B_{1}))=Q\,,\] (2.26) the possibility of finding a solution to (2.17) remains intact, based on the surviving components of \(W_{1},b_{1}\) that occupy the non-zero rows. In this paper, we will investigate aspects of this scenario. 4. On the other hand, in the same situation as in case (3), if \[\operatorname{rank}(\sigma(W_{1}X_{0}+B_{1}))<Q\,,\] (2.27) then although one can minimize the cost function depending on the surviving variables, one will not find the global minimum (2.17). 5. Even if the initial values for \(W_{1},b_{1}\) lead to case (1), the gradient descent flow might steer them into one of the scenarios in the cases (2)–(4) where a rank loss is induced by \(\sigma\). Once this occurs, the gradient flow becomes independent of the components of \(W_{1},b_{1}\) with respect to which a rank loss is obtained; those components remain stationary (or plateau) at the values at which they are first eliminated by \(\sigma\). We note that a priori, only the bound \[\operatorname{rank}(\sigma(W_{1}X_{0}+B_{1}))\leq\operatorname{rank}(W_{1}X_{0}+B_{1})\leq M\,, \tag{2.28}\] is available. In applications of shallow networks, one often finds \(M>Q\) or \(M\gg Q\). The scenario described in case (3) yields the maximal reduction of complexity to still ensure compatibility with the minimization problem (2.17).

### Outline of approach

Thus, it is an interesting question to explore whether \(W_{1}\) and \(b_{1}\) can be analytically determined in a manner that a maximal rank reduction from \(M\) to \(Q\) is accomplished, as in (2.26), while maintaining the solvability of (2.17). Our analysis is based on the construction of accurate upper bounds on the cost function, and makes no recourse to gradient flow algorithms. Our main result in Theorem 3.1, below, is an upper bound on the minimum of the cost function, obtained through the construction of a trial state as follows. Let \(P=P^{T}\) denote the orthoprojector onto the span of \(\{\overline{x_{0,j}}\in\mathbb{R}^{M}\,|\,j=1,\ldots,Q\}\), and let \(P^{\perp}\) denote its complement. We can diagonalize it with an orthogonal matrix \(R\in O(M)\) \[P=R^{T}P_{R}R\ \,\ P^{\perp}=R^{T}P_{R}^{\perp}R\ \,\ P_{R},P_{R}^{\perp}\ \text{diagonal}, \tag{2.29}\] where we note that \(R\) is not unique. It can be composed with any element of \(O(M)\) that leaves the ranges of \(P\) and of its complementary projector \(P^{\perp}\) invariant. Since \(P_{R}\) is diagonal, the activation function has the additivity property \[\sigma(v)=\sigma(P_{R}v)+\sigma(P_{R}^{\perp}v) \tag{2.30}\] for all \(v\in\mathbb{R}^{M}\).
We split \[b_{1}=P_{R}b_{1}+P_{R}^{\perp}b_{1} \tag{2.31}\] and choose \[P_{R}b_{1}=\beta_{1}\,P_{R}u_{M}\ \ \in\mathbb{R}^{M}\,. \tag{2.32}\] We choose \(\beta_{1}\geq 0\) large enough that \[\sigma(RPx_{0,j,i}+P_{R}b_{1})=RPx_{0,j,i}+P_{R}b_{1} \tag{2.33}\] that is, \(P_{R}b_{1}\) translates the projected and rotated training inputs in the direction of \(P_{R}u_{M}\) sufficiently far from the origin, to render all components non-negative. For this purpose, \(\beta_{1}\geq 2\max_{j,i}|x_{0,j,i}|\) is sufficient. Then, we let \(W_{1}=R\), so that \[\sigma(W_{1}X_{0}+B_{1})=\sigma(RPX_{0}+P_{R}B_{1})+\sigma(RP^{\perp}X_{0}+P_{R}^{\perp}B_{1})\,. \tag{2.34}\] Here we used that \(RP=P_{R}RP\) and \(RP^{\perp}=P_{R}^{\perp}RP^{\perp}\), and (2.30). For the second term on the r.h.s., we observe that \(P^{\perp}X_{0}=P^{\perp}\Delta X_{0}\), therefore choosing \[P_{R}^{\perp}b_{1}=-\delta P_{R}^{\perp}u_{M}\,, \tag{2.35}\] where we recall \(\delta\) from (1.6) and \(u_{M}\) from (1.14), we find \[\sigma(RP^{\perp}\Delta X_{0}+P_{R}^{\perp}B_{1})=0 \tag{2.36}\] as all components in its argument are shifted to non-positive values by (2.35). This reduces the rank of (2.34) from \(M\) to \(Q\). Therefore, (2.34) reduces to \[\sigma(W_{1}X_{0}+B_{1})=\sigma(RPX_{0}+P_{R}B_{1})=RPX_{0}+P_{R}B_{1}\,. \tag{2.37}\] This is because \(R\) and \(P_{R}b_{1}\) have been chosen in a manner that \(RPX_{0}+P_{R}B_{1}\) has non-negative matrix components. Therefore, minimization of the cost function reduces to determining \[\min_{W_{2},b_{1},b_{2}}\frac{1}{\sqrt{N}}\|W_{2}RPX_{0}+W_{2}P_{R}B_{1}+B_{2}-Y^{ext}\|_{\mathcal{L}^{2}} \tag{2.38}\] We may redefine \(b_{2}\to b_{2}-W_{2}P_{R}b_{1}\), which further reduces the problem to finding \[\min_{W_{2},b_{2}}\frac{1}{\sqrt{N}}\|W_{2}RPX_{0}+B_{2}-Y^{ext}\|_{\mathcal{L}^{2}}\,. \tag{2.39}\] This is a least squares problem from which we deduce an upper bound on the minimum of the cost function in Theorem 3.1. Theorem 3.3 provides a very natural and transparent geometric characterization of the shallow network trained with these values of \(W_{i},b_{i}\); namely, it metrizes the range of \(P\) in the input space \(\mathbb{R}^{M}\), and given an arbitrary input \(x\), it measures proximity of \(Px\) to the vectors \(\overline{x_{0,j}}\), \(j=1,\dots,Q\). In the special case \(M=Q\), we are able to explicitly determine a degenerate local minimum of the cost function, and arrive, in Theorem 3.2, at an improvement of the upper bound by a factor \(1-C_{0}\delta_{P}^{2}\) for some constant \(C_{0}\geq 0\). In this construction, the activation function does not truncate the training input matrix. In Theorem 3.5, we prove a result for \(Q=M\) similar to Theorem 3.2, but for the case in which the activation function truncates the training input matrix in a manner that (2.37) does not hold.

## 3. Statement of Main Results

In this section, we present the main results of this paper, an explicit upper bound on the cost function in Theorem 3.1 which leads to the construction of an optimized constructively trained shallow network, and its natural geometric interpretation in Theorem 3.3. In Theorem 3.2, we construct an explicit degenerate local minimum of the cost function, for the special case of input and output space having equal dimension, \(M=Q\). In Theorem 3.5, we address the effects of truncation of the training input matrix due to the activation function, in the case \(Q=M\).
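Before turning to the precise statements, we note that the rank-reduction mechanism (2.35)–(2.37) outlined above is easy to check numerically. The sketch below, under the same synthetic-data assumptions as in the earlier snippets, verifies that \(\sigma\) annihilates the rotated \(P^{\perp}\)-block and that the hidden layer drops from rank \(M\) to rank \(Q\).

```python
import numpy as np

rng = np.random.default_rng(0)
M, Q, n = 8, 3, 25
means = rng.normal(size=(M, Q))
blocks = [means[:, [j]] + 0.05 * rng.normal(size=(M, n)) for j in range(Q)]
X0 = np.hstack(blocks)
Xbar_red = np.column_stack([B.mean(axis=1) for B in blocks])
Delta_X0 = np.hstack([B - B.mean(axis=1, keepdims=True) for B in blocks])
delta = np.max(np.linalg.norm(Delta_X0, axis=0))
Pen = np.linalg.inv(Xbar_red.T @ Xbar_red) @ Xbar_red.T
P = Xbar_red @ Pen
w, V = np.linalg.eigh(P); R = V.T
P_R = np.diag(np.round(w)); P_R_perp = np.eye(M) - P_R

beta1 = 2 * np.max(np.linalg.norm(X0, axis=0))
B1 = (beta1 * (P_R @ np.ones(M)) - delta * (P_R_perp @ np.ones(M)))[:, None]
X1 = np.maximum(R @ X0 + B1, 0.0)                    # hidden layer with W_1 = R

# (2.36): the rotated perp block is annihilated by sigma
assert np.allclose(np.maximum(R @ (np.eye(M) - P) @ X0 + P_R_perp @ B1, 0.0), 0.0)
# (2.37): the rank is reduced from M to Q
print(np.linalg.matrix_rank(X0), "->", np.linalg.matrix_rank(X1))  # here: 8 -> 3
```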
### Upper bound on minimum of cost function for \(M>Q\)

Let \[\overline{x_{0,j}}:=\frac{1}{N_{j}}\sum_{i=1}^{N_{j}}x_{0,j,i}\in\mathbb{R}^{M} \tag{3.1}\] denote the average over the equivalence class of all training input vectors corresponding to the \(j\)-th output, and \[\Delta x_{0,j,i}:=x_{0,j,i}-\overline{x_{0,j}}\,. \tag{3.2}\] Moreover, let \[\overline{X_{0,j}}:=[\overline{x_{0,j}}\cdots\overline{x_{0,j}}]\in\mathbb{R}^{M\times N_{j}} \tag{3.3}\] and \[\overline{X_{0}}:=[\overline{X_{0,1}}\cdots\overline{X_{0,Q}}]\in\mathbb{R}^{M\times N}\,. \tag{3.4}\] Similarly, let \[\Delta X_{0,j}:=[\Delta x_{0,j,1}\cdots\Delta x_{0,j,N_{j}}]\in\mathbb{R}^{M\times N_{j}} \tag{3.5}\] and \[\Delta X_{0}:=[\Delta X_{0,1}\cdots\Delta X_{0,Q}]\in\mathbb{R}^{M\times N}\,. \tag{3.6}\] Then, \[X_{0}=\overline{X_{0}}+\Delta X_{0}\,, \tag{3.7}\] and we let \[\delta:=\sup_{j,i}|\Delta x_{0,j,i}|\,\,\,. \tag{3.8}\] Next, we define the reduction of \(\overline{X_{0}}\) as \[\overline{X_{0}^{red}}:=[\overline{x_{0,1}}\cdots\overline{x_{0,Q}}]\in\mathbb{R}^{M\times Q} \tag{3.9}\] and assume that its rank is maximal, \(\operatorname{rank}(\overline{X_{0}^{red}})=Q\). The latter condition is necessary for the averages of training inputs to be able to distinguish between the different output values \(y_{j}\), \(j=1,\dots,Q\). Because of \(\operatorname{rank}(\overline{X_{0}^{red}})=Q\), the matrix \[(\overline{X_{0}^{red}})^{T}\overline{X_{0}^{red}}\in\mathbb{R}^{Q\times Q} \tag{3.10}\] is invertible, and we can define the Penrose inverse of \(\overline{X_{0}^{red}}\), \[\operatorname{Pen}[\overline{X_{0}^{red}}]:=\big{(}(\overline{X_{0}^{red}})^{T}\overline{X_{0}^{red}}\big{)}^{-1}(\overline{X_{0}^{red}})^{T}\in\mathbb{R}^{Q\times M} \tag{3.11}\] The orthoprojector onto the range of \(\overline{X_{0}^{red}}\) is then given by \[P:=\overline{X_{0}^{red}}\operatorname{Pen}[\overline{X_{0}^{red}}]\in\mathbb{R}^{M\times M} \tag{3.12}\] and we note that \(\mathrm{Pen}[\overline{X_{0}^{red}}]\overline{X_{0}^{red}}=\mathbf{1}_{Q\times Q}\). The projector property \(P^{2}=P\) is thus easily verified, and orthogonality with respect to the Euclidean inner product \((\cdot,\cdot)\) on \(\mathbb{R}^{M}\) holds due to \(P^{T}=P\), whereby \((v,Pw)=(Pv,w)\) for any \(v,w\in\mathbb{R}^{M}\). In particular, we have \[P\overline{X_{0}^{red}}=\overline{X_{0}^{red}}\ \ \text{and}\ \ \ P\overline{X_{0}}=\overline{X_{0}}\,, \tag{3.13}\] by construction. Moreover, we define \[\delta_{P}:=\sup_{i,j}\left|\mathrm{Pen}[\overline{X_{0}^{red}}]P\Delta x_{0,j,i}\right|\,, \tag{3.14}\] which measures the relative size between \(\overline{X_{0}^{red}}\) and \(P\Delta X_{0}\), as \(\left|\mathrm{Pen}[\overline{X_{0}^{red}}]P\Delta x_{0,j,i}\right|\) scales like the noise to signal ratio \(\frac{|\Delta x|}{|x|}\) of training inputs. **Theorem 3.1**.: _Let \(Q\leq M\leq QM<N\). Assume that \(R\in O(M)\) diagonalizes \(P,P^{\perp}\), and let \(\beta_{1}\geq 2\max_{j,i}|x_{0,j,i}|\)._ _Let \(\mathcal{C}[W_{i}^{*},b_{i}^{*}]\) be the cost function evaluated for the trained shallow network defined by the following weights and shifts,_ \[W_{1}^{*}=R\,, \tag{3.15}\] _and_ \[W_{2}^{*}=\widetilde{W}_{2}R^{T}\ \ \text{where}\ \widetilde{W}_{2}=Y\ \mathrm{Pen}[\overline{X_{0}^{red}}]P\,.
\tag{3.16}\] _Moreover, \(b_{1}^{*}=P_{R}b_{1}^{*}+P_{R}^{\perp}b_{1}^{*}\) with_ \[P_{R}b_{1}^{*}=\beta_{1}P_{R}u_{M}\ \,\ P_{R}^{\perp}b_{1}^{*}=-\delta P_{R}^{\perp}u_{M}\ \in\mathbb{R}^{M}\,, \tag{3.17}\] _for \(u_{M}\in\mathbb{R}^{M}\) as in (1.14), and_ \[b_{2}^{*}=-W_{2}^{*}P_{R}b_{1}^{*}\ \ \in\mathbb{R}^{Q}\,. \tag{3.18}\] _Then, the minimum of the cost function satisfies the upper bound_ \[\min_{W_{j},b_{j}}\mathcal{C}[W_{j},b_{j}]\leq\mathcal{C}[W_{i}^{*},b_{i}^{*}]\leq\frac{1}{\sqrt{N}}\|Y\ \mathrm{Pen}[\overline{X_{0}^{red}}]P\Delta X_{0}\|_{\mathcal{L}^{2}}\,, \tag{3.19}\] _which implies_ \[\min_{W_{j},b_{j}}\mathcal{C}[W_{j},b_{j}]\ \leq\ \|Y\|_{op}\ \delta_{P} \tag{3.20}\] _where \(N=\sum_{\ell=1}^{Q}N_{\ell}\)._ We observe that, for any \(\lambda>0\), the rescaling of training inputs \[x_{0,j,i}\to\lambda x_{0,j,i}\ \ \forall j=1,\ldots,Q\ \,\ i=1,\ldots,N_{j}\,, \tag{3.21}\] induces \[\mathrm{Pen}[\overline{X_{0}^{red}}]\to\lambda^{-1}\mathrm{Pen}[\overline{X_{0}^{red}}]\ \ \,\ \Delta X_{0}\to\lambda\Delta X_{0}\,. \tag{3.22}\] Therefore, the upper bound in (3.19) is scaling invariant, as it only depends on the noise to signal ratio of the training input data, which is controlled by \(\delta_{P}\).

### Exact degenerate local minimum in the case \(M=Q\)

We are able to explicitly determine an exact local degenerate minimum of the cost function in the case \(Q=M\) where the input and output spaces have the same dimension; here, we show that the upper bound obtained above differs from the sharp value by a relative error of order \(O(\delta_{P}^{2})\). We note that in this situation, \(P=\mathbf{1}_{Q\times Q}\) and \(P^{\perp}=0\). Moreover, \(\overline{X_{0}^{red}}\) is invertible, and therefore, \(\mathrm{Pen}[\overline{X_{0}^{red}}]=(\overline{X_{0}^{red}})^{-1}\). We prove a result adapted to a weighted variant of the cost function defined as follows. Let \(\mathcal{N}\in\mathbb{R}^{N\times N}\) be the block diagonal matrix given by \[\mathcal{N}:=\mathrm{diag}(N_{j}\mathbf{1}_{N_{j}\times N_{j}}\,|\,j=1,\dots,Q)\,. \tag{3.23}\] We introduce the inner product on \(\mathbb{R}^{Q\times N}\) \[(A,B)_{\mathcal{L}_{\mathcal{N}}^{2}}:=\mathrm{Tr}(A\mathcal{N}^{-1}B^{T}) \tag{3.24}\] and \[\|A\|_{\mathcal{L}_{\mathcal{N}}^{2}}:=\sqrt{(A,A)_{\mathcal{L}_{\mathcal{N}}^{2}}}\,. \tag{3.25}\] We define the weighted cost function \[\mathcal{C}_{\mathcal{N}}[W_{i},b_{i}]:=\|X^{(2)}-Y^{ext}\|_{\mathcal{L}_{\mathcal{N}}^{2}}\,. \tag{3.26}\] This is equivalent to \[\mathcal{C}_{\mathcal{N}}[W_{i},b_{i}]:=\sqrt{\sum_{j=1}^{Q}\frac{1}{N_{j}}\sum_{i=1}^{N_{j}}|W_{2}\sigma(W_{1}x_{0,j,i}+b_{1})+b_{2}-y_{j}|_{\mathbb{R}^{Q}}^{2}}\,. \tag{3.27}\] The weights \(\frac{1}{N_{j}}\) ensure that the contributions to the cost function belonging to the outputs \(y_{j}\) do not depend on their sample sizes \(N_{j}\). We note that for uniform sample sizes, where \(\frac{N_{j}}{N}=\frac{1}{Q}\ \forall j\), we have that \(\mathcal{C}_{\mathcal{N}}[W_{i},b_{i}]=\sqrt{Q}\mathcal{C}[W_{i},b_{i}]\). **Theorem 3.2**.: _Assume \(M=Q<MQ<N\). Then, if \(\mathrm{rank}(\overline{X_{0}})=Q\), let_ \[\mathcal{P}:=\mathcal{N}^{-1}X_{0}^{T}(X_{0}\mathcal{N}^{-1}X_{0}^{T})^{-1}X_{0}\ \in\mathbb{R}^{N\times N}\,. \tag{3.28}\] _Its transpose \(\mathcal{P}^{T}\) is an orthogonal projector in \(\mathbb{R}^{N}\) with respect to the inner product \(\langle u,v\rangle_{\mathcal{N}}:=(u,\mathcal{N}^{-1}v)\).
In particular, \((A\mathcal{P},B)_{\mathcal{L}_{\mathcal{N}}^{2}}=(A,B\mathcal{P})_{\mathcal{ L}_{\mathcal{N}}^{2}}\) for all \(A,B\in\mathbb{R}^{Q\times N}\). Letting \(\mathcal{P}^{\perp}=\mathbf{1}_{N\times N}-\mathcal{P}\), the weighted cost function satisfies the upper bound_ \[\min_{W_{j},b_{j}}\mathcal{C}_{\mathcal{N}}[W_{j},b_{j}] \leq \mathcal{C}_{\mathcal{N}}[W_{i}^{*},b_{i}^{*}] \tag{3.29}\] \[= \|Y^{ext}\mathcal{P}^{\perp}\|_{\mathcal{L}_{\mathcal{N}}^{2}}\] \[= \left\|Y|\Delta_{2}^{rel}|^{\frac{1}{2}}\big{(}1+\Delta_{2}^{rel} \big{)}^{-\frac{1}{2}}\right\|_{\mathcal{L}^{2}}\] \[\leq (1-C_{0}\delta_{P}^{2})\ \|Y\ (\overline{X_{0}^{red}})^{-1}\Delta X_{0}\|_{ \mathcal{L}_{\mathcal{N}}^{2}}\,,\] _for a constant \(C_{0}\geq 0\), and where_ \[\Delta_{2}^{rel}:=\Delta_{1}^{rel}\mathcal{N}^{-1}(\Delta_{1}^{rel})^{T}\ \,\ \mathrm{with}\ \Delta_{1}^{rel}:=(\overline{X_{0}^{red}})^{-1}\Delta X_{0}\,. \tag{3.30}\] _The weights and shifts realizing the upper bound are given by_ \[W_{1}^{*}=\mathbf{1}_{Q\times Q} \tag{3.31}\] _and_ \[W_{2}^{*}=Y(\overline{X_{0}^{red}})^{T}(X_{0}\mathcal{N}^{-1}X_{0}^{T})^{-1} \in\mathbb{R}^{Q\times Q}\,. \tag{3.32}\] _Moreover,_ \[b_{1}^{*}=\beta_{1}u_{Q}\ \,\ b_{2}^{*}=-W_{2}^{*}b_{1}^{*}\,, \tag{3.33}\] _with \(u_{Q}\in\mathbb{R}^{Q}\) as in (1.14), and where \(\beta_{1}\geq 2\max_{j,i}|x_{0,j,i}|\) so that_ \[\sigma(W_{1}^{*}X_{0}+B_{1}^{*})=W_{1}^{*}X_{0}+B_{1}^{*}\,. \tag{3.34}\] _Notably,_ \[W_{2}^{*}=\widetilde{W}_{2}+O(\delta_{P}^{2}) \tag{3.35}\] _differs by \(O(\delta_{P}^{2})\) from \(\widetilde{W}_{2}\) in (3.16) used for the upper bound (3.19)._ _In particular, \(\mathcal{C}_{\mathcal{N}}[W_{i}^{*},b_{i}^{*}]\) is a local minimum of the cost function; it is degenerate, assuming the same value for all \(W_{i},b_{i}\) such that \(W_{1},b_{1}\) satisfy the condition (3.34)._ _Moreover, \(\mathcal{C}_{\mathcal{N}}[W_{i}^{*},b_{i}^{*}]\) is invariant under reparametrizations of the training inputs \(X_{0}\to KX_{0}\), for all \(K\in GL(Q)\)._ The proof is given in Section 5. ### Geometric interpretation To match an arbitrary, non-training input \(x\in\mathbb{R}^{M}\) with one of the output vectors \(y_{j}\), \(j\in\{1,\ldots,Q\}\), let for \(x\in\mathbb{R}^{M}\), \[\mathcal{C}_{j}[x]:=|W_{2}^{*}(W_{1}^{*}x+b_{1}^{*})_{+}+b_{2}^{*}-y_{j}|\,. \tag{3.36}\] Here, \((\cdot)_{+}\) denotes the ramp function (2.9), acting component-wise on a vector. Given an arbitrary non-training input \(x\in\mathbb{R}^{M}\), \[j^{*}=\operatorname{argmin}_{j}\mathcal{C}_{j}[x] \tag{3.37}\] implies that \(x\) matches the \(j^{*}\)-th output \(y_{j^{*}}\). The proof of Theorem 3.1 provides us with a particular set of weights \(W_{1}^{*},W_{2}^{*}\) and shifts \(b_{1}^{*},b_{2}^{*}\) through an explicit construction, which yield the upper bound (3.19). We will refer to the shallow network trained with this choice of \(W_{i}^{*},b_{i}^{*}\) as the _constructively trained shallow network_. A detailed discussion is provided in Section 6. **Theorem 3.3**.: _Assume \(Q\leq M\leq QM<N\). Let \(W_{i}^{*},b_{i}^{*}\), \(i=1,2\), denote the weights and shifts determined in Theorem 3.1. 
Let_ \[\widetilde{W}_{2}:=Y\operatorname{Pen}[\overline{X_{0}^{red}}]\in\mathbb{R}^ {Q\times M}\,, \tag{3.38}\] _and define the metric_ \[d_{\widetilde{W}_{2}}(x,y):=|\widetilde{W}_{2}P(x-y)|\ \ \text{ for }x,y\in\operatorname{range}(P) \tag{3.39}\] _on the \(Q\)-dimensional linear subspace \(\operatorname{range}(P)\subset\mathbb{R}^{M}\), where \(|\cdot|\) denotes the Euclidean norm on \(\mathbb{R}^{Q}\). Then,_ \[\mathcal{C}_{j}[x]=d_{\widetilde{W}_{2}}(Px,\overline{x_{0,j}}) \tag{3.40}\] _and matching an input \(x\in\mathbb{R}^{M}\) with an output \(y_{j^{*}}\) via the constructively trained shallow network is equivalent to the solution of the metric minimization problem_ \[j^{*}=\operatorname{argmin}_{j\in\{1,\ldots,Q\}}(d_{\widetilde{W}_{2}}(Px, \overline{x_{0,j}})) \tag{3.41}\] _on the range of \(P\)._ The proof is given in Section 7. First of all, we note that \(\widetilde{W}_{2}P=\widetilde{W}_{2}\) has full rank \(Q\), therefore \(x\mapsto|\widetilde{W}_{2}Px|^{2}\) is a non-degenerate quadratic form on the range of \(P\). Therefore, (3.39) indeed defines a metric on the range of \(P\). The constructively trained shallow network thus obtained matches a non-training input \(x\in\mathbb{R}^{M}\) with an output vector \(y_{j^{*}}\) by splitting \[x=Px+P^{\perp}x\,, \tag{3.42}\] and by determining which of the average training input vectors \(\overline{x_{0,j}}\), \(j=1,\ldots,Q\), is closest to \(Px\) in the \(d_{\widetilde{W}_{2}}\) metric. The trained shallow network cuts off the components \(P^{\perp}x\). This procedure is geometrically natural and transparent. ### Dependence on truncation Our next result addresses the effect of truncations enacted by the activation function \(\sigma\) in the case \(M=Q\). For the results in Theorem 3.2, we assumed that \(W_{1},b_{1}\) are in the region (depending on \(X_{0}\)) such that \[\sigma(W_{1}X_{0}+B_{1})=W_{1}X_{0}+B_{1} \tag{3.43}\] holds. Here we discuss the situation in which \(\sigma\) acts nontrivially. In this case, we assume that \(W_{1},b_{1}\) are given, with \(W_{1}\in\mathbb{R}^{Q\times Q}\) invertible, and observe that all matrix components of \[X^{(1)}=\sigma(W_{1}X_{0}+B_{1}) \tag{3.44}\] are non-negative, as \(X^{(1)}\) lies in the image of \(\sigma\), and hence, \[\sigma(X^{(1)})=X^{(1)}\,. \tag{3.45}\] We define the truncation map \(\tau_{W_{1},b_{1}}\) as follows. **Definition 3.4**.: _Let \(W_{1}\in GL(Q)\), \(b_{1}\in\mathbb{R}^{Q}\). Then, the truncation map \(\tau_{W_{1},b_{1}}:\mathbb{R}^{Q\times N}\to\mathbb{R}^{Q\times N}\) is defined by_ \[\tau_{W_{1},b_{1}}(X_{0}) := W_{1}^{-1}(\sigma(W_{1}X_{0}+B_{1})-B_{1}) \tag{3.46}\] \[= W_{1}^{-1}(X^{(1)}-B_{1})\,.\] _That is, \(\tau_{W_{1},b_{1}}=a_{W_{1},b_{1}}^{-1}\circ\sigma\circ a_{W_{1},b_{1}}\) under the affine map \(a_{W_{1},b_{1}}:X_{0}\mapsto W_{1}X_{0}+B_{1}\)._ _We say that \(\tau_{W_{1},b_{1}}\) is rank preserving (with respect to \(X_{0}\)) if both_ \[\mathrm{rank}(\tau_{W_{1},b_{1}}(X_{0})) = \mathrm{rank}(X_{0})\] \[\mathrm{rank}(\overline{\tau_{W_{1},b_{1}}(X_{0})}) = \mathrm{rank}(\overline{X_{0}}) \tag{3.47}\] _hold, and that it is rank reducing otherwise._ Then, we verify that \[\sigma(W_{1}\tau_{W_{1},b_{1}}(X_{0})+B_{1}) = \sigma(X^{(1)})=X^{(1)} \tag{3.48}\] \[= W_{1}\tau_{W_{1},b_{1}}(X_{0})+B_{1}\,.\] This means that while the condition (3.43) does not hold for \(X_{0}\), it does hold for \(\tau_{W_{1},b_{1}}(X_{0})\). 
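Definition 3.4 translates directly into code. The following sketch implements the truncation map \(\tau_{W_{1},b_{1}}\) of (3.46) for the case \(M=Q\), verifies the fixed-point property (3.48), and tests the rank-preservation conditions (3.47); the data and the particular \(W_{1},b_{1}\) are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)
Q, n = 3, 12                                     # the case M = Q considered here
means = 2.0 + rng.normal(size=(Q, Q))
blocks = [means[:, [j]] + 0.1 * rng.normal(size=(Q, n)) for j in range(Q)]
X0 = np.hstack(blocks)
W1 = np.eye(Q) + 0.1 * rng.normal(size=(Q, Q))   # assumed invertible
b1 = rng.normal(size=Q)

def tau(X, W, b):                                # truncation map (3.46)
    X1 = np.maximum(W @ X + b[:, None], 0.0)     # X^{(1)} as in (3.44)
    return np.linalg.solve(W, X1 - b[:, None])

T = tau(X0, W1, b1)
# fixed-point property (3.48): sigma acts trivially on W1 tau(X0) + B1
assert np.allclose(np.maximum(W1 @ T + b1[:, None], 0.0), W1 @ T + b1[:, None])

# rank-preservation test (3.47), for the data and for the class averages
Tbar = np.column_stack([T[:, j * n:(j + 1) * n].mean(axis=1) for j in range(Q)])
Xbar = np.column_stack([B.mean(axis=1) for B in blocks])
preserving = (np.linalg.matrix_rank(T) == np.linalg.matrix_rank(X0)
              and np.linalg.matrix_rank(Tbar) == np.linalg.matrix_rank(Xbar))
print("rank preserving:", preserving)            # True here, since the truncation is mild
```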
Accordingly, we can define the matrices \[\overline{\tau_{W_{1},b_{1}}(X_{0})}\,\,\,,\,\overline{(\tau_{W_{1},b_{1}}(X_{0}))^{red}}\,\,\,,\,\Delta(\tau_{W_{1},b_{1}}(X_{0})) \tag{3.49}\] in analogy to \(\overline{X_{0}},\overline{X_{0}^{red}},\Delta X_{0}\). If no rank reduction is induced by the truncation, \(\overline{(\tau_{W_{1},b_{1}}(X_{0}))^{red}}\) is invertible, and we obtain the following theorem. **Theorem 3.5**.: _Let \(Q=M\), and assume that the truncation map \(\tau_{W_{1},b_{1}}(X_{0})\) is rank preserving. Then, for any fixed \((W_{1},b_{1})\in GL(Q)\times\mathbb{R}^{Q}\),_ \[\min_{W_{2},b_{2}}\mathcal{C}_{\mathcal{N}}[W_{i},b_{i}]=\|Y^{ext}\mathcal{P}_{\tau_{W_{1},b_{1}}(X_{0})}^{\perp}\|_{\mathcal{L}_{\mathcal{N}}^{2}} \tag{3.50}\] _where the projector \(\mathcal{P}_{\tau_{W_{1},b_{1}}(X_{0})}\) is obtained from (3.28) by substituting the truncated training inputs \(\tau_{W_{1},b_{1}}(X_{0})\) for \(X_{0}\). In particular, one explicitly has_ \[\min_{W_{2},b_{2}}\mathcal{C}_{\mathcal{N}}[W_{i},b_{i}] = \left\|Y|\Delta_{2}^{rel,tr}|^{\frac{1}{2}}\big{(}1+\Delta_{2}^{rel,tr}\big{)}^{-\frac{1}{2}}\right\|_{\mathcal{L}^{2}} \tag{3.51}\] \[\leq (1-C_{0}\delta_{P,tr}^{2})\|Y\Delta_{1}^{rel,tr}\|_{\mathcal{L}_{\mathcal{N}}^{2}}\] _where_ \[\Delta_{2}^{rel,tr} := \Delta_{1}^{rel,tr}\mathcal{N}^{-1}(\Delta_{1}^{rel,tr})^{T}\;\;,\;\text{with}\] \[\Delta_{1}^{rel,tr} := (\overline{(\tau_{W_{1},b_{1}}(X_{0}))^{red}})^{-1}\Delta(\tau_{W_{1},b_{1}}(X_{0}))\,, \tag{3.52}\] _in analogy to (3.29), and for some \(C_{0}\geq 0\), where_ \[\delta_{P,tr}:=\sup_{j,i}\left|(\overline{(\tau_{W_{1},b_{1}}(X_{0}))^{red}})^{-1}\Delta(\tau_{W_{1},b_{1}}(x_{0,j,i}))\right| \tag{3.53}\] _measures the noise to signal ratio of the truncated training input data._ Proof.: Due to (3.48), we may follow the proof of Theorem 3.2 verbatim, and we find that for the given fixed choice of \(W_{1},b_{1}\), minimization in \(W_{2},b_{2}\) yields (3.50) and (3.51). In general, (3.51) does not constitute a stationary solution with respect to \(W_{1},b_{1}\), as the right hand sides of (3.50), (3.51) generically depend nontrivially on \(W_{1},b_{1}\). It thus remains to determine the infimum of (3.50), (3.51) with respect to \(W_{1},b_{1}\), in order to find a candidate for the global minimum. The latter, however, might depend sensitively on the detailed properties of \(\Delta X_{0}\), which are random. On the other hand, in the situation addressed in Theorem 3.2, \(W_{1},b_{1}\) are contained in the parameter region \(\mathcal{F}_{X_{0}}\) (depending on \(X_{0}\)) defined by the fixed point relation for the truncation map (equivalent to (3.43)), \[\mathcal{F}_{X_{0}}:=\left\{(W_{1},b_{1})\in GL(Q)\times\mathbb{R}^{Q}\mid\tau_{W_{1},b_{1}}(X_{0})=X_{0}\right\}. \tag{3.54}\] As a result, we indeed obtained a degenerate stationary solution with respect to \(W_{1},b_{1}\), \[\min_{(W_{1},b_{1})\in\mathcal{F}_{X_{0}};W_{2},b_{2}}\mathcal{C}_{\mathcal{N}}[W_{i},b_{i}]=\|Y^{ext}\mathcal{P}^{\perp}\|_{\mathcal{L}_{\mathcal{N}}^{2}} \tag{3.55}\] in (3.29), with \(\mathcal{P}\) given in (3.28). Hence, \(\mathcal{F}_{X_{0}}\) parametrizes an invariant manifold of equilibria of the gradient descent flow, which corresponds to a level surface of the cost function. **Remark 3.6**.: _The global minimum of the cost function has either the form (3.50) for an optimizer \((W_{1}^{**},b_{1}^{**})\) (with a rank preserving truncation), or (3.55). This is because any other scenario employs a rank reducing truncation.
Since \(Y^{ext}\) has rank \(Q\), minimization of (3.26) under the constraint \(\operatorname{rank}(\tau_{W_{1},b_{1}}(X_{0}))<Q\) cannot yield a global minimum._ _The key point in minimizing (3.50) with respect to \((W_{1},b_{1})\) is to determine a rank preserving truncation which minimizes the noise to signal ratio of the truncated training inputs. Whether for a local or global minimum of the form (3.50) or (3.55), respectively, a shallow network trained with the corresponding weights and shifts \(W_{i},b_{i}\), \(i=1,2\), will be able to match inputs \(x\in\mathbb{R}^{Q}\) to outputs \(y_{j}\), through a straightforward generalization of Theorem 3.3._ We expect these considerations to carry over to the general case \(Q<M\), but leave a detailed discussion to future work.

## 4. Proof of Theorem 3.1

Let \(R\in O(M)\) diagonalize \(P,P^{\perp}\) (which are symmetric), \[P_{R}:=RPR^{T}\quad,\ \ P_{R}^{\perp}:=RP^{\perp}R^{T}\,, \tag{4.1}\] where \(P_{R}\) has \(Q\) diagonal entries of \(1\), and all other components are \(0\), while \(P_{R}^{\perp}\) has \(M-Q\) diagonal entries of \(1\), and all other components are \(0\). This does not characterize \(R\) uniquely, as we may compose \(R\) with an arbitrary \(R^{\prime}\in O(M)\) that leaves the ranges of \(P,P^{\perp}\) invariant. Diagonality of \(P_{R},P_{R}^{\perp}\) implies that the activation function, which acts component-wise as the ramp function (2.9), satisfies \[\sigma(P_{R}v+P_{R}^{\perp}v)=\sigma(P_{R}v)+\sigma(P_{R}^{\perp}v) \tag{4.2}\] for any \(v\in\mathbb{R}^{M}\), and where of course, \(P_{R}+P_{R}^{\perp}=\mathbf{1}_{M\times M}\). We write \[b_{1}=P_{R}b_{1}+P_{R}^{\perp}b_{1} \tag{4.3}\] where we choose \[P_{R}b_{1}=\beta_{1}P_{R}u_{M}\,, \tag{4.4}\] and we pick \[\beta_{1}\geq 2\rho\ \,\ \ \rho:=\max_{j,i}|x_{0,j,i}|\,. \tag{4.5}\] This ensures that \(RPx_{0,j,i}+P_{R}b_{1}\in B_{\rho}(\beta_{1}P_{R}u_{M})\cap\mathrm{range}(P_{R})\subset P_{R}\mathbb{R}_{+}^{M}\) so that \[\sigma(RPx_{0,j,i}+P_{R}b_{1})=RPx_{0,j,i}+P_{R}b_{1}\quad,\ \ j=1,\ldots,Q\ \,,\ \forall\ i=1,\ldots,N_{j} \tag{4.6}\] To construct the upper bound, we choose \[W_{1}=R\,. \tag{4.7}\] Then, \[X^{(1)} = \sigma(W_{1}X_{0}+B_{1}) \tag{4.8}\] \[= \sigma((P_{R}+P_{R}^{\perp})RX_{0}+(P_{R}+P_{R}^{\perp})B_{1})\] \[= \sigma(P_{R}RX_{0}+P_{R}B_{1})+\sigma(P_{R}^{\perp}RX_{0}+P_{R}^{\perp}B_{1})\] \[= \sigma(RPX_{0}+P_{R}B_{1})+\sigma(RP^{\perp}\Delta X_{0}+P_{R}^{\perp}B_{1})\,.\] This is because by construction, \[P\overline{X_{0}}=\overline{X_{0}}\ \,\ \ P^{\perp}\overline{X_{0}}=0\,. \tag{4.9}\] Moreover, we used \[P_{R}R=RP\ \,\ \ P_{R}^{\perp}R=RP^{\perp}\,, \tag{4.10}\] from (4.1). Going to the next layer, we have \[X^{(2)}=W_{2}X^{(1)}+B_{2}\,. \tag{4.11}\] Our goal is to minimize the expression on the right hand side of \[\sqrt{N}{\mathcal{C}}[W_{j},b_{j}] \leq \|W_{2}\sigma(RPX_{0}+P_{R}B_{1})+B_{2}-Y^{ext}\|_{{\mathcal{L}}^{2}} \tag{4.12}\] \[\qquad\qquad+\|W_{2}\sigma(RP^{\perp}\Delta X_{0}+P_{R}^{\perp}B_{1})\|_{{\mathcal{L}}^{2}}\,.\] We note that because \({\rm rank}(Y^{ext})=Q\), and \[{\rm rank}(\sigma(\underbrace{RPX_{0}}_{=P_{R}RPX_{0}}+P_{R}B_{1}))\leq Q\,, \tag{4.13}\] minimization of the first term on the r.h.s. of (4.12) requires that the latter has maximal rank \(Q\). Due to our choice of \(R\) and \(P_{R}b_{1}\) which imply that (4.6) holds, we find that \[\sigma(RPX_{0}+P_{R}B_{1}) = RPX_{0}+P_{R}B_{1} \tag{4.14}\] \[= P_{R}R(\overline{X_{0}}+P\Delta X_{0})+P_{R}B_{1}\,.\] To minimize the first term on the r.h.s.
of (4.12), one might want to try to solve \[W_{2}S=Y^{ext} \tag{4.15}\] where \[S:=P_{R}R(\overline{X_{0}}+P\Delta X_{0})+P_{R}B_{1}\in\mathbb{R}^{M\times N} \tag{4.16}\] has rank \(Q<M\) (due to \(P_{R}\)). However, this implies that the \(QM\) unknown matrix components of \(W_{2}\in\mathbb{R}^{Q\times M}\) are determined by \(QN>QM\) equations. This is an overdetermined problem which generically has no solution. However, we can solve the problem to zeroth order in \(\Delta X_{0}\), \[W_{2}S_{0}=Y^{ext} \tag{4.17}\] with \[S_{0}:=S|_{\Delta X_{0}=0}=P_{R}R\overline{X_{0}}=RP\overline{X_{0}} \tag{4.18}\] (recalling from (3.4) that \(\overline{X_{0}}\) only contains \(Q\) distinct column vectors \(\overline{x_{0,j}}\) and their identical copies) and subsequently minimize the resulting expression in the first term on the r.h.s. of (4.12) with respect to \(P_{R}B_{1}\). This is equivalent to requiring that \(W_{2}\in\mathbb{R}^{Q\times M}\) solves \[W_{2}RP\overline{X_{0}^{red}}=Y\,. \tag{4.19}\] To this end, we make the ansatz \[W_{2}=A(R\overline{X_{0}^{red}})^{T} \tag{4.20}\] for some \(A\in\mathbb{R}^{Q\times Q}\). Then, recalling that \(P\overline{X_{0}^{red}}=\overline{X_{0}^{red}}\), \[A(R\overline{X_{0}^{red}})^{T}(R\overline{X_{0}^{red}})=A(\overline{X_{0}^{red}})^{T}(\overline{X_{0}^{red}})=Y\,, \tag{4.21}\] so that solving for \(A\), we get \[W_{2} = Y((\overline{X_{0}^{red}})^{T}\overline{X_{0}^{red}})^{-1}(R\overline{X_{0}^{red}})^{T} \tag{4.22}\] \[= \widetilde{W}_{2}R^{T}\] with \(\widetilde{W}_{2}\) as defined in (3.38). Hereby, \(W_{2}\) is fully determined by \(Y\) and \(\overline{X_{0}^{red}}\) (as \(\overline{X_{0}^{red}}\) determines \(P\) and \(R\)). Next, we can choose \(P_{R}^{\perp}b_{1}\) in a suitable manner that \[\sigma(RP^{\perp}\Delta X_{0}+P_{R}^{\perp}B_{1})=0\,. \tag{4.23}\] This can be accomplished with \[P_{R}^{\perp}b_{1}=-\delta P_{R}^{\perp}u_{M}\in\mathbb{R}^{M} \tag{4.24}\] where \(\delta\) is defined in (3.8), and \[u_{M}:=(1,1,\ldots,1)^{T}\in\mathbb{R}^{M}\,. \tag{4.25}\] With \[E:=u_{M}u_{N}^{T}=[u_{M}\cdots u_{M}]\in\mathbb{R}^{M\times N}\,, \tag{4.26}\] this ensures that the argument on the l.h.s. of (4.23), given by \[RP^{\perp}\Delta X_{0}+P_{R}^{\perp}B_{1}=P_{R}^{\perp}R\Delta X_{0}-\delta P_{R}^{\perp}E\,, \tag{4.27}\] is a matrix all of whose components are \(\leq 0\). Therefore, \(\sigma\) maps it to \(0\). Consequently, we find \[\sqrt{N}\mathcal{C}[W_{j},b_{j}]\leq\|W_{2}(RP\Delta X_{0}+P_{R}B_{1})+B_{2}\|_{\mathcal{L}^{2}}\,. \tag{4.28}\] To minimize the r.h.s., we may shift \[b_{2}\to b_{2}-W_{2}P_{R}b_{1} \tag{4.29}\] so that it remains to minimize the r.h.s. of \[\sqrt{N}\mathcal{C}[W_{j},b_{j}]\leq\|W_{2}RP\Delta X_{0}+B_{2}\|_{\mathcal{L}^{2}}\,. \tag{4.30}\] with respect to \[B_{2}=[b_{2}\cdots b_{2}]=b_{2}u_{N}^{T}\ \in\mathbb{R}^{Q\times N}\,. \tag{4.31}\] We find \[0 = \partial_{b_{2}}\|W_{2}RP\Delta X_{0}+b_{2}u_{N}^{T}\|_{\mathcal{L}^{2}}^{2} \tag{4.32}\] \[= \partial_{b_{2}}\Big{(}2\mathrm{Tr}(W_{2}RP\underbrace{\Delta X_{0}u_{N}}_{=0}b_{2}^{T})+\mathrm{Tr}(u_{N}\underbrace{b_{2}^{T}b_{2}}_{=|b_{2}|^{2}}u_{N}^{T})\Big{)}\,,\] using cyclicity of the trace.
The first term inside the bracket on the second line is zero because for every \(i\in\{1,\ldots,M\}\), \[(\Delta X_{0}u_{N})_{i} = \sum_{\ell=1}^{N}[\Delta X_{0}]_{i\ell} \tag{4.33}\] \[= \sum_{j=1}^{Q}\sum_{\ell=1}^{N_{j}}(x_{0,j,\ell}-\overline{x_{0,j}})_{i}\] \[= 0\,.\] The second term inside the bracket in (4.32) is proportional to \(|b_{2}|^{2}\), so that its gradient is proportional to \(b_{2}\), and vanishes for \(b_{2}=0\). Hence, \(B_{2}=0\), which means that \[B_{2}=-W_{2}P_{R}B_{1} \tag{4.34}\] for the original variable \(b_{2}\) prior to the shift (4.29). We thus arrive at an upper bound \[\min_{W_{j},b_{j}}\sqrt{N}{\mathcal{C}}[W_{j},b_{j}] \leq \|\widetilde{W}_{2}P\Delta X_{0}\|_{{\mathcal{L}}^{2}} \tag{4.35}\] \[= \|Y{\rm Pen}[\overline{X_{0}^{red}}]P\Delta X_{0}\|_{{\mathcal{L}}^{2}}\,.\] This establishes the asserted upper bound in (3.19). Next, we bound \[\|\widetilde{W}_{2}P\Delta X_{0}\|_{{\mathcal{L}}^{2}} \leq \|Y\|_{op}\|{\rm Pen}[\overline{X_{0}^{red}}]P\Delta X_{0}\|_{{\mathcal{L}}^{2}}\,. \tag{4.36}\] Let for notational brevity \(A:={\rm Pen}[\overline{X_{0}^{red}}]P\Delta X_{0}\). We have \[\|{\rm Pen}[\overline{X_{0}^{red}}]P\Delta X_{0}\|_{{\mathcal{L}}^{2}}^{2} = {\rm Tr}(AA^{T}) \tag{4.37}\] \[= \sum_{\ell=1}^{Q}\sum_{i=1}^{N_{\ell}}|{\rm Pen}[\overline{X_{0}^{red}}]P\Delta x_{0,\ell,i}|^{2}\] \[\leq \Big{(}\sup_{i,\ell}\Big{|}{\rm Pen}[\overline{X_{0}^{red}}]P\Delta x_{0,\ell,i}\Big{|}^{2}\Big{)}\sum_{\ell=1}^{Q}\sum_{i=1}^{N_{\ell}}1\] \[= N\sup_{i,\ell}\Big{|}{\rm Pen}[\overline{X_{0}^{red}}]P\Delta x_{0,\ell,i}\Big{|}^{2}\,\,.\] Therefore, \[\min_{W_{j},b_{j}}\sqrt{N}{\mathcal{C}}[W_{j},b_{j}] \leq \|\widetilde{W}_{2}P\Delta X_{0}\|_{{\mathcal{L}}^{2}} \tag{4.38}\] \[\leq \sqrt{N}\,\|Y\|_{op}\sup_{i,\ell}\Big{|}{\rm Pen}[\overline{X_{0}^{red}}]P\Delta x_{0,\ell,i}\Big{|}\] \[\leq \sqrt{N}\,\|Y\|_{op}\delta_{P}\,,\] which proves (3.20). This concludes the proof of Theorem 3.1.

## 5. Proof of Theorem 3.2

To prove the exact degenerate local minimum in the special case \(M=Q\), we note that in this situation, \[P={\bf 1}_{Q\times Q}\ \ \,,\ P^{\perp}=0\,. \tag{5.1}\] In place of (4.12), the cost function is given by \[{\mathcal{C}}_{\mathcal{N}}[W_{j},b_{j}] = \|W_{2}\sigma(W_{1}(\overline{X_{0}}+\Delta X_{0})+B_{1})+B_{2}-Y^{ext}\|_{{\mathcal{L}}_{\mathcal{N}}^{2}}\,, \tag{5.2}\] defined in (3.26), which is weighted by the inverse of the block diagonal matrix \[{\mathcal{N}}={\rm diag}(N_{j}{\bf 1}_{N_{j}\times N_{j}}\,|\,j=1,\ldots,Q)\ \in{\mathbb{R}}^{N\times N}\,. \tag{5.3}\] Using a similar choice for \(B_{1}\) as in (4.4), we may assume that the matrix components of \(W_{1}(\overline{X_{0}}+\Delta X_{0})+B_{1}\) are non-negative. To be precise, we let \[b_{1}=\beta_{1}u_{Q} \tag{5.4}\] for some \(\beta_{1}\geq 0\) large enough that \[x_{0,j,i}+\beta_{1}u_{Q}\in{\mathbb{R}}_{+}^{Q} \tag{5.5}\] for all \(j=1,\ldots,Q\), \(i=1,\ldots,N_{j}\). This is achieved with \[\beta_{1}\geq 2\max_{j,i}|x_{0,j,i}|\,. \tag{5.6}\] Hence, choosing \[W_{1}=\mathbf{1}_{Q\times Q}\,, \tag{5.7}\] we find \[\sigma(W_{1}X_{0}+B_{1})=\overline{X_{0}}+\Delta X_{0}+B_{1}\,. \tag{5.8}\] Thus, (5.2) implies \[\min_{W_{j},b_{j}}\mathcal{C}_{\mathcal{N}}[W_{j},b_{j}]\leq\|W_{2}(\overline{X_{0}}+\Delta X_{0})+W_{2}B_{1}+B_{2}-Y^{ext}\|_{\mathcal{L}_{\mathcal{N}}^{2}}\,. \tag{5.9}\] To minimize the r.h.s., we note that since \(W_{2}\) and \(B_{2}\) are both unknown, we can absorb the term \(W_{2}B_{1}\) into \(B_{2}\) by the shift \[b_{2}\to b_{2}-W_{2}b_{1}\,.
\tag{5.10}\] Therefore, we find \[\min_{W_{j},b_{j}}\mathcal{C}_{\mathcal{N}}[W_{j},b_{j}]\leq\min_{W_{2},b_{2}}\|W_{2}(\overline{X_{0}}+\Delta X_{0})+B_{2}-Y^{ext}\|_{\mathcal{L}_{\mathcal{N}}^{2}}\,. \tag{5.11}\] Minimizing with respect to \(W_{2}\in\mathbb{R}^{Q\times Q}\) is a least squares problem whose solution has the form \[(W_{2}(\overline{X_{0}}+\Delta X_{0})-Y^{ext}_{b_{2}})\mathcal{P}=0\,, \tag{5.12}\] where \[Y^{ext}_{b_{2}}:=Y^{ext}-B_{2}\,, \tag{5.13}\] \(\mathcal{P}^{T}\) is a projector onto the range of \(X_{0}^{T}\), and remains to be determined. To solve (5.12), we require \(W_{2}\) to satisfy \[W_{2}(\overline{X_{0}}+\Delta X_{0}) \mathcal{N}^{-1}(\overline{X_{0}}+\Delta X_{0})^{T}\] \[=Y^{ext}_{b_{2}}\,\mathcal{N}^{-1}(\overline{X_{0}}+\Delta X_{0})^{T}\ \ \in\mathbb{R}^{Q\times Q}\,. \tag{5.14}\] Because \[\overline{X_{0}} = [\overline{x_{0,1}}u_{N_{1}}^{T}\cdots\overline{x_{0,Q}}u_{N_{Q}}^{T}]\,,\] \[Y^{ext}_{b_{2}} = [(y_{1}-b_{2})u_{N_{1}}^{T}\cdots(y_{Q}-b_{2})u_{N_{Q}}^{T}]\,, \tag{5.15}\] where \[u_{N_{j}}:=(1,1,\ldots,1)^{T}\ \ \in\mathbb{R}^{N_{j}}\,, \tag{5.16}\] we find that \[\Delta X_{0}\mathcal{N}^{-1}(\overline{X_{0}})^{T} = \sum_{j=1}^{Q}\frac{1}{N_{j}}\Delta X_{0,j}u_{N_{j}}\overline{x_{0,j}}^{T} \tag{5.17}\] \[= \sum_{j=1}^{Q}\underbrace{\Big{(}\frac{1}{N_{j}}\sum_{i=1}^{N_{j}}\Delta x_{0,j,i}\Big{)}}_{=0}\overline{x_{0,j}}^{T}\] \[= 0\,.\] For the same reason, \[\overline{X_{0}}{\mathcal{N}}^{-1}\Delta X_{0}^{T}=0\ \ \,\ Y^{ext}{\mathcal{N}}^{-1}\Delta X_{0}^{T}=0\,. \tag{5.18}\] Moreover, \[Y_{b_{2}}^{ext}{\mathcal{N}}^{-1}\overline{X_{0}}^{T} = \sum_{j=1}^{Q}\frac{1}{N_{j}}(y_{j}-b_{2})\underbrace{u_{N_{j}}^{T}u_{N_{j}}}_{=N_{j}}\overline{x_{0,j}}^{T} \tag{5.19}\] \[= Y_{b_{2}}\overline{X_{0}^{red}}^{T}\,,\] where \[Y_{b_{2}}=[(y_{1}-b_{2})\cdots(y_{Q}-b_{2})]\ \ \in{\mathbb{R}}^{Q\times Q} \tag{5.20}\] and \[\overline{X_{0}}{\mathcal{N}}^{-1}\overline{X_{0}}^{T}=\overline{X_{0}^{red}}\,\overline{X_{0}^{red}}^{T}\,. \tag{5.21}\] Therefore, (5.14) reduces to \[W_{2}(\overline{X_{0}^{red}}\,\overline{X_{0}^{red}}^{T}+\Delta X_{0}{\mathcal{N}}^{-1}\Delta X_{0}^{T})=Y_{b_{2}}\overline{X_{0}^{red}}^{T}\,, \tag{5.22}\] and invertibility of \(\overline{X_{0}^{red}}\,\overline{X_{0}^{red}}^{T}\in{\mathbb{R}}^{Q\times Q}\) implies invertibility of the matrix in brackets on the l.h.s. of (5.22), because \[D_{2}[X_{0}]:=\Delta X_{0}{\mathcal{N}}^{-1}\Delta X_{0}^{T}=\sum_{j=1}^{Q}\frac{1}{N_{j}}\sum_{i=1}^{N_{j}}\Delta x_{0,j,i}\Delta x_{0,j,i}^{T}\ \ \in{\mathbb{R}}^{Q\times Q} \tag{5.23}\] is a non-negative operator. Hence, \[W_{2}=Y_{b_{2}}\overline{X_{0}^{red}}^{T}(\overline{X_{0}^{red}}\,\overline{X_{0}^{red}}^{T}+\Delta X_{0}{\mathcal{N}}^{-1}\Delta X_{0}^{T})^{-1}\,, \tag{5.24}\] so that \[W_{2}X_{0}=Y_{b_{2}}^{ext}{\mathcal{P}}\,. \tag{5.25}\] Here, \[{\mathcal{P}}:={\mathcal{N}}^{-1}X_{0}^{T}(X_{0}{\mathcal{N}}^{-1}X_{0}^{T})^{-1}X_{0}\in{\mathbb{R}}^{N\times N} \tag{5.26}\] is a rank \(Q\) projector \({\mathcal{P}}^{2}={\mathcal{P}}\) which is orthogonal with respect to the inner product on \({\mathbb{R}}^{N}\) defined by \({\mathcal{N}}\), i.e., \({\mathcal{P}}^{T}{\mathcal{N}}={\mathcal{N}}{\mathcal{P}}\). \({\mathcal{P}}^{T}\) is the projector onto the range of \(X_{0}^{T}\). We note that with respect to the inner product defined with \({\mathcal{N}}^{-1}\) (which we are using here), we have \({\mathcal{N}}^{-1}{\mathcal{P}}^{T}={\mathcal{P}}{\mathcal{N}}^{-1}\).
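The structural properties of the projector (5.26) claimed here are straightforward to confirm numerically; in the sketch below, the unequal sample sizes \(N_{j}\) and the synthetic data are our own choices.

```python
import numpy as np

rng = np.random.default_rng(3)
Q, N_js = 3, [8, 12, 16]                         # M = Q, unequal sample sizes N_j
means = rng.normal(size=(Q, Q))
X0 = np.hstack([means[:, [j]] + 0.05 * rng.normal(size=(Q, m)) for j, m in enumerate(N_js)])
Ninv = np.diag(np.concatenate([np.full(m, 1.0 / m) for m in N_js]))  # inverse of (5.3)

Pcal = Ninv @ X0.T @ np.linalg.inv(X0 @ Ninv @ X0.T) @ X0            # (5.26)
Nmat = np.diag(np.concatenate([np.full(m, float(m)) for m in N_js]))
assert np.allclose(Pcal @ Pcal, Pcal)            # idempotent
assert np.allclose(Pcal.T @ Nmat, Nmat @ Pcal)   # P^T N = N P, i.e. N-orthogonality
assert np.linalg.matrix_rank(Pcal) == Q          # rank-Q projector
```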
Next, in order to control \(W_{2}X_{0}-Y_{b_{2}}^{ext}=-Y_{b_{2}}^{ext}{\mathcal{P}}^{\perp}\), we observe that \[Y_{b_{2}}^{ext}{\mathcal{P}} = Y_{b_{2}}^{ext}{\mathcal{N}}^{-1}X_{0}^{T}(X_{0}{\mathcal{N}}^{-1}X_{0}^{T})^{-1}X_{0} \tag{5.27}\] \[= Y_{b_{2}}\overline{X_{0}^{red}}^{T}(\overline{X_{0}^{red}}\,\overline{X_{0}^{red}}^{T}+D_{2}[X_{0}])^{-1}X_{0}\] \[= Y_{b_{2}}\overline{X_{0}^{red}}^{T}(\overline{X_{0}^{red}}\,\overline{X_{0}^{red}}^{T})^{-1}X_{0}\] \[-Y_{b_{2}}\overline{X_{0}^{red}}^{T}(\overline{X_{0}^{red}}\,\overline{X_{0}^{red}}^{T})^{-1}D_{2}[X_{0}](X_{0}{\mathcal{N}}^{-1}X_{0}^{T})^{-1}X_{0}\] \[= Y_{b_{2}}(\overline{X_{0}^{red}})^{-1}X_{0}\] \[-Y_{b_{2}}(\overline{X_{0}^{red}})^{-1}D_{2}[X_{0}](X_{0}{\mathcal{N}}^{-1}X_{0}^{T})^{-1}X_{0}\] where we have used the matrix identity \[(A+B)^{-1} = A^{-1}-A^{-1}B(A+B)^{-1} \tag{5.28}\] \[= A^{-1}-(A+B)^{-1}BA^{-1}\] for \(A\), \(A+B\) invertible. We observe that \[Y_{b_{2}}(\overline{X_{0}^{red}})^{-1}X_{0} = Y_{b_{2}}(\overline{X_{0}^{red}})^{-1}\overline{X_{0}}+Y_{b_{2}}(\overline{X_{0}^{red}})^{-1}\Delta X_{0} \tag{5.29}\] \[= Y_{b_{2}}^{ext}+Y_{b_{2}}(\overline{X_{0}^{red}})^{-1}\Delta X_{0}\,.\] To pass to the last line, we used that \((\overline{X_{0}^{red}})^{-1}\overline{X_{0}}=[e_{1}u_{N_{1}}^{T}\cdots e_{Q}u_{N_{Q}}^{T}]\) where \(\{e_{j}\in\mathbb{R}^{Q}\,|\,j=1,\ldots,Q\}\) are the unit basis vectors. We therefore conclude that, due to \(\mathcal{P}^{\perp}=\mathbf{1}-\mathcal{P}\), \[Y_{b_{2}}^{ext}\mathcal{P}^{\perp} = -Y_{b_{2}}(\overline{X_{0}^{red}})^{-1}\Delta X_{0}+\mathcal{R} \tag{5.30}\] where \[\mathcal{R} := Y_{b_{2}}(\overline{X_{0}^{red}})^{-1}D_{2}[X_{0}](X_{0}\mathcal{N}^{-1}X_{0}^{T})^{-1}X_{0} \tag{5.31}\] satisfies \[\mathcal{RP}=\mathcal{R}\,, \tag{5.32}\] as can be easily verified. At this point, we have found that \[\min_{W_{j},b_{j}}\mathcal{C}_{\mathcal{N}}[W_{j},b_{j}]\leq\min_{b_{2}}\|Y_{b_{2}}^{ext}\mathcal{P}^{\perp}\|_{\mathcal{L}_{\mathcal{N}}^{2}}\,. \tag{5.33}\] To minimize with respect to \(b_{2}\), we set \[0 = \partial_{b_{2}}\|Y_{b_{2}}^{ext}\mathcal{P}^{\perp}\|_{\mathcal{L}_{\mathcal{N}}^{2}}^{2} \tag{5.34}\] \[= \partial_{b_{2}}\Big{(}-2\mathrm{Tr}\Big{(}Y^{ext}\mathcal{P}^{\perp}\mathcal{N}^{-1}(\mathcal{P}^{\perp})^{T}u_{N}b_{2}^{T}\Big{)}+\mathrm{Tr}\big{(}b_{2}u_{N}^{T}\mathcal{P}^{\perp}\mathcal{N}^{-1}(\mathcal{P}^{\perp})^{T}u_{N}b_{2}^{T}\big{)}\Big{)}\] \[= \partial_{b_{2}}\Big{(}-2\mathrm{Tr}\Big{(}b_{2}^{T}Y^{ext}\mathcal{P}^{\perp}\mathcal{N}^{-1}(\mathcal{P}^{\perp})^{T}u_{N}\Big{)}+|\mathcal{N}^{-1/2}(\mathcal{P}^{\perp})^{T}u_{N}|^{2}|b_{2}|^{2}\Big{)}\] \[= \partial_{b_{2}}\Big{(}-2\mathrm{Tr}\Big{(}b_{2}^{T}Y_{b_{2}}(\overline{X_{0}^{red}})^{-1}\underbrace{\Delta X_{0}\mathcal{N}^{-1}u_{N}}_{=0}\Big{)}\ +\ \mathrm{Tr}\Big{(}b_{2}^{T}\underbrace{\mathcal{R}\mathcal{N}^{-1}(\mathcal{P}^{\perp})^{T}}_{=\mathcal{RP}\mathcal{N}^{-1}\mathcal{P}^{\perp}=0}u_{N}\Big{)}\] \[+|\mathcal{N}^{-1/2}(\mathcal{P}^{\perp})^{T}u_{N}|^{2}|b_{2}|^{2}\Big{)}\] using \(\mathcal{R}=\mathcal{RP}\) from (5.32) so that \(\mathcal{R}\mathcal{N}^{-1}\mathcal{P}^{\perp}=\mathcal{RP}\mathcal{N}^{-1}\mathcal{P}^{\perp}=\mathcal{RP}\mathcal{P}^{\perp}\mathcal{N}^{-1}=0\), and recalling \(u_{N}=(1,1,\ldots,1)^{T}\in\mathbb{R}^{N}\). This implies that \[b_{2}=0\,. \tag{5.35}\] We note that this corresponds to \[b_{2}=-W_{2}b_{1} \tag{5.36}\] prior to the shift (5.10), and hence arrive at (3.33), as claimed.
Next, we derive a simplified expression for \(\|Y^{ext}\mathcal{P}^{\perp}\|_{\mathcal{L}_{\mathcal{N}}^{2}}^{2}\). For notational convenience, let \[\Delta_{1}^{rel} := (\overline{X_{0}^{red}})^{-1}\Delta X_{0}\] \[\Delta_{2}^{rel} := (\overline{X_{0}^{red}})^{-1}D_{2}[X_{0}](\overline{X_{0}^{red}})^ {-T} \tag{5.37}\] \[= \Delta_{1}^{rel}\mathcal{N}^{-1}(\Delta_{1}^{rel})^{T}\] where we recall (5.23). Then, we obtain (with \(b_{2}=0\)) \[\mathcal{R} = Y\Delta_{2}^{rel}\big{(}1+\Delta_{2}^{rel}\big{)}^{-1}(\overline{X _{0}^{red}})^{-1}X_{0} \tag{5.38}\] \[= Y\Delta_{2}^{rel}\big{(}1+\Delta_{2}^{rel}\big{)}^{-1}\Big{(}[e_{ 1}u_{N_{1}}^{T}\cdots e_{Q}u_{N_{Q}}^{T}]+\Delta_{1}^{rel}\Big{)}\] where we used \(X_{0}\mathcal{N}^{-1}X_{0}^{T}=\overline{X_{0}^{red}}(1+\Delta_{2}^{rel})( \overline{X_{0}^{red}})^{T}\) in (5.31) to obtain the first line. We find that \[-Y^{ext}\mathcal{P}^{\perp} \tag{5.39}\] \[= Y\Delta_{1}^{rel}-Y\Delta_{2}^{rel}\big{(}1+\Delta_{2}^{rel} \big{)}^{-1}\Big{(}[e_{1}u_{N_{1}}^{T}\cdots e_{Q}u_{N_{Q}}^{T}]+\Delta_{1}^{ rel}\Big{)}\] \[= Y(1-\Delta_{2}^{rel}\big{(}1+\Delta_{2}^{rel}\big{)}^{-1})\Delta _{1}^{rel}-Y\Delta_{2}^{rel}\big{(}1+\Delta_{2}^{rel}\big{)}^{-1}[e_{1}u_{N_{1 }}^{T}\cdots e_{Q}u_{N_{Q}}^{T}]\] \[= Y(1+\Delta_{2}^{rel})^{-1}\Delta_{1}^{rel}-Y\Delta_{2}^{rel} \big{(}1+\Delta_{2}^{rel}\big{)}^{-1}[e_{1}u_{N_{1}}^{T}\cdots e_{Q}u_{N_{Q}}^ {T}]\,.\] Therefore, we obtain that \[\|Y^{ext}\mathcal{P}^{\perp}\|_{\mathcal{L}_{\mathcal{N}}^{2}}^{2} = (I)+(II)+(III) \tag{5.40}\] where \[(I) := \mbox{Tr}\Big{(}Y(1+\Delta_{2}^{rel})^{-1}\Delta_{1}^{rel} \mathcal{N}^{-1}(\Delta_{1}^{rel})^{T}(1+\Delta_{2}^{rel})^{-T}Y^{T}\Big{)} \tag{5.41}\] \[= \mbox{Tr}\Big{(}Y(1+\Delta_{2}^{rel})^{-1}\Delta_{2}^{rel}(1+ \Delta_{2}^{rel})^{-1}Y^{T}\Big{)}\] where we note that \(\Delta_{2}^{rel}=(\Delta_{2}^{rel})^{T}\in\mathbb{R}^{Q\times Q}\) is symmetric. Moreover, \[(II) := -2\mbox{Tr}\Big{(}Y(1+\Delta_{2}^{rel})^{-1}\Delta_{1}^{rel} \mathcal{N}^{-1}[e_{1}u_{N_{1}}^{T}\cdots e_{Q}u_{N_{Q}}^{T}]^{T}\big{(}1+ \Delta_{2}^{rel}\big{)}^{-1}\Delta_{2}^{rel}Y^{T}\Big{)}\] \[= -\sum_{j=1}^{Q}2\mbox{Tr}\Big{(}Y(1+\Delta_{2}^{rel})^{-1}( \overline{X_{0}^{red}})^{-1}\underbrace{\frac{1}{N_{j}}\Delta X_{0,j}u_{N_{j}} e_{j}^{T}}_{=0}\big{(}1+\Delta_{2}^{rel}\big{)}^{-1}\Delta_{2}^{rel}Y^{T}\Big{)}\] \[= 0\] where we used that \(\frac{1}{N_{j}}\Delta X_{0,j}u_{N_{j}}=\overline{\Delta X_{0,j}}=0\) for all \(j=1,\ldots,Q\). Furthermore, \[(III) := \mbox{Tr}\Big{(}Y\Delta_{2}^{rel}\big{(}1+\Delta_{2}^{rel}\big{)} ^{-1}[e_{1}u_{N_{1}}^{T}\cdots e_{Q}u_{N_{Q}}^{T}]\mathcal{N}^{-1}[e_{1}u_{N_ {1}}^{T}\cdots e_{Q}u_{N_{Q}}^{T}]^{T} \tag{5.43}\] \[\qquad\qquad\qquad\big{(}1+\Delta_{2}^{rel}\big{)}^{-1}\Delta_{2} ^{rel}Y^{T}\Big{)}\] \[= \mbox{Tr}\Big{(}Y\Delta_{2}^{rel}\big{(}1+\Delta_{2}^{rel}\big{)} ^{-1}\sum_{j=1}^{Q}\underbrace{\frac{1}{N_{j}}u_{N_{j}}^{T}u_{N_{j}}}_{=1}e_{ j}e_{j}^{T}\big{(}1+\Delta_{2}^{rel}\big{)}^{-1}\Delta_{2}^{rel}Y^{T}\Big{)}\] \[= \mbox{Tr}\Big{(}Y\Delta_{2}^{rel}\big{(}1+\Delta_{2}^{rel}\big{)} ^{-1}\big{(}1+\Delta_{2}^{rel}\big{)}^{-1}\Delta_{2}^{rel}Y^{T}\Big{)}\] \[= \mbox{Tr}\Big{(}Y\big{(}1+\Delta_{2}^{rel}\big{)}^{-1}(\Delta_{2} ^{rel})^{2}\big{(}1+\Delta_{2}^{rel}\big{)}^{-1}Y^{T}\Big{)}\] using \(\sum_{j=1}^{Q}e_{j}e_{j}^{T}=\mathbf{1}_{Q\times Q}\), and commutativity of \(\Delta_{2}^{rel}\) and \((1+\Delta_{2}^{rel})^{-1}\). 
We conclude that \[\|Y^{ext}{\mathcal{P}}^{\perp}\|^{2}_{{\mathcal{L}}^{2}_{\mathcal{N}}} = (I)+(III) \tag{5.44}\] \[= {\rm Tr}\Big{(}Y\big{(}1+\Delta_{2}^{rel}\big{)}^{-1}\Delta_{2}^{rel}(1+\Delta_{2}^{rel})\big{(}1+\Delta_{2}^{rel}\big{)}^{-1}Y^{T}\Big{)}\] \[= {\rm Tr}\Big{(}Y\Delta_{2}^{rel}\big{(}1+\Delta_{2}^{rel}\big{)}^{-1}Y^{T}\Big{)}\] \[= {\rm Tr}\Big{(}Y|\Delta_{2}^{rel}|^{\frac{1}{2}}\big{(}1+\Delta_{2}^{rel}\big{)}^{-1}|\Delta_{2}^{rel}|^{\frac{1}{2}}Y^{T}\Big{)}\] \[\leq \|\big{(}1+\Delta_{2}^{rel}\big{)}^{-1}\|_{op}\;{\rm Tr}\Big{(}Y\Delta_{2}^{rel}Y^{T}\Big{)}\] since the matrix \(\Delta_{2}^{rel}\geq 0\) is positive semidefinite. We note that \[{\rm Tr}\Big{(}Y\Delta_{2}^{rel}Y^{T}\Big{)} = {\rm Tr}\Big{(}Y\Delta_{1}^{rel}{\mathcal{N}}^{-1}(\Delta_{1}^{rel})^{T}Y^{T}\Big{)} \tag{5.45}\] \[= \|Y(\overline{X_{0}^{red}})^{-1}\Delta X_{0}\|^{2}_{{\mathcal{L}}_{\mathcal{N}}^{2}}\,.\] Moreover, letting \(\lambda_{-}:=\inf{\rm spec}(\Delta_{2}^{rel})\) and \(\lambda_{+}:=\sup{\rm spec}(\Delta_{2}^{rel})\) denote the smallest and largest eigenvalue of the non-negative, symmetric matrix \(\Delta_{2}^{rel}\geq 0\), \[0\leq\lambda_{-}\leq\lambda_{+}=\|\Delta_{2}^{rel}\|_{op}=\|\Delta_{1}^{rel}{\mathcal{N}}^{-1}(\Delta_{1}^{rel})^{T}\|_{op}\leq C\delta_{P}^{2}\,. \tag{5.46}\] Therefore, \[\|\big{(}1+\Delta_{2}^{rel}\big{)}^{-1}\|_{op}=(1+\lambda_{-})^{-1}\leq 1-C_{0}^{\prime}\delta_{P}^{2} \tag{5.47}\] for a constant \(C_{0}^{\prime}=0\) if \({\rm rank}(\Delta_{2}^{rel})<Q\) whereby \(\lambda_{-}=0\), and \(C_{0}^{\prime}>0\) if \({\rm rank}(\Delta_{2}^{rel})=Q\). We find the following bound from (5.44), \[\|Y^{ext}{\mathcal{P}}^{\perp}\|_{{\mathcal{L}}_{\mathcal{N}}^{2}} \leq \sqrt{\|\big{(}1+\Delta_{2}^{rel}\big{)}^{-1}\|_{op}\;{\rm Tr}\Big{(}Y\Delta_{2}^{rel}Y^{T}\Big{)}} \tag{5.48}\] \[\leq (1-C_{0}\delta_{P}^{2})\;\|Y(\overline{X_{0}^{red}})^{-1}\Delta X_{0}\|_{{\mathcal{L}}_{\mathcal{N}}^{2}}\,,\] as claimed, where \(C_{0}\) is proportional to \(C_{0}^{\prime}\). Finally, we use that \[\widetilde{W}_{2} = Y\overline{X_{0}^{red}}^{T}(\overline{X_{0}^{red}}\,\overline{X_{0}^{red}}^{T})^{-1} \tag{5.49}\] \[= Y(\overline{X_{0}^{red}})^{-1}\] agrees with (3.38) because \(\overline{X_{0}^{red}}\) is invertible, and hence \((\overline{X_{0}^{red}})^{-1}={\rm Pen}[\overline{X_{0}^{red}}]\). Hence, we observe that (5.24) implies \[W_{2} = \widetilde{W}_{2}-\widetilde{W}_{2}(\Delta X_{0}{\mathcal{N}}^{-1}\Delta X_{0}^{T})(\overline{X_{0}^{red}}\,\overline{X_{0}^{red}}^{T}+\Delta X_{0}{\mathcal{N}}^{-1}\Delta X_{0}^{T})^{-1} \tag{5.50}\] \[= \widetilde{W}_{2}-Y(\overline{X_{0}^{red}})^{-1}(\Delta X_{0}{\mathcal{N}}^{-1}\Delta X_{0}^{T})(\overline{X_{0}^{red}})^{-T}\] \[\qquad\qquad\quad\Big{(}{\bf 1}+(\overline{X_{0}^{red}})^{-1}(\Delta X_{0}{\mathcal{N}}^{-1}\Delta X_{0}^{T})(\overline{X_{0}^{red}})^{-T}\Big{)}^{-1}(\overline{X_{0}^{red}})^{-1}\] \[= \widetilde{W}_{2}-Y\Delta_{2}^{rel}({\bf 1}+\Delta_{2}^{rel})^{-1}(\overline{X_{0}^{red}})^{-1}\] Therefore, in operator norm, \[\|W_{2}-\widetilde{W}_{2}\|_{op} \leq \|Y\|_{op}\|\Delta_{2}^{rel}\|_{op}\|({\bf 1}+\Delta_{2}^{rel})^{-1}\|_{op}\|(\overline{X_{0}^{red}})^{-1}\|_{op} \tag{5.51}\] \[\leq C\delta_{P}^{2}\] where we used \(\|Y\|_{op}<C\), \(\|\Delta_{2}^{rel}\|_{op}<C\delta_{P}^{2}\), \(\|(\mathbf{1}+\Delta_{2}^{rel})^{-1}\|_{op}\leq 1\), \(\|(\overline{X_{0}^{red}})^{-1}\|_{op}<C\) to obtain the last line.
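As a numerical sanity check on the chain of identities leading to (5.44), one can compare \(\|Y^{ext}\mathcal{P}^{\perp}\|^{2}_{\mathcal{L}^{2}_{\mathcal{N}}}\) against \(\mathrm{Tr}\big(Y\Delta_{2}^{rel}(1+\Delta_{2}^{rel})^{-1}Y^{T}\big)\); the toy data below are again assumptions of ours.

```python
import numpy as np

rng = np.random.default_rng(4)
Q, N_js = 3, [8, 12, 16]
means = rng.normal(size=(Q, Q))
blocks = [means[:, [j]] + 0.05 * rng.normal(size=(Q, m)) for j, m in enumerate(N_js)]
X0 = np.hstack(blocks)
Y = np.eye(Q)
Yext = np.hstack([np.repeat(Y[:, [j]], m, axis=1) for j, m in enumerate(N_js)])
Ninv = np.diag(np.concatenate([np.full(m, 1.0 / m) for m in N_js]))
Ntot = sum(N_js)

Pcal = Ninv @ X0.T @ np.linalg.inv(X0 @ Ninv @ X0.T) @ X0            # (5.26)
A = Yext @ (np.eye(Ntot) - Pcal)                                     # Y^ext P^perp
lhs = np.trace(A @ Ninv @ A.T)                                       # squared L^2_N norm

Xbar = np.column_stack([B.mean(axis=1) for B in blocks])             # X_0^red-bar (M = Q)
Delta = np.hstack([B - B.mean(axis=1, keepdims=True) for B in blocks])
D1 = np.linalg.solve(Xbar, Delta)                                    # Delta_1^rel, cf. (5.37)
D2 = D1 @ Ninv @ D1.T                                                # Delta_2^rel
rhs = np.trace(Y @ D2 @ np.linalg.inv(np.eye(Q) + D2) @ Y.T)         # third line of (5.44)
assert np.isclose(lhs, rhs)
```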
We note that when \(M>Q\), it follows that \(\overline{X_{0}^{red}}(\overline{X_{0}^{red}})^{T}\) is an \(M\times M\) matrix of rank \(Q\), and is hence not invertible, so that the above arguments do not apply. From here on, we denote the weights and shifts determined above in (5.7), (5.24) and in (5.4), (5.36), by \(W_{1}^{*},W_{2}^{*}\) and \(b_{1}^{*},b_{2}^{*}\), respectively. To prove that \(\mathcal{C}_{\mathcal{N}}[W_{i}^{*},b_{i}^{*}]\) is a local minimum of the cost function \(\mathcal{C}_{\mathcal{N}}\), we observe that for \[\rho:=\max_{j,i}|x_{0,j,i}| \tag{5.52}\] and \[\beta_{1}\geq 2\rho\ \,\ \ b_{1}^{*}=\beta_{1}u_{Q}\,, \tag{5.53}\] we have \[W_{1}^{*}x_{0,j,i}+b_{1}^{*} = x_{0,j,i}+b_{1}^{*} \tag{5.54}\] \[\in B_{\rho}(\beta_{1}u_{Q})\subset\mathbb{R}_{+}^{Q}\,.\] \(\forall j=1,\ldots,Q\) and \(\forall\;i=1,\ldots,N_{j}\). That is, all training input vectors are contained in a ball of radius \(\rho\) centered at a point on the diagonal \(\beta_{1}u_{Q}\in\mathbb{R}^{Q}\) with \(\beta_{1}\geq 2\rho\). This ensures that the coordinates of all \(W_{1}^{*}x_{0,j,i}+b_{1}^{*}\) are strictly positive, and therefore, \[\sigma(W_{1}^{*}X_{0}+B_{1}^{*})=W_{1}^{*}X_{0}+B_{1}^{*}\,. \tag{5.55}\] For \(\epsilon>0\) small, we consider an infinitesimal translation \[b_{1}^{*}\to b_{1}^{*}+\widetilde{b}_{1} \tag{5.56}\] combined with an infinitesimal transformation \[W_{1}^{*}\to W_{1}^{*}\widetilde{W}_{1} \tag{5.57}\] where \(|\widetilde{b}_{1}|<\epsilon\) and \(\|\widetilde{W}_{1}-\mathbf{1}_{Q\times Q}\|_{op}<\epsilon\). Then, we have that \[W_{1}^{*}\widetilde{W}_{1}x_{0,j,i}+b_{1}^{*}+\widetilde{b}_{1} \tag{5.58}\] \[= \underbrace{W_{1}^{*}x_{0,j,i}+b_{1}^{*}}_{\in B_{\rho}(\beta_{1} u_{Q})}+W_{1}^{*}(\widetilde{W}_{1}-\mathbf{1}_{Q\times Q})x_{0,j,i}+\widetilde{b}_{1}\] where \[|W_{1}^{*}(\widetilde{W}_{1}-\mathbf{1}_{Q\times Q})x_{0,j,i}+ \widetilde{b}_{1}| \leq \|W_{1}^{*}\|_{op}\|\widetilde{W}_{1}-\mathbf{1}_{Q\times Q}\|_{ op}|x_{0,j,i}|+|\widetilde{b}_{1}| \tag{5.59}\] \[\leq \|W_{1}^{*}\|_{op}\;\epsilon\;\max_{j,i}|x_{0,j,i}|+\epsilon\] \[\leq C\epsilon\] for some constant \(C\) independent of \(j,i\). Therefore, we conclude that (5.58) is contained in \(B_{\rho+C\epsilon}(\beta_{1}u_{Q})\), which lies in the positive sector \(\mathbb{R}_{+}^{Q}\subset\mathbb{R}^{Q}\) for sufficiently small \(\epsilon>0\). But this implies that \[\sigma(W_{1}^{*}\widetilde{W}_{1}X_{0}+B_{1}^{*}+\widetilde{B}_{1})=W_{1}^{*} \widetilde{W}_{1}X_{0}+B_{1}^{*}+\widetilde{B}_{1}\,. \tag{5.60}\] Thus, all arguments leading to the result in (3.29) stating that \[\mathcal{C}_{\mathcal{N}}[W_{i}^{*},b_{i}^{*}]=\|Y^{ext}\mathcal{P}^{\perp}\|_{ \mathcal{L}_{\mathcal{N}}^{2}}\,, \tag{5.61}\] for the choice of weights and shifts \(W_{i}^{*},b_{i}^{*}\), equally apply to the infinitesimally deformed \(W_{1}^{*}\widetilde{W}_{1}\), \(b_{1}^{*}+\widetilde{b}_{1}\) and the corresponding expressions for \(W_{2}^{*}\widetilde{W}_{2}\), \(b_{2}^{*}+\widetilde{b}_{2}\). We now make the crucial observation that the right hand side of (5.61) does not depend on \(W_{i}^{*},b_{i}^{*}\), as \(\mathcal{P}^{\perp}\) only depends on the training inputs \(X_{0}\); see (5.26). Therefore, \[\mathcal{C}_{\mathcal{N}}[W_{i}^{*},b_{i}^{*}]=\mathcal{C}_{\mathcal{N}}[W_{i}^ {*}\widetilde{W}_{i},b_{i}^{*}+\widetilde{b}_{i}]\,, \tag{5.62}\] and the variation in the perturbations \(\widetilde{W}_{i},\widetilde{b}_{i}\) is zero. 
This implies that \(\mathcal{C}_{\mathcal{N}}[W_{i}^{*},b_{i}^{*}]\) is a local minimum of the cost function, as it was obtained from a least square minimization problem. In particular, it is degenerate, and by repeating the above argument, one concludes that it has the same value for all weights and shifts \(W_{i},b_{i}\) which allow for the condition (5.55) to be satisfied. Finally, we note that the reparametrization of the training inputs \(X_{0}\to KX_{0}\), for any arbitrary \(K\in GL(Q)\), induces \[\overline{X_{0}}\to K\overline{X_{0}}, \Delta X_{0}\to K\Delta X_{0}\,. \tag{5.63}\] In particular, this implies that \[\Delta_{1}^{rel} = (\overline{X_{0}^{red}})^{-1}\Delta X_{0} \tag{5.64}\] \[\to (K\overline{X_{0}^{red}})^{-1}K\Delta X_{0}\] \[= (\overline{X_{0}^{red}})^{-1}K^{-1}K\Delta X_{0}\] \[= \Delta_{1}^{rel}\] is invariant under this reparametrization, and hence, \[\Delta_{2}^{rel} = \Delta_{1}^{rel}\mathcal{N}^{-1}(\Delta_{1}^{rel})^{T} \tag{5.65}\] also is. From the third and fourth lines in (5.44) follows that \(\mathcal{C}_{\mathcal{N}}[W_{i}^{*},b_{i}^{*}]\) is a function only of \(\Delta_{2}^{rel}\). We note that the reparametrization invariance of \(\mathcal{P}\) (and thus of \(\mathcal{P}^{\perp}\)) can also be directly verified from (5.26) by inspection. This implies that \(\mathcal{C}_{\mathcal{N}}[W_{i}^{*},b_{i}^{*}]\) is invariant under reparametrizations of the training inputs \(X_{0}\to KX_{0}\), for all \(K\in GL(Q)\). This concludes the proof of Theorem 3.2. ## 6. Constructive training of shallow network In this section, we follow the proof of Theorem 3.1 in order to determine a constructively trained shallow network, which does not invoke the gradient descent algorithm. In our approach, all relevant operations are confined to a \(Q\leq M\) dimensional linear subspace of \(\mathbb{R}^{M}\). To this end, we calculate \[\overline{X_{0}^{red}}:=[\overline{x_{0,1}}\cdots\overline{x_{0,Q}}]\in \mathbb{R}^{M\times Q}\,. \tag{6.1}\] We determine the Penrose inverse \[\mathrm{Pen}[\overline{X_{0}^{red}}]=\big{(}(\overline{X_{0}^{red}})^{T} \overline{X_{0}^{red}}\big{)}^{-1}(\overline{X_{0}^{red}})^{T}\in\mathbb{R}^{Q \times M} \tag{6.2}\] and the orthoprojector \[P:=\overline{X_{0}^{red}}\mathrm{Pen}[\overline{X_{0}^{red}}]\in\mathbb{R}^{ M\times M}\,. \tag{6.3}\] Subsequently, we determine \(W_{1}=R\in O(M)\) which acts by rotating the ranges of \(P\) and \(P^{\perp}\) to be in alignment with the coordinate vectors \(e_{j}:=(0,\ldots 0,1,0,\ldots,0)\). As \(P\) is symmetric, this amounts to solving the diagonalization problem \[P=R^{T}P_{R}R\,. \tag{6.4}\] \(P_{R}\) is diagonal, with \(Q\) diagonal entries equaling \(1\), and all other \(M^{2}-Q\) matrix entries being \(0\). The orthogonal matrix \(R\) diagonalizing \(P\) also diagonalizes \(P^{\perp}={\bf 1}-P\). We let \[W_{1}^{*}=R\in O(M)\,. \tag{6.5}\] Moreover, we set \[W_{2}^{*}=\widetilde{W}_{2}R^{T} \tag{6.6}\] where we calculate \[\widetilde{W}_{2}=Y((\overline{X_{0}^{red}})^{T}\overline{X_{0}^{red}})^{-1} (\overline{X_{0}^{red}})^{T}\in\mathbb{R}^{Q\times M}\,. \tag{6.7}\] To determine \(B_{1}^{*}\), we split \(b_{1}^{*}=P_{R}b_{1}^{*}+P_{R}^{\perp}b_{1}^{*}\), and set \[P_{R}b_{1}^{*}=\beta_{1}P_{R}u_{M}\,, \tag{6.8}\] recalling \(u_{M}=(1,1,\ldots,1)^{T}\in\mathbb{R}^{M}\) from (4.25). We choose \(\beta_{1}\geq 0\) large enough to ensure that \(RPX_{0}+P_{R}B_{1}^{*}\) is component-wise strictly positive. A sufficient condition is \(\beta_{1}\geq 2\max_{j,i}|x_{0,j,i}|\). 
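In code, steps (6.1)–(6.8) amount to a few linear-algebra calls; the sketch below uses our own names and conventions, and the remaining shift components (6.9)–(6.10) are fixed in the next step of the text:

```python
import numpy as np

def constructive_weights(Xbar_red, Y):
    """Steps (6.1)-(6.8): Xbar_red = [xbar_{0,1} ... xbar_{0,Q}] (M x Q),
    Y = [y_1 ... y_Q] (Q x Q). Variable names are ours."""
    M, Q = Xbar_red.shape

    # (6.2) Penrose inverse and (6.3) orthoprojector onto the span of the means
    Pen = np.linalg.inv(Xbar_red.T @ Xbar_red) @ Xbar_red.T
    P = Xbar_red @ Pen

    # (6.4) diagonalize the symmetric projector: P = R^T P_R R, with R = V^T
    w, V = np.linalg.eigh(P)
    R = V.T
    P_R = np.diag(np.round(w))            # Q ones, M - Q zeros on the diagonal

    W1 = R                                # (6.5)
    W2 = (Y @ Pen) @ R.T                  # (6.6)-(6.7): W2 = Wtilde_2 R^T

    return W1, W2, P_R, Pen

# (6.8): the range-of-P part of the shift sits on the diagonal, e.g.
# beta1 = 2 * np.abs(X0).max() suffices; the orthogonal-complement part
# of b1 and the output shift b2 are fixed in (6.9)-(6.10) just below.
```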
Moreover, we set \[P_{R}^{\perp}b_{1}^{*}=-\delta P_{R}^{\perp}u_{M} \tag{6.9}\] with \(\delta\) as defined in (3.8). This defines \(B_{1}^{*}=[b_{1}^{*}\cdots b_{1}^{*}]\in\mathbb{R}^{M\times N}\). Finally, we let \[b_{2}^{*}=-W_{2}^{*}P_{R}b_{1}^{*}\,. \tag{6.10}\] The set \(W_{1}^{*},W_{2}^{*},b_{1}^{*},b_{2}^{*}\) defines the constructively trained shallow network. No use of the gradient descent algorithm is required. ## 7. Proof of Theorem 3.3 Here, we prove Theorem 3.3, and elucidate the geometric meaning of the constructively trained network obtained in Section 6. Given a non-training input \(x\in\mathbb{R}^{M}\), the constructively trained network metrizes the linear subspace \(\operatorname{range}(P)\) of the input space \(\mathbb{R}^{M}\), and determines to which equivalence class of training inputs \(Px\) is closest. To elucidate this interpretation, we observe that (6.7) implies that \[\widetilde{W}_{2}P=\widetilde{W}_{2}\quad\text{and}\;W_{2}^{*}P_{R}=W_{2}^{*} \tag{7.1}\] and hence, \[\widetilde{W}_{2}\overline{X_{0}^{red}}=Y\,, \tag{7.2}\] or in terms of column vectors, \[W_{2}^{*}RP\overline{x_{0,j}}=\widetilde{W}_{2}\overline{x_{0,j}}=y_{j}\,. \tag{7.3}\] Therefore, \[\mathcal{C}_{j}[x] = |W_{2}^{*}(W_{1}^{*}x+b_{1}^{*})_{+}+b_{2}^{*}-y_{j}| \tag{7.4}\] \[= |W_{2}^{*}(Rx+b_{1}^{*})_{+}+b_{2}^{*}-y_{j}|\] \[= |W_{2}^{*}(RPx+RP^{\perp}x+P_{R}b_{1}^{*}+P_{R}^{\perp}b_{1}^{*})_{+}+b_{2}^{*}-y_{j}|\] \[= |W_{2}^{*}((P_{R}RPx+P_{R}b_{1}^{*})_{+}+(P_{R}^{\perp}RP^{\perp}x+P_{R}^{\perp}b_{1}^{*})_{+})+b_{2}^{*}-y_{j}|\] \[= |W_{2}^{*}P_{R}RPx-y_{j}+W_{2}^{*}(P_{R}^{\perp}RP^{\perp}x-\delta P_{R}^{\perp}u)_{+}|\] \[= |W_{2}^{*}P_{R}RP(x-\overline{x_{0,j}})|\] where by construction of \(P\), we have \(P\overline{x_{0,j}}=\overline{x_{0,j}}\). Passing to the fourth line, we used \(P_{R}RP=RP\); passing to the fifth line, we recalled \(W_{2}^{*}P_{R}b_{1}^{*}+b_{2}^{*}=0\); and passing to the sixth line, we used (7.3). Moreover, we recall that \(u\) is defined in (4.25). To pass to the sixth line, we also used that \[P_{R}^{\perp}(P_{R}^{\perp}RP^{\perp}x-\delta P_{R}^{\perp}u)_{+}=(P_{R}^{\perp}RP^{\perp}x-\delta P_{R}^{\perp}u)_{+} \tag{7.5}\] because the vector inside \((\cdot)_{+}\) only has components in the range of \(P_{R}^{\perp}\), which is diagonal in the given coordinate system, and because \((\cdot)_{+}\) acts component-wise. Therefore, since also \(W_{2}^{*}P_{R}=W_{2}^{*}\), we have \[W_{2}^{*}(P_{R}^{\perp}RP^{\perp}x-\delta P_{R}^{\perp}u)_{+} \tag{7.6}\] \[= W_{2}^{*}P_{R}P_{R}^{\perp}(P_{R}^{\perp}RP^{\perp}x-\delta P_{R}^{\perp}u)_{+}\] \[= 0\,.\] With \(W_{2}^{*}P_{R}RP=\widetilde{W}_{2}P\), (7.4) implies that \[\mathcal{C}_{j}[x]=d_{\widetilde{W}_{2}}(Px,\overline{x_{0,j}}) \tag{7.7}\] where \[d_{\widetilde{W}_{2}}(x,y):=|\widetilde{W}_{2}P(x-y)|\,, \tag{7.8}\] for \(x,y\in\operatorname{range}(P)\), defines a metric in the range of \(P\), which is a \(Q\)-dimensional linear subspace of the input space \(\mathbb{R}^{M}\). This is because the matrix \(\widetilde{W}_{2}=\widetilde{W}_{2}P\) has full rank \(Q\). We can therefore reformulate the identification of an input \(x\in\mathbb{R}^{M}\) with an output \(y_{j^{*}}\) via the constructively trained shallow network as the solution of the metric minimization problem \[j^{*}=\operatorname{argmin}_{j\in\{1,\ldots,Q\}}(d_{\widetilde{W}_{2}}(Px,\overline{x_{0,j}})) \tag{7.9}\] in the range of \(P\). 
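The classification rule (7.9) is easy to state in code; the following sketch (our names, assuming the quantities from Section 6) evaluates the constructively trained network on a new input via the metric minimization:

```python
import numpy as np

def classify(x, Xbar_red, Y):
    """Evaluate the constructively trained net on a new input x via (7.9):
    j* = argmin_j d_{Wtilde2}(Px, xbar_{0,j})."""
    Pen = np.linalg.inv(Xbar_red.T @ Xbar_red) @ Xbar_red.T
    P = Xbar_red @ Pen              # orthoprojector; range(P) = span of the means
    Wt2 = Y @ Pen                   # Wtilde_2; note Wt2 @ P == Wt2
    # d(Px, xbar_j) = |Wt2 P (Px - xbar_j)| = |Wt2 (Px - xbar_j)|,
    # using P xbar_j = xbar_j
    dists = [np.linalg.norm(Wt2 @ (P @ x - Xbar_red[:, j]))
             for j in range(Xbar_red.shape[1])]
    return int(np.argmin(dists))
```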
**Acknowledgments:** T.C. gratefully acknowledges support by the NSF through the grant DMS-2009800, and the RTG Grant DMS-1840314 - _Analysis of PDE_. P.M.E. was supported by NSF grant DMS-2009800 through T.C.
2309.13679
Neural Network-PSO-based Velocity Control Algorithm for Landing UAVs on a Boat
Precise landing of Unmanned Aerial Vehicles (UAVs) onto moving platforms like Autonomous Surface Vehicles (ASVs) is both important and challenging, especially in GPS-denied environments, for collaborative navigation of heterogeneous vehicles. UAVs need to land within a confined space onboard the ASV to get energy replenishment, while the ASV is subject to translational and rotational disturbances due to wind and water flow. Current solutions either rely on high-level waypoint navigation, which struggles to robustly land on varied-speed targets, or necessitate laborious manual tuning of controller parameters and expensive sensors for target localization. Therefore, we propose an adaptive velocity control algorithm that leverages Particle Swarm Optimization (PSO) and a Neural Network (NN) to optimize PID parameters across varying flight altitudes and distinct speeds of a moving boat. The cost function of the PSO includes the status change rates of the UAV and proximity to the target. The NN further interpolates the PSO-found PID parameters. The proposed method, implemented on a water strider hexacopter design, not only ensures accuracy but also increases robustness. Moreover, this NN-PSO can be readily adapted to suit various mission requirements. Its ability to achieve precise landings extends its applicability to scenarios including but not limited to rescue missions, package deliveries, and workspace inspections.
Li-Fan Wu, Zihan Wang, Mo Rastgaar, Nina Mahmoudian
2023-09-24T16:05:31Z
http://arxiv.org/abs/2309.13679v2
# Neural Network-PSO-based Velocity Control Algorithm for Landing UAVs on a Boat ###### Abstract Precise landing of Unmanned Aerial Vehicles (UAVs) onto moving platforms like Autonomous Surface Vehicles (ASVs) is both important and challenging, especially in GPS-denied environments, for collaborative navigation of heterogeneous vehicles. UAVs need to land within a confined space onboard the ASV to get energy replenishment, while the ASV is subject to translational and rotational disturbances due to wind and water flow. Current solutions either rely on high-level waypoint navigation, which struggles to robustly land on varied-speed targets, or necessitate laborious manual tuning of controller parameters and expensive sensors for target localization. Therefore, we propose an adaptive velocity control algorithm that leverages Particle Swarm Optimization (PSO) and a Neural Network (NN) to optimize PID parameters across varying flight altitudes and distinct speeds of a moving boat. The cost function of the PSO includes the status change rates of the UAV and proximity to the target. The NN further interpolates the PSO-found PID parameters. The proposed method, implemented on a water strider hexacopter design, not only ensures accuracy but also increases robustness. Moreover, this NN-PSO can be readily adapted to suit various mission requirements. Its ability to achieve precise landings extends its applicability to scenarios including but not limited to rescue missions, package deliveries, and workspace inspections. [Video] Aerial Systems: Perception and Autonomy; Machine Learning for Robot Control. ## I Introduction Many researchers equip drones with expensive sensors like depth cameras, dual cameras [1, 2], LiDAR [3], or even high-cost IR-Lock sensors to deal with the difficulties of precise landing. Although these added sensors bring benefits for more precise landing, their weight and power consumption cannot be ignored given the limited battery life of drones. In this work, the landing process only depends on a low-cost RGB camera to detect fiducial markers and estimate the landing target location, plus a distance sensor to detect the height of the target plane. In addition, software algorithms have been developed for the precise landing of drones. Advances in drone state estimators and visual-inertial odometry [4, 5, 6, 7] have significantly improved trajectory-following accuracy and stability in waypoint navigation tasks. However, for safe and soft drone landing, regular aerial waypoint navigation may not be directly adapted due to the ground effect when the drone touches the landing platform. Besides, waypoint navigation's dependency on drone odometry introduces another source of error when localizing dynamic landing targets, compared to a direct vision-driven velocity controller, which does not need to follow a particular trajectory when landing on a dynamic target. For example, when landing on a moving boat, forest canopies degrade GPS accuracy, while sunlight reflections, shadows, and the uneven, low-textured lake surface disturb visual odometry positioning [8]. To ensure the robustness of landing in GPS-denied river environments, our work adopts an intermediate-level velocity controller. To achieve a velocity controller for fast, stable, and accurate horizontal alignment with the landing target, many researchers investigate fuzzy logic [9], interaction matrices [10, 11, 12], model prediction [13, 14], or cascade methods [15]. 
However, many of these approaches require either manual parameter tuning before deployment (for example, fine-tuning multiple PID parameters for different heights [16]) or impose requirements on the differentiability of the optimization function (such as adjusting PID values with exponential functions based on heuristic rules [17, 18, 19]). To account for both the accuracy and the smoothness of drone landing with minimal human intervention and localization requirements, an adaptive velocity controller is proposed and optimized by the NN-PSO algorithm. The controller includes variable PID parameters for different altitudes and boat speeds. Besides, an exponentially decaying function (_tanh_ in this work) is adopted to regulate z-direction velocity control for stable landing with braking effects [20, 21]. By taking advantage of PSO's exploration [22] and the NN's interpolation [23, 24], the method preserves accuracy while greatly increasing training efficiency. Lastly, field experiments were conducted to test the sim-to-real transferability of the velocity controller that lands a hexacopter on a boat. The qualitative result verifies the precision and stability, even in windy conditions, of the NN-PSO-based velocity control algorithm. The low-cost sensor suite and the negligible parameter-tuning effort from simulation training to real-world deployment demonstrate the broad applicability and simplicity of the proposed drone landing approach. This paper is organized in the following way: Sec. II elaborates on the NN-PSO inputs, the cost function design, the adaptive velocity controller, target state estimation, and the state machine. Sec. III presents the water strider drone design, the training process of PSO particles, the comparison of the constant PID and the adaptive PID controllers, the velocity ramp function effectiveness, the comparison of single and dual marker designs, and the landing accuracy in different environments. The conclusions are given in Sec. IV. ## II Algorithm and Controller This section first elaborates on the inputs, cost function design, and particle moving rule of the PSO algorithm. Second, the adaptive PID controller, the ramp function and the _tanh_ function are adopted for three-dimensional speed control. Third, the inputs and outputs of the Neural Network are explained. Fourth, the state estimation error of the landing target is reduced by filtering. Fifth, the landing strategy, including three stages: Explore, Align, and Land, is depicted. ### _Particle Swarm Optimization_ The PSO algorithm is a population-based optimization algorithm inspired by the behavior of social animals, such as birds and fish. With each particle being a set of parameters to be optimized, PSO iteratively updates the velocity and position of the particles in the parameter search space to find the optimal solution. In this work, PSO helps optimize the parameters of the velocity controllers to ensure the accurate and stable landing of drones on the targets. The proposed velocity controller includes proportional, integral, and derivative parameters for the horizontal alignment, and the scaled _tanh_ function for stable descending with brake. Thus, each particle includes five parameters. \[X^{i}(t)=(K_{P}^{i}(t),K_{I}^{i}(t),K_{D}^{i}(t),\alpha^{i}(t),\beta^{i}(t)) \tag{1}\] where the position of particle \(i\) at iteration \(t\) is denoted as \(X^{i}(t)\). \(K_{P}\), \(K_{I}\), and \(K_{D}\) represent the PID parameters of the drone's horizontal velocity controller. 
\(\alpha\) is the scale rate of the vertical velocity controller, and \(\beta\) is the ramp function time, which will be elaborated in Sec. II-B. First, the basic optimization function is the horizontal distance between the target location and the final landing spot of the drone. \[d=\sqrt{(x_{d}-x_{t})^{2}+(y_{d}-y_{t})^{2}} \tag{2}\] \(x_{d}\) and \(y_{d}\) give the final position where the drone actually lands, and \(x_{t}\) and \(y_{t}\) give the position of the target. Therefore, a \(d\) very close to 0 indicates that the input PID parameters are suitable and help the drone land precisely. Second, to optimize the vertical landing speed and ensure stability, the optimization function further includes time, velocity, acceleration, and the roll and pitch rates. The goal of the PSO is to find the minimum of the function. \[F(X)=d+T+|\nu|+|a_{z}|+\sum\left|\dot{\phi}\right|+\sum\left|\dot{\theta}\right| \tag{3}\] where \(T\) is the duration from when the drone first hovers vertically over the target to when it lands on the surface. \(\nu\) and \(a_{z}\) are the z-directional velocity and acceleration at the moment the drone reaches the surface. \(\dot{\phi}\) and \(\dot{\theta}\) are the x-axis and y-axis angular velocities, which represent the roll and pitch rates of the UAV. Third, each particle \(i\) updates its own velocity vector \(V\) in every iteration \(t\). The velocity vector of particle \(i\) updates towards not only this particle's best parameter position but also the best global parameter position among all spread particles. \[X^{i}(t+1)=X^{i}(t)+V^{i}(t+1) \tag{4}\] \[V^{i}(t+1)=wV^{i}(t)+c_{1}r_{1}(pbest^{i}-X^{i}(t))+c_{2}r_{2}(gbest-X^{i}(t)) \tag{5}\] where the velocity vector at the next iteration, \(V^{i}(t+1)\), is the weighted sum of the current velocity vector \(V^{i}(t)\), the vector to the current particle's best parameter \(pbest^{i}\), and the vector to the best parameter among all particles \(gbest\), with weights \(w\), \(c_{1}\), and \(c_{2}\), respectively. The weight \(w\) acts as an inertia force on the particles to avoid sudden parameter jumps that jeopardize convergence, while the weights \(c_{1}\) and \(c_{2}\) improve the chance of finding the global sub-optimal solution without being stuck in a local optimal region. \(r_{1}\) and \(r_{2}\) are random scalars ranging over [0, 1). They add noise to the velocity vector magnitude to further increase the convergence rate [25]. Last, the PSO algorithm can efficiently explore the sub-optimal global solution. Every particle represents a set of PID parameters that govern the horizontal movements of a drone during a complete landing process. In each iteration, particles share information on costs according to Eqn. 3 and update their velocity vectors based on this collective knowledge. This approach makes the discovery of near-optimal velocity controller parameters possible with fewer than a hundred iterations in simulation. Besides, the hyper-parameters in Eqn. 5 can adopt task-independent empirical values with little manual tuning. Moreover, the optimization of the cost function can be easily customized without the need to consider its differentiability with respect to the parameter variables. This enables researchers to mainly focus on the parameter design of velocity controllers and task-dependent cost functions. 
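For concreteness, a minimal numpy sketch of the particle update of Eqns. (4)-(5) follows; the inertia and attraction weights are typical empirical choices of ours (the paper only notes that task-independent values work), and the cost of Eqn. (3) is left as a stub since evaluating it requires running a full simulated landing.

```python
import numpy as np

rng = np.random.default_rng()

def pso_step(X, V, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One PSO iteration, Eqns. (4)-(5). X and V are (n_particles, 5) arrays
    of particle positions (K_P, K_I, K_D, alpha, beta) and velocities; pbest
    holds each particle's best-so-far position, gbest the best among all."""
    r1 = rng.random(X.shape)   # random scalars in [0, 1), as in Eqn. (5)
    r2 = rng.random(X.shape)
    V_new = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
    return X + V_new, V_new

def cost(particle):
    """Eqn. (3): d + T + |v| + |a_z| + sum|roll rate| + sum|pitch rate|.
    In the paper this is measured by running one simulated landing in
    Gazebo, so only a stub is given here."""
    raise NotImplementedError
```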
### _Adaptive Velocity Controller_ The PID controller is adopted for horizontal control, as in the equation below: \[v_{xy}=K_{P}e+K_{I}(I_{e}+eT)+K_{D}\frac{e-e^{\prime}}{T} \tag{6}\] where \(K_{P}\) is the proportional term, \(K_{I}\) is the integral term, \(I_{e}\) is the accumulated error, \(K_{D}\) is the derivative term, \(e\) is the current error, \(e^{\prime}\) is the last error, and \(T\) is the sampling period, which is 0.05 sec in this experiment. To optimize the landing stability and the time spent, the vertical velocity controller adopts the _tanh_ function, which makes the drone descend fast at high altitudes and slow down at low altitudes. \[v_{z}=\frac{v_{max}}{2}(\tanh{(\alpha h-3)}+1)+v_{min} \tag{7}\] where the scaling parameter \(\alpha\) can be optimized to change the slope of \(v_{z}\), exponentially regulating the drone's descending velocity at various altitudes, as in Fig. 1. \(h\) is the real-time height in meters measured by the range sensor. \(v_{max}\) and \(v_{min}\) are the maximum and minimum vertical velocities. The constants in the equation shift the _tanh_ function so that \(v_{z}\) stays within the bounds of the positive \(v_{max}\) and \(v_{min}\) values. To track a moving boat effectively, a ramp velocity function is implemented to work in tandem with the PID controller during the Align stage. This integration enables the drone to increase its velocity gradually and smoothly. Maintaining a stable acceleration profile helps reduce oscillations and maintain steady roll and pitch angles, which are instrumental in enhancing the accuracy of boat tracking by keeping the boat within the camera's field of view. \[v_{r}=f(v_{xy},\beta)=\begin{cases}v_{r}+\lambda&\text{if }v_{xy}>v_{r}+\delta\text{ and }t<\beta\\ v_{r}-\lambda&\text{if }v_{xy}<v_{r}-\delta\text{ and }t<\beta\\ v_{xy}&\text{else}\end{cases} \tag{8}\] where \(v_{r}\) is the output velocity of the ramp function, \(\lambda\) is the desired acceleration, and the velocity threshold \(\delta\) and the time threshold \(\beta\) decide whether to directly use the PID controller output \(v_{xy}\). When tracking a fast-moving target, the integrator requires more time (a higher \(\beta\)) to eliminate steady-state error. ### _Neural Network_ As a drone transitions from higher to lower altitudes, its field of vision narrows, prompting the need for increased \(K_{P}\) values to enhance horizontal agility and target tracking. Moreover, the varying speed of a moving boat also requires different PID parameters for velocity compensation. The studies referenced in [16, 17, 18] underscore the critical relationship between flight altitudes and specific PID parameter adjustments. A Neural Network provides a more comprehensive solution for adapting PID parameters across various scenarios, effectively reducing the reliance on an extensive number of PSO training cases. The initial phase involves training the neural network using a flexible dataset of 25 data points, which can be tailored to user requirements. This dataset includes flight height and boat velocity as inputs, while PID parameters and the ramp function time \(\beta\) serve as the corresponding outputs. These 25 training data points represent the outcomes produced by PSO, covering five distinct flight altitudes and five different boat speeds. To provide context, the maximum observable height of the drone is limited to 5 m and the altitude of the landing platform on the boat is about 1.25 m. 
The maximum speed of the boat is 2.9 m/s. Accordingly, the five flight altitudes are set between 1.3 and 5 m, while boat speeds span from 0 to 2.9 m/s. Leveraging the interpolation capabilities of the Neural Network, we can construct a dynamic PID controller that adapts seamlessly to varying heights and boat speeds. During field tests, the drone can readily utilize this Neural Network by inputting observed boat velocities, allowing it to adjust PID parameters in real time according to its current altitude. This process is visualized in Fig. 2. ### _Target State Estimation_ Drones typically utilize pitch or roll angles to adjust flight direction, which in turn affects the camera orientation on the drone. To correct for this observational error, transformation matrices are employed to rotate coordinates. \[P_{\text{body}}=T_{\text{body\_world}}T_{\text{world\_cam}}P_{\text{cam}}=T_{\text{body\_cam}}P_{\text{cam}} \tag{9}\] where the real-time roll (\(\phi\)) and pitch (\(\theta\)) angles of the drone can be directly obtained from the IMU sensor. When projecting the target position onto the camera frame, only \(R_{x}\) and \(R_{y}\) are needed, so the yaw (\(\psi\)) angle is set to 0. Last, the velocity estimate is obtained by dividing the target's displacement in the drone body frame by the elapsed time: \[\overrightarrow{x(t)}=(\sum_{k=t-n}^{t}\frac{\Delta x(k)}{T_{s}})/n \tag{10}\] where a filter samples and averages \(n\) velocities to reduce errors, with \(x\) representing the position of the boat in the drone body frame and \(T_{s}\) denoting the sampling time. \(T_{s}\) adjusts dynamically, subject to an outlier threshold, because variations in the on-board computer's computational load during field tests affect the camera sampling rate. ### _Landing Strategy_ To ensure comprehensive coverage of various scenarios, such as missing targets and targets with varying speeds, multiple events have been integrated into the system to enhance its robustness. The mission consists of three main stages: exploration, alignment, and landing. Fig. 3 illustrates these stages along with their corresponding trigger conditions. Fig. 1: From Align to Land stage, velocity controllers change. Fig. 2: The system overview: Yellow blocks are the controllers; green blocks are sensor modules; and the blue blocks represent software design. The communication between modules is through ROS. ## III Results and Analysis This section details the training and validation processes of the velocity controller for drone landing in simulation, the real-world tests of the trained controller, as well as the results and their analysis. ### _Water Strider Design_ The hexacopter is inspired by water striders, featuring four legs to handle challenging scenarios like landing on a moving boat and emergency lake landings. Fig. 4 provides an overview of the autonomous drone equipped with (a) Pixhawk4 FMUv5, (b) Jetson TX1 running Ubuntu 18.04 with ROS Melodic, (c) GPS, (d) AWC201-B RGB webcam, and (e) LiDAR-Lite v3 distance sensor. Separate batteries (11.1V Li-Po) power the Pixhawk4 and six 960 KV motors. To ensure smooth operations, the distance sensor is wired with a capacitor to prevent voltage fluctuations and communicates with the Pixhawk through the I2C protocol. Additionally, (f) 2.4GHz and 5GHz WiFi antennas are integrated into the Jetson TX1 for wireless SSH access and off-board control. 
Other components include (g) Holybro Radio Telemetry and FrSky X8R Telemetry systems, which facilitate communication with QGroundControl and an RC transmitter, allowing a manual flight mode. Three ABS elastic ribs, shown in (h) side view and (i) front view, are strategically positioned to safeguard the computer and controllers. Each water strider-inspired leg, composed of (j) the upper part connecting to the base and (k) the lower part outfitted with two 330 ml bottles for added buoyancy, ensures stability during landings. The total weight is approximately 2.4 kg. ### _Training and Validation of the NN-PSO_ To train the velocity controller using the PSO algorithm, a five-particle configuration is utilized to strike a balance between training efficiency and optimal solution attainment. In the initialization phase of the velocity controller parameters, namely \(K_{P}\), \(K_{I}\), \(K_{D}\), \(\alpha\), and \(\beta\) in Eqn. 1, we employed randomization within predefined ranges: [0, 2) for \(K_{P}\), [0, 1) for \(K_{I}\), [0, 1) for \(K_{D}\), and [0, 3) for both \(\alpha\) and \(\beta\). For the validation of the controller performance of the UAV landing on a moving ASV in the same simulated environment, Gazebo is adopted; the setup includes the river environment with a differential-driven WAM-V boat from [26], a simulated hex-rotor firefly [27], and an Iris drone to take off and land on the boat. To achieve robust controller parameters, the training sessions are conducted separately for descending and for horizontal tracking, since the descending motion can conflict with tracking during PSO training: the drone may land quickly for a higher reward instead of maintaining stable, extended tracking. In the left part of Fig. 5, the trajectories of the five particles converge towards optimal descending parameters. These trajectories document the adjustments made to \(K_{P}\), \(K_{I}\), \(K_{D}\), and \(\alpha\). Notably, the cost of the best particle drops rapidly within the initial 20 iterations, indicating the swift convergence of the PSO algorithm towards optimal values. Subsequently, by sharing costs, all particles gradually converge towards a global sub-optimal solution, resulting in closely aligned optimized parameters among all particles. After 100 iterations of PSO training, utilizing the best PID parameters (\(K_{P}=1.565\), \(K_{I}=0.121\), and \(K_{D}=0.245\)) at that moment, the horizontal distance error between the landed drone and the target spot is reduced to a mere 0.0026 meters within the training simulation environment. In comparison to the default Pixhawk AutoLand Mode, descending from the same height results in a significant reduction in landing time, from 18.62 to 3.21 seconds, while the vertical velocity \(\nu\) decreases from -0.3041 to 0.002 m/s. Additionally, the vertical acceleration \(a_{z}\) experiences a negligible increase, from 9.817 to 9.821 \(\mathrm{m/s^{2}}\) (including gravitational acceleration). This indicates that the drone can land quickly without bouncing upon contact with the surface of a boat. Fig. 3: The flowchart of the landing strategy. Fig. 4: The hexacopter schematics, water strider design, and the boat in the experiments. The right part of Fig. 5 shows the trajectories of the five particles moving close to the optimized horizontal tracking parameters. These trajectories document the adjustments made to \(K_{P}\), \(K_{I}\), \(K_{D}\), and \(\beta\). The PSO algorithm separately optimized these parameters at five different heights and positions, which are marked in yellow in Tab. I. 
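To make the role of the optimized parameters concrete, the following is a minimal Python sketch of the vertical-velocity profile of Eqn. (7) and the ramp limiter of Eqn. (8) from Sec. II-B, into which the PSO-found \(\alpha\) and \(\beta\) plug; the default numeric values are illustrative placeholders of ours, not values from the paper.

```python
import numpy as np

def v_z(h, alpha, v_max=1.0, v_min=0.1):
    """Eqn. (7): descend fast at altitude, slow down exponentially near touchdown.
    h is the range-sensor height in meters; alpha is the PSO-optimized slope."""
    return 0.5 * v_max * (np.tanh(alpha * h - 3.0) + 1.0) + v_min

def ramp(v_r, v_xy, t, beta, lam=0.05, delta=0.02):
    """Eqn. (8): rate-limit the PID output v_xy while t < beta.
    lam is the desired acceleration per step; we read the second
    condition of Eqn. (8) as v_xy < v_r - delta."""
    if t < beta:
        if v_xy > v_r + delta:
            return v_r + lam
        if v_xy < v_r - delta:
            return v_r - lam
    return v_xy
```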
Subsequently, the trained neural network model predicts continuous parameters for altitudes from 1.5 to 5 m and velocities from 0.6 to 2.3 m/s. ### _Adaptive PID controller_ The Adaptive PID controller exhibits an impressive maximum tracking speed of 8.9 m/s (target speed) when deployed on the Iris drone, which itself can reach a top speed of 11 m/s. Thus, the maximum landing speed achieved with this controller reaches 80.9% of the drone's maximum flight speed (the target size in the experiment is 1x1 m). In Fig. 6, the 3D and 2D trajectories of the flight, along with the target velocity estimation, are depicted. At the 9-second mark in the velocity plot, the drone's velocity measurement is sufficiently accurate, serving as input for the neural network in the proposed landing strategy. In comparison, the Visual Odometry Tracking [1] exhibits a maximum tracking speed of 4.2 m/s in simulation scenarios. When deployed on the DJI F450 drone, which itself can reach a top speed of 36.1 m/s, its controllers' tracking capability is far from the hardware limit. The target landing platform size in their experiment is 1.5x1.5 m. The maximum landing speed is about 11.6% of the drone's maximum flying speed. Hence, the utilization of the NN-PSO algorithm in conjunction with the adaptive PID controller streamlines testing and exploration, allowing us to approach a landing speed near the maximum flight speed more efficiently. Many studies [16, 17] have demonstrated that the height-adaptive PID controller outperforms its constant counterpart. To further substantiate this theory and validate the superior performance of our Adaptive PID controller, a comparative analysis involving two constant PID controllers is executed. These two PID controllers were fine-tuned using PSO at both low and high altitudes, serving as benchmarks. The descent speed remained constant, allowing ample time for observing the controllers' performance. Furthermore, the boat's variable speed, as opposed to a constant speed, enabled a comprehensive evaluation of the PID controllers' tracking capabilities. In Fig. 7, the fourth column illustrates that the adaptive PID controller excels in both tracking ability and boat velocity estimation, attributed to its stable flight performance. ### _Ramp Function Effectiveness Assessment_ The ramp function gradually increases the drone's speed, maintains camera stability, ensures the target remains in view, and allows the integrator to contribute proactively. Once the ramp function phase concludes, the PID controller takes over, with its integrator finalizing velocity compensation. Fig. 8 illustrates the impact of varying the ramp function duration. If the duration is too brief or omitted altogether, the drone may exhibit an initial over-reaction and miss the target, especially when the marker initially appears at the edge of the drone camera's field of view, where a large estimated distance typically triggers a substantial initial rush. Conversely, if the duration is excessively prolonged, the drone may struggle to land accurately during descent. By employing an appropriately timed ramp function, the drone can effectively track and execute a successful landing. ### _Dual Marker Performance Evaluation_ In the context of tracking landing targets, ARTag [28, 29, 30] markers, known for their binary square fiducial properties, offer a swift and robust solution for 3D camera pose estimation. 
In our experimental setups, the Dual Marker is designed and attached on the boat to facilitate low-altitude target tracking and delicate flight maneuvers. The Dual Marker comprises two ARTags, one measuring 5.5x5.5 cm and the other 16x16 cm, positioned 1 cm apart. To further validate the effectiveness of the Dual Marker, we conducted tests using a circularly moving boat scenario in Fig. 9, and the data is recorded in Tab. II. Under identical conditions, including the same PID settings and velocity limitations, the Dual Marker design significantly improves the drone's landing precision during simultaneous forward movement and descent, thereby reducing landing position errors compared to using a single marker. In summary, transitioning to the smaller marker when descending proves instrumental in helping the drone overcome the limited field of vision at low altitudes. ### _Field Test_ The field tests demonstrate precise landing capabilities in both static and dynamic scenarios, as visualized in Fig. 10. TABLE I: The yellow blocks are the outputs of the PSO, and the other blocks are the result of the Neural Network. (The tabulated \((K_{P},K_{I},K_{D},\beta)\) values for the five flight altitudes and five boat speeds are not legible in this extraction.) Fig. 5: PID gains, \(\alpha\) and \(\beta\) values of the five particles. The cost function trajectory of the best particle across all iterations during PSO training. ## IV Conclusions The Neural Network-PSO-based velocity control algorithm enables autonomous landing of the drone on a moving boat, using only onboard sensing and computing, eliminating the need for external infrastructure like visual odometry or GPS. The approach doesn't require prior information about the boat's location. To further increase robustness, the adaptive PID controller is designed and NN-PSO trained for variable speeds and altitudes. This algorithm includes features like a ramp function for keeping targets in vision, a tanh function for landing softly, and multi-sensor fusion for relative UAV localization and boat velocity estimation. By achieving more stable flight velocity, the drone can autonomously track and land on a boat. 
This framework is validated through both simulation and real-world experiments. \begin{table} \begin{tabular}{c|c|c} \hline Error (m) & Single Marker & Dual Marker \\ \hline **Static** & 0.113 & 0.012 \\ \hline **Linear movement** & & \\ speed 0.1 m/s & 0.0929 & 0.0558 \\ speed 0.2 m/s & 0.0881 & 0.0117 \\ speed 0.4 m/s & fail & 0.2497 \\ \hline **Circular movement** & & \\ **with x-speed 0.1 m/s** & & \\ speed 0.1 rad/s & 0.1539 & 0.0781 \\ speed 0.2 rad/s & 0.1708 & 0.1162 \\ speed 0.3 rad/s & 0.3268 & 0.2082 \\ speed 0.4 rad/s & fail & 0.2130 \\ \hline \end{tabular} \end{table} TABLE II: The distance error of the drone landing on the boat with different moving patterns. Fig. 6: The simulation result: the Iris drone can land on a boat that moves at a speed of 8.9 m/s. Fig. 7: The results of the constant and Adaptive PID comparison experiments. Fig. 8: The results of the ramp function duration comparison experiments. The appropriate 2.15 sec ramp function, in the 4th column, helps the drone effectively decrease oscillation and successfully track the target. Fig. 9: The circularly moving boat and the real dual marker. Fig. 10: Visual-based Drone Landing with Neural Network-PSO-based Velocity Control Algorithm.
2309.16729
SimPINNs: Simulation-Driven Physics-Informed Neural Networks for Enhanced Performance in Nonlinear Inverse Problems
This paper introduces a novel approach to solve inverse problems by leveraging deep learning techniques. The objective is to infer unknown parameters that govern a physical system based on observed data. We focus on scenarios where the underlying forward model demonstrates pronounced nonlinear behaviour, and where the dimensionality of the unknown parameter space is substantially smaller than that of the observations. Our proposed method builds upon physics-informed neural networks (PINNs) trained with a hybrid loss function that combines observed data with simulated data generated by a known (approximate) physical model. Experimental results on an orbit restitution problem demonstrate that our approach surpasses the performance of standard PINNs, providing improved accuracy and robustness.
Sidney Besnard, Frédéric Jurie, Jalal M. Fadili
2023-09-27T06:34:55Z
http://arxiv.org/abs/2309.16729v1
# SimPINNs: Simulation-Driven Physics-Informed Neural Networks for Enhanced Performance in Nonlinear Inverse Problems ###### Abstract This paper introduces a novel approach to solve inverse problems by leveraging deep learning techniques. The objective is to infer unknown parameters that govern a physical system based on observed data. We focus on scenarios where the underlying forward model demonstrates pronounced nonlinear behaviour, and where the dimensionality of the unknown parameter space is substantially smaller than that of the observations. Our proposed method builds upon physics-informed neural networks (PINNs) trained with a hybrid loss function that combines observed data with simulated data generated by a known (approximate) physical model. Experimental results on an orbit restitution problem demonstrate that our approach surpasses the performance of standard PINNs, providing improved accuracy and robustness. Sidney Besnard\({}^{1,2}\), Frederic Jurie\({}^{1}\), Jalal Fadili\({}^{1}\)\({}^{1}\)Univ. Caen Normandie, ENSICAEN, CNRS \({}^{2}\)Safran Data Systems Inverse problems, Neural Networks, Physics-Informed, Simulation ## 1 Introduction Inverse problems play a crucial role in science by making it possible to unravel the hidden properties and processes behind observed data. They allow scientists to infer and understand phenomena that are otherwise difficult or impossible to observe or measure directly. These problems involve determining the parameters of a system from some available measurements. Inverse problems have far-reaching applications, spanning a wide spectrum from medical imaging to non-destructive testing and astronomical imaging, as we will see. Machine and deep learning have recently emerged as powerful alternatives to variational models for solving inverse problems. These methods include supervised and unsupervised methods, such as Deep Inverse Prior (DIP) [1], Unrolling [2, 3], Plug-and-play (PnP) [4, 5], and generative models [6]. For a review see [2]. Unrolling and PnP rely on neural networks to learn the regularization from the data. However, these approaches only make sense if the output parameter space can be equipped with a suitable notion of regularity. This is certainly the case if the input parameters are in the form of a structured signal, but it is not always the case, as in our setting (think of inferring a few parameters that are not structured on a grid). A naive technique would then be to train a neural network using a dataset consisting of input-output pairs, where the input is the observed data and the output is the sought-after vector of parameters [7, 8, 9]. Clearly, the neural network learns to invert the forward model (i.e. the mapping between the observed data and parameters), with the hope that it would predict the unknown parameters for new observations. This approach leverages the ability of neural networks to capture intricate patterns and non-linear relationships in the data. Unfortunately, this type of technique is only applicable when a large set of training pairs is available, which is rarely the case in most practical situations. Moreover, such approaches are completely agnostic to the forward model, which can produce unrealistic solutions that may not generalize well. Physics-informed neural networks (PINNs) were primarily proposed to solve partial differential equations (PDE) [10, 11, 12, 13]. Their core idea is to supplement the neural network training with information stemming from the measurement formation model, e.g. the PDE model. 
In turn, this makes it possible to restrict the space of solutions by enforcing the output of the trained neural network to comply with the physical model as described by the PDE. As a result, these methods can be trained with smaller datasets. An aspect to consider regarding PINNs is their reliance solely on the reconstruction error, which represents the constraints imposed by the partial differential equation (PDE). However, in numerous cases, achieving a low reconstruction error does not guarantee an accurate prediction of the parameters (i.e., a low parameter error). Thus, the solution space must be regularized during training. Contributions. In this paper, we propose a novel hybrid approach that combines both physics-informed and data-driven methods by using simulated data to regularise the solution space. Given the difficulty in obtaining real training pairs (observations, parameters) for many real-world problems, simulations offer a convenient means to provide the missing complementary information. Our aim is to demonstrate the effectiveness of neural networks in dealing with non-linear problems, while presenting a method that implicitly infers the forward operator within the neural network parameters by minimising the reconstruction error. ## 2 Proposed Method Let \(\mathcal{X}\subset\mathbb{R}^{n}\) be the space of parameters of the (physical) model, and \(\mathcal{Y}\subset\mathbb{R}^{m}\) be the space of observations. An inverse problem consists in reliably recovering the parameters \(x\in\mathcal{X}\) from noisy indirect observations \[y=f(x)+\epsilon, \tag{1}\] where \(f:\mathcal{X}\rightarrow\mathcal{Y}\) is the forward operator, and \(\epsilon\) stands for some additive noise that captures the measurement and possibly the modeling error. Throughout, we assume that \(f\) is smooth enough (at least continuously differentiable). In the sequel, for a neural network with parameters (weights and biases) \(\theta\in\Theta\), \(\psi:(y,\theta)\in\mathcal{Y}\times\Theta\mapsto\psi(y,\theta)\) denotes its output. Finally, we also assume that we have an approximate explicit model of \(f\), sometimes referred to as a digital twin. This model, \(\hat{f}\), is obtained by modelling the physical phenomena involved in the observations. It will be used later to generate simulated data. ### 2.1 PINNs for non-linear inverse problems The key idea of PINNs is to incorporate the physical model into the cost function during the training process. For a neural network \(\psi\) and \(n\) training samples \(\{y_{i}:i=1,\ldots,n\}\), this amounts to solving the following minimization problem with the empirical loss: \[\min_{\theta\in\Theta}\frac{1}{n}\sum_{i=1}^{n}\lVert y_{i}-\hat{f}(\psi(y_{i},\theta))\rVert^{2} \tag{2}\] This loss function injects the information provided by the physical forward model directly into training. It is also an unsupervised method that relies solely on observations, without any knowledge of the parameter vector \(x_{i}\) corresponding to each \(y_{i}\). Unfortunately, as was observed previously in the literature (e.g. in [14]), when \(f\) is not injective, there are infinitely many solutions \(\psi(\cdot,\theta)\) which attain zero training error. This is because the forward model \(f\) may map multiple input vectors to the same output vector. For example, in the linear case, the action of \(f\) is invariant along its null space. 
This suggests that training a reconstruction network as in (2) only from the observed data, without any additional assumptions or constraints, is not viable. Possible workarounds include explicitly constraining the output of the reconstruction network through regularization (and we are back to the variational world), or introducing invariances such as in [14]. In the forthcoming section, we will describe an alternative based on exploiting the forward model to simulate input-output pairs. This approach will help to regularize the training process and make it more robust to the non-injectivity of \(f\). ### 2.2 SimPINNs: Simulation aided PINNs In many areas of science, obtaining pairs of input (parameters)-output (observations) training data can be a significant challenge. This can be due to various reasons, including that data are difficult or expensive to acquire. This is for instance the case in large instruments in physics. Furthermore, even if such pairs of data can be acquired, they are available only in limited quantity, which often impedes the use of data-intensive machine learning approaches. There are, however, cases where, even if such data are unavailable, it is possible to artificially generate input-output pairs by leveraging knowledge of the forward model (Eq. (1)), even if the latter is only approximately known. This involves generating a parameter/input vector \(x\) sampled from the range of possible input values in the model or based on the known distribution of input data. We propose to compute the corresponding simulated observation \(\hat{f}(x)\), i.e. without noise. It is important to note that the forward model serves anyway as an approximation of the underlying physical phenomenon \(f\) it represents, and the simulated observation can only be considered as a perturbed version of the unknown observation due to model imperfections. Summarizing our discussion above, we propose to train a neural network \(\psi\) by replacing Eq. (2) with \[\min_{\theta\in\Theta}\frac{1}{n}\sum_{i=1}^{n}\mathcal{L}_{\lambda}(x_{i},y_{i},\theta),\quad\text{where} \tag{3}\] \[\mathcal{L}_{\lambda}(x,y,\theta)=\lambda\lVert y-\hat{f}(\psi(y,\theta))\rVert^{2}+(1-\lambda)\lVert\psi(y,\theta)-x\rVert^{2}.\] Here \(\lambda\in]0,1[\) balances the two terms: fidelity to the observation and the parameter reconstruction error. Figure 1: Illustrations of the architectures used in PINNs and SimPINNs for inverse problems. The top part depicts the PINNs approach, where a neural network is trained to learn the inverse function of \(f\), enabling the reconstruction of accurate \(x\) values from observed \(y\) values solely based on the observations and the underlying physics (unsupervised learning). In the bottom part, the SimPINNs (supervised) approach is shown, which utilizes the physics-based simulations to complement the training set with ’annotated’ simulated data and regularize the solution. The determination of an optimal value for \(\lambda\) can be a challenging task. In our study, we employed an empirical approach to estimate this value by performing cross-validation. In the case of real observations, the value of \(x\) is unknown, making it impossible to calculate the second term in the loss function (Eq. (3)). Therefore, only the first term, which focuses on reconstruction fidelity, is utilized for such data. 
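In code, the hybrid objective (3) might be sketched as follows (a PyTorch-style sketch of ours; `model`, `f_hat`, and the batching conventions are placeholder assumptions, and \(\hat{f}\) must be differentiable for the fidelity term to train):

```python
import torch

def simpinn_loss(model, f_hat, y, x=None, lam=0.5):
    """Hybrid loss of Eqn. (3). For simulated pairs, pass both y and x;
    for real observations x is unknown, so only the fidelity term is used.
    f_hat must be differentiable (e.g., an analytical propagator)."""
    x_pred = model(y)
    fidelity = ((y - f_hat(x_pred)) ** 2).flatten(1).sum(dim=1).mean()
    if x is None:                       # real data: reconstruction term only
        return fidelity
    supervised = ((x_pred - x) ** 2).sum(dim=1).mean()
    return lam * fidelity + (1.0 - lam) * supervised
```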
The ratio between the number of real data, denoted as \(N_{o}\) (where only the observation is known), and the number of simulated data, denoted as \(N_{s}\), plays a significant role in the analysis. Consequently, the influence of this ratio has to be thoroughly examined through experimental studies. ## 3 Experimental Results We validate the proposed method by applying it to an orbit restitution problem, where the dimensionality of \(\mathcal{X}\) (\(n=6\)) is relatively smaller than the dimensionality of \(\mathcal{Y}\) (\(m=64^{2}\)). This problem encompasses several intriguing aspects that make it particularly compelling. Firstly, the underlying physics and the involved forward operator exhibit nonlinearity, which is a primary focus in real-world research problems. Secondly, an orbit is defined by six orbital elements, while the received data exists in a significantly larger space, such as the image space in our case. Consequently, the forward operator maps from a smaller parameter space to a substantially larger image space. The third aspect of this problem pertains to the challenging nature of obtaining labelled data, as it requires integrating raw acquisition with non-trivial evaluations and determinations of orbit parameters. It is important to note that despite initially appearing simple due to the presence of more equations than unknown variables, this problem poses additional difficulties. The involved physics operator is nonlinear, rendering the problem ill-posed with non-trivial equivalence classes. This complexity makes it more challenging than it may seem. In the subsequent sections, we present the details of the problem, including the experimental settings and the obtained results. ### _Problem and dataset_ The objective is to derive an inverse operator for an orbit propagator using images obtained from a simulated sensor. The forward operator performs the projection of orbits expressed in the Terrestrial Reference Frame (TRF) onto an image, similar to a ground track orbit projection. Consequently, the received data correspond to the projection of an orbit onto a sensor positioned on the Earth (see Figures 2 and 3). For this problem, we opt to represent each orbit using 3 Keplerian parameters: inclination, eccentricity, and argument of periapsis. This choice leads to a wide range of images and results, while avoiding trivial equivalence classes in the parameter set. In this problem, the forward operator is defined as follows: \[f:[0,1[\times[0,2\pi[\times[0,2\pi[\;\rightarrow\;\mathcal{I}_{64\times 64}(\mathbb{R}),\qquad(e,i,\omega)\mapsto f(e,i,\omega). \tag{4}\] This operator simulates the entire system, encompassing the satellite position (achieved through an orbital propagator), the radiation pattern, and the projection to the final image (assumed to be \(64\times 64\) pixels). In this experiment, it is assumed that the approximated and actual physics models are identical, denoted as \(\hat{f}=f\). To compute the satellite position while preserving the differentiability criterion of \(\hat{f}\), an analytical Keplerian propagator is employed. Finally, the neural network utilized for this problem consists of 5 dense layers, with each layer containing 784 neurons and employing a ReLU activation function. To assess the performance of the neural network, two previously defined loss functions are employed, as illustrated in Figure 1. Specifically, the Mean Squared Error (MSE) is utilized for both the reconstruction error and the parameter error evaluation. 
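As a sketch, the described network can be assembled as follows (the final linear read-out layer and the flattening of the \(64\times 64\) input are our assumptions; the text specifies only five dense ReLU layers of 784 neurons):

```python
import torch.nn as nn

def make_inverse_net(m_in=64 * 64, n_out=3, width=784, depth=5):
    """Five dense ReLU layers of width 784 (Sec. 3.1); the flattened
    64x64 observation goes in, the 3 Keplerian parameters (e, i, omega)
    come out. The linear read-out head is our assumption."""
    layers, d = [], m_in
    for _ in range(depth):
        layers += [nn.Linear(d, width), nn.ReLU()]
        d = width
    layers.append(nn.Linear(d, n_out))
    return nn.Sequential(*layers)
```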
Figure 2: Orbit Restitution: Visualization of an orbit in multiple reference frames. In this example, the depicted orbit has an inclination of 45 degrees, an eccentricity of 0.4, and an apogee of \(42164\times 10^{3}\) meters. Figure 3: Orbit restitution: Simulation of the orbit projection, as depicted in Figure 2. The left image shows the simulated sensor with the projected orbit, while the right image depicts the orbit projected onto the ground plane. ### Results and analysis As shown in Table 1, the SimPINNs method effectively utilizes and leverages information from both real observations and generated data. It outperforms the PINNs method in terms of parameter reconstruction; PINNs is the row with \(N_{s}=0\). The most favorable results are achieved when employing the largest number of real and generated data points (\(N_{o}=N_{s}=40k\)), with an average improvement of a factor of 2 for the parameter error compared to the unsupervised PINNs method. Additionally, the SimPINNs method demonstrates a significant advantage, reducing the reconstruction error by a factor of 10. The impact of incorporating orbital physics in the forward operator becomes evident in the significant benefit it provides for image reconstruction in this particular use case. Due to the influence of orbital dynamics, even small changes in the parameters can have a substantial impact on the satellite's orbit and drastically alter the resulting projection on the observed image. As a result, two images with nearly identical parameters can exhibit significant differences. In this context, the reconstruction loss plays a crucial role in assisting the network in handling these high-gradient values that may not be adequately captured by the supervised loss alone. By combining both the generated and real data, the model can benefit from regularization through \(\mathcal{L}_{\lambda}\) applied to the real dataset. This approach allows the model to leverage the strengths of both data types and converge towards an improved solution with reduced parameter error. Figure 4 presents a selection of projected orbits along with their corresponding reconstructions. ## 4 Conclusions This article explores an NN approach to solve non-linear inverse problems, focusing on the promising SimPINNs method. SimPINNs leverages the physics model to generate labeled training data and effectively regularize the neural network. The study demonstrates that simulation-aided training provides more information compared to conventional PINNs or vanilla neural network training. By utilizing the approximated physics operator, the model achieves improved learning and generalization over non-labeled datasets. SimPINNs shows potential for addressing challenging inverse problems with limited labeled data, offering insights into training neural networks with physics-based knowledge. ## Acknowledgements The research presented in this paper is, in part, funded by the French National Research Agency (ANR) through the grant ANR-19-CHIA-0017-01-DEEP-VISION. 
\begin{table}
\begin{tabular}{|c||c|c|c|c|c|}
\hline
ArgPer (\(\times 10^{-5}\)) & \multicolumn{5}{c|}{Number of real observations (\(N_{o}\))} \\
\hline
\(N_{s}\) & 0 & 1000 & 10000 & 20000 & 40000 \\
\hline
0 & - & 2.4541 & 1.2970 & 0.9947 & 0.9081 \\
1000 & 2.0546 & 2.3412 & 1.1546 & 0.9731 & 1.1742 \\
10000 & 1.1470 & 2.1942 & 0.8591 & 0.7854 & 1.0644 \\
20000 & 0.8401 & 0.7721 & 0.6924 & 0.6571 & 0.6431 \\
40000 & 0.4541 & 0.4412 & 0.4201 & 0.4121 & 0.4212 \\
\hline
\end{tabular}
\begin{tabular}{|c||c|c|c|c|c|}
\hline
(\(\times 10^{-5}\)) & \multicolumn{5}{c|}{Number of real observations (\(N_{o}\))} \\
\hline
\(N_{s}\) & 0 & 1000 & 10000 & 20000 & 40000 \\
\hline
0 & - & 1.5887 & 1.4424 & 1.1232 & 0.8638 \\
1000 & 1.3545 & 1.5225 & 0.7542 & 1.1036 & 0.8452 \\
10000 & 1.6556 & 1.5245 & 0.7054 & 1.1023 & 0.7214 \\
20000 & 1.4684 & 1.0845 & 1.2784 & 0.6306 & 0.6251 \\
40000 & 0.3895 & 0.3856 & 0.3776 & 0.3435 & 0.2861 \\
\hline
\end{tabular}
\begin{tabular}{|c||c|c|c|c|c|}
\hline
 & \multicolumn{5}{c|}{Number of real observations (\(N_{o}\))} \\
\hline
\(N_{s}\) & 0 & 1000 & 10000 & 20000 & 40000 \\
\hline
0 & - & 0.0012 & 0.0021 & 0.0009 & 0.0004 \\
1000 & 0.0134 & 0.0114 & 0.0017 & 0.0012 & 0.0003 \\
10000 & 0.0022 & 0.0028 & 0.0024 & 0.0008 & 0.0007 \\
20000 & 0.0014 & 0.0007 & 0.0007 & 0.0007 & 0.0005 \\
40000 & 0.0002 & 0.0002 & 0.0002 & 0.0001 & 0.0001 \\
\hline
\end{tabular}
\begin{tabular}{|c||c|c|c|c|c|}
\hline
 & \multicolumn{5}{c|}{Number of real observations (\(N_{o}\))} \\
\hline
\(N_{s}\) & 0 & 1000 & 10000 & 20000 & 40000 \\
\hline
0 & - & 1.3243 & 1.1667 & 0.8742 & 0.3023 \\
1000 & 1.4581 & 0.7064 & 0.1314 & 0.0989 & 0.0852 \\
10000 & 0.2467 & 0.1510 & 0.1394 & 0.0920 & 0.0955 \\
20000 & 0.0966 & 0.0831 & 0.0968 & 0.0750 & 0.0701 \\
40000 & 0.0304 & 0.0305 & 0.0298 & 0.0271 & 0.0251 \\
\hline
\end{tabular}
\end{table}
Table 1: Error evaluation across various parameters, using a test set comprising solely observations. Rows give the number of simulated data (\(N_{s}\)); columns give the number of real observations (\(N_{o}\)); the row \(N_{s}=0\) corresponds to the unsupervised PINNs baseline. Figure 4: Reconstruction Results using the SimPINNs Method.
2309.11046
Heterogeneous Entity Matching with Complex Attribute Associations using BERT and Neural Networks
Across various domains, data from different sources such as Baidu Baike and Wikipedia often manifest in distinct forms. Current entity matching methodologies predominantly focus on homogeneous data, characterized by attributes that share the same structure and concise attribute values. However, this orientation poses challenges in handling data with diverse formats. Moreover, prevailing approaches aggregate the similarity of attribute values between corresponding attributes to ascertain entity similarity. Yet, they often overlook the intricate interrelationships between attributes, where one attribute may have multiple associations. The simplistic approach of pairwise attribute comparison fails to harness the wealth of information encapsulated within entities. To address these challenges, we introduce a novel entity matching model, dubbed Entity Matching Model for Capturing Complex Attribute Relationships (EMM-CCAR), built upon pre-trained models. Specifically, this model transforms the matching task into a sequence matching problem to mitigate the impact of varying data formats. Moreover, by introducing attention mechanisms, it identifies complex relationships between attributes, emphasizing the degree of matching among multiple attributes rather than one-to-one correspondences. Through the integration of the EMM-CCAR model, we adeptly surmount the challenges posed by data heterogeneity and intricate attribute interdependencies. In comparison with the prevalent DER-SSM and Ditto approaches, our model achieves improvements of approximately 4% and 1% in F1 scores, respectively. This furnishes a robust solution for addressing the intricacies of attribute complexity in entity matching.
Shitao Wang, Jiamin Lu
2023-09-20T03:49:57Z
http://arxiv.org/abs/2309.11046v1
# Heterogeneous Entity Matching with Complex Attribute Associations using BERT and Neural Networks ###### Abstract Across various domains, data from different sources such as Baidu Baike and Wikipedia often manifest in distinct forms. Current entity matching methodologies predominantly focus on homogeneous data, characterized by attributes that share the same structure and concise attribute values. However, this orientation poses challenges in handling data with diverse formats. Moreover, prevailing approaches aggregate the similarity of attribute values between corresponding attributes to ascertain entity similarity. Yet, they often overlook the intricate interrelationships between attributes, where one attribute may have multiple associations. The simplistic approach of pairwise attribute comparison fails to harness the wealth of information encapsulated within entities. To address these challenges, we introduce a novel entity matching model, dubbed "Entity Matching Model for Capturing Complex Attribute Relationships (EMM-CCAR)," built upon pre-trained models. Specifically, this model transforms the matching task into a sequence matching problem to mitigate the impact of varying data formats. Moreover, by introducing attention mechanisms, it identifies complex relationships between attributes, emphasizing the degree of matching among multiple attributes rather than one-to-one correspondences. Through the integration of the EMM-CCAR model, we adeptly surmount the challenges posed by data heterogeneity and intricate attribute interdependencies. In comparison with the prevalent DER-SSM and Ditto approaches, our model achieves improvements of approximately 4% and 1% in F1 scores, respectively. This furnishes a robust solution for addressing the intricacies of attribute complexity in entity matching. Keywords: Entity Matching, Attribute Comparison, Attention, Pre-trained Model. ## 1 Introduction Knowledge graph update[23] is a dynamic process of maintaining and revising existing knowledge graphs to reflect the ever-changing landscape of real-world knowledge. In this context, entity matching (EM) assumes paramount importance as different data sources continuously evolve, leading to a more complex and challenging knowledge graph update. Entity Matching (EM)[7] aims to determine whether different data references point to the same real-world entity. In our application, the objective of EM is to ascertain whether records refer to the same hydraulic entity (e.g., a reservoir or a river). In entity matching, data can be classified into two categories[26]: homogeneous data and heterogeneous data. Homogeneous data refers to data with the same schema, meaning they share identical attribute names. Based on the correctness of attribute values and their alignment with attribute names, homogeneous data can be further categorized into clean data and dirty data. Clean data indicates that attribute values are correctly placed under the appropriate attributes, i.e., attribute values are aligned with corresponding attributes in the schema. Dirty data [27] implies that attribute values might be erroneously placed under the wrong attributes, i.e., attribute values are not aligned with corresponding attributes in the schema. Heterogeneous data, on the other hand, involves dissimilar attribute names and may exhibit one-to-one, one-to-many, or many-to-many correspondence relationships. Entity matching [22] typically involves two steps: blocking[24] and matching[25].
The purpose of blocking is to reduce computational costs by partitioning records into multiple blocks, where only records within the same block are considered potential matches. Subsequently, within each block, matching is performed to identify valid pairs of matching records, which is a crucial step in the entity matching process. However, in the matching process, prevalent models often encounter attribute matching issues, particularly when dealing with heterogeneous data. As these models [8][10][11] typically concentrate on homogeneous data (often directly performing entity matching on structured database tables), they neglect the consideration of heterogeneous data (i.e., data scraped from web pages, where data attributes exhibit substantial variations). As depicted in Fig. 1, \(e_{1}\) and \(e_{2}\) respectively represent heterogeneous data extracted from Wikipedia and Baidu Baike about the Three Gorges Reservoir. Their attribute names are not identical, and complex correspondence relationships exist. For candidate entity pairs \((e_{1},e_{2})\), conventional EM methods tend to compare tokens based on properties like "Location" and "Reservoir Location," due to their highest token similarity. However, the attribute "Reservoir Location" encompasses information related to both "Location" and "Region," and a simplistic token-based similarity assessment between "Reservoir Location" and "Location" neglects the context of "Region," thereby diminishing the matching accuracy. Figure 1: Wikipedia and Baidu Baike information about the Three Gorges Reservoir. In this work, we propose a neural network approach based on pre-trained models to capture attribute matching information for deep entity matching (EM). To establish the correspondence between entity attributes, following the hierarchical structure Token\(\rightarrow\)Attribute\(\rightarrow\)Entity, we compare individual tokens within entities and across entities to obtain token similarity information. By subsequently aggregating the similarity information among tokens, we uncover complex attribute relationships in heterogeneous data. Ultimately, entity similarity is derived by evaluating the similarity of attribute values. To address matching challenges in heterogeneous data, we first learn contextual representations of tokens for a given pair of entities. Subsequently, within each entity, we leverage self-attention mechanisms to ascertain token dependencies, thus determining the significance of tokens within entities. This is followed by cross-entity token alignment using interaction attention mechanisms, yielding token similarity between entities. The aggregated token similarity is then weighted to derive attribute similarity. Concurrently, candidate entity pairs are serialized into sentence inputs for the BERT model, generating sentence-level embeddings to mitigate the impact of data heterogeneity. Subsequently, within a Linear layer, heightened emphasis is placed on the matching degree of similar attributes, harnessing more attribute information while disregarding the influence of dissimilar attributes. This comprehensive approach culminates in the determination of entity matching outcomes. Our main contributions can be summarized as follows: 1. We employ BERT for contextual embeddings, enabling richer semantic and contextual information to be learned from a reduced dataset and producing more expressive token embeddings. 2. Building upon the transformation of entity matching into sequence pair classification, we introduce attribute similarity.
This inclusion grants heightened focus to similar attributes, effectively harnessing entity attribute information. 3. We crawled data about the Songhua River Basin and Liao River Basin from Wikipedia and Baidu Baike, resulting in a dataset encompassing 4039 reservoirs and 6576 river data entries. We constructed a water resources dataset and validated the model's effectiveness and robustness on this dataset. ## 2 Related Work Entity Matching (EM), also known as entity resolution or record linkage, has been a subject of research in data integration literature for several decades[5]. To mitigate the high complexity of directly matching every pair of data, EM is typically divided into two steps: blocking and matching. In recent years, matching techniques have garnered increased research attention. Ditto[2] leverages pre-trained models such as BERT to transform entity matching into a binary classification problem of sequence pairs. It accomplishes this by inserting data attributes and values into special COL and VAL markers, concatenating them into sequence pairs, and then inputting them into the pre-trained model. This enables the model to classify sequence pairs and thus perform entity matching tasks. DER-SSM[1] introduces and implements soft pattern matching, flexibly associating the relationships between attributes by considering inter-word correlations. It aggregates word information during entity matching to express relationships between attributes, greatly enhancing the effectiveness of entity matching for complex and corrupted data. JointMatcher[3] employs relevance-aware encoders and numeral-aware encoders to enhance the model's focus on similar segments and numeral segments within sequences, thereby improving the accuracy of entity matching. HHG[4] pioneers the use of graph neural networks to establish a hierarchical structure among words, attributes, and entities. By learning entity embeddings from top to bottom and capturing more contextual information, it enhances the derivation of entity embeddings. Ditto utilizes the BERT model to transform entity matching into a binary classification problem of sequence pairs, better exploiting the contextual information of tokens. DER-SSM considers soft patterns to establish correspondences between attributes, mitigating the impact of heterogeneous data. Meanwhile, JointMatcher prioritizes the matching degree of similar segments between entities during the matching process. We have comprehensively considered these methods and, based on the foundation of using the BERT model to convert entity matching into binary classification of sequence pairs, we focus on the matching degree of similar attributes to address the issues of inadequate utilization of semantic information and matching of heterogeneous data. ## 3 Preliminaries This section provides a formal definition of Entity Matching (EM) and subsequently outlines an LM-based approach to solving EM. ### Entity Matching Entity Matching, also known as entity resolution, refers to the process of identifying pairs of records from structured datasets or text corpora that correspond to the same real-world entity. Let D be a collection of records from one or multiple sources, such as rows of relational tables, XML documents, or text paragraphs. Entity Matching typically involves two steps: blocking and matching. In this paper, we focus on the matching step of entity matching.
Formally, we define the entity matching problem as follows: Input: A set M of pairs of records to be matched. For each pair \((e_{1},e_{2})\in M\), \(e=\{(attr_{i},val_{i})\}_{1\leq i\leq k}\); each entity is represented in the form of K key-value pairs, where Key and Value are respectively the attribute name and attribute value of the entity. Output: A set of pairs of records \(M^{*}\), where the two records in each pair \((e_{1},e_{2})\) point to the same entity, indicating the same real-world entity. In this definition, our input is sufficiently general to apply to both structured and textual data. Attribute names and attribute values can take any form, including even indices or other identifiers, even if their true semantics are not available, such as "attr1" and "attr2". ### Methodology Framework The schema of an entity represents an abstracted representation of the basic information about that entity. Schema matching is often a necessary prerequisite in the context of Entity Matching (EM), as there might be differences among attributes of different entities. Traditional EM methods typically establish one-to-one mapping relationships between attributes from different entities. However, in reality, the associations between two entity attributes can be intricate, and simple one-to-one mappings may fail to capture these complex relationships. To address EM more effectively, it becomes crucial to consider the intricate associations between attributes during the entity matching process, thereby enhancing the performance of entity matching. For entities comprising distinct attributes, an entity itself can be seen as an instance of a schema. Given two entities, \(e_{1}=\{<a_{1}^{s},v_{1}^{s}>...<a_{m}^{s},v_{m}^{s}>\}\) and \(e_{2}=\{<a_{1}^{t},v_{1}^{t}>...<a_{n}^{t},v_{n}^{t}>\}\), \(\{a_{1}^{s},a_{2}^{s},...,a_{m}^{s}\}\) and \(\{a_{1}^{t},a_{2}^{t},...,a_{n}^{t}\}\) respectively denote the distinct schemas of the two entities, and each schema is composed of Tokens representing attribute values of the entities. To achieve this goal, we construct a neural network based on BERT for entity matching. As illustrated in Fig. 2, the left part involves a neural network that captures the complex associations between entity attributes. On the right side of the matching process, the final matching results are derived by considering the association matrix, which focuses on the degree of matching between different entity attributes. The matching process of the network mainly comprises the following steps: (1) Token Embedding: Converting Tokens within attribute values into vectors using BERT, while capturing contextual relationships between each Token. (2) Token Self-attention: Obtaining attention scores between Tokens of the same entity through a self-attention mechanism. (3) Token Aggregation: Aggregating attention scores between Tokens to obtain similarity information. (4) Attribute Inter-Attention: Determining attention scores between Tokens of different entities through interactive attention. (5) Attribute Comparison: Aggregating similarity scores between Tokens within and between entities to create a similarity matrix for attributes. (6) Serialize: Serializing entities in the form of \(\langle\)Key, Value\(\rangle\) pairs (see the sketch after this list). (7) Sentence Embedding: Converting the serialized result into sentence vectors. (8) Linear: Focusing on the matching degree of similar attributes within sentence vectors. (9) Softmax: Normalizing the output of the Linear layer, resulting in a match (0/1) output.
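To make steps (6)-(7) concrete, here is a minimal Python sketch of the Ditto-style serialization the paper adopts (detailed in Section 4.6 below); the example records and their attribute values are hypothetical, and the trailing [SEP] token follows BERT's sequence-pair convention:

```python
def serialize(entity):
    """Ditto-style serialization: [COL] attr [VAL] val for each key-value pair."""
    return " ".join(f"[COL] {attr} [VAL] {val}" for attr, val in entity)

def serialize_pair(e1, e2):
    """Candidate pair as one sequence: [CLS] e1 [SEP] e2 [SEP], as BERT expects."""
    return f"[CLS] {serialize(e1)} [SEP] {serialize(e2)} [SEP]"

# Hypothetical heterogeneous records describing the same reservoir
e1 = [("Name", "Three Gorges Reservoir"), ("Location", "Yiling District"), ("Region", "Yichang City")]
e2 = [("Name", "Three Gorges Reservoir"), ("Reservoir Location", "Yiling District, Yichang City")]
print(serialize_pair(e1, e2))
```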
Figure 2: Model Architecture. On the left is how to capture complex relationships between attributes, and on the right is how to incorporate these complex relationships into the matching process. ## 4 EM-CAR Model In this section, we will mainly introduce the specific implementation of each step of the model. ### Token Embedding Each token of both attribute values and attribute names needs to be embedded into a low-dimensional vector for subsequent calculations. Since attribute values are composed of sequences of different tokens, in the embedding process, the same token may have different vector representations in different contexts. Therefore, contextual semantic information should be integrated into the vectors. BERT is a pre-trained deep bidirectional Transformer model that effectively captures the semantic information of each token in its context through unsupervised learning on large-scale corpora. This implies that the token vectors generated by BERT can better represent the meaning of each token. Therefore, we choose to use the pre-trained BERT model for token embedding. \[H_{i}^{s}=BERT(A_{i}^{s}V_{i}^{s}) \tag{1}\] In the process of obtaining the attribute similarity matrix, the vector representation of entities is obtained by concatenating different tokens: \(H^{S}=[H_{1}^{s},H_{2}^{s},...,H_{m}^{s}]\), \(H^{T}=[H_{1}^{t},H_{2}^{t},...,H_{n}^{t}]\). ### Token Self-attention During the matching process, it is necessary to compare the similarity of each attribute, so we need to determine the significance of attributes within an entity. This is achieved through Token Self-attention, which computes the weights of tokens to aggregate the importance of attributes. Its role is to establish relationships between each token and other tokens within the sequence, capturing significant interdependencies to derive the importance of attributes. Through Self-attention, each token can be weighted and combined based on the importance of other tokens in the sequence, thereby better reflecting contextual information and semantic dependencies. Such an attention mechanism enables the model to dynamically focus on important tokens while disregarding less significant ones. Consequently, token-level self-attention is employed to weight the tokens within an entity. For attributes, their self-attention scores are computed using trainable matrices, as shown in the equation. \[\alpha_{i}^{s}=softmax((H_{i}^{s})^{T}W_{s}H_{i}^{s}) \tag{2}\] ### Token Aggregation To better harness token information for token-level comparisons, we employ Token Aggregation to merge the representations obtained after the Self-Attention operation, creating a comprehensive representation of the entire entity. This aggregation fuses all token information from the sequence into a single vector, enhancing the overall representation of the entity's information. This process involves an attention matrix, but we require a token weight vector for token aggregation. Therefore, we utilize a transformation function \(m2v()\) to convert it into a token weight vector. \(m2v()\) sums each row of \(\alpha_{i}^{s}\) to derive a vector, and subsequently normalizes each element of the vector by dividing it by the maximum element. \[\alpha_{i}^{\prime s}=m2v(\alpha_{i}^{s}) \tag{3}\] ### Attribute Inter-Attention The aforementioned operations yield attribute relationships within individual entities. However, our objective is to capture the complex relationships between attributes in heterogeneous data.
Therefore, it is necessary to perform matching across different entities. Through Attribute Inter-Attention in entity matching tasks, interactive attention calculation is applied to different entity attributes. This process yields correspondences between attributes of different entities, enabling the learning of correlations among various attributes. As a result, a better grasp of the associations between entities is achieved. This attention mechanism aids in focusing on attributes relevant to matching while disregarding irrelevant ones. Each entity is considered as a sequence concatenated by tokens; inter-entity interaction attention is leveraged to obtain interaction representations between \(H_{i}^{s}\) and \(H_{j}^{t}\). Here, \(W_{i\to T}\) denotes the inter-entity interaction attention. \[\beta^{i\to T}=softmax((H_{i}^{s})^{T}W_{i\to T}H^{T}) \tag{4}\] \[\hat{H}_{i}^{s}=\beta^{i\to T}H^{T} \tag{5}\] ### Attribute Comparison After obtaining the cross-entity correspondences, we need to use these correspondences to calculate the similarity between entity attributes, thus deriving the relationships between attributes. Attribute Comparison involves comparing different attributes of distinct entities during the matching process, computing their similarity or dissimilarity, and thereby gauging the level of association between different attributes. This attribute comparison mechanism aids the model in capturing essential features among attributes in entity matching tasks, further assisting the model in entity-level matching or classification. We apply element-wise absolute difference and the Hadamard product to \(H_{i}^{s}\) and \(\hat{H}_{i}^{s}\), and incorporate the intermediate representation into a highway network. Subsequently, the token-level similarity C from \(e_{1}\) to \(e_{2}\) is the output of the highway network. \[C=HighwayNet([|H^{s}-\hat{H}^{s}|],[H^{s}\otimes\hat{H}^{s}]) \tag{6}\] Lastly, the aggregation of token similarities obtained through the self-attention and interaction attention mechanisms yields the similarity between entity attributes. \[\delta_{i}^{s}=\sum_{x\in[1,|H_{i}^{s}|]}C_{i}^{s}(x)\,\alpha_{i}^{\prime s}(x) \tag{7}\] \[R_{ij}^{S\to T}=\sum_{x\in[1,|H_{i}^{s}|]}C_{i}^{s}(x)\,\alpha_{i}^{\prime s}(x) \tag{8}\] ### Serialize and Sentence-Embedding We employ the methods from Ditto [2] to serialize the data and generate sentence embeddings. For each entity pair, we serialize it as follows: \(serialize(e)=[COL]attr_{1}[VAL]val_{1}[COL]attr_{2}[VAL]val_{2}\ldots\), where [COL] and [VAL] are special tokens used to indicate the start of attribute names and values, respectively. For each candidate entity pair, \(serialize(e_{1},e_{2})=[CLS]serialize(e_{1})[SEP]serialize(e_{2})[SEP]\), where [SEP] is a special token that separates the two sequences, and [CLS] is a special token required by BERT to encode the sequence pair into a 768-dimensional vector.
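Before turning to the final classification layer, the following minimal PyTorch sketch ties Eqs. (2)-(7) together. It is an illustration under simplifying assumptions of ours (a one-layer highway network and a per-token scalarization of C before the weighted aggregation), not the authors' implementation:

```python
import torch
import torch.nn as nn

class Highway(nn.Module):
    """A one-layer highway network: gated mix of a transform and the input."""
    def __init__(self, dim):
        super().__init__()
        self.h = nn.Linear(dim, dim)
        self.g = nn.Linear(dim, dim)

    def forward(self, x):
        gate = torch.sigmoid(self.g(x))
        return gate * torch.relu(self.h(x)) + (1 - gate) * x

def m2v(att):
    """Eq. (3): collapse an attention matrix to token weights (row-sum, then divide by max)."""
    w = att.sum(dim=-1)
    return w / w.max(dim=-1, keepdim=True).values

def token_self_attention(H, W_s):
    """Eq. (2): softmax(H^T W_s H) over the tokens of one entity; H is (tokens, dim)."""
    return torch.softmax(H @ W_s @ H.T, dim=-1)

def inter_attention(H_s, H_t, W_st):
    """Eqs. (4)-(5): align tokens of entity 1 against entity 2."""
    beta = torch.softmax(H_s @ W_st @ H_t.T, dim=-1)
    return beta @ H_t                                   # \hat{H}^s

def attribute_similarity(H_s, H_t, W_s, W_st, highway):
    alpha = m2v(token_self_attention(H_s, W_s))         # token weights within entity 1
    H_hat = inter_attention(H_s, H_t, W_st)             # cross-entity aligned tokens
    C = highway(torch.cat([(H_s - H_hat).abs(),         # Eq. (6): |difference| and
                           H_s * H_hat], dim=-1))       # Hadamard-product features
    return (C.sum(dim=-1) * alpha).sum()                # Eq. (7): weighted aggregation

# toy usage with random 8- and 10-token, 128-dim entities
d = 128
H_s, H_t = torch.randn(8, d), torch.randn(10, d)
W_s, W_st = torch.randn(d, d), torch.randn(d, d)
print(attribute_similarity(H_s, H_t, W_s, W_st, Highway(2 * d)))
```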
### Linear and Softmax In entity matching tasks, a linear layer is employed to perform a linear transformation on the vectors that have undergone feature extraction and encoding. The formula representing the linear layer for input feature vector X is as follows: \[L(X,W,b)=X\bullet W+b \tag{9}\] Here, W represents the weight matrix to be learned, and b is the bias vector. In this context, the vector X is obtained through the previous serialization and sentence embedding, resulting in the embedded vector E for entity pairs. The matrix W corresponds to the obtained entity attribute similarity matrix. Further applying a softmax function yields the output vector. This vector represents a probability distribution, where each element signifies the probability for the respective category. In this context, an output of 0 signifies a non-match, while 1 signifies a match. ## 5 Experiment In this section, we evaluate our EM-CAR model on several datasets. ### Experiment Dataset When evaluating our approach, we utilized various types of datasets, including: Two homogeneous public datasets[9] (simple 1:1 attribute associations). Two heterogeneous public datasets (complex associations of 1:m and m:n). A hydraulic heterogeneous dataset (complex associations of 1:m and m:n). The information for the public datasets is presented in the following table. Homogeneous Data: The patterns of homogeneous data involve simple associations (1:1). iTunes-Amazon (iA) and DBLP-Scholar1 (DS1) correspond respectively to iTunes-Amazon1 and DBLP-Scholar1[5]. Heterogeneous Data: The heterogeneous data involve complex associations (1:m or m:n). We implemented a variant of the Synthetic Data Generator UIS to generate the UIS1-UIS2 (UU) dataset[10]. The initial five attributes are name, address, city, state, and zip code. Address and city are combined into a new attribute in UIS1 records, while city and state are integrated into a new attribute in UIS2 records. Therefore, the attribute numbering for UU is 4-4. Walmart-Amazon1 (WA1) is a variant of Walmart-Amazon (5-5)[12]. Brand and model are merged into a new attribute in Walmart records, while category and model are integrated into a new attribute in Amazon records. Hence, the attribute numbering for WA1 is 4-4. Next, we introduce the composition of the hydraulic dataset, which we crawled separately from Wikipedia and Baidu Baike. We collected data about the Songhua River Basin and the Liao River Basin, including 4039 reservoirs and 6576 river records. Reservoirs include attributes such as reservoir name, location, area, region, normal water level, watershed area, normal storage level, etc. Rivers include attributes such as river name, region, river grade, river length, basin area, etc. As shown in Figure 1, there exist complex correspondences among these attributes. During data processing, we labeled data belonging to the same entity as matching and labeled data pointing to different entities as non-matching. Since this data was crawled from web pages based on names, most of the non-matching entities are entities with the same name, and the proportion of same-name entities in the data is not high. Therefore, there are not enough negative examples in the data. We created some negative examples by replacing attribute values with synonyms. Finally, our dataset has a size of 21230, including 5000 positive instances, with 2000 positive instances for reservoirs and 3000 for rivers. \begin{table} \begin{tabular}{|l|l|l|l|l|l|} \hline Type & Dataset & Domain & Size & \#POS. & \#ATTR \\ \hline Same pattern & iTunes-Amazon & Music & 539 & 132 & 8-8 \\ \hline Same pattern & DBLP-Scholar & Citation & 28707 & 5347 & 4-4 \\ \hline Different pattern & UIS1-UIS2 & Person & 12853 & 2736 & 4-4 \\ \hline Different pattern & Walmart-Amazon & Electronics & 10242 & 962 & 4-4 \\ \hline \end{tabular} \end{table} Table 1: The "Size" column indicates the size of the dataset, "#POS." represents the number of positive matches, and "#ATTR." represents the attribute numbering.
The attribute association "m:n" between two patterns is entirely different from the attribute numbering "c-d". The "m:n" attribute association signifies the presence of at least one complex 1:m or m:n attribute association between two patterns. The attribute numbering "c-d" only denotes that the first pattern has "c" attributes, while the second pattern has "d" attributes. ### Implementation and Setup We implemented our model using PyTorch and the Transformers library. In all experiments, we used the base uncased variants of each model. We further applied mixed-precision training (fp16) optimization to accelerate both training and inference speed. For all experiments, we fixed the maximum sequence length to 256, set the learning rate to 3e-5, and employed a linear decay learning rate schedule. The training process runs for a fixed number of epochs (10, 15, or 40, depending on the dataset size) and returns the checkpoint with the highest F1 score on the validation set. Comparison Methods: We compare EM-CAR with state-of-the-art EM solutions such as Ditto, the attribute-correspondence-aware method DER-SSM, and the classical method DeepMatcher. Here's a summary of these methods. We report the average F1 score over 6 repeated runs in all settings. DeepMatcher: DeepMatcher is a state-of-the-art classical method that customizes an RNN architecture to aggregate attribute values and then compare/align the aggregated representations of attributes. Ditto: Ditto is a state-of-the-art matching solution that employs all three optimizations: Domain Knowledge (DK), TF-IDF summarization (SU), and Data Augmentation (DA). DER-SSM: In comparison to Ditto, DER-SSM defines and implements soft pattern matching, obtaining context relations between tokens through a BiGRU. It considers soft pattern matching by aggregating token similarity during entity matching based on the context relationships between tokens. ### Experiment Result The F1 score is used to measure the precision of entity matching (EM) and is the harmonic mean of precision (P) and recall (R). Precision (P) represents the score of correct matching predictions, while recall (R) represents the score of true matches predicted as matches. Typically, in EM, there are two phases: blocking and matching[2]. Our focus is on the matching phase in entity matching (EM), assuming that blocking has already been performed. We follow the same blocking setup[2], where blocking is applied to generate a candidate set for the dataset. All pairs in the candidate set are labeled. The dataset is then divided into a 3:1:1 ratio for training, validation, and testing.
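As a reference for how the reported numbers are computed, here is a small Python sketch of precision, recall, and F1 over labeled candidate pairs, following the definitions above:

```python
def f1_score(pred, gold):
    """F1 for entity matching: harmonic mean of precision and recall on the match class.

    pred, gold: iterables of 0/1 labels for candidate pairs (1 = match).
    """
    tp = sum(p == 1 and g == 1 for p, g in zip(pred, gold))
    fp = sum(p == 1 and g == 0 for p, g in zip(pred, gold))
    fn = sum(p == 0 and g == 1 for p, g in zip(pred, gold))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0
```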
\begin{table} \begin{tabular}{|l|l|l|l|l|l|} \hline Type & Dataset & DeepMatcher & DER-SSM & Ditto & EM-CAR \\ \hline Same pattern & iTunes-Amazon & 82.3 & 85.7 & 89.6 & 89.5 \\ \hline Same pattern & DBLP-Scholar & 85.4 & 89.2 & 90.1 & 90.9 \\ \hline Different pattern & UIS1-UIS2 & 76.2 & 80.4 & 85.2 & 86.3 \\ \hline Different pattern & Walmart-Amazon & 77.1 & 81.2 & 84.4 & 85.3 \\ \hline Water data & SongLiao & 74.3 & 78.2 & 80.1 & 81.2 \\ \hline \end{tabular} \end{table} Table 2: Average F1 Scores of Different Methods. Specifically, Ditto's embeddings might not be fully optimized for the specific task of entity matching, as they are derived from a broader range of language modeling objectives. This could potentially limit Ditto's ability to capture the nuanced and complex relationships between attributes required for accurate entity matching. In contrast, our model aims to address this issue by placing greater emphasis on attributes with higher similarity. As depicted in the figure, our model takes into account the similarity of attributes, particularly those that are more closely related. This approach allows our model to better capture the nuanced relationships between attributes and improve the overall matching accuracy. As shown in Fig. 3, when performing matching, Ditto's utilization of pre-trained models can lead to erroneous judgments. This is attributed to the fact that, while determining whether these two entities match, the top two attention scores are placed on the tokens "YichangCity" and "YilingDistrict," while the score for "SandoupingTown" is not as high. Consequently, more attention is directed towards the correspondence between "YichangCity" and "YilingDistrict" as well as "YilingDistrict" and "SandoupingTown." In contrast, we aim to prioritize the matching probability between "YilingDistrict" and "SandoupingTown." As illustrated in Fig. 4, our model focuses more on the matching degree of similar attributes, denoted by \((e_{1},e_{2})\). This enables our model to appropriately emphasize the similarity between attributes and achieve accurate results. Figure 4: Attention scores of EM-CAR. Figure 3: Attention scores of Ditto. ## 6 Conclusion In this paper, we propose EM-CAR, a method that leverages attribute similarity within the context of pre-trained models to address complex entity correspondence. In our approach, we compare the classical DeepMatcher, DER-SSM (which considers soft patterns, i.e., complex attribute correspondences), and Ditto, which employs pre-trained models. We evaluate these methods, including our own, on three types of datasets: homogeneous, heterogeneous, and hydraulic data (heterogeneous). Across all datasets, DeepMatcher achieves the lowest F1 score due to its reliance on a comparatively simple RNN architecture, which struggles to capture semantic information effectively. For the two homogeneous public datasets, DER-SSM and Ditto exhibit comparable accuracy, as shown in Table 2. Homogeneous datasets feature straightforward 1:1 relationships, thus methodological differences have less pronounced impacts. The primary distinction lies in whether a pre-trained model is utilized. Concerning the two heterogeneous public datasets, DER-SSM initially shows promise, but its use of a BiGRU limits semantic context to a local n-character window, resulting in slightly lower accuracy compared to Ditto. In contrast, our model takes into account complex attribute correspondences, placing greater emphasis on matching similar attributes, thereby enhancing accuracy to a certain extent.
On the hydraulic dataset, limited training data affects all three models' performance, resulting in reduced accuracy. However, our model still achieves the highest F1 score. This suggests that prioritizing the matching of similar attributes during the matching process has a positive impact on improving matching accuracy. In summary, our EM-CAR approach effectively enhances entity matching accuracy by focusing on the similarity between attributes, especially for complex correspondences, as demonstrated across various datasets in comparison to other methods such as DeepMatcher, DER-SSM, and Ditto.
2309.17437
Learning Decentralized Flocking Controllers with Spatio-Temporal Graph Neural Network
Recently, a line of research has delved into the use of graph neural networks (GNNs) for decentralized control in swarm robotics. However, it has been observed that relying solely on the states of immediate neighbors is insufficient to imitate a centralized control policy. To address this limitation, prior studies proposed incorporating $L$-hop delayed states into the computation. While this approach shows promise, it can lead to a lack of consensus among distant flock members and the formation of small clusters, consequently resulting in the failure of cohesive flocking behaviors. Instead, our approach leverages a spatiotemporal GNN, named STGNN, that encompasses both spatial and temporal expansions. The spatial expansion collects delayed states from distant neighbors, while the temporal expansion incorporates previous states from immediate neighbors. The broader and more comprehensive information gathered from both expansions results in more effective and accurate predictions. We develop an expert algorithm for controlling a swarm of robots and employ imitation learning to train our decentralized STGNN model based on the expert algorithm. We simulate the proposed STGNN approach in various settings, demonstrating its decentralized capacity to emulate the global expert algorithm. Further, we implemented our approach to achieve cohesive flocking, leader following and obstacle avoidance by a group of Crazyflie drones. The performance of STGNN underscores its potential as an effective and reliable approach for achieving cohesive flocking, leader following and obstacle avoidance tasks.
Siji Chen, Yanshen Sun, Peihan Li, Lifeng Zhou, Chang-Tien Lu
2023-09-29T17:50:57Z
http://arxiv.org/abs/2309.17437v2
# Learning Decentralized Flocking Controllers with Spatio-Temporal Graph Neural Network ###### Abstract Recently, a line of research has delved into the use of graph neural networks (GNNs) for decentralized control in swarm robotics. However, it has been observed that relying solely on the states of immediate neighbors is insufficient to imitate a centralized control policy. To address this limitation, prior studies proposed incorporating \(L\)-hop delayed states into the computation. While this approach shows promise, it can lead to a lack of consensus among distant flock members and the formation of small clusters, consequently resulting in the failure of cohesive flocking behaviors. Instead, our approach leverages a spatiotemporal GNN, named STGNN, that encompasses both spatial and temporal expansions. The spatial expansion collects delayed states from distant neighbors, while the temporal expansion incorporates previous states from immediate neighbors. The broader and more comprehensive information gathered from both expansions results in more effective and accurate predictions. We develop an expert algorithm for controlling a swarm of robots and employ imitation learning to train our decentralized STGNN model based on the expert algorithm. We simulate the proposed STGNN approach in various settings, demonstrating its decentralized capacity to emulate the global expert algorithm. Further, we implemented our approach to achieve cohesive flocking, leader following and obstacle avoidance by a group of Crazyflie drones. The performance of STGNN underscores its potential as an effective and reliable approach for achieving cohesive flocking, leader following and obstacle avoidance tasks. ## I Introduction Flocking is a collective behavior observed in groups of animals or autonomous agents, such as birds, fish, or artificial robots, where individuals move together in a coordinated manner. In a flock, each robot follows simple rules based on local information to achieve a common group objective [1]. Multi-robot systems based on flocking models exhibit self-organization and goal-directed behaviors, making them suitable for various applications, including automated parallel delivery, sensor network design, and search and rescue operations [2]. Flocking is typically modeled as a consensus or alignment problem, aiming to ensure that all robots in the group eventually agree on their states [3]. Classical methods, such as those proposed by Tanner [4] and Olfati-Saber [3], define rules and constraints governing the position, speed, and acceleration of the robots. However, these methods heavily rely on parameter tuning and are limited to predefined scenarios. In contrast, learning-based methods spontaneously explore complex patterns and adapt their parameters through training, providing more flexibility and adaptability compared to classical approaches. There are primarily two research directions in learning-based methods. One approach focuses on imitation learning, as demonstrated by Tolstaya et al. [5], Kortvelesy et al. [6], Zhou et al. [7], and Lee et al. [8]. The other approach involves multi-robot deep reinforcement learning (MADRL), as explored in the studies of Yan et al. [9] and Xiao et al. [10]. MADRL is particularly useful when labels are unavailable, but it presents its own set of challenges, including the demand for an extensive volume of training data and limitations in generalizing to new and unencountered scenarios [11].
In this work, we choose to utilize imitation learning due to the availability of an expert policy that has proven to be effective for our task [4, 5, 12]. Recent research in this direction adopts a graph-based approach to represent robots and their interactions and leverages Graph Neural Networks (GNNs) [13, 14] for modeling and analyzing flock dynamics. This approach shows promise in addressing flocking tasks by harnessing the power of graph-based representations and neural networks. Specifically, studies such as Tolstaya et al. [5], Kortvelesy et al. [6], Zhou et al. [7], and Lee et al. [8] utilize a technique called "delayed state" to incorporate the information from the \(l\)-step-before states of a robot's \(l\)-hop neighbors, where \(l=1,2,\cdots,L\) [5]. Henceforth, we identify this type of model as a "delayed graph neural network" (DGNN). DGNN enables the learning of spatially extended representations in the local network. However, it overlooks the influence of a robot's historical states and the historical states of its neighbors, thereby neglecting the temporal sequence of flock movement. In contrast, the spatial and temporal expansions of STGNN enable it to gather information from both spatial and temporal dimensions. We illustrate the distinct information collected by DGNN and TGNN, as well as their combined information gathered by STGNN, in Figure 1. Fig. 1: The comparison of the information gathered by DGNN (top) and TGNN (bottom) in the case of four aerial robots. DGNN propagates the states of robots 2 and 3 at \(t-1\) to robot 0 through robot 1, along with robot 1's current state at \(t\). TGNN propagates the states at the previous step (\(t-1\)) along with the current state of robot 1 at \(t\) to robot 0. STGNN combines information from both DGNN and TGNN, thus possessing superior predictive power compared with either method alone. With this insight, we make the following primary contributions in this paper. * **Design an STGNN-based imitation learning framework for decentralized flocking control.** STGNN enables effective information fusion by integrating delayed states from distant neighbors and previous states from immediate neighbors. To the best of our knowledge, we are the first to design an STGNN-based learning framework for multi-robot flocking with leader following and obstacle avoidance. * **Develop a centralized expert algorithm for flocking, with leader following and obstacle avoidance.** Finding an expert algorithm is the key to success in imitation learning. We develop an expert algorithm that provides effective control over a large swarm of robots based on Tanner's [4] and Olfati-Saber's work [3]. To the best of our knowledge, we are the first to offer a complete global expert algorithm capable of handling flocking with both leader following and obstacle avoidance. * **Conduct extensive evaluations of STGNN.** We comprehensively evaluate STGNN by comparing it to DGNN and TGNN and testing STGNN with varying history horizons. The results demonstrate STGNN's effectiveness and superior performance in completing complex flocking tasks. Additionally, we implement STGNN on a group of Crazyflie drones to achieve flocking with obstacle avoidance. ## II Problem Formulation Consider a collection of \(N\) robots that are identical and possess the same capabilities, such as maximum acceleration \(U_{\text{max}}\), maximum velocity \(V_{\text{max}}\), and communication range \(R_{c}\).
Each robot \(i\) can be uniquely identified by its position \(\mathbf{p}_{i}\) and its velocity \(\mathbf{v}_{i}\). The task for our learning model is to compute the control input \(\mathbf{u}_{i}\), i.e., the acceleration for each robot, based on the current state of itself and its neighbors, represented by the position and velocity \((\mathbf{p},\mathbf{v})\). In this paper, we aim to solve three problems as a whole: robot flocking, leader following, and flocking with obstacle avoidance. Each robot is required to avoid collisions with other robots, follow and maintain proximity to a virtual leader, and navigate around obstacles. Specifically, flocking is to maintain a consistent distance from other robots while synchronizing their movements. The leader following asks the robot swarm to track one or more leader(s) during flocking. We opt for a single virtual leader for the entire swarm, as the primary objective of flocking is to achieve consensus, and introducing multiple leaders could potentially violate this objective. It is important to note that the virtual leader differs from other robots in that it does not need to avoid collisions. Instead, it represents a predefined trajectory known to all robots. Finally, obstacle avoidance requires the robot swarm to achieve the two tasks above while avoiding crashing into obstacles. ## III Methodology In this research, we employ imitation learning to train STGNN. Specifically, we train an expert model whose outputs serve as labels for training our STGNN model. The expert assumes that the strategy provider possesses real-time information of all robots (i.e., centralized communication), which is impracticable in reality. In contrast, STGNN considers real-world scenarios, where each robot makes decisions by itself with local communication and delayed information (i.e., decentralized communication). Due to the limited information access in decentralized scenarios, STGNN may not generate strategies as effectively as the expert. Nevertheless, after being trained with historical and neighboring data and the output of the expert model, STGNN demonstrates the capability to generate strategies that approach expert-level optimization, despite the limited input data. ### _Problem Statement_ Denote the robot network at timestamp \(t\) as \(\mathcal{G}^{(t)}(\mathbf{V}^{(t)},\mathbf{E}^{(t)},\mathbf{X}^{(t)})\), where \(\mathbf{V}^{(t)}\) is the set of robots, \(\mathbf{E}^{(t)}\) is the set of direct communications between nearby robots, and \(\mathbf{X}^{(t)}\) is the set of states for the robots. For an arbitrary node \(V_{i}\), its neighborhood \(\mathcal{N}_{i}^{(t)}\) at timestamp \(t\) is composed of the other robots \(V_{j}\) within its communication range \(R_{c}\), i.e., \(\{V_{j}\in\mathcal{N}_{i}^{(t)}\,|\,\forall V_{j},r_{i,j}^{(t)}<R_{c}\}\), where \(r_{i,j}^{(t)}\) is the distance between \(V_{i}\) and \(V_{j}\) at \(t\).
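As a concrete illustration of this neighborhood definition, here is a short PyTorch sketch (ours, not the authors' code) that builds the communication graph from robot positions:

```python
import torch

def build_neighborhoods(p, r_c):
    """Return the communication graph at one timestamp.

    p   : (N, 3) tensor of robot positions
    r_c : communication range R_c
    adj[i, j] is True when robot j is a 1-hop neighbor of robot i.
    """
    dist = torch.cdist(p, p)                                  # pairwise distances r_ij
    adj = (dist < r_c) & ~torch.eye(len(p), dtype=torch.bool) # exclude self-loops
    return adj

# toy usage: 20 robots in a 2 m cube, R_c = 1 m
p = torch.rand(20, 3) * 2.0
adj = build_neighborhoods(p, r_c=1.0)
neighbors_of_0 = adj[0].nonzero().flatten()
```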
The problem can then be summarized as below: \[f_{\text{expert}}(X_{i}^{(t)},\mathbf{V}^{(t)},\mathbf{E}^{(t)})\to\mathbf{u}^{(t+1)}, \tag{1}\] \[f_{\text{STGNN}}(X_{i}^{(t-L)}...X_{i}^{(t)},\mathcal{N}_{i}^{(t-L)}...\mathcal{N}_{i}^{(t)},\mathbf{E}_{i,j;V_{j}\in\mathcal{N}_{i}^{(t-L)}...\mathcal{N}_{i}^{(t)}},\theta)\to\hat{\mathbf{u}}^{(t+1)}, \tag{2}\] \[\theta^{*}=\operatorname*{arg\,min}_{\theta}\Big(\sum_{V_{i}\in\mathbf{V}}(\hat{\mathbf{u}}^{(t+1)}-\mathbf{u}^{(t+1)})^{2}\Big), \tag{3}\] where \(L\) is the number of historical states used in STGNN, \(\theta\) is the set of trainable parameters of STGNN, and \(\theta^{*}\) is the optimized parameter set for STGNN. ### _Expert algorithm_ Previous researchers recognized the challenge of communication delays in the early stages of robot flocking studies, leading them to focus on decentralized scenarios. Consequently, only a limited number of prior algorithms have been designed to address centralized scenarios in the context of robot flocking problems. Therefore, to formulate an expert model targeting our tasks in the centralized scenario, we propose a novel centralized model to generate labels for STGNN training. As introduced in Section II, an expert algorithm should cover three crucial aspects: flocking, leader following, and obstacle avoidance. In this case, the algorithm should be constrained by both the distances between robots and the distances between robots and obstacles. Similar to [4], the regularized update of speed is computed jointly from the collision avoidance potential and the velocity agreement. The collision avoidance potential ensures that the distances between robots exceed a predefined threshold, while the velocity agreement ensures that robots maintain consistent behavior in relation to the other robots. Specifically, the expert algorithm generates the control \(\mathbf{u}_{i}\) of robot \(i\) following specific rules: \(\mathbf{u}_{i}\) is defined as the combination of three components, as in Equation 4. In Equation 4, \(c_{\alpha}\), \(c_{\beta}\), and \(c_{\gamma}\) are positive weighting parameters. The \(\alpha\)-term specifies collision avoidance and velocity alignment among the robots, while the \(\beta\)-term specifies collision avoidance and velocity alignment between robots and obstacles. The \(\gamma\)-term is peer-to-peer guidance from the virtual leader to each robot \(i\). \[\mathbf{u}_{i}=c_{\alpha}\mathbf{u}_{i}^{\alpha}+c_{\beta}\mathbf{u}_{i}^{\beta}+c_{\gamma}\mathbf{u}_{i}^{\gamma}. \tag{4}\] The collision avoidance potential \(U\) increases significantly as the distance \(r_{i,j}\) between two robots decreases. This increment is governed by a reciprocal function that dominates when \(r_{i,j}\) approaches zero (Eq. 5). Consequently, \(U_{i,j}\) captures the fact that beyond a certain distance threshold (e.g., communication range \(R_{c}\)), no direct interaction exists between robots in terms of collision avoidance. The velocity agreement is the velocity difference between robot \(i\) and all the other robots (the first term in Eq. 6). The resulting control input \(\mathbf{u}_{i}^{\alpha}\) (as defined in Eq. 6) is a centralized solution that takes into account the velocity mismatch among all robots and the local collision potential. Note that the collision avoidance implemented by Equation 5 does not guarantee a minimal distance between robots. Our results show that the minimal distance decreases when the number of robots increases.
\[U_{i,j}=\frac{1}{r_{i,j}^{2}}+\log||r_{i,j}||^{2},\;||r_{i,j}||\leq R_{c}. \tag{5}\] \[\mathbf{u}_{i}^{\alpha}=-\sum_{j=1}^{N}(\mathbf{v}_{i}-\mathbf{v}_{j})-\sum_{j=1}^{N}(\nabla_{r_{i,j}}U_{i,j}). \tag{6}\] To incorporate obstacle avoidance in the model, we follow Olfati-Saber's [3] work by introducing an imaginary robot, i.e., a \(\beta\)-robot, which is defined by the projection of a robot \(i\) on the \(k\)-th obstacle \(O^{k}\) within its communication distance \(R_{c}\), to assist with the obstacle avoidance task. In practice, this can be achieved by a robot, equipped with sensors, measuring the relative position and velocity between the closest point on an obstacle and itself [3]. The control input \(\mathbf{u}_{i}^{\beta}\) follows the flocking control (Eq. 6) while focusing on the potential between robot \(i\) and its projection on obstacles. Let \(\mathbf{p}_{k}^{o}\) denote the position of the \(k\)-th obstacle. Then the position and velocity of the \(\beta\)-robot, created by projecting the \(i\)-th robot on the \(k\)-th obstacle, can be calculated by Equation 7 and Equation 8, respectively. \[\mathbf{p}_{i,k}=\mu\,\mathbf{p}_{i}+(1-\mu)\,\mathbf{p}_{k}^{o},\;\mu=\frac{r_{k}}{||\mathbf{p}_{i}-\mathbf{p}_{k}^{o}||}. \tag{7}\] \[\mathbf{v}_{i,k}=\mu P\mathbf{v}_{i},\;P=I-\mathbf{a}_{k}\mathbf{a}_{k}^{T},\;\mathbf{a}_{k}=\frac{\mathbf{p}_{i}-\mathbf{p}_{k}^{o}}{||\mathbf{p}_{i}-\mathbf{p}_{k}^{o}||}. \tag{8}\] Define the state of the virtual leader by its position \(\mathbf{p}^{r}\) and velocity \(\mathbf{v}^{r}\). The leader following control \(\mathbf{u}_{i}^{\gamma}\) is then defined in Equation 9, where \(c_{1}\), \(c_{2}\) are positive weighting parameters. \[\mathbf{u}_{i}^{\gamma}=-c_{1}(\mathbf{p}_{i}-\mathbf{p}^{r})-c_{2}(\mathbf{v}_{i}-\mathbf{v}^{r}),\;c_{1}>0,\;c_{2}=\sqrt{c_{1}}. \tag{9}\]
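A compact sketch of the \(\alpha\)- and \(\gamma\)-terms of this expert control follows; the \(\beta\)-term, which applies the same construction as Eq. 6 to the \(\beta\)-robot projections of Eqs. 7-8, is omitted for brevity, and the weighting parameters are illustrative placeholders rather than the authors' values:

```python
import torch

def expert_control(p, v, p_r, v_r, c_a=1.0, c_g=1.0, c1=4.0, R_c=1.0):
    """Centralized expert control: the alpha (Eqs. 5-6) and gamma (Eq. 9) terms of Eq. 4.

    p, v     : (N, 3) positions and velocities of all robots
    p_r, v_r : (3,) virtual-leader position and velocity
    """
    diff = p[:, None, :] - p[None, :, :]                  # p_i - p_j
    r2 = (diff ** 2).sum(-1) + torch.eye(len(p))          # r_ij^2 (self-distance masked)
    within = (r2.sqrt() <= R_c) & ~torch.eye(len(p), dtype=torch.bool)
    # gradient of U = 1/r^2 + log r^2 w.r.t. p_i: (-2/r^4 + 2/r^2) (p_i - p_j)
    gradU = (-2.0 / r2 ** 2 + 2.0 / r2)[..., None] * diff
    u_alpha = -(v[:, None] - v[None, :]).sum(1) - (gradU * within[..., None]).sum(1)
    u_gamma = -c1 * (p - p_r) - (c1 ** 0.5) * (v - v_r)   # Eq. 9 with c2 = sqrt(c1)
    return c_a * u_alpha + c_g * u_gamma                  # beta (obstacle) term omitted
```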
### _STGNN-based imitation learning_ In this section, we present an overview of the STGNN-based learning model. We begin by describing the strategy used to construct the model input. Then we discuss the architecture of the STGNN model, which involves two levels of state expansion. Local observation: We develop a decentralized solution that only requires local observations of each robot. Inspired by Tolstaya [5], we define the local state of the \(i\)-th robot in Equation 10, which consists of the aggregation from neighboring robots (\(\alpha\)-term), the local observation of obstacles (\(\beta\)-term), and the local observation of the virtual leader (\(\gamma\)-term). The local observation of obstacles is defined for \(k\in\mathcal{N}\) with \(r_{i,k}<R_{c}\), where \(r_{i,k}\) is the distance between robot \(i\) and the \(\beta\)-robot, the projection of robot \(i\) on the \(k\)-th obstacle. \[X_{i}^{\alpha,\beta,\gamma}=[[\sum_{j\in\mathcal{N}}X_{i,j}^{\alpha}];[\sum_{k\in\mathcal{N}}X_{i,k}^{\beta}];[X_{i}^{\gamma}]]. \tag{10}\] Instead of directly using the position \(\mathbf{p}_{i}\) and velocity \(\mathbf{v}_{i}\) of robot \(i\) as input for our model, we adopt the relative state to be consistent with the expert algorithm. We define the relative state for robot \(i\) to robot \(j\) as \(X_{i,j}\) (Eq. 11). Then we aggregate the local peer-to-peer relative states into a local state \(X_{i}\), which includes \(X_{i}^{\alpha}\) (Eq. 12) and \(X_{i}^{\beta}\) (Eq. 13). \[X_{i,j}=[\mathbf{v}_{i}-\mathbf{v}_{j},\frac{\mathbf{p}_{i}-\mathbf{p}_{j}}{r_{i,j}},\frac{\mathbf{p}_{i}-\mathbf{p}_{j}}{r_{i,j}^{2}}]. \tag{11}\] \[X_{i}^{\alpha}=\frac{1}{|\mathcal{N}|}\sum_{j\in\mathcal{N}}X_{i,j}^{\alpha},\;j\in\mathcal{N}\;\text{if }r_{i,j}<R_{c}. \tag{12}\] \[X_{i}^{\beta}=\sum_{k\in\mathcal{N}}X_{i,k}^{\beta},\;k\in\mathcal{N}\;\text{if }r_{i,k}<R_{c}. \tag{13}\] Notably, our model only requires the local state information defined by Equation 11, which makes our approach decentralized. STGNN: In order to mimic the control input \(\mathbf{u}\) generated by the expert algorithm that uses the global information, we implement two levels of expansion based on local observation, as illustrated in Figure 2. The first level is spatial expansion, aiming at extracting information from additional robots, while the second level is temporal expansion, which centers on local state evolution. For each expansion, we extract information on \(L\) timestamps. For the timestamp \(t\), consider an arbitrary node \(V_{i}\) with state \(X_{i}^{(t)}\) and \(1\)-hop neighborhood \(\mathcal{N}_{i}^{(t)}\). A node \(V_{j}\) is \(V_{i}\)'s \(l\)-hop neighbor if it is a \(1\)-hop neighbor of at least one of \(V_{i}\)'s \((l-1)\)-hop neighbors. Note that even though we use "\(l\)-hop neighbor" to account for state delays resulting from spatial distances, the state of one hop further is the same as one timestamp earlier in our scenario. In other words, the state of an \(l\)-hop neighbor \(j\), \(X_{j}^{(l)}\), is equivalent to \(X_{j}^{(t-l+1)}\). In spatial expansion, we merge each node's current state \(X_{i}^{(t)}\) with those of its \(l\)-hop delayed neighbors, where \(l=1,2,...,L\). Specifically, we first aggregate the delayed states for each hop, as indicated in Equation 14. Then, \(X_{i}^{(t)}\) is updated with its neighborhood information as in Equation 15. \[{X^{\prime}}_{i\_s}^{(l)}=\sum_{j\in\mathcal{N}_{i}^{(t)}}X_{j}^{(l)},\;l=1,2,...,L. \tag{14}\] \[H_{i\_s}^{(t)}=[X_{i}^{(t)}\,||_{l=1,...,L}\,{X^{\prime}}_{i\_s}^{(l)}]. \tag{15}\] The same method is applied to the other timestamps \(l=t-L,...,t-1\), but with \(t-l+1\)-hop neighbors. Then, \(H_{i\_s}=\{H_{i\_s}^{(l)},l=t-L,...,t\}\) are fed into a transformer [15] for temporal-wise information fusion. The transformer takes three copies of \(H_{i\_s}\) and considers them as query, key, and value separately. The query and key are used to compute relations between different timestamps, and the value is used to generate temporal-wise fused embeddings: \[{H^{\prime}}_{i\_s}=\sigma\Big(\frac{W_{q}H_{i\_s}(W_{k}H_{i\_s})^{T}}{\sqrt{d}}\Big)W_{v}H_{i\_s}, \tag{16}\] where \(\sigma\) is a softmax function and \(d\) is the number of features of \(H_{i\_s}\). The temporal expansion considers the temporal evolution pattern for local spatial state fusion (i.e., it considers only the \(1\)-hop neighborhood). Specifically, the spatial data fusion method is shown in Equations 17 and 18, and the temporal feature fusion is achieved by a transformer as described in Equation 16. Considering the output of the temporal expansion module as \({H^{\prime}}_{i\_t}\), the concatenation of \({H^{\prime}}_{i\_s}\) and \({H^{\prime}}_{i\_t}\) is fed to a feed-forward network to generate the final flocking controls for the robots. \[{X^{\prime}}_{i\_t}=\sum_{j\in\mathcal{N}_{i}^{(1)}}X_{j}^{(l=1)}. \tag{17}\] \[H_{i\_t}=[X_{i}^{(t)}\,||\,{X^{\prime}}_{i}^{(l=1)}]. \tag{18}\] Fig. 2: STGNN spatial and temporal expansion module with \(L=3\). Spatial expansion and temporal expansion operate in parallel. Both spatial and temporal expanded states are concatenated and passed to the next layer of the neural network to predict the control input.
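The two expansions can be summarized in the following minimal PyTorch sketch. It is an illustration under simplifying assumptions of ours: a plain neighbor mean stands in for the SAGEConv aggregation used in the implementation (Section IV-A), and the self-state concatenation of Eq. 15 is folded into the prediction head:

```python
import torch
import torch.nn as nn

def nbr_mean(adj, x):
    """Mean over 1-hop neighbors; a stand-in for the SAGEConv layer used in the paper."""
    a = adj.float()
    return a @ x / a.sum(-1, keepdim=True).clamp(min=1)

class STGNNSketch(nn.Module):
    """Minimal sketch of the spatial/temporal expansions (Eqs. 14-18); not the authors' code."""
    def __init__(self, feat=128, L=3, heads=4, ff=16):
        super().__init__()
        make_tf = lambda: nn.TransformerEncoder(
            nn.TransformerEncoderLayer(feat, heads, dim_feedforward=ff, batch_first=True),
            num_layers=2)
        self.spatial_tf, self.temporal_tf = make_tf(), make_tf()
        self.head = nn.Sequential(nn.Linear(2 * feat, feat), nn.ReLU(), nn.Linear(feat, 3))
        self.L = L

    def forward(self, X_hist, adj_hist):
        # X_hist: (L+1, N, feat) local features for t-L ... t; adj_hist: matching (L+1, N, N) graphs
        # spatial branch: at delay l, aggregate neighbors' l-step-delayed states (Eq. 14)
        H_s = torch.stack([nbr_mean(adj_hist[-1], X_hist[-1 - l]) for l in range(self.L + 1)])
        # temporal branch: at each past timestamp, aggregate 1-hop neighbors' states (Eqs. 17-18)
        H_t = torch.stack([nbr_mean(a, x) for a, x in zip(adj_hist, X_hist)])
        h_s = self.spatial_tf(H_s.permute(1, 0, 2))[:, -1]   # fuse delayed-state sequence
        h_t = self.temporal_tf(H_t.permute(1, 0, 2))[:, -1]  # fuse per-timestamp sequence
        return self.head(torch.cat([h_s, h_t], dim=-1))      # predicted control u_hat
```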
10), denoted as \(X\in\mathbb{R}^{18}\), is passed through a two-layer multi-layer perceptron (MLP) with a hidden size of 128 to extract the local feature embedding \(H\in\mathbb{R}^{128}\) prior to undergoing spatial and temporal expansion. Each \(X\) from the historical data follows the same process. **STGNN spatial expansion.** A single SAGEConv layer with a hidden size of 128 is used to aggregate local neighbor information. Then the \(l\)-hop neighbor information is obtained using Equations 14 and 15. The result is a sequence of \(L\) delayed states. A transformer consisting of two encoder layers, each with a feedforward size of 16 and a head size of 4, is used to extract the spatial expansion results. **STGNN temporal expansion.** For each local feature from the history \(L\), a single SAGEConv layer with a hidden size of 128 is used to aggregate neighbor information (Eq. 15). A transformer consisting of two encoder layers, each with a feedforward size of 16 and a head size of 4, is used to extract the temporal expansion result. **Action generation.** The last output from both the spatial and temporal expansions is concatenated to get the fusion embedding \(H\in\mathbb{R}^{256}\). Subsequently, this output undergoes a two-layer MLP to generate the predicted control signal \(\mathbf{\hat{u}}\). The ground truth of the training data set is derived from the expert algorithm (Sec. III-B). L2 loss with the Adam optimizer is used to train the model. We trained the STGNN model with \(N=20\), which includes three spherical obstacles and one virtual leader. The size of the swarm is chosen to ensure that spatial expansion could encompass a significant number of robots. Fig. 2: STGNN spatial and temporal expansion module with \(L=3\). Spatial expansion and temporal expansion operate in parallel. Both spatial and temporal expanded states are concatenated and passed to the next layer of the neural network to predict the control input. The virtual leader goes through the obstacles, but the robots must avoid collision with the obstacles while following the virtual leader. The virtual leader moves continuously at a constant speed of 1 m/s along the x-axis. The communication range is \(R_{c}=1\) m, and the sampling period is \(T_{s}=0.01\) s. The initial robot positions are randomized, following a uniform distribution within the range \([0,0.5R_{c}\sqrt{N}]\). The initial velocity is \(0\) m/s. The maximum allowed acceleration is \(U_{\max}=10\) m/s\({}^{2}\) and the maximum velocity allowed is \(V_{\max}=10\) m/s on each axis. The safety distance is 0.15 m. If \(r_{i,j}\) falls below 0.15 m, the experiment is terminated early. To ensure a valid initial configuration, \(r_{i,j}\) must be greater than 0.15 m. Each training episode has 1200 steps to allow all robots to pass the obstacles and form flocking behavior on the other side. One trajectory of the expert algorithm used in training is shown in Figure 3. We implement our model using PyTorch and the OpenAI Gym framework in Python 3.9. The server we use has an Intel(R) Xeon(R) W-2133 CPU @ 3.60GHz, an NVIDIA QUADRO P5000 GPU, and 32 GB RAM. We conduct a comprehensive evaluation of our proposed method, STGNN, with \(L\) set to \(1\), \(2\), and \(3\). The compared algorithms include DGNN, TGNN, and Olfati-Saber's decentralized flocking algorithm, denoted as Saber [3]. ### _Metrics_ 1. Completion Rate (\(C\%\)): the rate of successfully completed episodes. An episode can be terminated early if any robot hits an obstacle or \(r_{i,j}\) falls below 0.15 m. _The higher value is better._ 2.
Mean Absolute Error (MAE): the mean absolute error between the expert control \(\mathbf{u}\) and the model prediction \(\hat{\mathbf{u}}\). _The lower value is better._ 3. Velocity alignment (\(V\)): the velocity variance of the swarm at the end of the episode. All robots should be velocity-aligned at the end of the episode, thus _the lower value is better._ 4. Distance to the leader (\(\tau\), Eq. 19): the mean distance from any robot to the leader is an auxiliary measure. A large value indicates that robots deviate from the leader and move in different directions, resulting in the failure of swarm formation. However, a small value indicates a higher risk of collision within the swarm. _Closer to the expert algorithm is better._ \[\tau=\frac{1}{nT}\sum_{i=1}^{n}\sum_{t=1}^{T}\tau_{i}^{t}. \tag{19}\] ### _Experiment results_ We train one model for each setting described in Section IV-A. The training consists of 200 epochs with an initial learning rate of 1e-3. We implement early stopping and exponential learning rate decay to prevent overfitting. **Evaluation on a swarm size of 20.** In the first set of experiments, the trained models are tested in the same environment as the training environment, as described in Section IV-A. The only difference lies in the initial positions of the robots, which vary due to random initialization. During the testing phase, the swarm's next state is determined by the model's prediction \(\hat{\mathbf{u}}\). If an episode is terminated early due to collision, the failed episode's metrics (MAE, \(V\), \(\tau\)) are not included in the aggregation for reporting. Each model runs over 20 trials, and the mean and standard deviation are reported in Table I. For the STGNN L1, L2, and L3 models, the results demonstrate consistent improvement as the spatial and temporal expansion increases. By increasing \(L\) from 1 to 3 in both spatial and temporal expansion, the completion rate increases from 0.85 to 1.0, the MAE decreases from 3.03 to 1.71, and the velocity variance also decreases from 0.14 to 0.01. The distance to the leader does not consistently decrease, but as explained in Section IV-B, a smaller value indicates a higher risk of collision, so both values of 1.03 and 1.06 are acceptable. When considering DGNN L3 and TGNN L3, both models exhibit improved performance compared to STGNN L1, which uses only local neighbor information. Furthermore, STGNN L3, which is the combination of DGNN L3 and TGNN L3, leads to further performance enhancement. Olfati-Saber's decentralized solution [3] achieves a similar success rate to STGNN L1, as both models utilize only local information. However, the Saber algorithm differs from our expert algorithm, and its formations tend to have larger minimal distances, resulting in a larger \(\tau\) that is not directly comparable to the expert algorithm. In Figure 3, a test trajectory of the expert algorithm is plotted on the top and the test trajectory of STGNN L3 is plotted at the bottom. The plots illustrate that STGNN L3 possesses the capability to mimic the expert algorithm's control input \(\mathbf{u}\) and generate similar flocking trajectories for the robots. **Evaluation on varying swarm sizes.** In the second set of experiments, we examine our model's transferability across various swarm sizes, specifically for \(N=\) 30, 40, and 50. We directly employ the previously trained models on these larger swarm sizes and conduct 20 trials for each scenario. The mean and standard deviation for each model and swarm size are reported in Table II.
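The metrics above are straightforward to compute from logged trajectories. The NumPy sketch below shows one way to do so; the array shapes and function names are our own illustrative assumptions, not part of the paper's released code.

```python
import numpy as np

def velocity_alignment(vel):
    """Velocity variance V of the swarm at the final step.

    vel: array of shape (T, N, 2), per-step velocities of N robots.
    A perfectly aligned swarm has zero variance (lower is better).
    """
    return float(np.mean(np.var(vel[-1], axis=0)))

def distance_to_leader(pos, leader_pos):
    """Mean robot-to-leader distance tau (Eq. 19).

    pos: (T, N, 2) robot positions; leader_pos: (T, 2) leader positions.
    """
    d = np.linalg.norm(pos - leader_pos[:, None, :], axis=-1)  # (T, N)
    return float(d.mean())

def mae(u_expert, u_pred):
    """Mean absolute error between expert and predicted controls."""
    return float(np.abs(u_expert - u_pred).mean())
```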
To facilitate swarm formation after obstacle avoidance, we increased the maximum episode steps from 1200 to 1500, 1800, and 2100 for swarm sizes \(N\) = 30, 40, and 50, respectively. STGNN L3 successfully achieves flocking with leader following and obstacle avoidance through all testing cases and attains the lowest MAE compared to STGNN models with shorter history. Velocity alignment is achieved by both STGNN L2 and STGNN L3. The large \(\tau\) for STGNN L1 indicates failure in swarm formation. Our proposed expert algorithm consistently performs well in terms of success rate, velocity alignment, and distance to target, which demonstrates its suitability as the ground truth. Furthermore, the performance of STGNN L3 underscores its ability to provide accurate estimations of the global control inputs using only local information. ### _Real robot experiment_ We further demonstrate the effectiveness of STGNN by implementing it to achieve flocking behaviors with a group of Bitcraze Crazyflie 2.1 drones [16]. The drones are controlled using the Crazyswarm platform [17], which is based on the Robot Operating System (ROS) [18] and allows Crazyflie drones to fly as a swarm in a tight and synchronized formation. The positions of the drones are obtained using the Lighthouse positioning system. The system utilizes the SteamVR base stations together with the positioning deck on the drone to estimate the position of the drones. As shown in Figure 4, the six drones start at random locations (top), navigate through and avoid obstacles (middle), and form a flock on the other side of the obstacle (bottom). Please refer to the online video for a more detailed real robot experiment with 4, 5, and 6 drones (see footnote 1). ## V Conclusion and Future Work We demonstrate the effectiveness of STGNN as a decentralized solution for flocking with leader-following and obstacle-avoidance tasks. STGNN overcomes the limitations of relying solely on local information by integrating prediction capabilities into the model, enabling it to capture and respond to global swarm dynamics. Our STGNN-based learning model consistently outperforms the existing decentralized algorithm introduced by Olfati-Saber. Moreover, the performance of the STGNN model improves as \(L\) increases. Furthermore, STGNN outperforms spatial-only models, demonstrating its ability to utilize both spatial and temporal information for enhanced flocking control. The design is flexible in terms of \(L\)-hop spatial and temporal expansion, which enables seamless adaptation to various swarm sizes and history lengths. In the future, we plan to investigate the capabilities of STGNN [19, 20, 21] in a broader range of multi-agent tasks, including target tracking [7, 22], path planning [23, 24], and coverage and exploration [25, 26]. Fig. 3: Simulation result of \(N=20\). Top: the expert model drives 20 robots. Bottom: STGNN with \(L=3\) drives 20 robots. The two models are tested in the same environment with the same initialization. Fig. 4: The plots from top to bottom show an experiment of six drones starting from random locations, avoiding an obstacle, and achieving flocking. More experiments can be found in the video attachment.
2301.13715
Physics-constrained 3D Convolutional Neural Networks for Electrodynamics
We present a physics-constrained neural network (PCNN) approach to solving Maxwell's equations for the electromagnetic fields of intense relativistic charged particle beams. We create a 3D convolutional PCNN to map time-varying current and charge densities J(r,t) and rho(r,t) to vector and scalar potentials A(r,t) and V(r,t) from which we generate electromagnetic fields according to Maxwell's equations: B=curl(A), E=-grad(V)-dA/dt. Our PCNNs satisfy hard constraints, such as div(B)=0, by construction. Soft constraints push A and V towards satisfying the Lorenz gauge.
Alexander Scheinker, Reeju Pokharel
2023-01-31T15:51:28Z
http://arxiv.org/abs/2301.13715v1
## Physics-constrained 3D Convolutional Neural Networks for Electrodynamics ## Abstract We present a physics-constrained neural network (PCNN) approach to solving Maxwell's equations for the electromagnetic fields of intense relativistic charged particle beams. We create a 3D convolutional PCNN to map time-varying current and charge densities \(\mathbf{J(r,t)}\) and \(\mathbf{\rho(r,t)}\) to vector and scalar potentials \(\mathbf{A(r,t)}\) and \(\mathbf{\phi(r,t)}\) from which we generate electromagnetic fields according to Maxwell's equations: \(\mathbf{B=\nabla\times A}\), \(\mathbf{E=-\nabla\phi-\partial A/\partial t}\). Our PCNNs satisfy hard constraints, such as \(\nabla\cdot\mathbf{B=0}\), by construction. Soft constraints push \(\mathbf{A}\) and \(\mathbf{\phi}\) towards satisfying the Lorenz gauge. ## Introduction Electrodynamics is ubiquitous in describing physical processes governed by charged particle dynamics, including everything from models of universe expansion, galactic disks forming cosmic ray halos, accelerator-based high energy X-ray light sources, achromatic metasurfaces, metasurfaces for dynamic holography and on-chip diffractive neural networks, down to the radiative damping of individual accelerated electrons [1-21]. Despite widely available high-performance computing, numerically calculating relativistic charged particle dynamics is still a challenge and an open area of research for large collections of particles undergoing collective effects in dynamics involving plasma turbulence [22], space charge forces [23,24], and coherent synchrotron radiation [25,26]. For example, the photo-injectors of modern X-ray free electron lasers such as the LCLS, SwissFEL, and EuXFEL and plasma wakefield accelerators such as FACET-II can produce high-quality intense bunches of up to 2 nC charge per bunch with rms lengths of a few picoseconds, which are accelerated and squeezed down to lengths of tens to hundreds of femtoseconds [27-32]. At low energy near the injector, the 6D phase space (\(x\), \(y\), \(z\), \(p_{x}\), \(p_{y}\), \(p_{z}\)) dynamics of such bunches are strongly coupled through collective space charge (SC) forces. At higher energies, especially in bunch compressors where charged particle trajectories are curved through magnetic chicanes, the dynamics are coupled through collective coherent synchrotron radiation (CSR). A 2 nC bunch contains \(N\simeq 1.25\times 10^{10}\) electrons, for which calculating exact individual particle-to-particle SC and CSR interactions is a computationally expensive O(\(N^{2}\)) process. For SC calculations, an O(\(N^{2}\)) process, such as the _SpaceCharge3D_ routine in the particle dynamics simulation code General Particle Tracer (GPT), may be necessary for intense low energy beams near the injector, where the longitudinal (\(z\)) velocities of individual particles in a bunch have a large variation and are comparable to transverse (\(x\),\(y\)) velocities [33,34]. For relativistic particles, many conventional approaches for SC calculations greatly reduce the number of required calculations by utilizing particle-in-cell methods with macro-particles, such as the _SpaceCharge3DMesh_ routine in GPT. For CSR, relativistic state-of-the-art 3D CSR calculations still rely on a full set of point-to-point calculations [35].
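To make the quoted O(\(N^{2}\)) cost concrete, a brute-force field calculation touches every particle pair. The sketch below shows this structure for a simple electrostatic sum; it is only an illustration of the scaling, under the assumption of a non-relativistic Coulomb kernel, and not the relativistic SC/CSR kernels used by codes such as GPT.

```python
import numpy as np

def pairwise_efield(pos, q):
    """Brute-force O(N^2) electrostatic field at each particle position.

    pos: (N, 3) positions [m]; q: (N,) charges [C]. Purely illustrative:
    real SC/CSR kernels are relativistic and involve retarded times.
    """
    eps0 = 8.8541878128e-12
    r = pos[:, None, :] - pos[None, :, :]        # (N, N, 3) separations
    d3 = np.linalg.norm(r, axis=-1) ** 3         # |r_i - r_j|^3
    np.fill_diagonal(d3, np.inf)                 # exclude self-interaction
    return (q[None, :, None] * r / d3[..., None]).sum(axis=1) / (4 * np.pi * eps0)

# Even N = 1e5 macro-particles gives ~1e10 pair terms per step, which is
# why mesh-based or learned field solvers are needed for intense beams.
```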
A charged particle's electromagnetic Lagrangian is \[L=-\frac{mc^{2}}{\gamma}+e\mathbf{v}\cdot\mathbf{A}-e\varphi,\qquad\gamma=\frac{1}{\sqrt{1-v^{2}/c^{2}}}, \tag{1}\] where \(e\) is the particle's charge, \(c\) is the speed of light, \(v=|\mathbf{v}|\), and \(\mathbf{A}\) and \(\varphi\) are the vector and scalar potentials, respectively, which define the magnetic (\(\mathbf{B}\)) and electric (\(\mathbf{E}\)) fields as \[\mathbf{B}=\nabla\times\mathbf{A},\qquad\mathbf{E}=-\nabla\varphi-\frac{\partial\mathbf{A}}{\partial t}, \tag{2}\] for which the relativistic Lorentz force law is \[\frac{d\mathbf{p}}{dt}=e(\mathbf{E}+\mathbf{v}\times\mathbf{B}),\qquad\mathbf{p}=\gamma m\mathbf{v}. \tag{3}\] The \(\mathbf{E}\) and \(\mathbf{B}\) dynamics are coupled and depend on current and charge densities \(\mathbf{J}\) and \(\rho\) as described by Maxwell's equations \[\nabla\cdot\mathbf{E}=\frac{\rho}{\varepsilon_{0}},\qquad\nabla\cdot\mathbf{B}=0, \tag{4}\] \[\nabla\times\mathbf{E}=-\frac{\partial\mathbf{B}}{\partial t},\qquad\nabla\times\mathbf{B}=\mu_{0}\left(\mathbf{J}+\varepsilon_{0}\frac{\partial\mathbf{E}}{\partial t}\right). \tag{5}\] A typical approach to numerically solving Eqs. 1-5 starts with initial charge \(\rho(x,y,z,t=0)\) and current profiles \(\mathbf{J}(x,y,z,t=0)\) and their rates of change, as well as any external electric and magnetic fields \(\mathbf{E}_{ext}(x,y,z,t=0)\), \(\mathbf{B}_{ext}(x,y,z,t=0)\) and their rates of change, which may be produced by magnets and radio-frequency resonant acceleration cavities, as is typical in high intensity charged particle accelerators. The total electromagnetic fields are then calculated as the sum of the external fields and the self-fields produced by the current and charge densities themselves according to Eqs. 4 and 5. The initial fields apply a force on the particles, causing a change in momentum and position, as defined by Eq. 3. The most computationally expensive part of the process is the calculation of the self-fields generated by the particle distribution. In this work, we introduce a physics-constrained neural network (PCNN) approach to solving Maxwell's equations for the self-fields generated by relativistic charged particle beams. For example, for the problem of mapping current density \(\mathbf{J}\) to an estimate \(\hat{\mathbf{B}}\) of the associated magnetic field \(\mathbf{B}\), we build Eq. 2 into the structure of our NN and generate the vector potential \(\hat{\mathbf{A}}\), which defines the magnetic field as \[\hat{\mathbf{B}}=\nabla\times\hat{\mathbf{A}}\Rightarrow\nabla\cdot\hat{\mathbf{B}}=\nabla\cdot\left(\nabla\times\hat{\mathbf{A}}\right)=0, \tag{6}\] which satisfies the physics constraint by construction. Neural networks (NNs) are powerful machine learning (ML) tools which can extract complex physical relationships directly from data and have been used for speeding up the studies of complex physical systems [36-43]. Incredibly powerful and flexible physics-informed neural networks (PINNs), which include soft constraints in the NN's cost function, have been developed and have shown great capabilities for complex fluid dynamics simulations [44], materials science [45], for symplectic single particle tracking [46], for learning molecular force fields [47], and for large classes of partial differential equations [48-50].
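As a minimal illustration of how the hard constraint of Eq. 6 can be realized on a discrete grid, the sketch below builds \(\hat{\mathbf{B}}\) as the central-difference curl of a vector-potential volume. Because discrete central differences along different axes commute, the matching discrete divergence of the result vanishes to floating-point precision. The helper names and the periodic (roll-based) boundary handling are our illustrative assumptions, not the authors' implementation.

```python
import torch

def central_diff(f, dim, h):
    """Second-order central difference along dim, periodic boundaries."""
    return (torch.roll(f, -1, dims=dim) - torch.roll(f, 1, dims=dim)) / (2 * h)

def curl(A, hx, hy, hz):
    """B = curl(A) for A of shape (3, X, Y, Z)."""
    Ax, Ay, Az = A
    return torch.stack([
        central_diff(Az, 1, hy) - central_diff(Ay, 2, hz),   # Bx
        central_diff(Ax, 2, hz) - central_diff(Az, 0, hx),   # By
        central_diff(Ay, 0, hx) - central_diff(Ax, 1, hy),   # Bz
    ])

def div(B, hx, hy, hz):
    """Matching discrete divergence of a (3, X, Y, Z) field."""
    return (central_diff(B[0], 0, hx) + central_diff(B[1], 1, hy)
            + central_diff(B[2], 2, hz))

A = torch.randn(3, 64, 64, 64)
B = curl(A, 1.0, 1.0, 1.0)
print(div(B, 1.0, 1.0, 1.0).abs().max())  # zero up to round-off (~1e-7)
```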
For the problem of mapping current density \(\mathbf{J}\) to an estimate \(\hat{\mathbf{B}}\), the PINN approach is to train a neural network with a cost function defined as \[C=w_{B}\iiint\big{[}B-\hat{B}\big{]}^{2}dV+w_{\nabla}\iiint\big{[}\nabla\cdot\hat{B}\big{]}^{2}dV=w_{B}\big{\|}B-\hat{B}\big{\|}_{2}+w_{\nabla}\big{\|}\nabla\cdot\hat{B}\big{\|}_{2}, \tag{7}\] where the first term depends on magnetic field prediction accuracy and the second term penalizes violation of the physics constraint \(\nabla\cdot\hat{\mathbf{B}}=0\), as shown in Figure 1. However, with soft PINN-type constraints there is no guarantee that the constraints are always satisfied, which is in contrast to the hard constraints implemented in our approach, which guarantee that constraints are not violated within numerical and finite discretization limits. Furthermore, when utilizing PINN-type soft constraints there is a tradeoff between the minimization of the two terms in Eq. 7 based on the choice of weights \(w_{B}\) and \(w_{\nabla}\). Intuitively this tradeoff can be understood by the fact that the easiest way for a neural network to satisfy \(\nabla\cdot\hat{\mathbf{B}}=0\) is \(\hat{\mathbf{B}}\equiv C\) for any constant \(C\). For hard constraints there is no such tradeoff: the cost function only penalizes field accuracy, and the constraint itself is built into how the field is constructed. In our PCNN approach, our cost function is simply \[C=\big{\|}B-\hat{B}\big{\|}_{2}, \tag{8}\] and there is no tradeoff between reconstruction accuracy and physics constraint enforcement. This is important because when simulating charged particle dynamics, great care must be taken to satisfy the physics constraints as defined by Eqs. 1-5. It is very important to enforce well-known beam properties such as phase-space-volume-preserving symplectic maps that satisfy Liouville's theorem so that the beam dynamics are self-consistent [51-56]. Results on physics-informed NNs with hard constraints have mostly focused on fluid dynamics and climate modeling and are much more limited than PINN approaches [57-61]. ## 3 Physics-constrained neural networks In Figure 1 we summarize three NN approaches: 1) a NN approach without physics constraints, 2) a PINN approach with soft constraints, and 3) our PCNN approach. We demonstrate our PCNN method with numerical studies of relativistic (5 MeV), short (\(\sigma_{t}\) = 800 fs), high charge (2 nC) electron bunches represented by \(N=50\) million macro-particles. We utilize the charged particle dynamics simulation code General Particle Tracer (GPT) with 3D space charge forces [33,34]. The charged particle distributions were simulated for 1.2 ns with all of the data saved to a file at each \(\Delta t\) = 12 ps interval so that the beam was displaced 0.36 m over 100 saved steps. Figure 2 (A) shows the \(x\) and \(y\) trajectories of 10000 random particles sampled from the bunch distribution over the entire 100 saved steps as the beam is compressed by a 0.5 T solenoid magnet whose \(B_{z}\) field is shown in green. Only the first 75 steps, shown in black, were used for training and the final 25 steps were used for testing, shown in red. Figure 2 (B) shows the {\(x\),\(y\)} and {\(x\),\(z\)} projections of the electron bunch density at steps 0 and 74. The training beam we have created is designed to have multiple length scales in order to help the trained PCNN generalize to new unseen distributions.
We have created several closely spaced Gaussian bunches of varying \(\sigma_{x}\) and \(\sigma_{y}\), as seen in the (\(x\),\(y\)) projection of step 0. Furthermore, as seen in the (\(x\),\(z\)) projection, the beam has an overall bunch length of \(\sigma_{z}=800\)\(\mu\)m with density fluctuations of various \(\sigma_{z}\) along the length of the beam. By step 75 the beam has been over-compressed in the (\(x\),\(y\)) plane, as seen in the (\(x\),\(y\)) projection, and the beam density has started to spread in the \(z\) direction due to space charge forces. At each time step we generate discrete versions of \(\mathbf{J}\), \(\rho\), \(\mathbf{E}\), and \(\mathbf{B}\) by breaking up the 2.4 mm x 2.4 mm x 4.4 mm volume which is co-moving with the center of the beam into a 128x128x128 pixel cube with sides of length \(\Delta x\) = 18.9 \(\mu\)m, \(\Delta y\) = 18.9 \(\mu\)m, \(\Delta z\) = 34.6 \(\mu\)m, and averaging over all of the macroparticles in each cube. We compare the three neural network approaches to map \(\mathbf{J}\) to \(\mathbf{B}\), as shown in Figure 1: 1) a standard NN using (8) as the cost function for training, 2) a PINN using (7) as the cost function, and 3) a PCNN using (8) as the cost function with the physics constraint built into the structure of the ML approach. The NN, PINN, and PCNN are able to achieve similar errors on the training data as they all use a similar 3D convolutional neural network (CNN) encoder-decoder architecture, as shown in Figure 3. There is, however, an important distinction in terms of neural network size when comparing the NN, PINN, and PCNN approaches: the PCNN is actually smaller while achieving better test-data results and a much smaller violation of the physics constraint. Figure 1: Various NN approaches to generate a magnetic field \(\hat{\mathbf{B}}\) (an estimate of \(\mathbf{B}\)) from current density \(\mathbf{J}\) are shown. For the NN and PINN all three components \(\{J_{x},J_{y},J_{z}\}\) of \(\mathbf{J}\) must be used as inputs to generate all three components \(\{\hat{B}_{x},\hat{B}_{y},\hat{B}_{z}\}\) of \(\hat{\mathbf{B}}\). Therefore, the inputs and outputs of this 3D CNN are objects of size 128x128x128x3 and the input-output mapping of this 3D CNN, \(N_{B}\), is given by \[\{J_{x},J_{y},J_{z}\}\to N_{B}\rightarrow\{\hat{B}_{x},\hat{B}_{y},\hat{B}_{z}\}. \tag{9}\] However, for the PCNN we are generating an estimate of \(\mathbf{A}\), which satisfies \[\hat{\mathbf{A}}(\mathbf{r},t)=\frac{\mu_{0}}{4\pi}\int\frac{\mathbf{J}(\mathbf{r}^{\prime},t_{r})}{|\mathbf{r}-\mathbf{r}^{\prime}|}\,d^{3}r^{\prime}, \tag{10}\]
so that only the current density is required as input, \[\{J_{x},J_{y},J_{z}\}\to N_{A}\rightarrow\{\hat{A}_{x},\hat{A}_{y},\hat{A}_{z}\},\] and \(\hat{\mathbf{B}}=\nabla\times\hat{\mathbf{A}}\) is evaluated numerically: applying a central-difference convolution filter \(W_{\Delta}\) to a volume gives the partial derivative of that volume, \(V\rightarrow V\ast W_{\Delta}\), so that the curl is computed entirely with convolution operations. Therefore our initial particle distribution can be thought of as an intense beam surrounded by a cube-shaped halo of diminishing density. This is due to the fact that we defined our initial beam in terms of Gaussian distributions without any hard cutoffs. Once this cube-shaped region begins to travel through the solenoid it is rotated and squeezed, resulting in regions of non-zero and zero density that have sharp straight contours, as can be seen in the bottom part of Figure 5, which cause numerical problems for calculating derivatives. The most obvious mitigation for this would be to create a mask that cuts off all field calculations related to the beam beyond some minimal cutoff density. Despite this limitation, from which each 3D CNN-based approach will suffer, the PCNN approach can be seen to be more accurate than the NN without constraints and also than the PINN approach. In Figure 6 we compare PINN and PCNN predictions for two states of the beam, one within the training data set, for which both are highly accurate, and one beyond the training set, where the accuracy quickly drops off. The next step is to add a prediction \(\hat{\mathbf{E}}\) of the electric field \(\mathbf{E}\). We generate \(\hat{\varphi}\) from \(\rho\) via a second neural network \(N_{\varphi}\) which gives the mapping \[\rho\to N_{\varphi}\rightarrow\hat{\varphi}. \tag{16}\] As above, we approximate \(\partial/\partial t\) as \[\frac{\partial A}{\partial t}=\frac{A(t+\Delta_{t})-A(t-\Delta_{t})}{2\Delta_{t}}+O\big{(}\Delta_{t}^{2}\big{)}, \tag{17}\] where \(\Delta_{t}=1.2\times 10^{-11}\) s. After \(\partial\hat{\mathbf{A}}/\partial t\) is calculated, a single forward pass for \(\hat{\mathbf{E}}\) is given by \[\{\rho,J\}\rightarrow\{N_{\varphi},N_{A}\}\rightarrow\{\hat{\varphi},\hat{\mathbf{A}}\}\rightarrow\hat{\mathbf{E}}=-\nabla\hat{\varphi}-\frac{\partial\hat{\mathbf{A}}}{\partial t}, \tag{18}\] as shown in Figure 7. Because uncountably many non-unique choices of \(\mathbf{A}\) and \(\varphi\) generate the same \(\mathbf{E}\) and \(\mathbf{B}\) fields, we add the Lorenz gauge as a PINN-type soft constraint to the training cost function \[w_{B}\big{\|}B-\hat{B}\big{\|}_{2}+w_{E}\big{\|}E-\hat{E}\big{\|}_{2}+w_{L}\big{\|}\nabla\cdot\hat{\mathbf{A}}+\frac{1}{c^{2}}\frac{\partial\hat{\varphi}}{\partial t}\big{\|}_{2}, \tag{19}\] which has the additional benefit that it introduces more data for the magnetic field calculation, as the magnetic field is now informed by the Lorenz condition. Predictions for the entire 3D beam at step 1 by the Lorenz PCNN are shown in Figure 8. In Figure 9 we show the Lorenz PCNN-generated (\(\hat{B}\), \(\hat{E}\)) fields at just a single 2D (\(x\),\(y\)) slice at various steps including those beyond the training data.
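To make the forward pass of Eqs. 16-19 concrete, the sketch below assembles \(\hat{\mathbf{E}}\) from the two network outputs and forms the Lorenz PCNN cost. It reuses the `central_diff` and `curl` helpers from the earlier sketch, and all tensor shapes, function names, and the use of three stored time slices are our illustrative assumptions rather than the authors' released code.

```python
import torch

def grad3(phi, hx, hy, hz):
    """Gradient of a scalar volume phi of shape (X, Y, Z)."""
    return torch.stack([central_diff(phi, 0, hx),
                        central_diff(phi, 1, hy),
                        central_diff(phi, 2, hz)])

def lorenz_pcnn_loss(J, rho, N_A, N_phi, spacings, targets,
                     dt=1.2e-11, w_B=1.0, w_E=1.0, w_L=1.0):
    """Cost of Eq. 19: hard-constrained B and E plus a soft Lorenz-gauge term.

    J, rho: length-3 lists of volumes at t - dt, t, t + dt, so the time
    derivatives in Eqs. 17-19 can be taken by central differences.
    """
    hx, hy, hz = spacings
    A = [N_A(j) for j in J]              # (3, X, Y, Z) at the three times
    phi = [N_phi(r) for r in rho]        # (X, Y, Z) at the three times
    dA_dt = (A[2] - A[0]) / (2 * dt)     # Eq. 17
    dphi_dt = (phi[2] - phi[0]) / (2 * dt)
    B_hat = curl(A[1], hx, hy, hz)       # hard constraint: div(B_hat) = 0
    E_hat = -grad3(phi[1], hx, hy, hz) - dA_dt     # Eq. 18
    div_A = (central_diff(A[1][0], 0, hx) + central_diff(A[1][1], 1, hy)
             + central_diff(A[1][2], 2, hz))
    c = 2.99792458e8
    L_lorenz = torch.mean((div_A + dphi_dt / c**2) ** 2)   # soft gauge term
    B, E = targets
    return (w_B * torch.mean((B - B_hat) ** 2)
            + w_E * torch.mean((E - E_hat) ** 2) + w_L * L_lorenz)
```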
## 4 Discussion Our final demonstration of the strength of building hard physics constraints into the 3D CNN is a demonstration of its non-catastrophic failure when predicting the electromagnetic fields of two additional 2 nC beams that are very different from both the test and training data shown so far. Figure 6: The top row is the initial beam state with the (\(x\),\(y\)) projection of \(\hat{B}\) as generated by the PINN and PCNN shown along with \(B\), and the differences plotted over the (\(x\),\(y\)) projection of the beam's charge density \(\rho\). For training data the methods perform equally well in field reconstruction accuracy. In the middle row we see the first step (76) beyond the training data set and an immediate drop in the accuracy of the PINN. In the third row the (\(x\),\(z\)) projections of step (76) are shown; the roughness of the (\(x\),\(z\)) projection shows intuitively how the PINN matches the \(B\) field in a mean squared error sense, but violates the constraint \(\nabla\cdot\mathbf{B}=0\). The first additional beam is three parallel electron beams, each of whose length is \(\sigma_{z}\approx 3\) mm, which is similar to the overall length of the various bunches used in the training data. The three parallel beams differ from the training data in having empty space between the individual bunches. The second additional beam is a hollow tube of electrons with the same length as the three parallel beams, but whose topology is entirely different from anything the PCNN has seen so far. Figure 10 shows results of predicting the (**E**, **B**) fields for the three parallel and the hollow beams. As expected, the PCNN performs worse than previously, but is qualitatively very accurate in both E and B field prediction for the three parallel beams. The hollow beam is a much bigger challenge and much larger field errors are seen, but crucially, the predicted fields are still qualitatively correct in terms of direction and flow, with most of the error due to the wrong amplitude being predicted. We believe that this final result honestly shows the generality and strength of the PCNN approach as well as its limitations. We should not expect a trained CNN to predict well on an entirely unseen data set; this is a well-known problem known as distribution shift in the ML community, in which NNs must be re-trained for inputs different from the training data set distribution. The fact that the PCNN produces reasonable outputs for inputs wildly different from those of the training data is a major strength of the approach. As Maxwell's equations are important for describing an extremely wide range of physical phenomena, the applications of such a method to electrodynamics are many. Here we will briefly touch on charged particle dynamics in high energy accelerators. There is a growing literature on utilizing ML-based surrogate models as virtual diagnostics in the particle accelerator community. For these approaches the NNs are typically trained as input-output maps utilizing experimental input data together with computationally expensive output data, such as measuring a charged particle current density at one location of an accelerator and then running physics codes to map that to another location [41], or mapping input accelerator data directly to beam characteristics at other accelerator locations [39]. For such applications, the PCNN method can enable the development of much more robust real-time virtual diagnostics that satisfy physics constraints. Another large family of applications is for accelerator design.
For example, given fixed input beam charge and current density distributions, a beam line may be designed with various electromagnet and resonant acceleration components. For each design choice, such as the distance between magnets or the magnetic field strengths, high-fidelity physics-based models must be used to track the charged particle dynamics. With our approach, once a PCNN is trained for a family of input beam distributions, we have demonstrated that we can make accurate field predictions that respect physics constraints even as the beam is significantly changed by the application of external fields based on the accelerator's design. The next step of this work, which is beyond the scope of this paper and an ongoing effort, is to utilize our PCNN approach to quickly push particles and to confirm that the field predictions are accurate enough that the particle dynamics are physically consistent. As we have already seen some slight numerical limitations, as discussed above, this might push us to utilize even higher resolution discretizations, such as 512\({}^{3}\) or 1024\({}^{3}\) pixel volumes, which remains to be determined. If this approach is able to provide physically consistent beam dynamics, even if they slightly violate constraints, this will be a fast and powerful way to zoom in on an optimal design estimate, after which more accurate, slower physics-based simulations can be used for detailed studies. ## Conclusions A robust PCNN method has been developed to explicitly take Maxwell's equations into account in the structure of generative 3D convolutional neural networks in order to more accurately satisfy physics constraints. Although this method is less general than the incredibly flexible PINN approach, in which any partial differential equation can be easily introduced as a soft constraint, the resulting physics constraints are more accurately respected. Furthermore, we have shown how to combine this PCNN approach with the PINN approach in our Lorenz CNN, in which hard physics constraints are enforced in the generation of the \(\mathbf{E}\) and \(\mathbf{B}\) fields and a soft penalty on violation of the Lorenz gauge is added to the cost function in Equation 19. Figure 7: The Lorenz PCNN generates the vector and scalar potentials and their associated electromagnetic fields. Figure 8: (A) Electromagnetic fields are shown for all positions within the \(128^{3}\) pixel volume for normalized charge density \(\rho>0.0025\) for the first state of the beam. (B) We zoom in on only the part of the electron bunch which has the largest \(\sigma_{z}\) profile and show it from two angles. (C) Fields from only a single \((x,y)\) slice of the 3D volume are shown at two different angles. ## Acknowledgments This work was supported by the U.S. Department of Energy (DOE), Office of Science, Office of High Energy Physics contract number 89233218CNA000001 and the Los Alamos National Laboratory LDRD Program Directed Research (DR) project 20220074DR.
2309.17345
Physics-Informed Neural Network for the Transient Diffusivity Equation in Reservoir Engineering
Physics-informed machine learning models have recently emerged with some interesting and unique features that can be applied to reservoir engineering. In particular, physics-informed neural networks (PINNs) leverage the fact that neural networks are universal function approximators that can embed, in the learning process, the knowledge of any physical laws that govern a given data set and that can be described by partial differential equations. The transient diffusivity equation is a fundamental equation in reservoir engineering, and the general solution to this equation forms the basis for Pressure Transient Analysis (PTA). The diffusivity equation is derived by combining three physical principles: the continuity equation, Darcy's equation, and the equation of state for a slightly compressible liquid. Obtaining general solutions to this equation is imperative to understanding flow regimes in porous media. Analytical solutions of the transient diffusivity equation are usually hard to obtain due to the stiff nature of the equation caused by the steep gradients of the pressure near the well. In this work we apply physics-informed neural networks to the one- and two-dimensional diffusivity equation and demonstrate that decomposing the space domain into very few subdomains can overcome the stiffness problem of the equation. Additionally, we demonstrate that the inverse capabilities of PINNs can estimate missing physics, such as permeability and distance from a sealing boundary, similar to buildup tests but without shutting in the well.
Daniel Badawi, Eduardo Gildin
2023-09-29T15:52:04Z
http://arxiv.org/abs/2309.17345v3
# Physics-Informed Neural Network for the Transient Diffusivity Equation in Reservoir Engineering ###### Abstract Physics-informed machine learning models have recently emerged with some interesting and unique features that can be applied to reservoir engineering. In particular, physics-informed neural networks (PINNs) leverage the fact that neural networks are universal function approximators that can embed, in the learning process, the knowledge of any physical laws that govern a given data set and that can be described by partial differential equations. The transient diffusivity equation is a fundamental equation in reservoir engineering, and the general solution to this equation forms the basis for Pressure Transient Analysis (PTA). The diffusivity equation is derived by combining three physical principles: the continuity equation, Darcy's equation, and the equation of state for a slightly compressible liquid. Obtaining general solutions to this equation is imperative to understanding flow regimes in porous media. Analytical solutions of the transient diffusivity equation are usually hard to obtain due to the stiff nature of the equation caused by the steep gradients of the pressure near the well. In this work we apply physics-informed neural networks to the one- and two-dimensional diffusivity equation and demonstrate that decomposing the space domain into very few subdomains can overcome the stiffness problem of the equation. Additionally, we demonstrate that the inverse capabilities of PINNs can estimate missing physics, such as permeability and distance from a sealing boundary, similar to buildup tests but without shutting in the well. _Keywords:_ Physics-informed Neural Networks (PINNs) · Physics-informed Machine Learning · Diffusivity Equation · Reservoir Engineering · Buildup Tests ## 1 Introduction The diffusivity equation is a stiff spatiotemporal non-linear partial differential equation (PDE) that describes fluid flow in porous media. It results from combining three physical principles: the material balance equation, Darcy's law, and the equation of state of a slightly compressible liquid. Thus it is regarded as the most important equation in reservoir engineering. Analytical solutions for some simplified forms of the diffusivity equation exist; however, their practical applications are very limited, especially for history-matching applications. For single-phase infinite-acting circular reservoirs, an analytical solution is obtained using the Boltzmann variable transformation, which transforms the PDE into an ordinary differential equation (ODE). For circular bounded reservoirs, analytical solutions are available in the Laplace domain and thus require inversion algorithms (Stehfest, 1970; Murli and Rizzardi, 1990; de Hoog et al., 1982) for the PDE solution to be expressed in the temporal domain. The stiffness of the PDE, which is caused by the steep gradients near the well, poses a major challenge even to numerical methods, where very fine meshes need to be considered. Numerical methods are very good tools for solving the diffusivity equation; in fact, they are the essence of reservoir simulation. However, for well-testing techniques, which are inverse problems, numerical methods are very limited. Another drawback of numerical methods is that they only offer solutions at discrete specified times, which limits our ability to calculate the flow rates at all times. For that reason, we prefer to work with continuous analytical solutions or any good approximation of analytical solutions.
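For context, the Stehfest inversion mentioned above can be implemented compactly; the sketch below is a standard rendering of the algorithm, with the function name and the sanity-check transform being our own illustrative choices.

```python
import math

def stehfest_invert(F, t, N=12):
    """Invert a Laplace-domain function F(s) at time t with the
    Stehfest (1970) algorithm. N must be even; 8-16 is typical."""
    ln2t = math.log(2.0) / t
    total = 0.0
    for k in range(1, N + 1):
        V = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            V += (j ** (N // 2) * math.factorial(2 * j)
                  / (math.factorial(N // 2 - j) * math.factorial(j)
                     * math.factorial(j - 1) * math.factorial(k - j)
                     * math.factorial(2 * j - k)))
        total += (-1) ** (N // 2 + k) * V * F(k * ln2t)
    return ln2t * total

# Sanity check: F(s) = 1/(s+1) inverts to exp(-t); at t = 1 this
# prints a value close to 0.3679.
print(stehfest_invert(lambda s: 1.0 / (s + 1.0), t=1.0))
```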
Recently, there has been a noticeable and growing increase in the use of machine learning (ML) applications in various areas of science and engineering, most of which are fully data-driven. One major shortcoming of data-driven machine learning is the failure to honor any physical relationship between inputs and outputs. Consequently, such models struggle to extrapolate predictions outside the training domain and therefore fall short when applied to time-dependent reservoir engineering applications where temporal extrapolation is essential. A PINN is a scientific machine learning technique used to solve problems involving PDEs (Raissi et al., 2019). PINNs approximate PDE solutions by training a neural network to minimize a loss function; it includes terms reflecting the initial and boundary conditions along the space-time domain's boundary and the PDE residual at selected points in the domain (called collocation points). Incorporating a residual network that encodes the governing physics equations is a significant novelty with PINNs. The basic concept behind PINN training is that it can be thought of as an unsupervised strategy that does not require labelled data, such as results from prior simulations or experiments. The PINN algorithm is essentially a mesh-free technique that finds PDE solutions by converting the problem of directly solving the governing equations into a loss function optimization problem. It works by integrating the mathematical model into the network and reinforcing the loss function with a residual term from the governing equation (Cuomo et al., 2022). Additionally, using neural network back-propagation, when a PINN model is trained, not only is the PDE solution obtained, but the solution derivatives w.r.t. the inputs are obtained as well. Therefore, PINNs can be regarded as alternatives to analytical solutions. In this paper, our work can be broadly split into two sections. In Section 1, we apply the PINN methodology to the 1D Cartesian diffusivity equation and show how to utilize the inverse capabilities of PINNs to predict some unknown reservoir properties which previously were estimated using well testing techniques. In Section 2, we apply PINNs to the radial diffusivity equation in a circular bounded reservoir under both constant bottom-hole pressure (BHP) and constant production flow rate. We also present the domain-decomposition method to tackle the stiffness of the radial diffusivity equation. ## 2 Background ### Diffusivity Equation The diffusivity equation describes the flow in porous media and can be derived using the material balance equation and Darcy's law. Taking an infinitesimal volume element as in Fig. 1, the material balance states that the difference between the mass flow rate entering the volume element and the mass flow rate leaving the volume element must equal the rate of change of mass accumulating in the volume element due to expansion of the pore volume and the change of mass contributed by sources/sinks. Assuming the 2D case and putting this in equation form gives the following: \[\underbrace{\big{(}q\rho\big{|}_{x}+q\rho\big{|}_{y}\big{)}}_{\text{Mass flow rate IN}}-\underbrace{\big{(}q\rho\big{|}_{x+dx}+q\rho\big{|}_{y+dy}\big{)}}_{\text{Mass flow rate OUT}}=\underbrace{\frac{\partial m}{\partial t}}_{\text{Rate of change}}+\underbrace{q_{ss}\cdot dV}_{\text{sources}\,/\,\text{sinks}} \tag{1}\] where \(q\rho\big{|}_{x}\) is the mass flow rate at \(x\).
\(q\) is the volumetric flux in \([m^{3}/sec]\), \(\rho\) is the density of the fluid in \([kg/m^{3}]\), and \(q_{ss}\) is the sources/sinks term in \([kg/(m^{3}\cdot sec)]\). In this work we don't have sources or sinks and therefore the term will be neglected. \(m\) is the accumulated mass inside the element volume and is given by: \[m=\rho dV\] where \(dV\) is the volume element in \([m^{3}]\). It is important to note that the fluid can occupy only a fraction of the element volume, called the pore volume, given by: \[dV_{p}=\phi\cdot dV\] \[dV=dx\cdot dy\cdot dz\] where \(\phi\) is the porosity of the element volume \(dV\), defined as the volume that can be occupied by the fluid divided by the total geometric volume. As a result of the definition, \(\phi\in[0,1]\). Therefore, the element volume \(dV\) can be divided into two volumes: pore volume \(dV_{p}\) and rock volume \(dV_{r}\). \[dV=dV_{p}+dV_{r}\] \[-\big{(}q\rho\big{|}_{x+dx}-q\rho\big{|}_{x}\big{)}-\big{(}q\rho\big{|}_{y+dy}-q\rho\big{|}_{y}\big{)}=dV\frac{\partial\big{(}\phi\rho\big{)}}{\partial t} \tag{2}\] \[-\frac{1}{A_{x}}\frac{\left(q\rho\big{|}_{x+dx}-q\rho\big{|}_{x}\right)}{dx}-\frac{1}{A_{y}}\frac{\left(q\rho\big{|}_{y+dy}-q\rho\big{|}_{y}\right)}{dy}=\frac{\partial\big{(}\phi\rho\big{)}}{\partial t} \tag{3}\] where \(A_{x}=dy\cdot dz\) and \(A_{y}=dx\cdot dz\). Simplifying eq. 3 gives: \[-\frac{1}{A_{x}}\frac{\partial\big{(}q\rho\big{)}_{x}}{\partial x}-\frac{1}{A_{y}}\frac{\partial\big{(}q\rho\big{)}_{y}}{\partial y}=\frac{\partial\big{(}\phi\rho\big{)}}{\partial t} \tag{4}\] Darcy's law [20] can be mathematically written as the following: \[q=-\frac{kA}{\mu}(\nabla p+\rho g\nabla z) \tag{5}\] where \(q\) is the volumetric flux in \([m^{3}/sec]\), \(k\) is the permeability in \([m^{2}]\), \(A\) is the flux area in \([m^{2}]\), \(\mu\) is the fluid viscosity in \([pa\cdot sec]\), \(\nabla p\) is the pressure gradient, \(\rho\) is the fluid density in \([kg/m^{3}]\), \(g\) is the gravity constant in \([m/sec^{2}]\), and \(\nabla z\) is the gradient of the elevation (\(z\) is in the gravity direction). Neglecting the gravity term, the 2D Darcy's law can be expressed as: \[q_{x}=-\frac{k_{x}A_{x}}{\mu}\frac{\partial p}{\partial x} \tag{6a}\] \[q_{y}=-\frac{k_{y}A_{y}}{\mu}\frac{\partial p}{\partial y} \tag{6b}\] where \(A_{x}\) and \(A_{y}\) are the flux areas perpendicular to the \(x\) and \(y\) directions respectively, and \(k_{x}\) and \(k_{y}\) are the permeabilities in the \(x\) and \(y\) directions respectively. Substituting eqs. 6a and 6b into eq. 4 and expanding the right hand side gives: \[\frac{\partial}{\partial x}\bigg{(}\rho\frac{k_{x}}{\mu}\frac{\partial p}{\partial x}\bigg{)}+\frac{\partial}{\partial y}\bigg{(}\rho\frac{k_{y}}{\mu}\frac{\partial p}{\partial y}\bigg{)}=\phi\frac{\partial\rho}{\partial t}+\rho\frac{\partial\phi}{\partial t} \tag{7}\] The time derivative of the density appearing on the right of eq.
7 can be expressed in terms of a time derivative of the pressure by using the basic thermodynamic definition of the isothermal compressibility of fluids: \[c_{f}=-\frac{1}{V_{f}}\frac{\partial V_{f}}{\partial p} \tag{8}\] where \(c_{f}\) is the fluid compressibility in \([1/pa]\), \(V_{f}\) is the fluid volume in \([m^{3}]\), and \(p\) is the pressure acting on the fluid in the pore volume (pore pressure) in \([pa]\), and since \[\rho=\frac{m}{V_{f}}\] the compressibility can be alternatively expressed as: \[c_{f}=-\frac{\rho}{m}\frac{\partial}{\partial p}\bigg{(}\frac{m}{\rho}\bigg{)}=\frac{1}{\rho}\frac{\partial\rho}{\partial p} \tag{9}\] and differentiating with respect to time gives: \[\rho c_{f}\frac{\partial p}{\partial t}=\frac{\partial\rho}{\partial t} \tag{10}\] The pore volume also changes with pressure; therefore, similar to the fluid compressibility, the rock compressibility is defined as: \[c_{r}=\frac{1}{V_{p}}\frac{\partial V_{p}}{\partial p} \tag{11}\] Notice that eq. 11 has no negative sign; this is because the rock compressibility is expressed in terms of the pore volume \(V_{p}\) and not the rock volume \(V_{r}\), and since \[V_{p}=\phi V\] eq. 11 can be expressed as: \[c_{r}=\frac{1}{\phi}\frac{\partial\phi}{\partial p} \tag{12}\] and differentiating with respect to time gives: \[\phi c_{r}\frac{\partial p}{\partial t}=\frac{\partial\phi}{\partial t} \tag{13}\] Finally, substituting eqs. 10 and 13 in eq. 7 yields the following: \[\frac{\partial}{\partial x}\bigg{(}\rho\frac{k_{x}}{\mu}\frac{\partial p}{\partial x}\bigg{)}+\frac{\partial}{\partial y}\bigg{(}\rho\frac{k_{y}}{\mu}\frac{\partial p}{\partial y}\bigg{)}=\phi c_{t}\rho\frac{\partial p}{\partial t} \tag{14}\] where \(c_{t}\) is the total compressibility and is given by: \[c_{t}=c_{f}+c_{r}\] Eq. 14 is the basic 2D partial differential equation of any single phase fluid in a porous medium. The equation is of course highly non-linear due to the implicit pressure dependence of the density, viscosity, and compressibility. For that reason, it is almost hopeless to find simple analytical solutions of the equation. One option to reduce the non-linearity of the equation is to assume constant compressibility, viscosity, and density, and this yields the following: \[\frac{\partial}{\partial x}\bigg{(}k_{x}\frac{\partial p}{\partial x}\bigg{)}+\frac{\partial}{\partial y}\bigg{(}k_{y}\frac{\partial p}{\partial y}\bigg{)}=\phi\mu c_{t}\frac{\partial p}{\partial t} \tag{15}\] \[x\in[x_{e1},x_{e2}],\ \ \ y\in[y_{e1},y_{e2}],\ \ \ t\in[0,t_{f}]\] In this paper we will work with the 1D Cartesian case, thus eq. 15 reduces to: \[\frac{\partial}{\partial x}\bigg{(}k_{x}\frac{\partial p}{\partial x}\bigg{)}=\phi\mu c_{t}\frac{\partial p}{\partial t} \tag{16}\] \[x\in[x_{e1},x_{e2}],\ \ \ t\in[0,t_{f}]\] Eq. 15 can be expressed in cylindrical coordinates as follows: \[\frac{1}{r}\frac{\partial}{\partial r}\bigg{(}k_{r}r\frac{\partial p}{\partial r}\bigg{)}=\phi\mu c_{t}\frac{\partial p}{\partial t} \tag{17}\] \[r\in[r_{w},r_{e}],\ \ \ t\in[0,t_{f}]\] where \(k_{r}\) is the permeability in the radial direction in \([m^{2}]\), and \(r_{w}\) and \(r_{e}\) are the wellbore radius and the outer reservoir radius in \([m]\) respectively. As mentioned above, eq. 17 is a stiff PDE, and that is due to the steep pressure gradients near the well. One reason for the steep gradients is the small wellbore radius \(r_{w}\), which usually ranges from \(3[in]\) to \(5[in]\).
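For reference, eq. 17 with constant properties can be integrated by a standard implicit finite-difference scheme; substituting \(u=\ln(r)\) maps the radial operator to \(e^{-2u}\,\partial^{2}p/\partial u^{2}\) and naturally concentrates grid points near the wellbore where the gradients are steep. The sketch below is a minimal illustration with assumed property values, not the workflow used in this paper.

```python
import numpy as np

# Backward-Euler reference solution of eq. 17 on a log-radial grid.
# All property values are illustrative assumptions (SI units).
k, poro, mu, ct = 1e-13, 0.2, 1e-3, 1e-9        # m^2, -, pa.sec, 1/pa
rw, re, p0, pwf = 0.1, 500.0, 25e6, 3e6         # m, m, pa, pa
eta = k / (poro * mu * ct)                      # hydraulic diffusivity

n, dt, steps = 200, 100.0, 500
u = np.linspace(np.log(rw), np.log(re), n)      # uniform in u = ln(r)
du = u[1] - u[0]
p = np.full(n, p0)                              # initial condition

A = np.zeros((n, n))
A[0, 0] = 1.0                                   # constant BHP at r_w
A[-1, -1], A[-1, -2] = 1.0, -1.0                # no-flow at r_e (dp/du = 0)
for i in range(1, n - 1):
    c = eta * dt * np.exp(-2.0 * u[i]) / du**2
    A[i, i - 1] = A[i, i + 1] = -c
    A[i, i] = 1.0 + 2.0 * c

for _ in range(steps):
    b = p.copy()
    b[0], b[-1] = pwf, 0.0
    p = np.linalg.solve(A, b)                   # p(r) after steps*dt seconds
```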
Equations 16 and 17 are two-point boundary value problems that require an initial condition and two boundary conditions. ## 3 Buildup Test Buildup tests are perhaps the most popular tests in well testing. They are utilized to estimate reservoir properties such as average reservoir permeability, distance from a boundary (fault), wellbore storage, skin factor, and many more. A traditional buildup test is performed on a well producing at a constant flow rate. The test starts by shutting in the well and recording the bottom-hole pressure change as a function of time. Then the data are plotted on a diagnostic plot. A diagnostic plot is a log-log plot of the pressure change and pressure derivative (vertical axis) versus elapsed time (horizontal axis). It is a qualitative plot used to identify flow regimes at different time periods. For each flow-regime period we fit an analytical solution, and from that we can obtain the reservoir properties. A typical diagnostic plot looks similar to Fig. 2. At early times the pressure buildup is dominated by wellbore storage; this is when \(\Delta p_{wf}\) and \(\Delta p^{\prime}_{wf}\) are equal with slopes equal to unity. Next we have a transition period, followed by an infinite-acting period, which is when \(\Delta p^{\prime}_{wf}\) is constant, and finally a boundary-dominated period, which is when the boundaries have felt the shut-in. As mentioned previously, under infinite-acting conditions and constant permeability \(k_{r}\), eq. 17 has an analytical solution in terms of the exponential integral \[p_{wf}=p_{i}-\frac{q\mu}{2\pi k_{r}h}\,ei\bigg{(}\frac{\phi\mu c_{t}r_{w}^{2}}{4k_{r}t}\bigg{)} \tag{18}\] and since \[ei(x)\approx-\ln(\gamma x)\qquad\text{for }x<0.01,\] where \(\gamma=1.781\), eq. 18 can be approximated as \[p_{wf}=p_{i}-\frac{q\mu}{2\pi k_{r}h}\ln\bigg{(}\frac{4k_{r}t}{\gamma\phi\mu c_{t}r_{w}^{2}}\bigg{)} \tag{19}\] \[\Delta p_{wf}\equiv p_{i}-p_{wf}=\frac{q\mu}{2\pi k_{r}h}\ln\bigg{(}\frac{4k_{r}t}{\gamma\phi\mu c_{t}r_{w}^{2}}\bigg{)} \tag{20}\] where \(p_{i}\) is the reservoir initial pressure, \(p_{wf}\) is the bottom-hole pressure, \(q\) is the flow rate prior to shut-in, \(\gamma=1.781\), \(t\) is time, and \(h\) is the producing layer thickness in \([m]\). Figure 1: Infinitesimal Cartesian volume element for the diffusivity equation analysis. Figure 2: Diagnostic plot. The blue points are the bottom-hole pressure recorded after well shut-in. The red points are the derivative of the recorded bottom-hole pressure with respect to shut-in time. Diagnostic plots are qualitative plots used to identify different flow periods. For quantitative analysis, specialized plots are used. Each flow period has its own specialized plot; for example, the specialized plot associated with the infinite-acting period is a semi-log plot with pressure change and its derivative on the linear vertical axis and elapsed time on the log horizontal axis. The reason for this is simply that, as is evident from eq. 19, the pressure change and elapsed time have a linear relationship on the semi-log plot. Therefore, if we plot the pressure data on a semi-log specialized plot we can quantify the value \(m\), which is the slope, and subsequently we can estimate \(k_{r}\) using eq. 22. \[m=\frac{\partial\Delta p_{wf}}{\partial\ln(t)}=\frac{q\mu}{2\pi k_{r}h} \tag{21}\] \[k_{r}=\frac{q\mu}{2\pi mh} \tag{22}\] ### Initial and Boundary Conditions This work is divided into two sections. Section 1 is associated with eq. 16 and Section 2 is associated with eq.
17 and both of these equations are two-point boundary problems thus, need an initial condition and two boundary conditions. Each section has two cases in which they differ by their boundary conditions. For all cases of both sections the initial condition is defined as: \[p(r,t=0)=p_{0}=25.0\ [MPa] \tag{2.20}\] **Section 1:** * **Case 1.1**, the well is controlled by constant BHP at \(x_{w}=0\), and a no-flow boundary at \(x_{e}\), thus: \[p(x_{w},t)=p_{wf}=3.0\ [MPa]\] \[x_{e}\frac{\partial p}{\partial x}\Big{|}_{x_{e}}=0\] * **Case 1.2**, the well is placed inside the domain \(x_{w}\in[-500,750]\) at \(x_{w}=0\), and controlled by constant BHP. Also, two no-flow boundaries are set at \(x_{e1}=-500\) and \(x_{e2}=750[m]\), thus: \[p(x_{w}=0,t)=p_{wf}=3.0\ [MPa]\] \[x_{e1}\cdot\frac{\partial p}{\partial x}\Big{|}_{x_{e1}}=x_{e2}\cdot\frac{ \partial p}{\partial x}\Big{|}_{x_{e2}}=0\] **Section 2:** * For both cases of section II, the outer boundary is a no-flow boundary, that is: \[q_{e}=0\] Figure 3: Schematic representation of cases 1.1 and 1.2. The green circle is the well and the brick walls are the no-flow boundaries. Using Darcy's law: \[q(r,t)=\frac{kA}{\mu}\frac{\partial p}{\partial r}\] \(A\) is the flux area. The flux area at the boundary is given by: \[A_{e}=2\pi r_{e}h\] \(h\) is the producing layer thickness. Combining the above yields to: \[r_{e}\frac{\partial p}{\partial r}\Big{|}_{r_{e}}=0\] * **Case 2.1**, the well is producing at a constant BHP: \[p(r_{w},t)=p_{wf}=3.0\ [MPa]\] * **Case 2.2**, the well is producing at a constant rate: \[r_{w}\frac{\partial p}{\partial r}\Big{|}_{r_{w}}=\frac{q_{w}\mu}{2\pi kh}=p_{ch}\] ### Physics-Informed Neural Networks PINNs can be used to approximate solutions of PDEs. For a general PDE of the form: \[\frac{\partial u(\mathbf{x},t)}{\partial t}+\mathcal{N}_{x}(u(\mathbf{x},t))=0,\ \ \ \ \ \mathbf{x}\in\Omega,t\in[0,T] \tag{23a}\] \[u(\mathbf{x},t)=g(\mathbf{x},t),\ \ \ \mathbf{x}\in\partial\Omega,t\in[0,T]\] (23b) \[u(\mathbf{x},0)=u_{0}(\mathbf{x}),\ \ \ \mathbf{x}\in\Omega \tag{23c}\] where \(u(\mathbf{x},t)\) is the solution to eq. 23a, \(\Omega\) is the space domain, \(x\) is a spatial vector variable, \(t\) is time, and \(\mathcal{N}_{x}(.)\) is a differential operator. Equations 23b, 23c are the boundary and initial conditions respectively. Physics-informed neural network \(\tilde{u}(\mathbf{x},t,\mathbf{w})\) can approximate the solution of eq. 23a, where \(\mathbf{w}\) are the weights and biases of the PINN ([Raissi et al., 2019]). The PINN is trained by minimizing the loss function which is given as follows: \[\mathcal{L}(\mathbf{w})=\lambda_{r}\mathcal{L}_{r}(\mathbf{w})+\lambda_{b}\mathcal{L} _{b}(\mathbf{w})+\lambda_{0}\mathcal{L}_{0}(\mathbf{w}) \tag{24}\] where \(\lambda_{r},\lambda_{b},\lambda_{0}\in\mathbb{R}\) are weights and \(\mathcal{L}_{r}(w),\mathcal{L}_{b}(w),\mathcal{L}_{0}(w)\) correspond to the residual, boundary conditions, and initial conditions accordingly, and are given by: \[\mathcal{L}_{r}(\mathbf{w})= \frac{1}{N_{r}}\sum_{i=1}^{N_{r}}r(\mathbf{x}_{r}^{i},t_{r}^{i},\mathbf{ w}), \tag{25a}\] \[\mathcal{L}_{b}(\mathbf{w})= \frac{1}{N_{b}}\sum_{i=1}^{N_{b}}\big{(}\tilde{u}(\mathbf{x}_{b}^{i}, t_{b}^{i},\mathbf{w})-g(\mathbf{x}_{b}^{i},t_{b}^{i})\big{)}^{2},\] (25b) \[\mathcal{L}_{0}(\mathbf{w})= \frac{1}{N_{0}}\sum_{i=1}^{N_{0}}\big{(}\tilde{u}(\mathbf{x}_{0}^{i}, 0^{i},\mathbf{w})-u_{0}(\mathbf{x}_{0}^{i})\big{)}^{2}, \tag{25c}\] Figure 4: Schematic of the 2D cases of Section II. 
where \(\{x_{r}^{i},t_{r}^{i}\}_{i=1}^{N_{r}}\) is a list of collocation points sampled from the spatial and temporal domain of eq. 23a, \(\{x_{b}^{i},t_{b}^{i}\}_{i=1}^{N_{b}}\) is a list of boundary points sampled from the boundaries of eq. 23b, and \(\{x_{0}^{i},0^{i}\}_{i=1}^{N_{0}}\) is a list of initial-condition points sampled from the initial domain of eq. 23c. These are the points at which the loss function components \(\mathcal{L}_{r}(w),\mathcal{L}_{b}(w),\mathcal{L}_{0}(w)\) are minimized, respectively. \(r(\mathbf{x},t;\mathbf{w})\) is the residual of the PDE and is defined as: \[r(\mathbf{x},t;\mathbf{w})=\frac{\partial\tilde{u}(\mathbf{x},t;\mathbf{w})}{\partial t}+\mathcal{N}_{x}(\tilde{u}(\mathbf{x},t,\mathbf{w})) \tag{26}\] The training of the PINN is achieved by gradient descent, where the gradients are calculated via automatic differentiation [Atilim Gunes Baydin, Barak A. Pearlmutter, Alexey Andreyevich Radul, Jeffrey Mark Siskind, 2018]. The gradient descent minimization algorithm is expressed by eq. 27, where \(\alpha\) is the learning rate. The architecture of a physics-informed neural network can be schematically visualized as in Fig. 5. \[\mathbf{w}_{j+1}=\mathbf{w}_{j}-\alpha\frac{\partial\mathcal{L}}{\partial\mathbf{w}_{j}} \tag{27}\] ## 4 Physics-Informed Neural Network Setup ### Equation Scaling and Domain Generation In this section we show how to reformulate the diffusivity equation to fit the requirements of PINNs. In general, for efficient training of neural networks the inputs and outputs are scaled to \([0,1]\) or \([-1,1]\). In our study both the inputs (space-time) and the output (pressure) are scaled to \([0,1]\) unless stated otherwise. First, we write eq. 16 and eq. 17 in dimensionless form. Note that in this work the permeabilities \(k_{x}\) and \(k_{r}\) in eqs. 16 and 17 are assumed to be constant and can therefore be moved to the right-hand side of the equation by division. Figure 5: Schematic of a physics-informed neural network. \[\frac{\partial p_{D}}{\partial t_{D}}=\frac{\partial^{2}p_{D}}{\partial x_{D}^{2}}\] (28a) \[x_{D}=\frac{x}{x_{e}};\ \ \ t_{D}=\frac{k_{x}t}{\phi\mu c_{t}x_{e}^{2}};\ \ \ p_{D}=\frac{p-p_{wf}}{p_{0}-p_{wf}}\] (28b) \[\frac{\partial p_{D}}{\partial t_{D}}=\frac{1}{r_{D}}\frac{\partial p_{D}}{\partial r_{D}}+\frac{\partial^{2}p_{D}}{\partial r_{D}^{2}}\] \[r_{D}=\frac{r}{r_{e}};\ \ \ t_{D}=\frac{k_{r}t}{\phi\mu c_{t}r_{e}^{2}}\] \[p_{D}=\frac{p-p_{wf}}{p_{0}-p_{wf}}\ \ \ \ \ \ \text{for case 2.1}\] \[p_{D}=\frac{p_{0}-p}{p_{ch}}\ \ \ \ \ \ \text{for case 2.2}\] ### Loss Components For case 1.1 the components of the loss function are as follows: \[\mathcal{L}_{r}(\mathbf{w})= \frac{1}{N_{r}}\sum_{i=1}^{N_{r}}r(\mathbf{x}_{r}^{i},t_{r}^{i},\mathbf{w}), \tag{31}\] \[\mathcal{L}_{w}(\mathbf{w})= \frac{1}{N_{w}}\sum_{i=1}^{N_{w}}\big{(}\tilde{u}(\mathbf{x}_{w}^{i},t_{w}^{i},\mathbf{w})-g(\mathbf{x}_{w}^{i},t_{w}^{i})\big{)}^{2},\] (32) \[\mathcal{L}_{b}(\mathbf{w})= \frac{1}{N_{b}}\sum_{i=1}^{N_{b}}\big{(}\tilde{u}(\mathbf{x}_{b}^{i},t_{b}^{i},\mathbf{w})-g(\mathbf{x}_{b}^{i},t_{b}^{i})\big{)}^{2},\] (33) \[\mathcal{L}_{0}(\mathbf{w})= \frac{1}{N_{0}}\sum_{i=1}^{N_{0}}\big{(}\tilde{u}(\mathbf{x}_{0}^{i},0^{i},\mathbf{w})-u_{0}(\mathbf{x}_{0}^{i})\big{)}^{2}, \tag{34}\] where \(\mathcal{L}_{r}\), \(\mathcal{L}_{w}\), \(\mathcal{L}_{b}\), and \(\mathcal{L}_{0}\) are the loss function components associated with the PDE residual, the well, the no-flow boundary, and the initial condition, respectively.
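To make the loss assembly concrete, here is a minimal PyTorch sketch of the case 1.1 composite loss for the dimensionless eq. 28a. The architecture, point counts, and unit loss weights are illustrative assumptions, not the paper's exact training configuration.

```python
import torch

# Minimal sketch of the case 1.1 composite loss for the dimensionless
# eq. 28a: dp_D/dt_D = d^2 p_D / dx_D^2. Sizes and weights are assumptions.
net = torch.nn.Sequential(
    torch.nn.Linear(2, 40), torch.nn.Tanh(),
    torch.nn.Linear(40, 40), torch.nn.Tanh(),
    torch.nn.Linear(40, 1))

def pde_residual(x, t):
    p = net(torch.cat([x, t], dim=1))
    p_t = torch.autograd.grad(p.sum(), t, create_graph=True)[0]
    p_x = torch.autograd.grad(p.sum(), x, create_graph=True)[0]
    p_xx = torch.autograd.grad(p_x.sum(), x, create_graph=True)[0]
    return p_t - p_xx

x_r = torch.rand(1000, 1, requires_grad=True)        # collocation points
t_r = torch.rand(1000, 1, requires_grad=True)
x_w, t_w = torch.zeros(100, 1), torch.rand(100, 1)   # well, x_D = 0
x_b = torch.ones(100, 1).requires_grad_(True)        # no-flow boundary, x_D = 1
t_b = torch.rand(100, 1)
x_0, t_0 = torch.rand(100, 1), torch.zeros(100, 1)   # initial condition, t_D = 0

loss_r = pde_residual(x_r, t_r).pow(2).mean()                 # eq. 31
loss_w = net(torch.cat([x_w, t_w], 1)).pow(2).mean()          # p_D = 0 at the well
p_b = net(torch.cat([x_b, t_b], 1))
dp_b = torch.autograd.grad(p_b.sum(), x_b, create_graph=True)[0]
loss_b = dp_b.pow(2).mean()                                   # no-flow: dp_D/dx_D = 0
loss_0 = (net(torch.cat([x_0, t_0], 1)) - 1.0).pow(2).mean()  # p_D = 1 at t_D = 0
loss = loss_r + loss_w + loss_b + loss_0                      # eq. 24, unit weights
```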
Figure 6: Domain points: (a) Input points for the 1D diffusivity PINN model with a single no-flow boundary. (b) Input points for the 1D diffusivity PINN model with two no-flow boundary conditions. (c) Input points for the radial diffusivity PINN model with a single no-flow boundary. Red sets are for the PDE (collocation points), green sets are for the well condition, black sets are for the no-flow boundary conditions, and blue sets are for the initial condition. For case 1.2, which has two no-flow boundaries, the components of the loss function are as follows: \[\mathcal{L}_{r}(\mathbf{w})= \frac{1}{N_{r}}\sum_{i=1}^{N_{r}}r(\mathbf{x}_{r}^{i},t_{r}^{i},\mathbf{w}), \tag{35}\] \[\mathcal{L}_{w}(\mathbf{w})= \frac{1}{N_{w}}\sum_{i=1}^{N_{w}}\big{(}\tilde{u}(\mathbf{x}_{w}^{i},t_{w}^{i},\mathbf{w})-g(\mathbf{x}_{w}^{i},t_{w}^{i})\big{)}^{2},\] (36) \[\mathcal{L}_{b1}(\mathbf{w})= \frac{1}{N_{b1}}\sum_{i=1}^{N_{b1}}\big{(}\tilde{u}(\mathbf{x}_{b1}^{i},t_{b1}^{i},\mathbf{w})-g(\mathbf{x}_{b1}^{i},t_{b1}^{i})\big{)}^{2},\] (37) \[\mathcal{L}_{b2}(\mathbf{w})= \frac{1}{N_{b2}}\sum_{i=1}^{N_{b2}}\big{(}\tilde{u}(\mathbf{x}_{b2}^{i},t_{b2}^{i},\mathbf{w})-g(\mathbf{x}_{b2}^{i},t_{b2}^{i})\big{)}^{2},\] (38) \[\mathcal{L}_{0}(\mathbf{w})= \frac{1}{N_{0}}\sum_{i=1}^{N_{0}}\big{(}\tilde{u}(\mathbf{x}_{0}^{i},0^{i},\mathbf{w})-u_{0}(\mathbf{x}_{0}^{i})\big{)}^{2}, \tag{39}\] where \(\mathcal{L}_{r}\), \(\mathcal{L}_{w}\), \(\mathcal{L}_{b1}\), \(\mathcal{L}_{b2}\), and \(\mathcal{L}_{0}\) are the loss function components associated with the PDE residual, well, right no-flow boundary, left no-flow boundary, and initial condition, respectively. Finally, for cases 2.1 and 2.2 the components of the loss function are as follows: \[\mathcal{L}_{r}(\mathbf{w})= \frac{1}{N_{r}}\sum_{i=1}^{N_{r}}r(\mathbf{r}_{r}^{i},t_{r}^{i},\mathbf{w}), \tag{40}\] \[\mathcal{L}_{w}(\mathbf{w})= \frac{1}{N_{w}}\sum_{i=1}^{N_{w}}\big{(}\tilde{u}(\mathbf{r}_{w}^{i},t_{w}^{i},\mathbf{w})-g(\mathbf{r}_{w}^{i},t_{w}^{i})\big{)}^{2},\] (41) \[\mathcal{L}_{b}(\mathbf{w})= \frac{1}{N_{b}}\sum_{i=1}^{N_{b}}\big{(}\tilde{u}(\mathbf{r}_{b}^{i},t_{b}^{i},\mathbf{w})-g(\mathbf{r}_{b}^{i},t_{b}^{i})\big{)}^{2},\] (42) \[\mathcal{L}_{0}(\mathbf{w})= \frac{1}{N_{0}}\sum_{i=1}^{N_{0}}\big{(}\tilde{u}(\mathbf{r}_{0}^{i},0^{i},\mathbf{w})-u_{0}(\mathbf{r}_{0}^{i})\big{)}^{2}, \tag{43}\] where \(\mathcal{L}_{r}\), \(\mathcal{L}_{w}\), \(\mathcal{L}_{b}\), and \(\mathcal{L}_{0}\) are the loss function components associated with the PDE residual, well, no-flow boundary, and initial condition, respectively. ## 5 Results ### 1D Forward Problems: * **Case 1.1:** In case 1.1, a well produces at constant BHP against a single no-flow boundary. The results are shown in Fig. 7. Once the PINN is trained, production rates can easily be calculated using Darcy's law: \[\tilde{q}=\frac{kA}{\mu}\frac{d\tilde{p}}{dx}\] where \(A\) is the flow surface area and \(\frac{d\tilde{p}}{dx}\) is the pressure gradient, which can be obtained using back-propagation (a minimal autograd sketch of this rate evaluation follows this list). Note that this is a one-dimensional problem; therefore, the flow area \(A\) is set to unity. Notice in Fig. 7 that the PINN predictions for both pressure and flow rates are continuous in time, whereas the high-fidelity solutions are discrete and user-specified. In other words, the user must specify the specific times at which they wish to obtain solutions for pressures and flow rates. * **Case 1.2:** In case 1.2, a well produces at constant BHP between two no-flow boundaries. The results are shown in Fig. 8.
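As promised above, here is a minimal autograd sketch of the case 1.1 rate evaluation, with \(d\tilde{p}/dx\) obtained by back-propagation. `net` stands in for a trained dimensionless PINN, and all property values are placeholders.

```python
import torch

# Sketch of the case 1.1 rate evaluation: q = (k*A/mu) * dp/dx at the well,
# with the gradient obtained by back-propagation. `net` stands in for a
# trained dimensionless PINN; property values are placeholders.
net = torch.nn.Sequential(torch.nn.Linear(2, 40), torch.nn.Tanh(),
                          torch.nn.Linear(40, 1))
k, A, mu = 1e-13, 1.0, 1e-3                  # [m^2], unit flow area, [Pa s]
p0, pwf, xe = 25e6, 3e6, 1000.0              # scaling constants of eq. 28b

t_D = torch.linspace(0.01, 1.0, 50).reshape(-1, 1)
x_D = torch.zeros_like(t_D).requires_grad_(True)   # well located at x_D = 0
p_D = net(torch.cat([x_D, t_D], dim=1))
dpD_dxD = torch.autograd.grad(p_D.sum(), x_D)[0]
q = (k * A / mu) * (p0 - pwf) / xe * dpD_dxD       # chain rule back to physical units
```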
### Dimensionless Variables It is important to note that working with scaled dimensionless form of the PDE enables making instantaneous predictions for all reservoir properties. Any change in reservoir properties \((\phi,\mu,k,c_{t},x_{e})\) is equivalent to change in the dimensionless time \(t_{D}\). Unlike numerical methods where any change in the reservoir properties requires restarting the method from the beginning, in PINNs there is no need to retrain the model since predicting for a different reservoir property is equivalent to predicting for a different dimensionless time. For practical purposes we can generate interactive-plots with sliders for instantaneous predictions as in Fig. 9. Additionally, if we have production rates data placed on the right plot of Fig. 9, we can change reservoir properties until achieving a good match and with that we can estimate reservoir properties associated with the production rates data. Figure 8: PINN pressure prediction (red dashed) versus high-fidelity numerical solution (solid blue). The relative L2 error is less than 0.1% Figure 7: (a) - PINN prediction for pressure (red dashed) versus high-fidelity numerical solution (solid blue). (b) - flow rates prediction using PINN (red dashed) versus high-fidelity numerical solution (blue dots). The relative L2 error is less than 0.1%. ### 1D Inverse Problems: In this section we test the inverse capabilities of PINNs. The problem setup is the following. We introduce a production flow rate history data \(q_{data}\) and we treat one of the reservoir properties as an unknown. We tackle two problems in this section, the first, we treat the permeability \(k_{x}\) as an unknown and the second we treat the distance from the boundary \(x_{e}\) as unknown. For the first problem, eq. 28a now has two unknowns, the pressure and the permeability with one additional boundary condition is introduced that is the flow rates history data shown in Fig. 10. The loss function components now are as follows: \[\mathcal{L}_{r}(\mathbf{w})= \frac{1}{N_{r}}\sum_{i=1}^{N_{r}}r(\mathbf{x}_{r}^{i},t_{r}^{i},\mathbf{w }), \tag{44}\] \[\mathcal{L}_{w}(\mathbf{w})= \frac{1}{N_{w}}\sum_{i=1}^{N_{w}}\big{(}\tilde{u}(\mathbf{x}_{w}^{i},t _{w}^{i},\mathbf{w})-g(\mathbf{x}_{w}^{i},t_{w}^{i})\big{)}^{2},\] (45) \[\mathcal{L}_{b}(\mathbf{w})= \frac{1}{N_{b}}\sum_{i=1}^{N_{b}}\big{(}\tilde{u}(\mathbf{x}_{b}^{i},t _{b}^{i},\mathbf{w})-g(\mathbf{x}_{b}^{i},t_{b}^{i})\big{)}^{2},\] (46) \[\mathcal{L}_{0}(\mathbf{w})= \frac{1}{N_{0}}\sum_{i=1}^{N_{0}}\big{(}\tilde{u}(\mathbf{x}_{0}^{i},0 ^{i},\mathbf{w})-u_{0}(\mathbf{x}_{0}^{i})\big{)}^{2},\] (47) \[\mathcal{L}_{d}(\mathbf{w})= \frac{1}{N_{d}}\sum_{i=1}^{N_{d}}\big{(}\tilde{q}(0^{i},t_{d}^{i },\mathbf{w})-q_{d}(t_{d}^{i})\big{)}^{2}, \tag{48}\] Where \(\tilde{q}\) in eq. 48 is the predicted flow rates by the PINN and \(q_{d}\) is the production flow rates data. As shown in Fig. 11, a PINN model was able to learn the unknown permeability that satisfies the PDE, the initial condition, boundary conditions and the production history data with a relative error of 0.16%. Similarly, for the second problem with unknown boundary distance, a PINN model is trained to satisfy the PDE, the initial condition, boundary conditions and the production history data with a relative error of 0.79% as shown in Fig. 12. It is important to note that both models were trained with the noisy data (blue curve) in Fig. 10. 
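Returning to the instantaneous-prediction property described in the Dimensionless Variables subsection above, the idea can be sketched as follows: reservoir properties enter only through \(t_{D}=k t/(\phi\mu c_{t}x_{e}^{2})\), so one trained dimensionless network serves any property combination without retraining. `net` is a stand-in for a trained PINN, and the numbers are placeholders.

```python
import torch

# Sketch of prediction without retraining: new properties only move the
# dimensionless query point t_D. `net` stands in for a trained PINN.
net = torch.nn.Sequential(torch.nn.Linear(2, 40), torch.nn.Tanh(),
                          torch.nn.Linear(40, 1))

def predict_pressure(x, t, k, phi, mu, c_t, x_e, p0=25e6, pwf=3e6):
    x_D = torch.tensor([[x / x_e]])
    t_D = torch.tensor([[k * t / (phi * mu * c_t * x_e**2)]])
    p_D = net(torch.cat([x_D, t_D], dim=1))
    return pwf + (p0 - pwf) * p_D.item()    # invert p_D = (p - p_wf)/(p_0 - p_wf)

# Two permeabilities, one network -- only t_D changes between the calls:
for k in (1e-14, 1e-13):
    print(k, predict_pressure(500.0, 3600.0, k, 0.2, 1e-3, 1e-9, 1000.0))
```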
In order for the model to learn an unknown reservoir property, the property is learnt by gradient descent similar to network weights and biases. For the case with unknown permeability we rewrite eq. 16: \[\begin{split}&\frac{\phi\mu c_{t}}{k_{x}}\bigg{(}\frac{k_{m}}{k_{ m}}\bigg{)}\frac{\partial p}{\partial t}=\frac{\partial^{2}p}{\partial x^{2}}\\ & x\in[0,x_{e}],\ \ \ t\in[0,t_{f}]\end{split} \tag{49}\] Figure 9: Interactive plot for permeability \(k\) and boundary distance from well \(x_{e}\). As we change the values with the sliders the plots are instantaneously updated with the new values. where \(k_{m}\) is an upper estimation of the permeability field and \(k_{x}\) is the unknown permeability. Here we take \(k_{m}=300[mD]\). \[\begin{split}&\frac{\phi\mu c_{t}}{k_{m}}\bigg{(}\frac{k_{m}}{k_{x}} \bigg{)}\frac{\partial p}{\partial t}=\frac{\partial^{2}p}{\partial x^{2}}\\ &\eta=\frac{k_{x}}{k_{m}}\\ &\frac{\phi\mu c_{t}}{k_{m}}\frac{\partial p}{\partial t}=\eta \frac{\partial^{2}p}{\partial x^{2}}\end{split} \tag{50}\] where \(\eta\) is called the permeability factor. Now we can rewrite eq. 28a as follows: \[\begin{split}&\frac{1}{t_{D,max}}\frac{\partial p_{D}}{\partial t _{D}}=\eta\frac{\partial^{2}p_{D}}{\partial x_{D}^{2}}\\ & x_{D}=\frac{x}{x_{e}};\ \ \ t_{D}=\frac{k_{m}t}{\phi\mu c_{t}x_{e} ^{2}};\ \ \ p_{D}=\frac{p-p_{wf}}{p_{0}-p_{wf}}\end{split} \tag{51}\] In addition to the previous, notice now that the residual of the PDE of eq. 51 is also a function of \(\eta\), and the loss function of the residual can be written as follows: \[\begin{split}\mathcal{L}_{r}(\mathbf{w},\eta)=&\frac{1 }{N_{r}}\sum_{i=1}^{N_{r}}r(\mathbf{x}_{r}^{i},t_{r}^{i},\mathbf{w},\eta)\end{split} \tag{52}\] The production flow rates history data \(q_{data}\) is also a function of \(\eta\) for the following reason: \[\begin{split}& q_{d}=\frac{kA}{\mu}\frac{dp}{dx}=\frac{kA}{\mu} \frac{k_{m}}{k_{m}}\frac{dp}{dx}=\frac{k_{m}A}{\mu}\eta\frac{dp}{dx}=\eta\frac {k_{m}A}{\mu}\bigg{(}\frac{p_{0}-p_{wf}}{x_{e}}\bigg{)}\frac{dp_{D}}{dx_{D}} \\ &\underbrace{\frac{q_{d}\mu}{k_{m}A}\bigg{(}\frac{x_{e}}{p_{0}-p _{wf}}\bigg{)}}_{\text{known quantity}}\ \ \ =\ \ \ \ \underbrace{\eta}_{\text{learnt quantity}}\ \ \ \ \times\ \ \ \ \underbrace{\frac{dp_{D}}{dx_{D}}}_{\text{back propagation}}\end{split} \tag{53}\] \[\begin{split}\mathcal{L}_{d}(\mathbf{w},\eta)=\frac{q_{d}\mu}{k_{m}A} \bigg{(}\frac{x_{e}}{p_{0}-p_{wf}}\bigg{)}\ -\ \eta\ \times\ \frac{dp_{D}}{dx_{D}}\end{split} \tag{54}\] The total loss function of the network is therefore a function of \(\eta\) and given by: \[\begin{split}\mathcal{L}(\mathbf{w},\eta)=\mathcal{L}_{r}(\mathbf{w}, \eta)+\mathcal{L}_{0}(\mathbf{w})+\mathcal{L}_{w}(\mathbf{w})+\mathcal{L}_{b}(\mathbf{w})+ \mathcal{L}_{d}(\mathbf{w},\eta)\end{split} \tag{55}\] Similar analogy can be applied for the case with unknown boundary distance. The results for both cases are shown in Fig. 11 and Fig. 12. To summarize this section, we showed that from production data we are able to estimate the missing permeability and distance from sealing boundary without having to shut-in the well, whereas, in buildup tests to estimate these properties a well shut-in is required. ### Forward 2D case To this end, we dealt with the 1D diffusivity equation which is not a stiff PDE, and we saw how a single physics-informed neural network was able to learn the solution. When working with the 2D diffusivity equation which is a very stiff Figure 11: Permeability prediction for a given production flow rates history data. 
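The trainable-property mechanism of eqs. 51-55 can be sketched as below: the permeability factor \(\eta=k_{x}/k_{m}\) is a scalar parameter optimized jointly with the network weights. Only the PDE-residual term is shown; the initial-condition, boundary, and flow-rate data terms of eq. 55 are omitted for brevity, and all settings are illustrative.

```python
import torch

# Sketch of the inverse setup: eta = k_x/k_m is learned by gradient descent
# together with the network weights (eq. 51). Only the residual term of
# eq. 55 is shown; IC/BC/data terms are omitted for brevity.
net = torch.nn.Sequential(torch.nn.Linear(2, 40), torch.nn.Tanh(),
                          torch.nn.Linear(40, 40), torch.nn.Tanh(),
                          torch.nn.Linear(40, 1))
eta = torch.nn.Parameter(torch.tensor(0.5))       # initial guess for k_x/k_m
opt = torch.optim.Adam(list(net.parameters()) + [eta], lr=1e-3)

x_r = torch.rand(1000, 1, requires_grad=True)     # collocation points
t_r = torch.rand(1000, 1, requires_grad=True)

for _ in range(1000):
    opt.zero_grad()
    p = net(torch.cat([x_r, t_r], dim=1))
    p_t = torch.autograd.grad(p.sum(), t_r, create_graph=True)[0]
    p_x = torch.autograd.grad(p.sum(), x_r, create_graph=True)[0]
    p_xx = torch.autograd.grad(p_x.sum(), x_r, create_graph=True)[0]
    loss = (p_t - eta * p_xx).pow(2).mean()       # residual of eq. 51, t_D_max = 1
    loss.backward()
    opt.step()
print("learned permeability factor eta:", eta.item())
```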
Figure 12: Boundary distance prediction for a given production flow rates history data. Figure 10: Production flow rates history PDE (especially close to the well) the problem become much more challenging. For several traditional neural network architectures that we tried, a single neural network was not able to learn the solution for the 2D case. A possible reason for this is because the solution of the equation exhibits highly logarithmic nature close to the well, and linear nature very far from the well with a transition nature in between. Applying the conservative PINNs (cPINNs) methodology [20] to a circular reservoir with well in the center, we decompose the reservoir into multiple subdomains and each subdomain is associated with a completely independent neural network, and at the interface between the subdomains we require the pressures predictions and pressure gradients predictions (fluxes) to be equal. As an example let's take Fig. 13. For domain No.1, the loss function components are: \[\mathcal{L}_{r}^{(1)}\big{(}\mathbf{w_{1}}\big{)}= \frac{1}{N_{r}}\sum_{i=1}^{N_{r}}r^{(1)}(\mathbf{x}_{r}^{i},t_{r}^{i}, \mathbf{w_{1}})^{2},\] \[\mathcal{L}_{0}^{(1)}\big{(}\mathbf{w_{1}}\big{)}= \frac{1}{N_{0}}\sum_{i=1}^{N_{0}}\bigg{(}\tilde{u}^{(1)}(\mathbf{x}_{0 }^{i},0^{i},\mathbf{w_{1}})-u_{0}(\mathbf{x}_{0}^{i})\bigg{)}^{2},\] \[\mathcal{L}_{w}^{(1)}\big{(}\mathbf{w_{1}}\big{)}= \frac{1}{N_{w}}\sum_{i=1}^{N_{w}}\bigg{(}\tilde{u}^{(1)}(\mathbf{x}_{ w}^{i},t_{w}^{i},\mathbf{w_{1}})-g(\mathbf{x}_{w}^{i},t_{w}^{i})\bigg{)}^{2},\] \[\mathcal{L}_{pc,12}^{(1)}\big{(}\mathbf{w_{1}},\mathbf{w_{2}}\big{)}= \frac{1}{N_{c}}\sum_{i=1}^{N_{c}}\bigg{(}\tilde{u}^{(1)}\big{(}\bm {x}_{c,1}^{i},t_{c,1}^{i},\mathbf{w_{1}}\big{)}-\tilde{u}^{(2)}\big{(}\mathbf{x}_{c,2} ^{i},t_{c,2}^{i},\mathbf{w_{2}}\big{)}\bigg{)}^{2},\] \[\mathcal{L}_{fc,12}^{(1)}\big{(}\mathbf{w_{1}},\mathbf{w_{2}}\big{)}= \mathcal{L}_{r}^{(1)}+\mathcal{L}_{0}^{(1)}+\mathcal{L}_{w}^{(1)}+ \mathcal{L}_{pc,12}^{(1)}+\mathcal{L}_{fc,12}^{(1)}\] Domain No.2, the loss function components are: \[\mathcal{L}_{r}^{(2)}\big{(}\mathbf{w_{2}}\big{)}= \frac{1}{N_{r}}\sum_{i=1}^{N_{r}}r^{(2)}(\mathbf{x}_{r}^{i},t_{r}^{i}, \mathbf{w_{2}})^{2},\] \[\mathcal{L}_{0}^{(2)}\big{(}\mathbf{w_{2}}\big{)}= \frac{1}{N_{0}}\sum_{i=1}^{N_{0}}\bigg{(}\tilde{u}^{(2)}(\mathbf{x}_{0 }^{i},0^{i},\mathbf{w_{2}})-u_{0}(\mathbf{x}_{0}^{i})\bigg{)}^{2},\] \[\mathcal{L}_{pc,21}^{(2)}\big{(}\mathbf{w_{1}},\mathbf{w_{2}}\big{)}= \frac{1}{N_{c}}\sum_{i=1}^{N_{c}}\bigg{(}\tilde{u}^{(1)}\big{(}\bm {x}_{c,1}^{i},t_{c,1}^{i},\mathbf{w_{1}}\big{)}-\tilde{u}^{(2)}\big{(}\mathbf{x}_{c,2} ^{i},t_{c,2}^{i},\mathbf{w_{2}}\big{)}\bigg{)}^{2},\] \[\mathcal{L}_{fc,21}^{(2)}\big{(}\mathbf{w_{1}},\mathbf{w_{2}}\big{)}= \frac{1}{N_{c}}\sum_{i=1}^{N_{c}}\bigg{(}\tilde{u}_{x}^{(1)}\big{(} \mathbf{x}_{c,1}^{i},t_{c,1}^{i},\mathbf{w_{1}}\big{)}-\tilde{u}_{x}^{(2)}\big{(}\mathbf{x}_ {c,2}^{i},t_{c,2}^{i},\mathbf{w_{2}}\big{)}\bigg{)}^{2},\] \[\mathcal{L}_{pc,23}^{(2)}\big{(}\mathbf{w_{2}},\mathbf{w_{3}}\big{)}= \frac{1}{N_{c}}\sum_{i=1}^{N_{c}}\bigg{(}\tilde{u}_{x}^{(2)}\big{(} \mathbf{x}_{c,2}^{i},t_{c,2}^{i},\mathbf{w_{2}}\big{)}-\tilde{u}^{(3)}\big{(}\mathbf{x}_{c, 3}^{i},t_{c,3}^{i},\mathbf{w_{3}}\big{)}\bigg{)}^{2},\] \[\mathcal{L}^{(1)}\big{(}\mathbf{w_{1}},\mathbf{w_{2}},\mathbf{w_{3}}\big{)}= \mathcal{L}_{r}^{(2)}+\mathcal{L}_{0}^{(2)}+\mathcal{L}_{pc,21}^{(2 )}+\mathcal{L}_{fc,21}^{(2)}+\mathcal{L}_{pc,23}^{(2)}+\mathcal{L}_{fc,23}^{(2)}\] Domain No.3, the loss function components are: 
\[\mathcal{L}_{r}^{(3)}(\mathbf{w_{3}})= \frac{1}{N_{r}}\sum_{i=1}^{N_{r}}r^{(3)}(\mathbf{x}_{r}^{i},t_{r}^{i}, \mathbf{w_{3}})^{2},\] \[\mathcal{L}_{0}^{(3)}(\mathbf{w_{3}})= \frac{1}{N_{0}}\sum_{i=1}^{N_{0}}\bigg{(}\tilde{u}^{(3)}(\mathbf{x}_{0 }^{i},0^{i},\mathbf{w_{3}})-u_{0}(\mathbf{x}_{0}^{i})\bigg{)}^{2},\] \[\mathcal{L}_{b}^{(3)}(\mathbf{w_{3}})= \frac{1}{N_{b}}\sum_{i=1}^{N_{b}}\bigg{(}\tilde{u}^{(3)}(\mathbf{x}_{ b}^{i},t_{b}^{i},\mathbf{w_{3}})-g(\mathbf{x}_{b}^{i},t_{b}^{i})\bigg{)}^{2},\] \[\mathcal{L}_{pc,32}^{(1)}(\mathbf{w_{3}},\mathbf{w_{2}})= \frac{1}{N_{c}}\sum_{i=1}^{N_{c}}\bigg{(}\tilde{u}^{(3)}(\mathbf{x}_ {c,3}^{i},t_{c,3}^{i},\mathbf{w_{3}})-\tilde{u}^{(2)}\big{(}\mathbf{x}_{c,2}^{i},t_{c,2}^{i},\mathbf{w_{2}}\big{)}\bigg{)}^{2},\] \[\mathcal{L}_{fc,32}^{(1)}(\mathbf{w_{3}},\mathbf{w_{2}})= \frac{1}{N_{c}}\sum_{i=1}^{N_{c}}\bigg{(}\tilde{u}^{(3)}_{x}(\mathbf{ x}_{c,3}^{i},t_{c,3}^{i},\mathbf{w_{3}})-\tilde{u}^{(2)}_{x}\big{(}\mathbf{x}_{c,2}^{i},t _{c,2}^{i},\mathbf{w_{2}}\big{)}\bigg{)}^{2},\] \[\mathcal{L}^{(3)}(\mathbf{w_{3}},\mathbf{w_{2}})= \mathcal{L}_{r}^{(3)}+\mathcal{L}_{0}^{(3)}+\mathcal{L}_{b}^{(3) }+\mathcal{L}_{pc,32}^{(3)}+\mathcal{L}_{fc,32}^{(3)}\] For case 2.1, the well is controlled by bottom-hole pressure the results are displayed in Fig. 14, and for case 2.2, with well being controlled by constant production rate the results are displayed in Fig. 16. Fig. 15 shows the flow rates predictions for case 2.1 calculated using Darcy's equation 3.1 and the back-propagation to calculate \(\frac{d\tilde{p}}{dx}\). Figure 14: Pressure predictions using the decomposition method for a circular reservoir with a well in the center controlled by constant BHP and a no-flow outer boundary versus high-fidelity (HF) numerical solution. The left figure is linear-linear and the right figure is semi-log. The relative L2 error is less than 0.2% Figure 13: Decomposition of circular reservoir into 3 subdomains. The reservoir has a a well (black) in the center and no-flow outer boundary. ## 6 Conclusions and Future Work In this work we showed that physics-informed neural network (PINN) can approximate partial differential equations to very good accuracy both for pressure predictions and flow rates predictions. In relation to reservoir engineering, a PINN model can be regarded as a good replacement for the analytical solution and therefore enables us to perform instantaneous predictions. Utilizing the inverse features of PINNs and a production flow rates history data we managed to estimate reservoir properties such as permeability and boundary distance without having to shut in the well. Alternatively, such quantities are estimated using pressure transient or rate transient tests which both require shutting-in the well for number of days. For the 2D case, we proposed a decomposition method which is essential to overcome the stiffness of the PDE. With that we were able to train a model that can accurately predict the solution of the diffusivity equation. Finally, it is imperative to mention that the inverse applications that were applied to the 1D case can be applied to the 2D case. ## 7 Acknowledgements Part of this work was done during my internship at OXY in summer 2022. I would like to thank OXY for providing all the reservoir simulations support needed to achieve these results.
2308.16372
Artificial to Spiking Neural Networks Conversion for Scientific Machine Learning
We introduce a method to convert Physics-Informed Neural Networks (PINNs), commonly used in scientific machine learning, to Spiking Neural Networks (SNNs), which are expected to have higher energy efficiency compared to traditional Artificial Neural Networks (ANNs). We first extend the calibration technique of SNNs to arbitrary activation functions beyond ReLU, making it more versatile, and we prove a theorem that ensures the effectiveness of the calibration. We successfully convert PINNs to SNNs, enabling computational efficiency for diverse regression tasks in solving multiple differential equations, including the unsteady Navier-Stokes equations. We demonstrate great gains in terms of overall efficiency, including Separable PINNs (SPINNs), which accelerate the training process. Overall, this is the first work of this kind and the proposed method achieves relatively good accuracy with low spike rates.
Qian Zhang, Chenxi Wu, Adar Kahana, Youngeun Kim, Yuhang Li, George Em Karniadakis, Priyadarshini Panda
2023-08-31T00:21:27Z
http://arxiv.org/abs/2308.16372v1
# Artificial to Spiking Neural Networks Conversion for Scientific Machine Learning + ###### Abstract We introduce a method to convert Physics-Informed Neural Networks (PINNs), commonly used in scientific machine learning, to Spiking Neural Networks (SNNs), which are expected to have higher energy efficiency compared to traditional Artificial Neural Networks (ANNs). We first extend the calibration technique of SNNs to arbitrary activation functions beyond ReLU, making it more versatile, and we prove a theorem that ensures the effectiveness of the calibration. We successfully convert PINNs to SNNs, enabling computational efficiency for diverse regression tasks in solving multiple differential equations, including the unsteady Navier-Stokes equations. We demonstrate great gains in terms of overall efficiency, including Separable PINNs (SPINNs), which accelerate the training process. Overall, this is the first work of this kind and the proposed method achieves relatively good accuracy with low spike rates. Spiking Neural Networks Conversion Nonlinear activation ## 1 Introduction The use of machine learning techniques in the scientific community has been spreading widely, reaching many fields such as physics [1, 2, 3, 4], chemistry [5, 6], biology [7, 8, 9], geophysics [10, 11], epidemiology[12, 13] and many more. The advances in computation capabilities have enabled many researchers to reformulate diverse problems as data-driven problems, by combining prior knowledge of the problem with fitting a model for the available data. A prominent drawback of Scientific Machine Learning (SciML) techniques is that they are usually expensive in terms of computational cost. They require either knowledge of the governing equations that determine the process (approximating them is a costly procedure), or a large amount of data to fit (expensive as well). The SciML community is striving for a more efficient method for training and inferring neural networks. Neuromorphic chips are one edge computing component that SciML applications could benefit from. In this work, we explore methods for enabling this. An important breakthrough in the field of SciML was the invention of the Physics-Informed Neural Networks (PINNs) [14, 15]. PINNs incorporate the knowledge of the physical experiment and governing equations into the network training step, making it a hybrid (physics and data) training. PINNs and its extensions [16, 17, 18, 19] have achieved great success in many fields and applications [20, 1, 21, 22, 23, 24, 7, 25, 26]. A disadvantage of PINNs is that like other deep neural networks, they are prone to long training times. In addition, when changing the problem conditions (initial conditions, boundary conditions, domain properties, etc.), the PINN has to be trained from scratch. Therefore, for real-time applications, a more efficient solution is sought after. For training a PINN, one usually uses smooth activation functions (such as the Tanh or Sine activation functions), where in most ANNs the ReLU activation is dominant. Using smooth activation function in a SNN is a new challenge we address in this paper with theoretical justification. Spiking Neural Networks (SNNs) have been gaining traction in the machine learning community for the past few years. The main reason is their expected efficiency [27; 28; 29; 30], in terms of energy consumption, compared to their Artificial Neural Network (ANN) counterparts that are commonly used for many applications. 
In addition, the advances in neuromorphic hardware (such as Intel's Loihi 2 chip [31; 32]), call for innovative algorithms and software that can utilize the chips and produce lighter and faster machine learning models. However, developing an SNN is a challenging task, especially for regression [33; 34]. Studies have been conducted for translating components from the popular ANNs into a spiking framework [35], but many components are not yet available in the spiking regime. In this paper we focus on that specific aspect. There are three popular approaches for training SNNs. The first involves using mathematical formulations of the components of the brain, such as the membrane [36; 37; 38], the synapse [39], etc. In this case, one uses a Hebbian learning rule [40] to find the weights of the synapses (the trainable parameters) using forward propagation (without backward propagation [41; 42]). The second method involves building surrogate models for the elements in the SNN that block the back-propagation, such as the non-differentiable activation functions used in SNNs. The third method, which is discussed in this paper, addresses converting a trained ANN into a SNN. The main contributions of this paper are as follows: 1. We propose a method to convert PINNs, a type of neural network commonly used for regression tasks, to Spiking Neural Networks (SNNs). The conversion allows for utilizing the advantages of SNNs in the inference stage, such as computational efficiency, in regression tasks. 2. We extend the calibration techniques used in previous studies to arbitrary activation functions, which significantly increases the applicability of the conversion method. Furthermore, we provide a convergence theorem to guarantee the effectiveness of the calibration. 3. We apply the conversion to separable PINNs (SPINNs), which accelerates the training process of the PINNs. Overall, the proposed method extends the application of SNNs in regression tasks and provides a systematic and efficient approach to convert existing neural networks for diverse regression tasks to SNNs. ## 2 Related Work Physics-informed neural networks (PINNs):An innovative framework that combines neural networks with physical laws to learn complex physical phenomena. In PINNs, the physical equations are integrated into the loss function, which allows the network to learn from both the given data and the underlying physics. This approach significantly improves the network's ability to handle incomplete or noisy data and performs well with limited training data. PINNs have been successfully applied to a range of problems in fluid dynamics, solid mechanics, and more [22; 1; 21; 23; 7; 24]. Separable PINNs (SPINNs):Cho et al. [43] proposed a novel neural network architecture called SPINNs, which aims to reduce the computational demands of PINNs and alleviate the curse of dimensionality. Unlike vanilla PINNs that use point-wise processing, SPINN works on a per-axis basis, thereby reducing the number of required network forward passes. SPINN utilizes factorized coordinates and separated sub-networks, where each sub-network takes an independent one-dimensional coordinate as input, and the final output is generated through an outer product and element-wise summation. Because SPINN eliminates the need to query every multidimensional coordinate input pair, it is less affected by the exponential growth of computational and memory costs associated with grid resolution in standard PINNs. 
Furthermore, SPINNs operate on a per-axis basis, which allows for parallelization with multiple GPUs. Spiking Neural Networks (SNNs): A type of Artificial Neural Network (ANN) that differs in the implementation of the core components. The purpose is to create more biologically plausible training and inference procedures. Unlike traditional ANNs, which process information through numerical values, SNNs process information through spikes, which occur in response to stimulation (much like the human brain). SNNs are becoming increasingly popular as they can mimic the temporal nature of biological neurons. Additionally, SNNs are computationally efficient and have the potential for efficient hardware implementation, making them well-suited for real-time applications. Combining SNN implementation with edge computing, training could be significantly faster, and inference as well [27; 28; 29]. Recent results have shown that SNNs can achieve high accuracy on image classification tasks, with up to 99.44% on the MNIST dataset [44] and up to 79.21% on ImageNet [45]. SNN conversion: A technique to transform a trained ANN into an SNN. SNN conversion usually involves mapping the weights and activation functions of the ANN to the synaptic strengths and spike rates of SNNs. It is considered the most efficient way to train deep SNNs, as it avoids the challenges of direct SNN training, such as gradient estimation and spike generation [46; 47; 48]. The algorithm of SNN conversion can be divided into two steps: offline conversion and online inference. In the offline conversion step, the trained ANN model is converted into an equivalent SNN model by adjusting the network parameters. In the online inference step, the converted SNN model is used for inference; in this step, the SNN is intended to be deployed on neuromorphic hardware, to unlock its full potential and energy efficiency. ## 3 Method Our method builds on SNN calibration [49], a process complementary to the conversion technique that minimizes the loss of accuracy and efficiency when converting an ANN into an SNN. SNN calibration leverages the knowledge of the pre-trained ANN and corrects the conversion error layer by layer. The algorithm consists of two steps: replacing the ReLU activation function with a spike activation function, and applying the calibration to adjust only the biases (light) or the weights and biases (advanced) of each layer. The calibration method is based on a theoretical analysis of the conversion error and its propagation through the network. SNN calibration can achieve comparable or even better performance than the original ANN on various datasets and network architectures. This paper is a generalization of that work: we propose an extension of SNN conversion that is appropriate for regression tasks. ### SNN Conversion setup We consider a dataset \(D=(X,Y)\) and an ANN \(\mathcal{A}\) with \(n\) hidden layers trained to fit it. Let \(\mathbf{x}^{(n)}=\mathcal{A}(X)\) be the output of \(\mathcal{A}\). The goal of SNN conversion is to find an SNN \(\mathcal{S}\), whose (averaged) output is \(\bar{\mathbf{s}}^{(n)}=\mathcal{S}(X)\), such that \(\mathbf{x}^{(n)}\) is close to \(\bar{\mathbf{s}}^{(n)}\). In other words, we want to minimize the norm of the error \(\mathbf{e}^{(n)}\stackrel{{ d}}{{=}}\mathbf{x}^{(n)}-\bar{\mathbf{s}}^{(n)}\) for a given \(\mathcal{A}\) and \(D\). The SNN \(\mathcal{S}\) uses the same network structure as \(\mathcal{A}\), with the activation function of \(\mathcal{A}\) replaced by IF.
Then we can analyze the factors that influence the total conversion error \(\mathbf{e}^{(n)}\). ### SNN Conversion with Calibration In this section, we will briefly explain the SNN conversion with calibration proposed in [49]. Consider an MLP model with \(n\) layers, the first \(n-1\) layers use ReLU activation and the last layer has no activation. We denote \(\mathbf{W}^{(l)}\) as the weights and bias for layer \(l\). The naive way of SNN conversion is simply replacing the ReLU activation layers with IF activation layers. In this case, we define \(\mathbf{x}^{(l)}\) as the output of layer \(l\) recursively, which is \(\mathbf{x}^{(l)}=\mathrm{ReLU}(\mathbf{W}^{(l)}\mathbf{x}^{(l-1)})\) and \(\mathbf{x}^{(0)}\) is the input. Similarly, we define \(\bar{\mathbf{s}}^{(l)}\), the output of the layer \(l\) of converted SNN, as \(\bar{\mathbf{s}}^{(l)}=\mathrm{IF}(\mathbf{W}^{(l)}\bar{\mathbf{s}}^{(l-1)})\) and \(\bar{\mathbf{s}}^{(0)}=\mathbf{x}^{(0)}\) is the input. In fact, we can compute the expected output spikes as \(\bar{\mathbf{s}}^{(l)}=\mathrm{ClipFloor}(\mathbf{W}^{(l)}\bar{\mathbf{s}}^{(l -1)})\). The \(\mathrm{ClipFloor}\) function is an approximation to \(\mathrm{ReLU}\) and is illustrated in Figure 1. Then we can define the conversion error of layer \(l\) as \(\mathbf{e}^{(l)}=\mathbf{x}^{(l)}-\bar{\mathbf{s}}^{(l)}\) and decompose it as \[\begin{split}\mathbf{e}^{(l)}&=\mathbf{x}^{(l)}- \bar{\mathbf{s}}^{(l)}\\ &=\mathrm{ReLU}(\mathbf{W}^{(l)}\mathbf{x}^{(l-1)})-\mathrm{ ClipFloor}(\mathbf{W}^{(l)}\bar{\mathbf{s}}^{(l-1)})\\ &=\mathrm{ReLU}(\mathbf{W}^{(l)}\mathbf{x}^{(l-1)})-\mathrm{ ReLU}(\mathbf{W}^{(l)}\bar{\mathbf{s}}^{(l-1)})+\mathrm{ReLU}(\mathbf{W}^{(l)} \bar{\mathbf{s}}^{(l-1)})-\mathrm{ClipFloor}(\mathbf{W}^{(l)}\bar{\mathbf{s}}^ {(l-1)})\\ &=\mathbf{e}^{(l)}_{r}+\mathbf{e}^{(l)}_{c}\end{split} \tag{1}\] Here \(\mathbf{e}^{(l)}_{r}\) represents the error caused by approximating the continuous input by the spiking input, and \(\mathbf{e}^{(l)}_{c}\) represents the local conversion error caused by changing the smooth activation function to the spiking activation (IF). A major Figure 1: Illustration of \(\mathrm{ClipFloor}\) function and its relationship to \(\mathrm{ReLU}\) (left) and \(\tanh\) (right). result in [49] is that we can bound the conversion error \(\mathbf{e}^{(n)}\), with the weighted sum of the local conversion errors \(\mathbf{e}^{(l)}_{c}\). This allows us to minimize the conversion error via optimizing each \(\mathbf{e}^{(l)}_{c}\). ### Results for general activation functions In fact, the results can be generalized to activation functions other than \(\mathrm{ReLU}\) by similar techniques in [49]. Since the generalized activation functions may have negative values, we introduce the idea of negative threshold, a concept in SNNs that allows neurons to fire both positive and negative spikes, depending on their membrane potential [50]. A positive spike occurs when the membrane potential exceeds the positive threshold, and a negative spike occurs when it is below the negative threshold. This mimics the biological behavior of neurons that do not fire a spike when the membrane potential does not reach the threshold. Negative threshold can be applied to different types of SNNs and learning functions depending on the problem domain and the data characteristics. It is very helpful when the dataset contains negative values. 
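For intuition, the staircase behaviour of \(\mathrm{ClipFloor}\) and the negative-threshold idea can be sketched as follows. This is the standard average-over-\(T\)-steps form; the exact parameterization used in the paper's implementation may differ.

```python
import numpy as np

# Sketch of the ClipFloor activation of Figure 1: the expected average
# output of an IF neuron over T time steps with firing threshold v_thr.
# `negative=True` adds the negative-threshold variant described above.
def clip_floor(x, T, v_thr, negative=False):
    lo = -T if negative else 0             # allow negative spikes if enabled
    return v_thr * np.clip(np.floor(x * T / v_thr), lo, T) / T

x = np.linspace(-1.5, 1.5, 7)
print(clip_floor(x, T=8, v_thr=1.0))                 # staircase approximating ReLU
print(clip_floor(x, T=8, v_thr=1.0, negative=True))  # with a negative threshold
```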
To formulate the conversion with calibration for generalized activations, we consider the conversion error decomposition \[\begin{split}\mathbf{e}^{(l)}&=\mathbf{x}^{(l)}-\bar{\mathbf{s}}^{(l)}\\ &=f(\mathbf{W}^{(l)}\mathbf{x}^{(l-1)})-\mathrm{ClipFloor}(\mathbf{W}^{(l)}\bar{\mathbf{s}}^{(l-1)})\\ &=f(\mathbf{W}^{(l)}\mathbf{x}^{(l-1)})-f(\mathbf{W}^{(l)}\bar{\mathbf{s}}^{(l-1)})+f(\mathbf{W}^{(l)}\bar{\mathbf{s}}^{(l-1)})-\mathrm{ClipFloor}(\mathbf{W}^{(l)}\bar{\mathbf{s}}^{(l-1)})\\ &=\mathbf{e}^{(l)}_{r}+\mathbf{e}^{(l)}_{c}\end{split} \tag{2}\] where \(\mathbf{e}^{(l)}_{r}\) and \(\mathbf{e}^{(l)}_{c}\) have the same meaning as in Eq. 1. Then we can also use the weighted sum of the local conversion errors \(\mathbf{e}^{(l)}_{c}\) to bound the total conversion error \(\mathbf{e}^{(n)}\) by the following theorem. **Theorem 1**: _For any activation function whose values and first-order derivative can be uniformly approximated by piecewise linear functions, and whose derivatives up to second order are bounded, the conversion error in the final network output space can be bounded by a weighted sum of local conversion errors, given by_ \[\mathbf{e}^{(n),\top}\mathbf{H}^{(n)}\mathbf{e}^{(n)}\leq\sum_{l=1}^{n}2^{n-l+1}\mathbf{e}^{(l),\top}_{c}(\mathbf{H}^{(l)}+K_{L}^{(l)}\sqrt{L}\mathbf{I})\mathbf{e}^{(l)}_{c} \tag{3}\] _where \(L\) is the training loss._ We present the detailed proof in the Appendix. The main technique is to find piecewise linear functions that approximate the smooth activation function. The total conversion error is therefore bounded from above by the local conversion errors. To achieve more accurate conversion performance, we can minimize the total conversion error by minimizing the local conversion error layerwise, which can be easily implemented as in [49]. We enlarge the last-layer threshold to preserve the maximum value of the output. The details are discussed in the Appendix. ## 4 Results ### Function regression We first present an example of function regression: using neural networks to approximate the \(\sin\) function. For the training dataset, the input is a set of uniform mesh points on \([-\pi,\pi]\), and the output is the values of \(\sin\) at these points. The ANN model has two intermediate layers, each with 40 neurons. The activation function is \(\tanh\) for all layers except the last, which has no activation. The network is trained with the Adam optimizer until the training error is less than \(10^{-7}\). Then, we convert the ANN to an SNN with advanced calibration for different numbers of time steps. The results are shown in Figure 2. To further investigate the impact of \(T\) on the conversion error, we train networks with different numbers of intermediate layers \(L=2,3,4\) and neurons per layer \(N=20,40,60,80,100\). All the other setups are the same. We obtain the results shown in Figure 3. We find that the conversion error decreases with larger \(T\) (more specifically, the conversion error scales as \(1/T\)) when \(T<32\), but it becomes stable or larger for \(T\geq 32\). For a neural network with fixed depth, a larger layer width usually leads to a smaller conversion error. However, for a fixed layer width, deeper neural networks do not bring significantly better conversion performance.
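A minimal sketch of the ANN half of this experiment is given below: a two-hidden-layer tanh MLP fit to \(\sin\) on uniform mesh points of \([-\pi,\pi]\) with Adam. The subsequent conversion and calibration step follows Section 3 and is not reproduced here; the learning rate and mesh size are illustrative assumptions.

```python
import torch

# Sketch of the ANN half of the sin-regression experiment; the SNN
# conversion/calibration step of Section 3 is not reproduced here.
x = torch.linspace(-torch.pi, torch.pi, 256).reshape(-1, 1)  # uniform mesh
y = torch.sin(x)
net = torch.nn.Sequential(
    torch.nn.Linear(1, 40), torch.nn.Tanh(),
    torch.nn.Linear(40, 40), torch.nn.Tanh(),
    torch.nn.Linear(40, 1))                                  # no final activation
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(200_000):
    opt.zero_grad()
    loss = (net(x) - y).pow(2).mean()
    loss.backward()
    opt.step()
    if loss.item() < 1e-7:                                   # training criterion
        break
```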
To validate Theorem 1, we need to compute \(\mathbf{e}^{(l),\top}\mathbf{H}^{(l)}\mathbf{e}^{(l)}\) for \(l=1,2,\dots,n\). Since the \(\mathbf{H}^{(l)}\) are intractable, we replace them with the identity matrix and obtain qualitative results; that is, we compute \(\left\|\mathbf{e}^{(l)}\right\|^{2}\) instead of \(\mathbf{e}^{(l),\top}\mathbf{H}^{(l)}\mathbf{e}^{(l)}\), so the computed RHS is \(\sum_{l=1}^{n}2^{n-l+1}\mathbf{e}^{(l),\top}\mathbf{e}^{(l)}\). We train the ANN with layer numbers \(L=2,3,4\) and \(100\) neurons per layer. All the other setups are the same. The results are shown in Figure 4. Although the RHS term is not exactly the same as in Theorem 1, the trend agrees with our statement that the conversion error decreases as the RHS does. Figure 4: The number of layers is \(L=2,3,4\) from left to right. The error here refers to \(\mathbf{e}^{(n)}\). RHS is defined as before. The total conversion error is smaller than the computed RHS and decreases with the RHS as well, which is an empirical validation of Theorem 1. Figure 3: The number of layers is \(L=2,3,4\) from left to right. The error here refers to \(\mathbf{e}^{(n)}\). When \(T\leq 64\), the conversion error follows \(\mathbf{e}^{(n)}\approx 1/T\); \(\mathbf{e}^{(n)}\) stabilizes once \(T\) is very large. Figure 2: Results of converting an ANN, trained to approximate \(\sin\), to an SNN. The number of time steps is \(T=8,32,128\) from left to right. The output of the SNN is close to the ground truth, which is the output of the ANN. With increasing \(T\), the conversion error becomes smaller. When \(T=128\), the SNN output curve is almost smooth and similar to the ground truth. ### PINNs To show the power of SNN conversion in regression tasks, we train PINNs, MLP-based neural networks which can solve PDEs, and convert them to SNNs. We present results for the Poisson equation, the diffusion-reaction equation, the wave equation, the Burgers equation, and the Navier-Stokes equations. Poisson equation. The Poisson equation is often used to describe the potential field in electromagnetic theory. Here we solve the following boundary value problem of the Poisson equation \[\begin{split}-\Delta u(x)&=1,\quad x\in\Omega=[-1,1]\times[-1,1]\\ u(x)&=0,\quad x\in\partial\Omega\end{split} \tag{4}\] with a PINN. The network has \(3\) intermediate layers, each of which has 100 neurons. The activation function is \(\tanh\) except for the last layer. The network is trained for 50,000 epochs. Then we convert it into an SNN. The results are shown in Figure 5. We observe that the SNN converted with calibration achieves the same magnitude of error as the PINN evaluated as an ANN, while the SNN converted without calibration recovers only a rough shape of the solution and has a much larger error. Diffusion-reaction equation. This equation models reactive transport in physical and biological systems. Here we use a PINN to solve the diffusion-reaction equation with the following initial condition: \[\begin{split}u_{t}-u_{xx}&=ku^{2},\quad x\in\Omega=[-1,1]\\ u(x,0)&=\exp\!\left(-\frac{x^{2}}{2\sigma^{2}}\right)\end{split} \tag{5}\] where \(k=1\), \(\sigma=0.25\), up to time \(T=0.01\). The network has \(3\) intermediate layers, each of which has 100 neurons. The activation function is \(\tanh\) except for the last layer. The network is trained for 100,000 epochs. Then we convert it into an SNN. The results are shown in Fig. 6. Figure 5: Poisson equation: The results of converting a PINN solving the Poisson equation (4). Figure 5a is the reference solution. Figure 5b is the PINN result.
Figure 5c is the result of the SNN converted from the PINN without using calibration. Figure 5d is the result of the SNN converted from the PINN using calibration. Here the L2 error and relative error are defined as \(\left\|\mathbf{x}^{(n)}-\mathbf{s}^{(n)}\right\|_{2}\) and \(\left\|\mathbf{x}^{(n)}-\mathbf{s}^{(n)}\right\|_{2}/\left\|\mathbf{x}^{(n)}\right\|_{2}\), where \(\mathbf{x}^{(n)}\) is the reference solution and \(\mathbf{s}^{(n)}\) is the neural network output. \(\left\|\cdot\right\|_{2}\) is the \(l^{2}\) norm, i.e., the root mean square error. Figure 6: Reaction-diffusion equation: The results of converting a PINN solving the nonlinear heat equation (5). Figure 6a is the reference solution. Figure 6b is the PINN result. Figure 6c is the result of the SNN converted from the PINN without using calibration. Figure 6d is the result of the SNN converted from the PINN using calibration. Wave equation. Here we use a PINN to solve a wave equation with the following initial and boundary conditions: \[\begin{split} u_{tt}-u_{xx}&=0,\quad x\in\Omega=[-1,1]\\ u(x,0)&=\begin{cases}1&x\in[-0.245,0.245]\\ 0&x\in[-1,-0.6]\cup[0.6,1]\\ \mathrm{linear}&\mathrm{otherwise}\end{cases}\\ u(-1,t)&=u(1,t)=0\end{split} \tag{6}\] up to time \(T=0.5\). The network has \(3\) intermediate layers, each of which has 100 neurons. The activation function is \(\tanh\) except for the last layer. The network is trained for 100,000 epochs. Then we convert it into an SNN. The results are shown in Fig. 7. Viscous Burgers equation. The Burgers equation is a prototype PDE representing nonlinear advection-diffusion that occurs in fluid mechanics. Here we solve the following problem for the viscous Burgers equation: \[\begin{split}\frac{\partial u}{\partial t}-\frac{\partial}{\partial x}(\frac{1}{2}u^{2})&=\nu\frac{\partial^{2}u}{\partial x^{2}},\quad(x,t)\in[0,2\pi]\times[0,4]\\ u(x,0)&=\sin(x)\end{split} \tag{7}\] with a PINN. The network has \(6\) intermediate layers, each of which has \(40\) neurons. The activation function is \(\tanh\) except for the last layer. The network is trained for 100,000 epochs. Then we convert it into an SNN. The results are shown in Fig. 8. Here we find that conversion without calibration does not give the correct position of the steep gradient, as the position drifts to the left (see Figure 8c), while the conversion with calibration keeps the position of the steep gradient correct, which is important for the physics. Due to the discontinuous nature of SNNs, the conversion results are not smooth. However, we can still apply filters to smooth the outputs. For example, we apply the FFT to the conversion results and remove the high frequencies; the results are shown in Figure 9. After the smoothing, the conversion error becomes lower, so this filtering can be adopted as a postprocessing procedure (a minimal sketch of it follows the figure captions below). Figure 8: Burgers equation: The results of converting a PINN solving the viscous Burgers equation (7). Figure 8a is the reference solution. Figure 8b is the PINN result. Figure 8c is the result of the SNN converted from the PINN without using calibration. Figure 8d is the result of the SNN converted from the PINN using calibration. Figure 7: Wave equation: The results of converting a PINN solving the wave equation (6). Figure 7a is the reference solution. Figure 7b is the PINN result. Figure 7c is the result of the SNN converted from the PINN without using calibration. Figure 7d is the result of the SNN converted from the PINN using calibration.
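As referenced above, here is a minimal sketch of the FFT postprocessing: transform the SNN output, zero the high-frequency modes, and transform back. The number of retained modes is an illustrative choice, not the value behind Figure 9.

```python
import numpy as np

# Sketch of the FFT smoothing described above; `keep` is an assumption.
def fft_smooth(u, keep=10):
    coeffs = np.fft.rfft(u)
    coeffs[keep:] = 0.0                    # drop high-frequency content
    return np.fft.irfft(coeffs, n=len(u))

u_snn = np.sin(np.linspace(0, 2*np.pi, 256)) + 0.05 * np.random.randn(256)
u_smooth = fft_smooth(u_snn)               # staircase artifacts filtered out
```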
### Accelerating training with separable PINNs (SPINNs) A current limitation of PINN-SNN conversion is the computational resources it requires, especially for training the ANN. Herein, we highlight the effectiveness of using the separable physics-informed neural networks to spiking neural networks (SPINN-SNN) conversion. The SPINN-SNN conversion pipeline enhances the speed of operations and provides great computational efficiency compared to the direct application of PINN. We implement the SPINN-SNN conversion to address two PDEs: a two-dimensional viscous Burgers equation and a three-dimensional unsteady Beltrami flow. We provide a comparative analysis of the training time taken by standard PINN and the SPINN-SNN conversion pipeline. Experimental results reveal that the application of SPINN-SNN conversion greatly enhances the speed of the training, particularly when solving high dimensional PDEs. Figure 10 displays a runtime comparison between SPINN-SNN conversion and PINN, applied to two and three-dimensional problems. When addressing the two-dimensional Burgers equation, the SPINN-SNN conversion process is approximately 1.7 times faster than PINN. The superiority of SPINN-SNN conversion becomes more apparent while solving the three-dimensional Beltrami flow problem, which is over 60 times faster than PINN. Notably, the time necessary for the SNN calibration remains relatively constant, regardless of the problem dimensionality. This indicates that the benefits of the SPINN-SNN conversion pipeline become increasingly prominent with the rise in the dimensionality of the problem. Viscous Burgers equationIn order to conduct a fair comparison between the PINN-SNN and SPINN-SNN conversions, the setup of the viscous Burgers equation is kept consistent with that described in Equation 7, and the same set of hyperparameters is utilized. Figure 11 presents the results of converting a SPINN to solve the Burgers' equation. The SPINN has individual subnetworks for each independent variable, \(x\) and \(t\). Each of these subnetworks comprises three intermediate layers, each layer containing 40 neurons, and employs the tanh activation function, except in the last layer. As depicted in Figure 11, the conversion from SPINN provides an accuracy level comparable to that of PINN. An SNN converted with calibration achieves a significantly smaller error than one converted without calibration. Beltrami FlowThe Navier-Stokes equations are fundamental in fluid mechanics as they mathematically represent the conservation of momentum in fluid systems, and there has been significant advancement in solving Navier-Stokes flow problems using scientific machine learning methods [51; 52; 53]. The Navier-Stokes equations can be presented in two forms: the velocity-pressure (VP) form and the vorticity-velocity (VV) form. The incompressible Navier-Stokes equations, in their VP form, are as follows: \[\frac{\partial\mathbf{u}}{\partial t}+(\mathbf{u}\cdot\nabla) \mathbf{u} =-\nabla p+\frac{1}{Re}\nabla^{2}\mathbf{u} \tag{8}\] \[\nabla\cdot\mathbf{u} =0\] Figure 10: The runtimes of the SPINN-SNN conversions solving the viscous Burgers equation (7) and Beltrami flow (8). Figure 10a is the runtime for the 2D viscous Burgers equation. Figure 10b is the runtime for the 3D Beltrami flow. Figure 9: Figure 9a is the original conversion result. Figure 9b is the smoothed conversion result. We chose the spatial domain to be \(\Omega\in[-1,1]^{2}\) and time interval to be \(\Gamma\in[0,1]\). 
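The separable evaluation behind these timings can be sketched as follows: one small subnetwork per coordinate axis, combined through an outer product and a sum over a feature rank. The layer sizes and the rank are illustrative assumptions, not the exact SPINN settings.

```python
import torch

# Sketch of SPINN's separable evaluation: per-axis subnetworks combined by
# an outer product and a sum over a feature rank r. Sizes are assumptions.
r = 16
def axis_net():
    return torch.nn.Sequential(torch.nn.Linear(1, 50), torch.nn.Tanh(),
                               torch.nn.Linear(50, 50), torch.nn.Tanh(),
                               torch.nn.Linear(50, r))
net_x, net_t = axis_net(), axis_net()

x = torch.linspace(0, 1, 128).reshape(-1, 1)   # N_x axis points
t = torch.linspace(0, 1, 64).reshape(-1, 1)    # N_t axis points
fx, ft = net_x(x), net_t(t)                    # (N_x, r) and (N_t, r)
u = torch.einsum('ir,jr->ij', fx, ft)          # full (N_x, N_t) grid from only
                                               # N_x + N_t network evaluations
```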
In eq. 8, \(t\) is non-dimensional time, \(\mathbf{u}(\mathbf{x},t)=[u,v]^{T}\) is the non-dimensional velocity in the \((x,y)\)-directions, \(p\) is the non-dimensional pressure, and the Reynolds number \(Re=\frac{U_{ref}D_{ref}}{\nu}\) is defined by the characteristic length (\(D_{ref}\)), reference velocity (\(U_{ref}\)), and kinematic viscosity (\(\nu\)). In this example, we simulate a three-dimensional unsteady laminar Beltrami flow with \(Re=1\). The analytical solution of the Beltrami flow is [54]: \[\begin{split} u(x,y,t)&=-\cos x\sin y\ e^{-2t}\\ v(x,y,t)&=\sin x\cos y\ e^{-2t}\\ p(x,y,t)&=-\frac{1}{4}(\cos 2x+\cos 2y)\ e^{-4t}\end{split} \tag{9}\] The boundary and initial conditions are extracted from this exact solution. The PINN network comprises 4 intermediate layers, each containing 128 neurons. The activation function is \(\tanh\), applied to all layers except the final layer. The network is trained for 20,000 epochs before being converted into an SNN. The SPINN consists of separate subnetworks for each independent variable, \(u\), \(v\), and \(p\). Every subnetwork has 2 intermediate layers, each consisting of 50 neurons. They all utilize the \(\tanh\) activation function, apart from the last layer. Figure 12 illustrates that the SPINN conversion offers accuracy similar to that of PINN. The error for pressure is slightly larger than the velocity errors in both the \(x\) and \(y\) axes. Nevertheless, the SNN, once calibrated and converted using SPINN, attains good accuracy and notable speed improvements in comparison to PINN. ### Firing rates of the converted SNN To demonstrate the potential efficiency of converting the ANN to the SNN, we computed the spiking rates for the different equations. The spiking rate is defined as the ratio of non-zero values in the output of each layer. Prior works have suggested that SNNs with lower spiking rates will translate to energy-efficient implementations on neuromorphic hardware. The results are shown in Table 1. We observe a spiking rate below \(0.5\) in most cases, demonstrating that the SNNs expend fewer than \(50\%\) of their network computations. \begin{table} \begin{tabular}{c|c c} \hline Equations & Number of parameters & Spiking rate \\ \hline Poisson & 20601 & 0.3727 \\ Diffusion-reaction & 20601 & 0.2879 \\ Wave & 20601 & 0.5721 \\ Burgers & 8361 & 0.7253 \\ Burgers (Separable PINN) & 15500 & 0.2831 \\ N-S (Beltrami flow, Separable PINN) & 30900 & 0.1754 \\ \hline \end{tabular} \end{table} Table 1: The spiking rate of SNNs for different equations. Figure 11: Burgers equation with Separable PINN (SPINN): The results of converting a Separable PINN solving the viscous Burgers equation (7). Figure 11a is the reference solution. Figure 11b is the Separable PINN result. Figure 11c is the result of the SNN converted from the Separable PINN without using calibration. Figure 11d is the result of the SNN converted from the Separable PINN using calibration. ## 5 Conclusion We have successfully extended the SNN calibration method proposed in [49] to a more generalized class of activation functions beyond ReLU. The original proof relied on the specific property of ReLU's second-order derivative being zero, but our approach relaxed this constraint by incorporating the training loss as an additional term in the bound. We demonstrated the effectiveness of our method through various examples, including PINN and Separable PINN [43] (a variant of PINN), where the activation functions are not ReLU.
## 5 Conclusion We have successfully extended the SNN calibration method proposed in [49] to a more generalized class of activation functions beyond ReLU. The original proof relied on the specific property of ReLU's second-order derivative being zero, but our approach relaxed this constraint by incorporating the training loss as an additional term in the bound. We demonstrated the effectiveness of our method through various examples, including PINN and Separable PINN [43] (a variant of PINN), where the activation functions are not ReLU. The results demonstrated that our approach achieved good accuracy with low spike rates, making it a promising and energy-efficient solution for scientific machine learning. By enabling the conversion of a wider range of neural networks to SNNs, this method opens up new possibilities for harnessing the temporal processing capabilities and computational efficiency of SNNs in various scientific applications. The proposed approach contributes to advancing the field of spiking neural networks and their potential for practical real-world implementations in edge computing. Future research can explore further extensions of the calibration technique to other types of activation functions and investigate its performance on more complex neural network architectures. \begin{table} \begin{tabular}{c|c c} \hline \hline Equations & \(L^{2}\) error & Relative \(L^{2}\) error \\ \hline Poisson & \(7.8508\times 10^{-3}\) & \(4.7095\times 10^{-2}\) \\ Diffusion-reaction & \(2.8766\times 10^{-2}\) & \(6.3267\times 10^{-2}\) \\ Wave & \(3.8965\times 10^{-2}\) & \(6.5308\times 10^{-2}\) \\ Burgers & \(3.9884\times 10^{-2}\) & \(6.9781\times 10^{-2}\) \\ Burgers (Separable PINN) & \(6.6495\times 10^{-2}\) & \(1.1634\times 10^{-1}\) \\ N-S (Beltrami flow, Separable PINN) & \(8.2512\times 10^{-3}\) & \(4.3273\times 10^{-2}\) \\ \hline \hline \end{tabular} \end{table} Table 2: The \(L^{2}\) and relative \(L^{2}\) error of converted SNN for different equations. Figure 12: Beltrami flow with Separable PINN (SPINN): The results of converting a SPINN solving the Beltrami flow problem (8). Figure 12(a,e,i) are the reference solutions for \(\mathbf{u},\mathbf{v},\mathbf{p}\) respectively. Figure 12(b,f,j) are the SPINN results. Figure 12(c,g,k) are the PINN results. Figure 12(d,h,l) are the results of the SNN converted from the SPINN with calibration. ## 6 Acknowledgements This work was supported in part by the DOE SEA-CROGS project (DE-SC0023191), the ONR Vannevar Bush Faculty Fellowship (N00014-22-1-2795), CoCoSys, a JUMP2.0 center sponsored by DARPA and SRC, and the National Science Foundation CAREER Award.
2303.00105
Scalability and Sample Efficiency Analysis of Graph Neural Networks for Power System State Estimation
Data-driven state estimation (SE) is becoming increasingly important in modern power systems, as it allows for more efficient analysis of system behaviour using real-time measurement data. This paper thoroughly evaluates a phasor measurement unit-only state estimator based on graph neural networks (GNNs) applied over factor graphs. To assess the sample efficiency of the GNN model, we perform multiple training experiments on various training set sizes. Additionally, to evaluate the scalability of the GNN model, we conduct experiments on power systems of various sizes. Our results show that the GNN-based state estimator exhibits high accuracy and efficient use of data. Additionally, it demonstrated scalability in terms of both memory usage and inference time, making it a promising solution for data-driven SE in modern power systems.
Ognjen Kundacina, Gorana Gojic, Mirsad Cosovic, Dragisa Miskovic, Dejan Vukobratovic
2023-02-28T22:09:12Z
http://arxiv.org/abs/2303.00105v2
Scalability and Sample Efficiency Analysis of Graph Neural Networks for Power System State Estimation ###### Abstract Data-driven state estimation (SE) is becoming increasingly important in modern power systems, as it allows for more efficient analysis of system behaviour using real-time measurement data. This paper thoroughly evaluates a phasor measurement unit-only state estimator based on graph neural networks (GNNs) applied over factor graphs. To assess the sample efficiency of the GNN model, we perform multiple training experiments on various training set sizes. Additionally, to evaluate the scalability of the GNN model, we conduct experiments on power systems of various sizes. Our results show that the GNN-based state estimator exhibits high accuracy and efficient use of data. Additionally, it demonstrated scalability in terms of both memory usage and inference time, making it a promising solution for data-driven SE in modern power systems. State Estimation, Graph Neural Networks, Machine Learning, Power Systems, Real-Time Systems ## I Introduction **Motivation and literature review:** The state estimation (SE) algorithm is a key component of the energy management system that provides an accurate and up-to-date representation of the current state of the power system. Its purpose is to estimate complex bus voltages using available measurements, power system parameters, and topology information [1]. In this sense, the SE can be seen as a problem of solving large, noisy, sparse, and generally nonlinear systems of equations. The measurement data used by the SE algorithm usually come from two sources: the supervisory control and data acquisition (SCADA) system and the wide area monitoring system (WAMS). The SCADA system provides low-resolution measurements that cannot capture system dynamics in real-time, while the WAMS system provides high-resolution data from phasor measurement units (PMUs) that enable real-time monitoring of the system. The SE problem that considers measurement data from both WAMS and SCADA systems is formulated in a nonlinear way and solved in a centralized manner using the Gauss-Newton method [1]. On the other hand, the SE problem that considers only PMU data provided by WAMS has a linear formulation, enabling faster, non-iterative solutions. In this work, we will focus on the SE considering only phasor measurements, described with a system of linear equations [2], which is becoming viable with the increasing deployment of PMUs. This formulation is usually solved using linear weighted least-squares (WLS), which involve matrix factorizations and can be numerically sensitive [3]. To address the numerical instability issues that often arise when using traditional SE solvers, researchers have turned to data-driven deep learning approaches [4, 5]. These approaches, when trained on relevant datasets, are able to provide solutions even when traditional methods fail. For example, in [4], a combination of feed-forward and recurrent neural networks was used to predict network voltages using historical measurement data. In the nonlinear SE formulation, the study [5] demonstrates the use of deep neural networks as fast and high-quality initializers of the Gauss-Newton method. Both linear WLS and common deep learning SE methods at best approach quadratic computational complexity with respect to the power system size. To fully utilize the high sampling rates of PMUs, there is a motivation to develop SE algorithms with a linear computational complexity. 
One way of achieving this could be using increasingly popular graph neural networks (GNNs) [6, 7]. GNNs have several advantages when used in power systems, such as permutation invariance, the ability to handle varying power system topologies, and requiring fewer trainable parameters and less storage space compared to conventional deep learning methods. One of the key benefits of GNNs is the ability to perform distributed inference using only local neighbourhood measurements, which can be efficiently implemented using the emerging 5G network communication infrastructure and edge computing [8]. This allows for real-time and low-latency decision-making even in large-scale networks, as the computations are performed at the edge of the network, closer to the data source, reducing the amount of data that needs to be transmitted over the network. This feature is particularly useful for utilizing the high sampling rates of PMUs, as it can reduce communication delays in PMU measurement delivery that occur in centralized SE implementations. GNNs are being applied in a variety of prediction tasks in the field of power systems, including fault location [9], stability assessment [10], and load forecasting [11]. GNNs have also been used for power flow problems, both in a supervised [12] and an unsupervised [13] manner. A hybrid nonlinear SE approach [14] combines model-based and data-based approaches using a GNN that outputs voltages which are used as a regularization term in the SE loss function. **Contributions**: In our previous work [15], we proposed a data-driven linear PMU-only state estimator based on GNNs applied over factor graphs. The model demonstrated good approximation capabilities under normal operating conditions and performed well in unobservable and underdetermined scenarios. This work significantly extends our previous work in the following ways: * We conduct an empirical analysis to investigate how the same GNN architecture could be used for power systems of various sizes. We assume that the local properties of the graphs in these systems are similar, leading to local neighbourhoods with similar structures which can be represented using the same embedding space size and the same number of GNN layers. * To evaluate the sample efficiency of the GNN model, we run multiple training experiments on different sizes of training sets. Additionally, we assess the scalability of the model by training it on various power system sizes and evaluating its accuracy, training convergence properties, inference time, and memory requirements. * As a side contribution, the proposed GNN model is tested in scenarios with high measurement variances, with which we simulate phasor misalignments due to communication delays, and the results are compared with linear WLS solutions of SE. ## II Linear State Estimation with PMUs The goal of the SE algorithm is to estimate the values of the state variables \(\mathbf{x}\), so that they are consistent with measurements, as well as the power system model defined by its topology and parameters. The power system's topology is represented by a graph \(\mathcal{G}=(\mathcal{N},\mathcal{E})\), where \(\mathcal{N}=1,\ldots,n\) is the set of buses and \(\mathcal{E}\subseteq\mathcal{N}\times\mathcal{N}\) is the set of branches. PMUs measure complex bus voltages and complex branch currents, in the form of magnitude and phase angle [16, Sec. 5.6]. PMUs placed at a bus measure the bus voltage phasor and current phasors along all branches incident to the bus [17]. 
The state variables are given as \(\mathbf{x}\) in rectangular coordinates, and therefore consist of real and imaginary components of bus voltages. The PMU measurements are transformed from the polar to the rectangular coordinate system, so that the SE problem can be formulated using a system of linear equations [15]. The solution to this sparse and noisy system can be found by solving the linear WLS problem: \[\left(\mathbf{H}^{T}\boldsymbol{\Sigma}^{-1}\mathbf{H}\right)\mathbf{x}= \mathbf{H}^{T}\boldsymbol{\Sigma}^{-1}\mathbf{z}, \tag{1}\] where the Jacobian matrix \(\mathbf{H}\in\mathbb{R}^{m\times 2n}\) is defined according to the partial first-order derivatives of the measurement functions, and \(m\) is the total number of linear equations. The observation error covariance matrix is \(\boldsymbol{\Sigma}\in\mathbb{R}^{m\times m}\), while the vector \(\mathbf{z}\in\mathbb{R}^{m}\) contains measurement values in the rectangular coordinate system. The aim of the WLS-based SE is to minimize the sum of residuals between the measurements and the corresponding values that are calculated using the measurement functions [1]. This approach has the disadvantage of requiring a transformation of measurement errors (magnitude and angle errors) from polar to rectangular coordinates, making them correlated, resulting in a non-diagonal covariance matrix \(\boldsymbol{\Sigma}\) and increased computational effort. To simplify the calculation, the non-diagonal elements of \(\boldsymbol{\Sigma}\) are often ignored, which can impact the accuracy of the SE [17]. We can use the classical theory of propagation of uncertainty to compute variances in rectangular coordinates from variances in polar coordinates [18]. The solution to (1) obtained by ignoring the non-diagonal elements of the covariance matrix \(\boldsymbol{\Sigma}\) to avoid its computationally demanding inversion is referred to as the _approximative WLS SE solution_. In the rest of the paper, we will explore whether using a GNN model trained with measurement values, variances, and covariances labelled with the exact solutions of (1) leads to greater accuracy compared to the approximative WLS SE, which ignores covariances. The GNN model, once trained, scales linearly with respect to the number of power system buses, allowing for lower computation time compared to both the approximate and exact solvers of (1). ## III Methods In this section, we introduce spatial GNNs at a high level and describe how they can be applied to the linear SE problem. ### _Spatial Graph Neural Networks_ Spatial GNNs are a class of machine learning models that process graph-structured data by iteratively applying message passing to local subsets of the graph. The goal of GNNs is to transform the inputs from each node and its connections into a higher-dimensional space, creating an \(s\)-dimensional vector \(\mathbf{h}\in\mathbb{R}^{s}\) for each node. GNNs contain \(K\) layers, with each layer representing a single iteration \(k\) of the message passing process. Each GNN layer includes trainable functions, which are implemented as neural networks, such as a message function, an aggregation function, and an update function, as shown in Fig. 1. The message function calculates the message \(\mathbf{m}_{i,j}\in\mathbb{R}^{u}\) between two node embeddings, the aggregation function combines the incoming messages in a specific way, resulting in an aggregated message \(\mathbf{m_{j}}\in\mathbb{R}^{u}\), and the update function calculates the update to each node's embedding. The message passing process is repeated a fixed number of times, with the final node embeddings passed through additional neural network layers to generate predictions. GNNs are trained by optimizing their parameters using a variant of gradient descent, with the loss function being a measure of the distance between the ground-truth values and the predictions. 
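As a concrete illustration of the message/aggregation/update decomposition just described, below is a minimal sketch of one message passing iteration. The dimensions, mean aggregation, and tanh update are illustrative simplifications; the model described next uses attention-based aggregation and separate parameters per node type.

```python
import torch
import torch.nn as nn

class MessagePassingLayer(nn.Module):
    """One message passing iteration: trainable message, aggregation,
    and update functions (a simplified sketch of the layer in Fig. 1)."""

    def __init__(self, s: int, u: int):
        super().__init__()
        # two-layer feed-forward message function: m_ij in R^u
        self.message = nn.Sequential(nn.Linear(2 * s, u), nn.ReLU(), nn.Linear(u, u))
        # update function maps [h_j, aggregated message] back to R^s
        self.update = nn.Linear(s + u, s)
        self.u = u

    def forward(self, h, edges):
        # h: (num_nodes, s) node embeddings; edges: iterable of (i, j) pairs
        agg = torch.zeros(h.size(0), self.u)
        deg = torch.zeros(h.size(0), 1)
        for i, j in edges:
            m_ij = self.message(torch.cat([h[i], h[j]]))  # message from i to j
            agg[j] += m_ij
            deg[j] += 1
        agg = agg / deg.clamp(min=1)  # mean aggregation (simplification)
        return torch.tanh(self.update(torch.cat([h, agg], dim=-1)))
```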
### _State Estimation using Graph Neural Networks_ The proposed GNN model is designed to be applied over a graph with an SE factor graph topology [19], which consists of factor and variable nodes with edges between them. The variable nodes are used to create an \(s\)-dimensional embedding for the real and imaginary parts of the bus voltages, which are used to generate state variable predictions. The factor nodes serve as inputs for measurement values, variances, and covariances. Factor nodes do not generate predictions, but they participate in the GNN message passing process to send input data to their neighbouring variable nodes. To improve the model's representation of a node's neighbourhood structure, we use binary index encoding as input features for variable nodes. This encoding allows the GNN to better capture relationships between nodes and reduces the number of input neurons and trainable parameters, as well as training and inference time, compared to the one-hot encoding used in [15]. The GNN model can be applied to various types and quantities of measurements on both power system buses and branches, and the addition or removal of measurements can be simulated by adding or removing factor nodes. In contrast, applying a GNN to the bus-branch power system model would require assigning a single input vector to each bus, which can cause problems such as having to fill elements with zeros when not all measurements are available and making the output sensitive to the order of measurements in the input vector. Connecting the variable nodes in the \(2\)-hop neighbourhood of the factor graph topology significantly improves the model's prediction quality in unobservable scenarios [15]. This is because the graph remains connected even when simulating the removal of factor nodes (e.g., measurement loss), which allows messages to be propagated in the entire \(K\)-hop neighbourhood of the variable node. This allows for the physical connection between power system buses to be preserved when a factor node corresponding to a branch current measurement is removed. The proposed GNN for a heterogeneous graph has two types of layers: one for factor nodes and one for variable nodes. These layers, denoted as \(\mathrm{Layer^{f}}\) and \(\mathrm{Layer^{v}}\), have their own sets of trainable parameters, which allow them to learn their message, aggregation, and update functions separately. Different sets of trainable parameters are used for variable-to-variable and factor-to-variable node messages. Both GNN layers use two-layer feed-forward neural networks as message functions, single-layer neural networks as update functions, and the attention mechanism [7] in the aggregation function. Then, a two-layer neural network \(\mathrm{Pred}\) is applied to the final node embeddings \(\mathbf{h}^{K}\) of variable nodes only, to create state variable predictions. The loss function is the mean-squared error (MSE) between the predictions and the ground-truth values, calculated using variable nodes only. All trainable parameters are updated via gradient descent and backpropagation over a mini-batch of graphs. 
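The binary index encoding mentioned above admits a compact sketch: node \(i\) is represented by the bits of \(i\), so the feature width grows logarithmically with the number of variable nodes instead of linearly as with one-hot encoding. The function below is an illustrative assumption of how this could look, not the authors' implementation.

```python
import torch

def binary_index_features(num_variable_nodes: int) -> torch.Tensor:
    """Binary index encoding: row i holds the bits of i."""
    width = max(1, (num_variable_nodes - 1).bit_length())
    rows = [[(i >> b) & 1 for b in reversed(range(width))]
            for i in range(num_variable_nodes)]
    return torch.tensor(rows, dtype=torch.float32)

# e.g. binary_index_features(6) has shape (6, 3); one-hot would need width 6.
```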
The high-level computational graph of the GNN architecture specialized for heterogeneous augmented factor graphs is depicted in Figure 2. The proposed model uses an inference process that requires measurements from the \(K\)-hop neighbourhood of each node, allowing for computational and geographical distribution. Additionally, since the node degree in the SE factor graph is limited, the computational complexity for the inference process is constant. As a result, the overall GNN-based SE has a linear computational complexity, making it efficient and scalable for large networks. ## IV Numerical Results In this section, we conduct numerical experiments to investigate the scalability and sample efficiency of the proposed GNN approach. By varying the power system and training set sizes, we are able to assess the model's memory requirements, prediction speed, and accuracy and compare them to those of traditional SE approaches. Fig. 1: A GNN layer, which represents a single message passing iteration, includes multiple trainable functions, depicted as yellow rectangles. The number of first-order neighbours of the node \(j\) is denoted as \(n_{j}\). Fig. 2: Proposed GNN architecture for heterogeneous augmented factor graphs. Variable nodes are represented by circles and factor nodes are represented by squares. The high-level computational graph begins with the loss function for a variable node, and the layers that aggregate into different types of nodes have distinct trainable parameters. We use the IEEE 30-bus system, the IEEE 118-bus system, the IEEE 300-bus system, and the ACTIVSg 2000-bus system [20], with measurements placed so that measurement redundancy is maximal. For the purpose of sample efficiency analysis, we create training sets containing 10, 100, 1000, and 10000 samples for each of the mentioned power systems. Furthermore, we use validation and test sets comprising 100 samples. These datasets are generated by solving the power flow problem using randomly generated bus power injections and adding Gaussian noise to obtain the measurement values. All the data samples were labelled using the traditional SE solver. An instance of the GNN model is trained on each of these datasets. In contrast to our previous work, we use higher variance values of \(5\times 10^{-1}\) to examine the performance of the GNN algorithm under conditions where input measurement phasors are unsynchronized due to communication delays [21]. While this is usually simulated by using variance values that increase over time, as an extreme scenario we fix the measurement variances to a high value. In all the experiments, the node embedding size is set to \(64\), and the learning rate is \(4\times 10^{-4}\). The minibatch size is \(32\), and the number of GNN layers is \(4\). We use the ReLU activation function and a gradient clipping value of \(5\times 10^{-1}\). The optimizer is Adam, and we use mean batch normalization. ### _Properties of Power System Augmented Factor Graphs_ For all four test power systems, we create augmented factor graphs using the methodology described in Section III-B. Fig. 3 illustrates how the properties of the augmented factor graphs, such as average node degree, average path length, average clustering coefficient, along with the system's maximal measurement redundancy, vary across different test power systems. The average path length is a property that characterizes the global graph structure, and it tends to increase as the size of the system grows. 
However, as a design property of high-voltage networks, the other graph properties such as the average node degree, average clustering coefficient, as well as maximal measurement redundancy do not exhibit a clear trend of change with respect to the size of the power system. This suggests that the structures of local, \(K\)-hop neighbourhoods within the graph are similar across different power systems, and that they contain a similar factor-to-variable node ratio. Consequently, it is reasonable to use the same GNN architecture (most importantly, the number of GNN layers and the node embedding size) for all test power systems, regardless of their size. In this way, the proposed model achieves scalability, as it applies the same set of operations to the local, \(K\)-hop neighbourhoods of augmented factor graphs of varying sizes without having to adapt to each individual case. Fig. 3: Properties of augmented factor graphs along with the system’s measurement redundancy for different test power systems, labelled with their corresponding number of buses. ### _Training Convergence Analysis_ First, we analyse the training process for the IEEE 30-bus system with four different sizes of the training set. As mentioned in Section III-B, the training loss is a measure of the error between the predictions and the ground-truth values for data samples used in the training process. The validation loss, on the other hand, is a measure of the error between the predictions and the ground-truth values on a separate validation set. In this analysis, we used a validation set of 100 samples. The training losses for all the training processes converged smoothly, so we do not plot them for the sake of clarity. Figure 4 shows the validation losses for 150 epochs of training on four different training sets. Fig. 4: Validation losses for trainings on four different training set sizes. For smaller training sets, the validation loss decreases initially but then begins to increase, which is a sign of overfitting. In these cases, a common practice in machine learning is to select the model with the lowest validation loss value. As will be shown in Section IV-C, the separate test set results for models created using small training sets are still satisfactory. As the number of samples in the training set increases, the training process becomes more stable. This is because the model has more data to learn from and is therefore less prone to overfitting. Next, in Table I, we present the training results for the other power systems and training sets of various sizes. The numbers in the table represent the number of epochs after which either the validation loss stopped changing or began to increase. Similarly to the experiments on the IEEE 30-bus system, the trainings on smaller training sets exhibited overfitting, while others converged smoothly. For the former, the number in the table indicates the epoch at which the validation loss reached its minimum and stopped improving. For the latter, the number in the table represents the epoch when there were five consecutive validation loss changes less than \(10^{-5}\). Increasing the size of the training set generally results in a lower number of epochs until the validation loss reaches its minimum. However, the epochs until the validation loss reaches its minimum vary significantly between the different power systems. This could be due to differences in the complexity of the systems or the quality of the data used for training. \begin{table} \begin{tabular}{|c|c|c|c|} \hline **Power system** & IEEE 118 & IEEE 300 & ACTIVSg 2000 \\ \hline **10 samples** & \(61\) & \(400\) & \(166\) \\ \hline **100 samples** & \(38\) & \(84\) & \(200\) \\ \hline **1000 samples** & \(24\) & \(82\) & \(49\) \\ \hline **10000 samples** & \(12\) & \(30\) & \(15\) \\ \hline \end{tabular} \end{table} TABLE I: Epoch until validation loss minimum for various power systems and training set sizes. 
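The stopping rule used for the smoothly converging runs can be made concrete with a small sketch; the function name and the patience parameterisation are illustrative assumptions.

```python
def converged(val_losses, tol=1e-5, patience=5):
    """Declare convergence once `patience` consecutive validation loss
    changes are smaller than `tol`, as described above."""
    if len(val_losses) <= patience:
        return False
    recent = val_losses[-(patience + 1):]
    return all(abs(recent[k + 1] - recent[k]) < tol for k in range(patience))
```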
### _Accuracy Assessment_ Fig. 5 reports the mean squared errors (MSEs) between the predictions and the ground-truth values on 100-sample sized test sets for all trained models and the approximate WLS SE. These results indicate that even the GNN models trained on small datasets outperform the approximate WLS SE, except for the models trained on the IEEE 30-bus system with 10 and 100 samples. These results suggest that the quality of the GNN model's predictions and the generalization capabilities improve as the amount of training data increases, and the models with the best results (highlighted in bold) have significantly smaller MSEs compared to the approximate WLS SE. While we use randomly generated training sets in this analysis, using carefully selected training samples based on historical load consumption data could potentially lead to even better results with small datasets. Fig. 5: Test set results for various power systems and training set sizes. ### _Inference Time and Memory Requirements_ The plot in Fig. 6 shows the ratio of execution times between WLS SE and GNN SE inference as a function of the number of buses in the system. These times are measured on a test set of 100 samples. As expected, the difference in computational complexity between GNN, with its linear complexity, and WLS, with more than quadratic complexity, becomes apparent as the number of buses increases. From the results, it can be observed that GNN significantly outperforms WLS in terms of inference time on larger power systems. The number of trainable parameters in the GNN model remains relatively constant as the number of power system buses increases. The number of input neurons for variable node binary index encoding does grow logarithmically with the number of variable nodes. However, this increase is relatively small compared to the total number of GNN parameters. This indicates that the GNN approach is scalable and efficient, as the model's complexity does not significantly increase with the size of the power system being analysed. ## V Conclusions In this study, we focused on thoroughly testing a GNN-based state estimation algorithm in scenarios with large variances, and examining its scalability and sample efficiency. The results showed that the proposed approach provides good results for large power systems, with lower prediction errors compared to the approximative SE. The GNN model used in this approach is also fast and maintains constant memory usage, regardless of the size of the system. Additionally, the GNN was found to be an effective approximation method for WLS SE even with a relatively small number of training samples, particularly for larger power systems, indicating its sample efficiency. Given these characteristics, the approach is worthy of further consideration for real-world applications.
2301.13714
Recursive Neural Networks with Bottlenecks Diagnose (Non-)Compositionality
A recent line of work in NLP focuses on the (dis)ability of models to generalise compositionally for artificial languages. However, when considering natural language tasks, the data involved is not strictly, or locally, compositional. Quantifying the compositionality of data is a challenging task, which has been investigated primarily for short utterances. We use recursive neural models (Tree-LSTMs) with bottlenecks that limit the transfer of information between nodes. We illustrate that comparing data's representations in models with and without the bottleneck can be used to produce a compositionality metric. The procedure is applied to the evaluation of arithmetic expressions using synthetic data, and sentiment classification using natural language data. We demonstrate that compression through a bottleneck impacts non-compositional examples disproportionately and then use the bottleneck compositionality metric (BCM) to distinguish compositional from non-compositional samples, yielding a compositionality ranking over a dataset.
Verna Dankers, Ivan Titov
2023-01-31T15:46:39Z
http://arxiv.org/abs/2301.13714v1
# Recursive Neural Networks with Bottlenecks Diagnose (Non-)Compositionality ###### Abstract A recent line of work in NLP focuses on the (dis)ability of models to generalise compositionally for artificial languages. However, when considering natural language tasks, the data involved is not strictly, or _locally_, compositional. Quantifying the compositionality of data is a challenging task, which has been investigated primarily for short utterances. We use recursive neural models (Tree-LSTMs) with bottlenecks that limit the transfer of information between nodes. We illustrate that comparing data's representations in models with and without the bottleneck can be used to produce a compositionality metric. The procedure is applied to the evaluation of arithmetic expressions using synthetic data, and sentiment classification using natural language data. We demonstrate that compression through a bottleneck impacts non-compositional examples disproportionately and then use the bottleneck compositionality metric (BCM) to distinguish compositional from non-compositional samples, yielding a compositionality ranking over a dataset. ## 1 Introduction _Compositional generalisation_ in contemporary NLP research investigates models' ability to compose the meanings of expressions from their parts and is often investigated with artificial languages (e.g. Lake and Baroni, 2018; Hupkes et al., 2020) or highly-structured natural language data (e.g. Keysers et al., 2019). For such tasks, the **local compositionality** definition of Szabo (2012, p. 10) illustrates how meaning can be algebraically composed: "The meaning of a complex expression is determined by the meanings its constituents have _individually_ and the way those constituents are combined." In natural language, there are fragments whose meaning can be composed as with arithmetic (e.g. "the cat is in the house"), while others carry contextual dependencies (e.g. "the kiwi grows on the farm"). Can we characterise whether an input's meaning arises from strictly local compositions? Existing work in that direction mostly focuses on providing a 'compositionality rating'1 for figurative utterances since figurative language is assumed to be less compositional (Ramisch et al., 2016; Nandakumar et al., 2019; Reddy et al., 2011). Andreas (2018) suggests a general-purpose formulation for measuring the compositionality of examples using their numerical representations, through the _Tree Reconstruction Error_ (TRE), expressing the distance between a model's representation of an input and a strictly compositional reconstruction of that representation. Determining how to compute that reconstruction is far from trivial. Footnote 1: We colloquially refer to the ‘compositionality ratings’ of phrases, but a more appropriate way to express the same would be to refer to ‘the extent to which the meaning of a phrase arises from a compositional syntax and semantics’. After all, compositionality is a property of a language, not of a phrase. Inspired by TRE, we use recursive neural networks, Tree-LSTMs (Tai et al., 2015), to process inputs according to their syntactic structure. Figure 1: When processing this phrase, “the ruler” is interpreted differently when comparing recursive processing with local processing. We enforce local processing by equipping models with bottlenecks, and our **bottleneck compositionality metric (BCM)** then compares inputs’ representations before and after compression through the bottleneck. We augment Tree-LSTMs with bottlenecks to compute 
the task-specific meaning of an input in a more locally compositional manner. We use these models to distinguish more compositional examples from less compositional ones in a **bottleneck compositionality metric (BCM)**. Figure 1 provides an intuition for how a bottleneck can provide a metric. For fragments that violate the assumption that meanings of subexpressions can be computed locally (on the left side), one could end up with different interpretations when comparing a contextualised interpretation (in blue) with one locally computed (in green): disambiguating "ruler" requires postponed meaning computation, and thus local processing is likely to lead to different results from regular processing. For fragments that are non-ambiguous (on the right side), the two types of processing can yield the same interpretation because the interpretation of "pencil" is likely to be the same, with or without the context. The bottleneck hinders the model in postponing computations and more strongly impacts non-compositional samples compared to compositional ones, thus acting as a metric. In the remainder of the paper, we first discuss the related work in §2. §3 elaborates on the models used, which either apply a _deep variational information bottleneck_ (DVIB) (Alemi et al., 2017) or compress representations through increased dropout or smaller hidden dimensionalities. In §4, we provide a proof-of-concept in a controlled environment where non-compositional examples are manually introduced, after which §5 elaborates on the natural language example of sentiment analysis. For both tasks, we (1) demonstrate that compression through a bottleneck encourages local processing and (2) show that the bottleneck can act as a metric distinguishing compositional from non-compositional samples. ## 2 Related Work **Multi-word expressions** The majority of the related work in the past two decades has discussed the compositionality of phrases in the context of figurative language, such as phrasal verbs ("to eat up") (McCarthy et al., 2003), noun compounds ("cloud nine" vs "swimming pool") (Reddy et al., 2011; Yazdani et al., 2015; Ramisch et al., 2016; Nandakumar et al., 2019), verb-noun collocations ("take place" vs "take a gift") (Venkatapathy and Joshi, 2005; McCarthy et al., 2007), and adjective-noun pairs ("nice house") (Guevara, 2010). Compositionality judgements were obtained from humans, who indicated to what extent the meaning of the compound is that of the words when combined literally, and various computational methods were applied to learn that mapping. Those methods were initially thesaurus-based (McCarthy et al., 2003), relied on word vectors from co-occurrence matrices later on (Reddy et al., 2011), or employed deep neural networks (Nandakumar et al., 2019). **Compositionality by reconstruction** TRE (Andreas, 2018) is a task-agnostic metric that evaluates the compositionality of data representations: \(\text{TRE}(x)=\delta(f(x),\hat{f}_{\eta}(d))\). It is the distance between the representation of \(x\) constructed by \(f\) and the compositionally reconstructed variant \(\hat{f}_{\eta}(d)\) based on the derivation of \(x\) (\(d\)). When employing the metric, one should define an appropriate distance function (\(\delta\)) and define \(\hat{f}_{\eta}\) parametrised by \(\eta\). Andreas illustrates the TRE's versatility by instantiating it for three scenarios: to investigate whether image representations are similar to composed image attributes, whether phrase embeddings are similar to the vector addition of their components, and whether generalisation accuracy in a reference game positively correlates with TRE. 
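One of the instantiations just mentioned, treating a phrase embedding as compositional when it is close to the vector addition of its parts, can be sketched in a few lines; the additive reconstruction and cosine distance are illustrative choices for \(\hat{f}_{\eta}\) and \(\delta\), not the only ones.

```python
import numpy as np

def tre_additive(phrase_vec, part_vecs):
    """TRE(x) = delta(f(x), f_eta(d)), with the reconstruction f_eta
    instantiated as vector addition of the parts and delta as cosine
    distance; larger values indicate a less compositional phrase."""
    recon = np.sum(part_vecs, axis=0)
    cos = phrase_vec @ recon / (np.linalg.norm(phrase_vec) * np.linalg.norm(recon))
    return 1.0 - cos
```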
Bhathena et al. (2020) present two methods based on TRE to obtain compositionality ratings for sentiment trees, referred to as _tree impurity_ and _weighted node switching_, which express the difference between the sentiment label of the root and the other nodes in the tree. Zheng and Jiang (2022) ranked examples of sentiment analysis based on the extent to which neural models should _memorise_ examples in order to capture their target correctly. While different from TRE, memorisation could be related to non-compositionality in the sense that non-compositional examples require more memorisation, akin to formulaic language requiring memorisation in humans (Wray and Perkins, 2000). Other instantiations of the TRE are from literature on language emergence in signalling games, where the degree of compositionality of that language is measured. Korbak et al. (2020) contrast TRE and six other compositionality metrics for signalling games where the colour and shape of an object are communicated. Examples of such metrics are topographic similarity, positional disentanglement and context independence. These are not directly related to our work, considering that they aim to provide a metric for a _language_ rather than single utterances. Appendix B.2 elaborates on topographic similarity and the metrics of Bhathena et al. (2020) and Zheng and Jiang (2022), comparing them to our metric for sentiment analysis. **Compositional data splits** Recent work on compositional generalisation using artificial languages or highly-structured natural language data focuses on creating data splits that have systematic separation of input combinations in train and test data. The aim is to create test sets that should not be hard when computing meaning compositionally, but, in practice, are very challenging. An example compositionality metric for semantic parsing is _maximum compound divergence_ (Keysers et al., 2019; Shaw et al., 2021), which minimises train-test differences in word distributions while maximising the differences in compound usage. This only applies to a data split as a whole, and - differently from the work at hand - does not rate individual samples. More recently, Bogin et al. (2022) discussed a diagnostic metric for semantic parsing that predicts model success on examples based on their local structure. Because models struggle with systematically assigning the same meaning to subexpressions when they re-appear in new syntactic structures, such structural deviation diagnoses generalisation failures. Notice that the aim of our work is different, namely identifying examples that are _not_ compositional, rather than investigating generalisation failure for _compositional_ examples. ## 3 Model The model we employ is the _Tree-LSTM_ (Tai et al., 2015), which is a generalisation of LSTMs to tree-structured network topologies. The LSTM computes symbols' representations by incorporating previous time steps, visiting symbols in linear order. A sentence representation is simply the final time step. A Tree-LSTM, instead, uses a tree's root node representation as the sentence representation, and computes the representation of a non-terminal node using the node's children. 
Equations 1 and 2 illustrate the difference between the LSTM and an \(N\)-ary Tree-LSTM for the input gate. The LSTM computes the gate's activation for time step \(t\) using input vector \(x_{t}\) and previous hidden state \(h_{t-1}\). The Tree-LSTM does so for node \(j\) using the input vector \(x_{j}\) and the hidden states of up to \(N\) children of node \(j\). \[i_{t}=\sigma(W^{(i)}x_{t}+U^{(i)}h_{t-1}+b^{(i)}) \tag{1}\] \[i_{j}=\sigma(W^{(i)}x_{j}+\sum_{\ell=1}^{N}U_{\ell}^{(i)}h_{j\ell}+b^{(i)}) \tag{2}\] In addition to the input gate, the Tree-LSTM's specification for non-terminal \(j\) (with its \(k\)th child indicated as \(h_{jk}\)) involves an output gate \(o_{j}\) (equation analogous to 2), a forget gate \(f_{jk}\) (Equation 3), cell input activation vector \(u_{j}\) (equation analogous to 2, with the \(\sigma\) function replaced by tanh), and memory cell state \(c_{j}\) (Equation 4). Finally, \(c_{j}\) feeds into the computation of hidden state \(h_{j}\) (Equation 5). \[f_{jk}=\sigma(W^{(f)}x_{j}+\sum_{\ell=1}^{N}U_{k\ell}^{(f)}h_{j\ell}+b^{(f)}) \tag{3}\] \[c_{j}=i_{j}\odot u_{j}+\sum_{\ell=1}^{N}f_{j\ell}\odot c_{j\ell} \tag{4}\] \[h_{j}=o_{j}\odot\text{tanh}(c_{j}) \tag{5}\] We apply a _binary_ Tree-LSTM to compute hidden state \(h_{j}\) and memory cell state \(c_{j}\), which thus uses separate parameters in the gates for the left and right child. Tree-LSTMs process inputs according to their syntactic structure, which has been associated with more compositional processing (Socher et al., 2013; Tai et al., 2015). Yet, although the topology encourages compositional processing, there is no mechanism to explicitly regulate how much information is passed from children to parent nodes - e.g. given enough capacity, the hidden representations could store every input encountered and postpone processing until the very end. We add such a mechanism by introducing a **bottleneck**. **1. Deep Variational Information Bottleneck** The information bottleneck of Alemi et al. (2017) assumes random variables \(X\) and \(Y\) for the input and output, and emits a compressed representation \(Z\) that preserves information about \(Y\), by minimising the loss \(\mathcal{L}_{IB}\) in Equation 6. This loss is intractable, which motivates the variational estimate \(\mathcal{L}_{VIB}\) provided in Equation 7 (Alemi et al., 2017) that we use to train the **deep variational information bottleneck (DVIB)** version of our model. \[\mathcal{L}_{IB}=\beta I(X,Z)-I(Z,Y) \tag{6}\] \[\mathcal{L}_{VIB}=\underbrace{\beta\underset{x}{\mathbb{E}}[\text{KL}[p_{\theta}(z|x),r(z)]]}_{\text{information loss}}+\underbrace{\underset{x,y}{\mathbb{E}}\ \underset{z\sim p_{\theta}(z|x)}{\mathbb{E}}[-\log q_{\phi}(y|z)]}_{\text{task loss}} \tag{7}\] In the information loss, \(r(z)\) and \(p_{\theta}(z|x)\) estimate the prior and posterior probability over \(z\), respectively. In the task loss, \(q_{\phi}(y|z)\) is a parametric approximation of \(p(y|z)\). In order to allow an analytic computation of the KL-divergence, we consider Gaussian distributions \(r(z)\) and \(p_{\theta}(z|x)\), namely \(r(z)=\mathcal{N}(z|\mu_{0},\Sigma_{0})\) and \(p_{\theta}(z|x)=\mathcal{N}(z|\mu(x),\Sigma(x))\), where \(\mu(x)\) and \(\mu_{0}\) are mean vectors, and \(\Sigma(x)\) and \(\Sigma_{0}\) are diagonal covariance matrices. The reparameterisation trick is used to estimate the gradients: \(z=\mu(x)+\Sigma(x)\odot\epsilon\), where \(\epsilon\sim\mathcal{N}(0,I)\). We sample \(z\) once per non-terminal node, and average the KL terms of all non-terminal nodes, where \(x\) is the hidden state \(h_{j}\) or the cell state \(c_{j}\) (that have separate bottlenecks), and \(\mu(x)\) and \(\Sigma(x)\) are computed by feeding \(x\) to two linear layers. \(\beta\) regulates the impact of the DVIB, and is gradually increased during training. During inference, we use \(z=\mu(x)\).
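A minimal sketch of such a bottleneck module follows, assuming a log-variance parametrisation of the diagonal \(\Sigma(x)\) and the standard-normal prior; this illustrates the mechanism just described and is not the authors' implementation.

```python
import torch
import torch.nn as nn

class DVIBBottleneck(nn.Module):
    """Produces mu(x) and a diagonal Sigma(x) with two linear layers,
    samples z via the reparameterisation trick, and returns the analytic
    KL to the prior N(0, I)."""

    def __init__(self, dim: int):
        super().__init__()
        self.mu = nn.Linear(dim, dim)
        self.log_var = nn.Linear(dim, dim)

    def forward(self, x):
        mu, log_var = self.mu(x), self.log_var(x)
        if self.training:
            z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)
        else:
            z = mu  # inference uses z = mu(x)
        # analytic KL between N(mu, diag(exp(log_var))) and N(0, I)
        kl = 0.5 * (mu.pow(2) + log_var.exp() - 1.0 - log_var).sum(-1)
        return z, kl  # kl terms are averaged over nodes and scaled by beta
```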
**2. Dropout bottleneck** Binary **dropout** (Srivastava et al., 2014) is commonly applied when training neural models, to prevent overfitting. With a probability \(p\) hidden units are set to zero, and during the evaluation all units are kept, but the activations are scaled down. Dropout encourages distributing the most salient information over multiple neurons, which comes at the cost of idiosyncratic patterns that networks may memorise otherwise. We hypothesise that this hurts non-compositional examples most. We apply dropout to the Tree-LSTM's hidden states (\(h_{j}\)) and memory cell states (\(c_{j}\)). **3. Hidden dimensionality bottleneck** Similarly, decreasing the number of **hidden units** is expected to act as a bottleneck. We decrease the number of hidden units in the Tree-LSTM, keeping the embedding and task classifier dimensions stable, where possible. The different bottlenecks have different merits: whereas the hidden dimensionality and dropout bottlenecks shine through simplicity, they are rigid in how they affect the model and apply in the same way at every node. The DVIB allows for more flexibility in how compression is achieved through learnt \(\Sigma(x)\) and by requiring an overall reduction in the information loss term, without enforcing the same bottleneck at every node in the tree. **From bottleneck to compositionality metric** BCM compares Tree-LSTMs with and without a bottleneck. We experiment with two methods, inspired by TRE (Andreas, 2018). TRE aims to find \(\eta\) such that \(\delta(f(x),\hat{f}_{\eta}(d))\) is minimised, for inputs \(x\), their derivations \(d\), distance function \(\delta\), a model \(f\) and its compositional approximation \(\hat{f}_{\eta}\). * In the **TRE training** (BCM-TT) setup, we include the distance (\(\delta\)) between the hidden representations of \(f\) and \(\hat{f}_{\eta}\) in the loss when training \(\hat{f}_{\eta}\). When training \(\hat{f}_{\eta}\) with TRE training, \(f\) is frozen, and \(f\) and \(\hat{f}_{\eta}\) share the final linear layer of the classification module. In the arithmetic task, \(\delta\) is the _mean-squared error_ (MSE) (i.e. the squared Euclidean distance). In sentiment analysis, \(\delta\) is the Cosine distance function. * In the **post-processing** (BCM-PP) setup, we train the two models separately, extract hidden representations and apply _canonical correlation analysis_ (CCA) (Hotelling, 1936) to minimise the distance between the sets of hidden representations. Assume matrices \(A\in\mathcal{R}^{d_{A}\times N}\) and \(B\in\mathcal{R}^{d_{B}\times N}\) representing \(N\) inputs with dimensionalities \(d_{A}\) and \(d_{B}\). CCA linearly transforms these subspaces \(A^{\prime}=WA\), \(B^{\prime}=VB\) to maximise the correlations \(\{\rho_{1},\ldots,\rho_{\text{min}(d_{A},d_{B})}\}\) of the transformed subspaces. We treat the number of CCA dimensions to use as a hyperparameter.
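A minimal sketch of the BCM-PP comparison, using scikit-learn's CCA as a stand-in for the canonical correlation analysis described above; the component count is the hyperparameter mentioned in the text, and all names are illustrative.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def bcm_pp_distances(base_reps, bottleneck_reps, n_components=25):
    """Align the two sets of hidden representations with CCA, then score
    each example by the cosine distance of its transformed vectors;
    a larger distance suggests a less compositional example."""
    cca = CCA(n_components=n_components)
    a, b = cca.fit_transform(base_reps, bottleneck_reps)  # (N, d) each
    cos = (a * b).sum(1) / (np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1))
    return 1.0 - cos

# ranking = np.argsort(bcm_pp_distances(A, B))  # most compositional first
```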
## 4 Proof-of-concept: Arithmetic Given a task, we assign ratings to inputs that express to what extent their task-dependent meaning arises in a locally compositional manner. To investigate the impact of our metric on compositional and non-compositional examples in a controlled environment, we first use perfectly compositional arithmetic expressions and introduce exceptions to that compositionality manually. ### Data and model training Math problems have previously been used to examine neural models' compositional reasoning (e.g. Saxton et al., 2018; Hupkes et al., 2018; Russin et al., 2021). Arithmetic expressions are suited for our application, in particular since they can be represented as trees. We use expressions containing brackets, integers -10 to 10, and + and - operators - e.g. "( 10 - ( 5 + 3 ))" (using data from Hupkes et al., 2018). The output is an integer. This is modelled as a regression problem with the MSE loss. The 'meaning' (the numerical value) of a subexpression can be locally computed at each node in the tree: there are no contextual dependencies. In this controlled environment, we introduce exceptions by making "\(\emptyset\)" ambiguous. When located in the subtree headed by the root node's left child, it takes on its regular value, but when located in the right subtree, it takes on the value of the leftmost leaf node of the entire tree (see Figure 2). The model is thus encouraged to perform non-compositional processing to keep track of all occurrences of "\(\emptyset\)" and store the first leaf node's value throughout the tree. 88% of the training data are the original arithmetic expressions, and 12% are such exceptions. We can thus track what happens to the two categories when we introduce the bottleneck. The training data consist of 14903 expressions with 1 to 9 numbers. We test on expressions with lengths 5 to 9, using 5000 examples per length. The Tree-LSTMs trained on this dataset have embeddings and hidden states of size 150 and are trained for 50 epochs with learning rate \(2\mathrm{e}{-4}\), AdamW, and a batch size of 32. The base Tree-LSTMs in all setups use the same architecture, namely the Tree-LSTM architecture required for the DVIB, but with \(\beta=0\). All results are averaged over models trained using ten different random seeds. In the Tree-LSTM, the numbers are leaf nodes and the labels of non-terminal nodes are the operators.2 Footnote 2: Appendix C further elaborates on the experimental setup. ### Task performance: Hierarchy without compositionality? Figures 3a and 3b visualise the performance for the regular examples and exceptions, respectively, when increasing \(\beta\) for the DVIB. The DVIB disproportionately harms the exceptions; when \(\beta\) is too high the model cannot capture the non-local dependencies. Appendix A.1 shows how the hidden dimensionality and dropout bottlenecks have a similar effect. Figure 4 and Appendix A.2 provide insights into the training dynamics of the models: initially, all models will treat "\(\emptyset\)" as a regular number, independent of the bottleneck. Close to convergence, models trained with a low \(\beta\) have learnt to capture the ambiguities, whereas models trained with a higher \(\beta\) will remain in a more locally compositional state.3 Footnote 3: Comparing ‘early’ and ‘late’ models may yield similar results as comparing base and bottleneck models. Yet, without labels of which examples are compositional, it is hard to know when the model transitions from the early to the late stage.
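The exception rule of the adapted dataset can be made precise with a small sketch; the nested-tuple tree encoding and treating the ambiguous token as the numeral 0 are illustrative assumptions.

```python
def leftmost_leaf(tree):
    while isinstance(tree, tuple):
        tree = tree[0]  # tree = (left, op, right)
    return tree

def evaluate(tree):
    """Fully compositional target: every subexpression is computed locally."""
    if not isinstance(tree, tuple):
        return tree
    left, op, right = tree
    l, r = evaluate(left), evaluate(right)
    return l + r if op == "+" else l - r

def evaluate_with_exception(tree):
    """Adapted target: a 0 in the subtree headed by the root's right child
    takes on the value of the leftmost leaf of the entire tree."""
    if not isinstance(tree, tuple):
        return tree
    left, op, right = tree
    first_leaf = leftmost_leaf(tree)

    def ev(t, replace):
        if not isinstance(t, tuple):
            return first_leaf if (replace and t == 0) else t
        l, r = ev(t[0], replace), ev(t[2], replace)
        return l + r if t[1] == "+" else l - r

    l, r = ev(left, False), ev(right, True)
    return l + r if op == "+" else l - r

# evaluate(((2, "-", -4), "-", (0, "+", 3)))                 # 6 - 3 = 3
# evaluate_with_exception(((2, "-", -4), "-", (0, "+", 3)))  # 6 - (2 + 3) = 1
```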
Bottlenecks restrict information passed throughout the tree. To process an arithmetic subexpression, all that is needed is to pass on its outcome, not the subexpression itself - e.g. in Figure 2, one could simply store the value 6 instead of the subexpression "2 - -4". The former represents _local processing_ and is more efficient (i.e. it requires storing less information), while the latter leads to information from the leaf nodes being passed to non-terminal nodes higher up the tree. Storing information about the leaf nodes would be required to cope with the exceptions in the data. That the model could get close to accurate predictions for these exceptions in the first place suggests Tree-LSTMs can process inputs according to the hierarchical structure without being locally compositional. Increasing compression using bottlenecks enforces local processing. Figure 4: Training dynamics for the Tree-LSTM with the DVIB: for all test examples we compute the MSE over the course of training on the validation set using (a) the compositional targets, or (b) the targets from the adapted dataset, of which a subset is not compositional. Figure 3: Performance (MSE) on the arithmetic task for the Tree-LSTM with the DVIB (darker colours correspond to higher \(\beta\)). Exceptions have a contextual dependency and cannot be computed bottom up. Figure 2: Illustration of the ‘exceptions’ in the arithmetic task: the value of “\(\emptyset\)” depends on its position and on the value of the leftmost leaf node in the tree. ### The Bottleneck Compositionality Metric The bottleneck Tree-LSTM harms the exceptions disproportionately in terms of task performance, and through BCM we can exploit the difference between the base and bottleneck model to distinguish compositional from non-compositional examples. As laid out in §3, we use the TT or PP method to compare pairs of Tree-LSTMs: the base Tree-LSTM with \(\beta=0\), no dropout and a hidden dimension of 150 (**base model**) is paired up with Tree-LSTMs with the same architecture, but a different \(\beta\), a different dropout probability \(p\) or a different hidden dimensionality \(d\) (**bottleneck model**). All Tree-LSTMs have a classification module that consists of two linear layers, where the first layer maps the Tree-LSTM's hidden representation to a vector of 100 units, and the second layer emits the predicted value. The 100-dimensional vector is used to apply the BCM: * In BCM-PP the vector feeds into the CCA computation, which compares the hidden representation of the base model and the bottleneck model using their Cosine distance. We rank examples according to that distance, and use all CCA directions. * In BCM-TT, the vector feeds into the TRE loss component. We train the base model, freeze that network, and add the MSE of the hidden representations of the bottleneck model and the base model to the loss. After training, the remaining MSE is used to rank examples. Both setups have the same output, namely a compositionality ranking of examples in a dataset. A successful ranking would put the exceptions last. Figure 5a illustrates the relative position of regular examples and exceptions for all bottlenecks and BCM variants, for \(\beta=0.25\), \(p=0.5\) and \(d=25\). The change in MSE observed in §4.2 is reflected in the quality of the ranking, but the success does not depend on the specific selection of \(\beta\), \(p\) or \(d\), as long as they are large (\(\beta\), \(p\)) or small enough (\(d\)). Figure 5b illustrates one of the rankings. Figure 5: Rankings of arithmetic examples. (a) shows the relative position of regular examples and exceptions in the rankings of all setups, where 0 corresponds to the start of the ranking and 1 to the end. (b) illustrates the result of BCM-TT with the DVIB, \(\beta=0.25\). Summarising, we illustrated that recursive models can employ strategies that do not locally compose the meaning of arithmetic subexpressions but carry tokens' identities throughout the tree. We can make a model more locally compositional using bottlenecks and can use a model's hidden states to infer which examples required non-local processing afterwards, acting as our compositionality metric. 
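The relative positions reported in Figure 5a can be computed in a few lines; names are illustrative.

```python
def mean_relative_position(ranking, category_ids):
    """Average position of a category on the ranking, normalised so that
    0 is the most compositional end and 1 the least compositional end."""
    n = len(ranking) - 1
    pos = {ex: i / n for i, ex in enumerate(ranking)}
    return sum(pos[ex] for ex in category_ids) / len(category_ids)

# A successful ranking puts the exceptions last:
# mean_relative_position(ranking, exception_ids)  # close to 1.0
```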
## 5 Sentiment analysis We apply the metric to the task of sentiment analysis, for which Moilanen and Pulman (2007, p. 1) suggest the following notion of compositionality: "For if the meaning of a sentence is a function of the meanings of its parts then the global polarity of a sentence is a function of the polarities of its parts." Sentiment is quasi-compositional: even though the sentiment of an expression is often a straightforward function of the sentiment of its parts, there are exceptions - e.g. consider cases of sarcasm, such as "I love it when people yell at me first thing in the morning" (Barnes et al., 2019), which makes it a suitable testing bed. ### Data and model training We use the SST-5 subtask from the Stanford Sentiment Treebank (SST) (Socher et al., 2013), that contains sentences from movie reviews collected by Pang and Lee (2005). The SST-5 subtask requires classifying sentences into one of five classes ranging from very negative to very positive. The standard train, development and test subsets have 8544, 1101 and 2210 examples, respectively. The sentences were originally parsed with the Stanford Parser (Klein and Manning, 2003), and the dataset includes sentiment labels for all nodes of those parse trees. Typically, labels for all phrases are included in training, but the evaluation is conducted for the root nodes of the test set, only. Following Tai et al. (2015), we use GloVe word embeddings (Pennington et al., 2014), that we freeze across models.4 The Tree-LSTM has 150 hidden units and is trained for 10 epochs with a learning rate of \(2\mathrm{e}{-4}\) and the AdamW optimiser. During each training update, the loss is computed over all subexpressions of 4 trees. Training is repeated with 10 random seeds.5 Figure 6 provides the performance on the test set for the base and bottleneck models, using the accuracy and the macro-averaged \(F_{1}\)-score. Tai et al. (2015) obtained an accuracy of 0.497 using frozen embeddings. Figure 6: The accuracy (solid) and macro-averaged \(F_{1}\)-scores (dashed) for the SST test set, for base models, bottleneck models and a sentiment-only baseline. Footnote 4: The notion of local compositionality, relied on in this work, assumes that tokens are not disambiguated, which is why we refrain from using state-of-the-art contextualised representations. The focus of this work is on developing a compositionality metric rather than on improving sentiment analysis. Footnote 5: Appendix C further elaborates on the experimental setup. 
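The evaluation just reported (accuracy and macro-averaged \(F_{1}\) over root-node labels) could be computed as follows; the use of scikit-learn is an assumption for illustration.

```python
from sklearn.metrics import accuracy_score, f1_score

def evaluate_roots(y_true, y_pred):
    """SST-5 test evaluation on root-node labels, as reported in Figure 6."""
    return {"accuracy": accuracy_score(y_true, y_pred),
            "macro_f1": f1_score(y_true, y_pred, average="macro")}
```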
Being successful at the task thus requires a recursive model to keep track of information about (leaf) nodes while recursively processing the input, and more so for non-compositional examples than for compositional examples. As with the arithmetic task, local processing - enforced in the bottleneck models - should disproportionately hinder processing of non-compositional examples. ### A sentiment-only baseline To assert that the bottlenecks make the models more compositional, we create a sentiment-only baseline that is given as input not words but their sentiment, and has a hidden dimensionality \(d=25\). Non-compositional patterns that arise from the composition of certain words rather than the sentiment of those words (e.g. "drop dead gorgeous") could hardly be captured by that model. As such, the model exemplifies how sentiment can be computed more compositionally. Figure 7 illustrates the default sentiment this model predicts for various input combinations. Its predictions can be characterised by i) predicting **positive** for positive inputs, ii) predicting **negative** for negative inputs, iii) predicting **neutral** if one input is neutral, iv) predicting a class **in between** the input classes or as v) predicting the same class as its inputs (**continuity**). The performance of the model is included in Figure 6, and Figure 8 (a-c) indicates the Pearson's \(r\) between the sentiment predictions of bottleneck models and baseline models. Generally, a higher \(\beta\) or dropout probability, or a lower hidden dimen Figure 8: Pearson’s \(r\) for the predictions of sentiment-only baselines and bottleneck models (a-c) and Spearman’s \(\rho\) for the SST validation set compositionality ranking of the baselines and bottleneck models (d-f), when varying the number of CCA dimensions used. Figure 7: Illustration of the predictions of a sentiment-only baseline model. We indicate the predicted sentiment given two inputs. The labels range from very negative (‘- -’) to neutral (‘\(\sim\)’) to very positive (‘++’). sionality, leads to predictions that are more similar to this sentiment-only model, unless the amount of regularisation is too extreme, debilitating the model (e.g. for dropout with probability 0.9). ### The Bottleneck Compositionality Metric Now we use BCM to obtain a ranking over the SST dataset. We separate the dataset into four folds, train on those folds for the base and bottleneck model, and compute the cosine distances for the hidden representations of examples in the test sets (using BCM-PP or BCM-TT). We merge the cosine distances for the different folds, averaged over models trained with 10 random seeds, and order examples based on the resulting distance. We select the values for \(\beta\), dropout and the hidden dimensionality, as well as the number of CCA directions to use, based on rankings computed over the SST validation data. Figure 8 (d-f) illustrates how the rankings of bottleneck models correlate with rankings constructed using the sentiment-only baseline. We select 25 directions, \(\beta=0.0025\), dropout \(p=0.65\) and a hidden dimensionality of 25 to compute the full ranking. Contrary to the arithmetic task, BCM-TT underperforms compared to BCM-PP. Different from the arithmetic task, it is unclear which examples _should_ be at the start or end of the ranking. Therefore, we examine the relative position of categories of examples in Figure 9 for the BCM-PP with the hidden dimensionality bottleneck, and in Appendix B.3 for the remaining rankings. 
Contrary to the arithmetic task, BCM-TT underperforms compared to BCM-PP. Moreover, unlike for arithmetic, it is unclear which examples _should_ be at the start or end of the ranking. Therefore, we examine the relative position of categories of examples in Figure 9 for the BCM-PP with the hidden dimensionality bottleneck, and in Appendix B.3 for the remaining rankings. The categories include the previously introduced ones, augmented with the following four: * **amplification**: the root is even more positive/negative than its top two children; * **attenuation**: the root is less positive/negative than its top two children; * **switch**: the children are positive/negative, but the root node flips that sentiment; * **neutral\(\leftrightarrow\)polarised**: the inputs are sentiment-laden, but the root is neutral, or vice versa. We also include characterisations from Barnes et al. (2019), who label examples from the SST test set for which state-of-the-art sentiment models cannot seem to predict the correct label, including, for example, cases where a sentence contains mixed sentiment, or sentences with idioms, irony or sarcasm. Appendix B.1 elaborates on the meaning of the categories. Figure 9 illustrates the relative positions of our sentiment characterisations and those of Barnes et al. on that ranking. Patterns associated with more compositional sentiment processing, such as 'positive', 'negative' and 'in between', lead to hidden representations that are more similar between the base model and bottleneck models than the dataset average (the mid point, 0.5). Atypical patterns like 'switch' and 'neutral\(\leftrightarrow\)polarised', on the other hand, along with the characterisations by Barnes et al., lead to less similar hidden representations. Appendix B.3 presents the same results for all six rankings considered, along with example sentences from across the ranking, to illustrate the types of sentences encountered among the most compositional and the least compositional ones. Figure 9: Categories of SST examples and their average position on the compositionality ranking, visualised for the BCM-PP with the hidden dimensionality bottleneck and \(d=25\). Categories in black are assigned by us; categories in gray are from Barnes et al. (2019). The categories are further explained in the main text and Appendix B.1. Jittering was applied to better visualise overlapping categories. ### Example use cases Compositionality rankings can be used in multiple manners, of which we illustrate two below. **When data is scarce: use compositional examples** Assuming that most of natural language _is_ compositional, one would expect that when limiting the training data, selecting compositional examples yields the best test performance. To investigate this, we train models on various training dataset sizes, and evaluate with the regular test set. The training data is taken from the start of the ranking for the 'compositional' setup, and from the end of the ranking for the 'non-compositional' setup (excluding test data), while ensuring equal distributions over input lengths and output classes. We train a two-layer bidirectional LSTM with 300 hidden units, and Roberta-base (Liu et al., 2019), using batch size 4. The models are trained for 10 and 5 epochs, respectively, with learning rates \(2\mathrm{e}{-4}\) and \(5\mathrm{e}{-6}\). Because the ranking is computed over full sentences, and not subexpressions, we train the models on the sentiment labels for the root nodes. Figure 10 presents the results, confirming that when data is scarce, using compositional examples is beneficial. **Non-compositional examples are challenging** For the same models, Table 1 indicates how performance changes if we redistribute train and test data such that the test set contains the most compositional examples, or the least compositional ones (keeping length and class distributions similar). The non-compositional test setup is more challenging, with an 11 percentage point drop in accuracy for the LSTM, and a 3 point decrease for Roberta.
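Both use cases draw examples from one end of the ranking while keeping the length and class distributions close to those of the full dataset. A minimal sketch of such a stratified selection (the function, and the coarse length bins, are our own choices):

```python
import numpy as np
from collections import defaultdict

def select_from_ranking(ranking, lengths, labels, n, compositional=True):
    """Pick n examples from one end of the ranking, with per-stratum quotas
    proportional to the joint (length-bin, label) distribution."""
    order = ranking if compositional else ranking[::-1]
    strata = defaultdict(list)
    for idx in order:                      # preserves ranking order per stratum
        strata[(lengths[idx] // 10, labels[idx])].append(idx)
    total = len(ranking)
    chosen = []
    for members in strata.values():
        quota = round(n * len(members) / total)
        chosen.extend(members[:quota])     # most extreme examples first
    return np.array(chosen[:n])
```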
In conclusion, applying the BCM to the sentiment data has confirmed findings previously observed for the arithmetic toy task. While it is harder to understand whether the method actually filters out non-compositional examples, both the comparison to a sentiment-only baseline and the average position of cases for which the composition of sentiment is known to be challenging (e.g. for 'mixed' sentiment, 'comparative' sentiment or 'sarcasm') suggest that compression acts as a compositionality metric. We also illustrated two ways in which the resulting ranking can be used. ## 6 Conclusion This work presents the Bottleneck Compositionality Metric, a TRE-based metric (Andreas, 2018) that is task-independent and can be applied to inputs of varying lengths: we pair up Tree-LSTMs where one of them has more compressed representations due to a bottleneck (the DVIB, hidden dimensionality bottleneck or dropout bottleneck), and use the distance between their hidden representations as a per-datum metric. The method was applied to rank examples in datasets from most compositional to least compositional, which is of interest due to the growing relevance of compositional generalisation research in NLP, which assumes the compositionality of natural language and encourages models to compose meanings of expressions rather than to memorise phrases as chunks. We provided a proof-of-concept using an arithmetic task, but also applied the metric to the much noisier domain of sentiment analysis. The different bottlenecks lead to qualitatively similar results. This suggests that, while the DVIB might be better motivated (it directly optimises an estimate of the Shannon information passed across the network), its alternatives may be preferable in practice due to their simplicity. Because natural language itself is not fully compositional, graded metrics like the ones we presented can support future research, such as i) learning from data according to a compositionality-based curriculum to improve task performance, ii) filtering datasets to improve compositional generalisation, or iii) developing more and less compositional models depending on the desiderata for a task - e.g. to perform well on sentences with idioms, one may desire a more non-compositional model. In addition, the formulation of the metric was general enough to be expanded upon in the future: one could pair up other models, such as an LSTM and a Tree-LSTM, or a Transformer and its recursive variant, as long as one keeps in mind that the compositional reconstruction itself should not be too powerful. After all, even Tree-LSTMs could capture the exceptions in the arithmetic dataset despite their hierarchical inductive bias. \begin{table} \begin{tabular}{l c c c c c c} \hline \hline **Model** & \multicolumn{2}{c}{**Comp.**} & \multicolumn{2}{c}{**Non-comp.**} & \multicolumn{2}{c}{**Random**} \\ & Acc. & \(F_{1}\) & Acc. & \(F_{1}\) & Acc. & \(F_{1}\) \\ \hline Roberta &.546 &.535 &.516 &.487 &.565 &.549 \\ LSTM &.505 &.485 &.394 &.310 &.478 &.447 \\ \hline \hline \end{tabular} \end{table} Table 1: Performance on the new SST compositionality splits, generated using the ranking from the BCM-PP metric with the hidden dimensionality bottleneck. Figure 10: Change in SST test set accuracy (solid) and macro-averaged \(F_{1}\)-score (dashed) as the training set size increases, for LSTM and Roberta models.
The examples are from the most (in blue) or the least compositional (in green) portion of the ranking from the BCM-PP metric with the hidden dimensionality bottleneck. ### Limitations We identify three types of limitations for the work presented: * A **conceptual limitation** is that we work from a very strict definition of compositionality (_local_ compositionality), which essentially equates language with arithmetic. While overly restrictive, current datasets testing compositional generalisation follow this notion. The framework might be extensible to more relaxed notions by allowing for token disambiguation by using contextualised token embeddings and only enforcing a bottleneck on the amount of further contextual integration within the model added on top of the token embeddings. * The reliance on **Tree-LSTMs** - although well-motivated from the perspective of compositional processing - is a major limitation. Tree-LSTMs are most suited for sentence classification tasks, limiting the approach's applicability to sequence-to-sequence tasks. Nonetheless, the bottlenecks can be integrated into other types of architectures that process inputs in a hierarchical manner, such as sequence-to-sequence models inducing latent source and target trees [15], to yield an alternative implementation of the BCM. Our work also assumes that an input's tree structure is known, which might not always be the case. Therefore, the compositionality ranking obtained using BCM always depends on the trees used: what is non-compositional given one (potentially inadequate) structure might be more compositional given another (improved) structure. * Lastly, the **evaluation** of our approach is limited in the natural domain by the absence of gold labels for the compositionality of examples in the sentiment analysis task; for other tasks that could have been considered, the same limitation would have applied. ## Acknowledgements We thank Chris Lucas for his contributions to this project when it was still in an early stage, Kenny Smith for his comments on the first draft of this paper, and Matthias Lindemann for excellent suggestions for the camera-ready version. VD is supported by the UKRI Centre for Doctoral Training in Natural Language Processing, funded by the UKRI (grant EP/S022481/1) and the University of Edinburgh, School of Informatics and School of Philosophy, Psychology & Language Sciences. IT acknowledges the support of the European Research Council (ERC StG BroadSem 678254) and the Dutch National Science Foundation (NWO Vidi 639.022.518).
2308.16429
Solving Poisson Problems in Polygonal Domains with Singularity Enriched Physics Informed Neural Networks
Physics-Informed Neural Networks (PINNs) are a powerful class of numerical solvers for partial differential equations, employing deep neural networks with successful applications across a diverse set of problems. However, their effectiveness is somewhat diminished when addressing issues involving singularities, such as point sources or geometric irregularities, where the approximations they provide often suffer from reduced accuracy due to the limited regularity of the exact solution. In this work, we investigate PINNs for solving Poisson equations in polygonal domains with geometric singularities and mixed boundary conditions. We propose a novel singularity enriched PINN (SEPINN), by explicitly incorporating the singularity behavior of the analytic solution, e.g., corner singularity, mixed boundary condition and edge singularities, into the ansatz space, and present a convergence analysis of the scheme. We present extensive numerical simulations in two and three-dimensions to illustrate the efficiency of the method, and also a comparative study with several existing neural network based approaches.
Tianhao Hu, Bangti Jin, Zhi Zhou
2023-08-31T03:35:12Z
http://arxiv.org/abs/2308.16429v2
Solving Poisson Problems in Polygonal Domains with Singularity Enriched Physics Informed Neural Networks ###### Abstract Physics informed neural networks (PINNs) represent a very powerful class of numerical solvers for partial differential equations using deep neural networks, and have been successfully applied to many diverse problems. However, when applying the method to problems involving singularities, e.g., point sources or geometric singularities, the obtained approximations often have low accuracy, due to the limited regularity of the exact solution. In this work, we investigate PINNs for solving Poisson equations in polygonal domains with geometric singularities and mixed boundary conditions. We propose a novel singularity enriched PINN (SEPINN), by explicitly incorporating the singularity behavior of the analytic solution, e.g., corner singularity, mixed boundary condition and edge singularities, into the ansatz space, and present a convergence analysis of the scheme. We present extensive numerical simulations in two and three dimensions to illustrate the efficiency of the method, and also a comparative study with existing neural network based approaches. **Key words:** Poisson equation, corner singularity, edge singularity, physics informed neural network, singularity enrichment ## 1 Introduction Partial differential equations (PDEs) represent a class of mathematical models that occupy a vital role in physics, science and engineering. Many traditional PDE solvers have been developed, e.g., the finite difference method, the finite element method and the finite volume method. These methods have matured over the past decades, and efficient implementations and mathematical guarantees are available. In the last few years, motivated by the great success in computer vision, speech recognition, natural language processing, etc., neural solvers for PDEs using deep neural networks (DNNs) have received much attention [42]. The list of neural solvers includes physics informed neural networks (PINNs) [53], the deep Ritz method (DRM) [60], the deep Galerkin method [56], the weak adversarial network [62] and the deep least-squares method [12], etc. Compared with traditional methods, neural PDE solvers have shown very promising results in several direct and inverse problems [36, 22, 17]. In these neural solvers, one employs DNNs as ansatz functions to approximate the solution to the PDE in either strong, weak or Ritz formulations. The existing approximation theory of DNNs [30] indicates that the accuracy of DNN approximations depends crucially on the Sobolev regularity of the solution (and on suitable stability of the mathematical formulation). Thus, these methods might be ineffective or even fail completely when applied to problems with irregular solutions [58, 41], e.g., convection-dominated problems, transport problems, high-frequency wave propagation, problems with geometric singularities (cracks / corner singularities) and singular sources. All these settings lead to strong directional behavior, solution singularities or highly oscillatory behavior, which are challenging for standard DNNs to approximate effectively. Thus, there is an imperative need to develop neural solvers for PDEs with nonsmooth solutions. Indeed, several recent efforts have been devoted to addressing the issue, including the self-adaptive PINN (SAPINN) [32], the failure-informed PINN (FIPINN) [26, 27] and the singularity splitting DRM (SSDRM) [31].
SAPINN extends the standard PINN by splitting out the regions with singularities and then setting different weights to compensate for the effect of the singular regions. FIPINN [27] is inspired by the classical adaptive FEM, using the PDE residual as the indicator to aid judicious selection of sampling points for training. SSDRM [31] exploits analytic insights into the exact solution, by approximating only the regular part using DNNs while extracting the singular part explicitly. These methods have shown remarkable performance for problems with significant singularities. One prime example is point sources, whose solutions involve localized singularities that can be extracted using fundamental solutions. In this work, we continue this line of research for Poisson problems on polygonal domains, which involve geometric singularities, including corner singularities and mixed boundary conditions in the two-dimensional (2D) case, and edge singularities in the three-dimensional (3D) case. This represents an important setting in practical applications that has received enormous attention; see [29, 39, 40, 48] for the solution theory. We shall develop a class of effective neural solvers for Poisson problems with geometric singularities based on the idea of singularity enrichment, building on known analytic insights of the problems, and term the proposed method singularity enriched PINN (SEPINN). ### Problem setting First, we state the mathematical formulation of the problem. Let \(\Omega\subset\mathbb{R}^{d}\) (\(d=2,3\)) be an open, bounded polygonal domain with a boundary \(\partial\Omega\), and \(\Gamma_{D}\) and \(\Gamma_{N}\) be a partition of the boundary \(\partial\Omega\) such that \(\Gamma_{D}\cup\Gamma_{N}=\partial\Omega\) and \(\Gamma_{D}\cap\Gamma_{N}=\emptyset\), with a nonempty \(\Gamma_{D}\) (i.e., Lebesgue measure \(|\Gamma_{D}|\neq 0\)). Let \(n\) denote the unit outward normal vector to the boundary \(\partial\Omega\), and let \(\partial_{n}u\) denote the outward normal derivative. Given a source \(f\in L^{2}(\Omega)\), consider the following Poisson problem \[\left\{\begin{aligned} -\Delta u=& f,&\text{in }\Omega,\\ u=& 0,&\text{on }\Gamma_{D},\\ \partial_{n}u=& 0,&\text{on }\Gamma_{N}.\end{aligned}\right. \tag{1.1}\] We focus on zero boundary conditions; nonzero ones can be easily transformed to (1.1) using the trace theorem. Due to the existence of corners, cracks or edges in \(\Omega\), the solution \(u\) of problem (1.1) typically exhibits singularities, even if the source \(f\) is smooth. The presence of singularities in the solution \(u\) severely deteriorates the accuracy of standard numerical methods, including neural solvers, and more specialized techniques are needed in order to achieve high numerical efficiency. Next we briefly review existing techniques for resolving the singularities. In the 2D case, there are several classes of traditional numerical solvers based on the FEM, including singularity representation based approaches [23, 57, 13], mesh grading [54, 3, 2], generalized FEM [25] and adaptive FEM [28], etc. These methods require different amounts of knowledge about the analytic solution. The methods in the first class exploit a singular representation of the solution \(u\) as a linear combination of singular and regular parts [29, 18], and can be further divided into four groups. (i) The singular function method augments singular functions to both trial and test spaces [23, 57]. However, the convergence of the coefficients (a.k.a.
stress intensity factors) is sometimes poor [21], which may lead to low accuracy near singularities. (ii) The dual singular function method [10] employs the dual singular function to extract the coefficients as a postprocessing strategy of the FEM, which can also achieve the theoretical rate in practical computation. (iii) The singularity splitting method [13] splits the singular part from the solution \(u\) and approximates the smooth part with the Galerkin FEM, and enjoys \(H^{1}(\Omega)\) and \(L^{2}(\Omega)\) error estimates. It can improve the accuracy of the approximation, and the stress intensity factor can also be obtained from the extraction formula, cf. (3.6). (iv) The singular complement method [4] is based on an orthogonal decomposition of the solution \(u\) into a singular part and a regular part, by augmenting the FEM trial space with specially designed singular functions. For 3D problems with edges, the singular functions belong to an infinite-dimensional space and their coefficients are functions defined along the edges [29]. Thus, their computation involves approximating functions defined along edges; there are relatively few numerical methods, and numerical investigations are strikingly lacking. The methods developed for the 2D case do not extend directly to the 3D case. In fact, in several existing studies, numerical algorithms and error analysis have been provided, but the methods are nontrivial to implement [52, 57]. For example, the approach in [52] requires evaluating a few dozen highly singular integrals at each step, which may lead to serious numerical issues. ### Our contributions In this work, following the paradigm of building analytic knowledge of the problem into numerical schemes, we construct a novel numerical method using PINN to solve Poisson problems with geometric singularities. The key analytic insight is that the solution \(u\) has a singular function representation as a linear combination of a singular function \(S\) and a regular part \(w\) [18, 19, 29, 38]: \(u=S+w\), with \(w\in H^{2}(\Omega)\). The singular function \(S\) is determined by the domain \(\Omega\), truncation functions and their coefficients. Using this fact, we develop, analyze and test SEPINN, and make the following contributions: 1. develop a novel class of SEPINNs for corner singularities, mixed boundary conditions and edge singularities, for the Poisson problem and the modified Helmholtz problem. 2. provide error bounds for the SEPINN approximation. 3. present numerical experiments for several challenging scenarios, including 3D problems with edge singularities and the eigenvalue problem on an L-shaped domain, to illustrate the flexibility and accuracy of SEPINN. We also include a comparative study with existing approaches. The rest of the paper is organized as follows. In Section 2 we recall preliminaries on DNNs and their use in PINNs. Then we develop the singularity enriched PINN in Section 3, for the 2D case (corner singularity and mixed boundary conditions) and the 3D case (edge singularity) separately, and also extend it to the Helmholtz case. In Section 4, we discuss the convergence analysis of SEPINN. In Section 5, we present extensive numerical experiments to illustrate the performance of SEPINN, including a comparative study with PINN and its variants, and we give further discussions in Section 6.
## 2 Preliminaries ### Deep neural networks We employ standard fully connected feedforward DNNs, i.e., functions \(f_{\theta}:\mathbb{R}^{d}\rightarrow\mathbb{R}\), with the DNN parameters \(\theta\in\mathbb{R}^{N_{\theta}}\) (\(N_{\theta}\) is the dimensionality of the DNN parameters). Given a sequence of integers \(\{n_{\ell}\}_{\ell=0}^{L}\), with \(n_{0}=d\) and \(n_{L}=1\), \(f_{\theta}(x)\) is defined recursively by \[f^{(0)} =x,\] \[f^{(\ell)} =\rho(A_{\ell}f^{(\ell-1)}+b_{\ell}),\quad\ell=1,2,\cdots,L-1,\] \[f_{\theta}(x) :=f^{(L)}(x)=A_{L}f^{(L-1)}+b_{L},\] where \(A_{\ell}\in\mathbb{R}^{n_{\ell}\times n_{\ell-1}}\) and \(b_{\ell}\in\mathbb{R}^{n_{\ell}},\,\ell=1,2,\cdots,L\), are the weight matrix and bias vector at the \(\ell\)th layer. The nonlinear activation function \(\rho:\mathbb{R}\rightarrow\mathbb{R}\) is applied componentwise to a vector. The integer \(L\) is called the depth, and \(W:=\max\{n_{\ell},\ell=0,1,\cdots,L\}\) the width of the DNN. The set of parameters \(\{A_{\ell},\,b_{\ell}\}_{\ell=1}^{L}\) of the DNN is trainable and stacked into a big vector \(\theta\). \(f^{(0)}\) is called the input layer, \(f^{(\ell)}\), \(\ell=1,2,\cdots,L-1\), are called the hidden layers, and \(f_{\theta}(x)\) is the output layer. There are many possible choices of \(\rho\). The most frequently used one in computer vision is the rectified linear unit (ReLU), \(\rho(x)=\max(x,0)\). However, it is not smooth enough for PINNs, since PINN requires thrice differentiability of \(\rho\): two spatial derivatives in the loss, and another derivative with respect to the DNN parameters \(\theta\) (for the optimizer). For neural PDE solvers, the hyperbolic tangent \(\rho(x)=\frac{\mathrm{e}^{x}-\mathrm{e}^{-x}}{\mathrm{e}^{x}+\mathrm{e}^{-x}}\) and logistic / sigmoid \(\rho(x)=\frac{1}{1+\mathrm{e}^{-x}}\) are often used [53, 17]. We employ the hyperbolic tangent. We denote the collection of DNN functions of depth \(L\), with \(N_{\theta}\) nonzero parameters, each parameter bounded by \(R\), and with the activation function \(\rho\), by \(\mathcal{N}_{\rho}(L,N_{\theta},R)\), i.e., \(\mathcal{N}_{\rho}(L,N_{\theta},R)=\{w_{\theta}:w_{\theta}\) has a depth \(L\), \(|\theta|_{\ell^{0}}\leq N_{\theta},\,|\theta|_{\ell^{\infty}}\leq R\}\), where \(|\cdot|_{\ell^{0}}\) and \(|\cdot|_{\ell^{\infty}}\) denote the number of nonzero entries in and the maximum norm of a vector, respectively. We also use the notation \(\mathcal{A}\) to denote this collection of functions.
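For concreteness, a minimal PyTorch sketch of such a fully connected tanh network follows; it is an illustration of the recursive definition above, not the authors' implementation.

```python
import torch
from torch import nn

class FNN(nn.Module):
    """Fully connected feedforward DNN f_theta: R^d -> R with tanh activation."""
    def __init__(self, widths=(2, 50, 50, 50, 1)):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.Linear(m, n) for m, n in zip(widths[:-1], widths[1:]))

    def forward(self, x):
        for layer in self.layers[:-1]:
            x = torch.tanh(layer(x))   # hidden layers: rho(A_l f + b_l)
        return self.layers[-1](x)      # output layer: affine, no activation
```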
### Physics informed neural networks Physics informed neural networks (PINNs) [53] represent one popular neural solver based on the principle of PDE residual minimization. For problem (1.1), the continuous loss \(\mathcal{L}_{\mathbf{\sigma}}(u)\) is given by \[\mathcal{L}_{\mathbf{\sigma}}(u)=\|\Delta u+f\|_{L^{2}(\Omega)}^{2}+\sigma_{d}\|u\|_{L^{2}(\Gamma_{D})}^{2}+\sigma_{n}\|\partial_{n}u\|_{L^{2}(\Gamma_{N})}^{2},\] where the tunable penalty weights \(\sigma_{d},\sigma_{n}>0\) serve to approximately enforce the boundary conditions, and \(\mathbf{\sigma}=(\sigma_{d},\sigma_{n})\). We approximate the solution \(u\) by an element \(u_{\theta}\in\mathcal{A}\), and then discretize the relevant integrals using quadrature, e.g., the Monte Carlo method. Let \(U(D)\) be the uniform distribution over a set \(D\), and let \(|D|\) denote its Lebesgue measure. Then we can rewrite the loss \(\mathcal{L}_{\mathbf{\sigma}}(u_{\theta})\) as \[\mathcal{L}_{\mathbf{\sigma}}(u_{\theta})=|\Omega|\mathbb{E}_{U(\Omega)}[(\Delta u_{\theta}(X)+f(X))^{2}]+\sigma_{d}|\Gamma_{D}|\mathbb{E}_{U(\Gamma_{D})}[(u_{\theta}(Y))^{2}]+\sigma_{n}|\Gamma_{N}|\mathbb{E}_{U(\Gamma_{N})}[(\partial_{n}u_{\theta}(Z))^{2}],\] where \(\mathbb{E}_{\nu}\) denotes taking expectation with respect to a distribution \(\nu\). Let the sampling points \(\{X_{i}\}_{i=1}^{N_{r}}\), \(\{Y_{j}\}_{j=1}^{N_{d}}\) and \(\{Z_{k}\}_{k=1}^{N_{n}}\) be independently and identically distributed (i.i.d.), uniformly on the domain \(\Omega\) and the boundaries \(\Gamma_{D}\) and \(\Gamma_{N}\), respectively, i.e., \(\{X_{i}\}_{i=1}^{N_{r}}\sim U(\Omega)\), \(\{Y_{j}\}_{j=1}^{N_{d}}\sim U(\Gamma_{D})\) and \(\{Z_{k}\}_{k=1}^{N_{n}}\sim U(\Gamma_{N})\). Then the empirical loss function \(\widehat{\mathcal{L}}_{\mathbf{\sigma}}(u_{\theta})\) is given by \[\widehat{\mathcal{L}}_{\mathbf{\sigma}}(u_{\theta})=\frac{|\Omega|}{N_{r}}\sum_{i=1}^{N_{r}}(\Delta u_{\theta}(X_{i})+f(X_{i}))^{2}+\frac{\sigma_{d}|\Gamma_{D}|}{N_{d}}\sum_{j=1}^{N_{d}}(u_{\theta}(Y_{j}))^{2}+\frac{\sigma_{n}|\Gamma_{N}|}{N_{n}}\sum_{k=1}^{N_{n}}(\partial_{n}u_{\theta}(Z_{k}))^{2}.\] Note that the resulting optimization problem of minimizing \(\widehat{\mathcal{L}}_{\mathbf{\sigma}}(u_{\theta})\) over \(\mathcal{A}\) is well posed due to the box constraint on the DNN parameters \(\theta\), i.e., \(|\theta|_{\ell^{\infty}}\leq R\) for suitable \(R\), which induces a compact set in \(\mathbb{R}^{N_{\theta}}\). Meanwhile, the empirical loss \(\widehat{\mathcal{L}}_{\mathbf{\sigma}}(u_{\theta})\) is continuous in \(\theta\) when \(\rho\) is smooth. In the absence of the box constraint, the optimization problem might not have a finite minimizer. The loss \(\widehat{\mathcal{L}}_{\mathbf{\sigma}}(u_{\theta})\) is minimized with respect to the DNN parameters \(\theta\). This is often achieved by gradient type algorithms, e.g., Adam [37] or limited-memory BFGS [11], which return an approximate minimizer \(\theta^{*}\). The DNN approximation to the PDE solution \(u\) is given by \(u_{\theta^{*}}\). Note that the major computational effort, e.g., the gradient of \(\widehat{\mathcal{L}}_{\mathbf{\sigma}}(u_{\theta})\) with respect to the DNN parameters \(\theta\) and of the DNN \(u_{\theta}(x)\) with respect to the input \(x\), can be computed efficiently via automatic differentiation [7], which is available in many software platforms, e.g., PyTorch or TensorFlow. Thus, the method is very flexible, easy to implement, and applicable to a wide range of direct and inverse problems for PDEs [36]. The population loss \(\mathcal{L}_{\mathbf{\sigma}}(u_{\theta})\) and the empirical loss \(\widehat{\mathcal{L}}_{\mathbf{\sigma}}(u_{\theta})\) have different minimizers, due to the presence of quadrature errors. The analysis of these errors is known as generalization error analysis in statistical learning theory [1]. The theoretical analysis of PINNs has been investigated in several works under different settings [55, 33, 49, 46]. These mathematical theories require that the solutions to the problems be smooth, e.g., \(C^{2}(\overline{\Omega})\), in order to achieve consistency [55], and even stronger regularity for convergence rates [33]. Such conditions unfortunately cannot be met for problem (1.1), due to the inherently limited solution regularity. Thus, it is not _a priori_ clear that one can successfully apply PINNs to problem (1.1). This is also confirmed by the numerical experiments in Section 5. In this work, we shall develop an effective strategy to overcome the challenge.
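The empirical loss above is straightforward to realise with automatic differentiation. Below is a minimal PyTorch sketch; the function and argument names are ours, and the points `x_int`, `y_dir`, `z_neu` (with outward unit normals `normals` on \(\Gamma_{N}\)) are assumed to be freshly sampled, uniformly on \(\Omega\), \(\Gamma_{D}\) and \(\Gamma_{N}\), e.g., on an L-shaped domain.

```python
import torch

def laplacian(u, x):
    """Trace of the Hessian of the scalar field u = u(x), computed via autograd."""
    grad = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    lap = 0.0
    for i in range(x.shape[1]):
        lap = lap + torch.autograd.grad(
            grad[:, i].sum(), x, create_graph=True)[0][:, i]
    return lap

def pinn_loss(model, f, x_int, y_dir, z_neu, normals,
              sigma_d, sigma_n, vol, area_d, area_n):
    """Monte Carlo version of the empirical PINN loss with penalty weights."""
    x_int.requires_grad_(True)
    z_neu.requires_grad_(True)
    res = laplacian(model(x_int).squeeze(-1), x_int) + f(x_int)   # PDE residual
    loss = vol * res.pow(2).mean()
    loss = loss + sigma_d * area_d * model(y_dir).squeeze(-1).pow(2).mean()
    gn = torch.autograd.grad(model(z_neu).sum(), z_neu, create_graph=True)[0]
    loss = loss + sigma_n * area_n * (gn * normals).sum(dim=1).pow(2).mean()
    return loss
```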
## 3 Singularity enriched PINN Now we develop a class of singularity enriched PINNs (SEPINN) for solving Poisson problems with geometric singularities, including 2D problems with mixed boundary conditions or on polygonal domains, and 3D problems with edge singularities. The key is the singular function representation. We discuss the 2D case in Section 3.1, and the more involved 3D case in Section 3.2. Finally, we extend the approach to the modified Helmholtz equation in Section 3.3. ### Two-dimensional problem #### 3.1.1 Singular function representation First we develop a singular function representation in the 2D case. We determine the analytic structure of the solution \(u\) of problem (1.1) by means of the Fourier method. Consider a vertex \(\mathbf{v}_{j}\) of the polygonal domain \(\Omega\) with an interior angle \(\omega_{j}\). We denote by \((r_{j},\theta_{j})\) the local polar coordinates at the vertex \(\mathbf{v}_{j}\), so that the interior angle \(\omega_{j}\) is spanned counterclockwise by the two rays \(\theta_{j}=0\) and \(\theta_{j}=\omega_{j}\). Then consider the local behavior of \(u\) near the vertex \(\mathbf{v}_{j}\), i.e., in the sector \(G_{j}=\{(r_{j},\theta_{j}):0<r_{j}<R_{j},0<\theta_{j}<\omega_{j}\}.\) Let the edge overlapping with \(\theta_{j}=0\) be \(\Gamma_{j_{1}}\) and that with \(\theta_{j}=\omega_{j}\) be \(\Gamma_{j_{2}}\). We employ the complete orthogonal system of basis functions in \(L^{2}(0,\omega_{j})\) given in Table 3.1, according to the boundary conditions (b.c.) on the edges \(\Gamma_{j_{1}}\) and \(\Gamma_{j_{2}}\) [51]. We denote the representation of \(u\) in the local polar coordinates by \(\widetilde{u}(r_{j},\theta_{j})\), i.e., \(\widetilde{u}(r_{j},\theta_{j})=u(r_{j}\cos\theta_{j},r_{j}\sin\theta_{j})\). To study the behavior of \(u\) in the sector \(G_{j}\), we assume \(\widetilde{u}(R_{j},\theta_{j})=0\) and \(|\widetilde{u}(0,\theta_{j})|<\infty\). Since \(u\) and \(f\) belong to \(L^{2}(G_{j})\), they can be represented as Fourier series with respect to \(\{\phi_{j,k}\}_{k=1}^{\infty}\): \[u(x,y)=\widetilde{u}(r_{j},\theta_{j})=\sum_{k=1}^{\infty}u_{k}(r_{j})\phi_{j,k}(\theta_{j}),\quad\text{in }G_{j}, \tag{3.1}\] \[f(x,y)=\widetilde{f}(r_{j},\theta_{j})=\sum_{k=1}^{\infty}f_{k}(r_{j})\phi_{j,k}(\theta_{j}),\quad\text{in }G_{j}, \tag{3.2}\] with the Fourier coefficients \(u_{k}(r_{j})\) and \(f_{k}(r_{j})\) given respectively by \(u_{k}(r_{j})=\frac{2}{\omega_{j}}\int_{0}^{\omega_{j}}\widetilde{u}(r_{j},\theta_{j})\phi_{j,k}(\theta_{j})\mathrm{d}\theta_{j}\) and \(f_{k}(r_{j})=\frac{2}{\omega_{j}}\int_{0}^{\omega_{j}}\widetilde{f}(r_{j},\theta_{j})\phi_{j,k}(\theta_{j})\mathrm{d}\theta_{j}\).
Substituting (3.1) and (3.2) into (1.1) gives the following two-point boundary value problem: \[\begin{cases}-u_{k}^{\prime\prime}-r^{-1}u_{k}^{\prime}+\lambda_{j,k}^{2}r^{-2}u_{k}=f_{k},&0<r<R_{j},\\ |u_{k}(0)|<\infty,\quad u_{k}(R_{j})=0.\end{cases}\] Solving the ODE yields [51, (2.10)] \[u_{k}(r_{j})= r_{j}^{\lambda_{j,k}}\left(\frac{1}{2\lambda_{j,k}}\int_{r_{j}}^{R_{j}}f_{k}(\tau)\tau^{1-\lambda_{j,k}}\mathrm{d}\tau-\frac{1}{2\lambda_{j,k}R_{j}^{2\lambda_{j,k}}}\int_{0}^{R_{j}}f_{k}(\tau)\tau^{1+\lambda_{j,k}}\mathrm{d}\tau\right)+\frac{r_{j}^{-\lambda_{j,k}}}{2\lambda_{j,k}}\int_{0}^{r_{j}}f_{k}(\tau)\tau^{1+\lambda_{j,k}}\mathrm{d}\tau.\] Since the factor in front of \(r_{j}^{\lambda_{j,k}}\) is generally nonzero, and \(r_{j}^{\lambda_{j,k}}\notin H^{2}(G_{j})\) for \(\lambda_{j,k}<1\), there exist singular terms in the representation (3.1). This shows the limited regularity of the solution \(u\), which is the culprit behind the low efficiency of standard numerical schemes. Next we list all singular vertices and the corresponding singular functions. Let \(\mathbf{v}_{j},j=1,2,\cdots,M\), be the vertices of \(\Omega\) whose interior angles \(\omega_{j},j=1,2,\cdots,M\), satisfy \[\begin{cases}\pi<\omega_{j}<2\pi,&\text{b.c. does not change its type},\\ \pi/2<\omega_{j}<2\pi,&\text{b.c. changes its type}.\end{cases} \tag{3.3}\] Table 3.2 gives the index set \(\mathbb{I}_{j}\) and the associated singularity functions [14, p. 2637]. Note that \(s_{j,1}\in H^{1+\frac{\pi}{\omega_{j}}-\epsilon}(\Omega)\), \(s_{j,\frac{1}{2}}\in H^{1+\frac{\pi}{2\omega_{j}}-\epsilon}(\Omega)\), and \(s_{j,\frac{3}{2}}\in H^{1+\frac{3\pi}{2\omega_{j}}-\epsilon}(\Omega)\) for any small \(\epsilon>0\). Upon letting \[\omega^{*}=\max_{1\leq j\leq M}\hat{\omega}_{j},\quad\text{with }\hat{\omega}_{j}=\begin{cases}\omega_{j},&\text{b.c. does not change its type at }\mathbf{v}_{j},\\ 2\omega_{j},&\text{b.c. changes its type at }\mathbf{v}_{j},\end{cases}\] the solution \(u\) belongs to \(H^{1+\frac{\pi}{\omega^{*}}-\epsilon}(\Omega)\), just falling short of \(H^{1+\frac{\pi}{\omega^{*}}}(\Omega)\). If \(\omega^{*}>\pi\), \(u\) fails to belong to \(H^{2}(\Omega)\). Hence, it is imperative to develop techniques to resolve such singularities. We employ a smooth cut-off function. For \(\rho\in(0,2]\), we define a cut-off function \(\eta_{\rho}\) by \[\eta_{\rho}(r_{j})=\begin{cases}1,&0<r_{j}<\frac{\rho R}{2},\\ \frac{15}{16}\left(\frac{8}{15}-\left(\frac{4r_{j}}{\rho R}-3\right)+\frac{2}{3}\left(\frac{4r_{j}}{\rho R}-3\right)^{3}-\frac{1}{5}\left(\frac{4r_{j}}{\rho R}-3\right)^{5}\right),&\frac{\rho R}{2}\leq r_{j}<\rho R,\\ 0,&r_{j}\geq\rho R,\end{cases} \tag{3.4}\] where \(R\in\mathbb{R}_{+}\) is a fixed number such that \(\eta_{\rho}\) vanishes identically on \(\partial\Omega\). In practice, we take \(R\) small enough so that, when \(i\neq j\), the support of \(\eta_{\rho}(r_{i})\) does not intersect with that of \(\eta_{\rho}(r_{j})\). By construction, we have \(\eta_{\rho}\in C^{2}([0,\infty))\).
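Both ingredients are simple to implement. The following sketch transcribes the cut-off (3.4) and, as one concrete instance, the leading Dirichlet singular function \(s(r,\theta)=r^{\pi/\omega}\sin(\pi\theta/\omega)\) (cf. Table 3.2), assuming the vertex sits at the origin with the edge \(\theta=0\) along the positive \(x\)-axis:

```python
import torch

def eta(r, rho, R):
    """Quintic C^2 cut-off (3.4): 1 near the vertex, 0 for r >= rho*R."""
    t = 4.0 * r / (rho * R) - 3.0
    mid = (15.0 / 16.0) * (8.0 / 15.0 - t + (2.0 / 3.0) * t**3 - 0.2 * t**5)
    return torch.where(r < 0.5 * rho * R, torch.ones_like(r),
                       torch.where(r < rho * R, mid, torch.zeros_like(r)))

def singular_fn(x, y, omega):
    """Leading Dirichlet singular function r^(pi/omega) * sin(pi*theta/omega);
    sampling points should avoid the vertex itself, where the gradient blows up."""
    r = torch.sqrt(x**2 + y**2)
    theta = torch.atan2(y, x) % (2 * torch.pi)   # map the angle into [0, 2*pi)
    lam = torch.pi / omega
    return r**lam * torch.sin(lam * theta)
```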
Then the solution \(u\) of problem (1.1) has the following singular function representation [5, 18, 43] \[u=w+\sum_{j=1}^{M}\sum_{i\in\mathbb{I}_{j}}\gamma_{j,i}\eta_{\rho_{j}}(r_{j})s_{j,i}(r_{j},\theta_{j}),\quad\text{with }w\in H^{2}(\Omega)\cap H^{1}_{0}(\Omega), \tag{3.5}\] where the scalars \(\gamma_{j,i}\in\mathbb{R}\) are known as stress intensity factors and are given by the following extraction formulas [29, Lemma 8.4.3.1]: \[\gamma_{j,i}=\frac{1}{i\pi}\left(\int_{\Omega}f\eta_{\rho_{j}}s_{j,-i}\mathrm{d}x+\int_{\Omega}u\Delta(\eta_{\rho_{j}}s_{j,-i})\mathrm{d}x\right), \tag{3.6}\] where \(s_{j,-i}\) denotes the dual singular function of \(s_{j,i}\). Specifically, if \(s_{j,i}(r_{j},\theta_{j})=r_{j}^{\frac{i\pi}{\omega_{j}}}\sin\frac{i\pi\theta_{j}}{\omega_{j}}\), the dual function \(s_{j,-i}\) is given by \(s_{j,-i}(r_{j},\theta_{j})=r_{j}^{-\frac{i\pi}{\omega_{j}}}\sin\frac{i\pi\theta_{j}}{\omega_{j}}\) [14, p. 2639]. Moreover, the following regularity estimate on the regular part \(w\) holds: \[\|w\|_{H^{2}(\Omega)}+\sum_{j=1}^{M}\sum_{i\in\mathbb{I}_{j}}|\gamma_{j,i}|\leq c\|f\|_{L^{2}(\Omega)}. \tag{3.7}\] #### 3.1.2 Singularity enriched physics-informed neural network Now we propose the singularity enriched PINN (SEPINN) for problem (1.1), inspired by the representation (3.5). We discuss only the case of one singular function in \(\Omega\), i.e., one vertex satisfying the condition (3.3). The case of multiple singularities can be handled similarly. Upon letting \(S=\gamma\eta_{\rho}s\), the regular part \(w\) satisfies \[\left\{\begin{aligned} -\Delta w&=f+\gamma\Delta(\eta_{\rho}s),&\text{in }\Omega,\\ w&=0,&\text{on }\Gamma_{D},\\ \partial_{n}w&=0,&\text{on }\Gamma_{N},\end{aligned}\right. \tag{3.8}\] where the parameter \(\gamma\) is unknown. Since \(w\in H^{2}(\Omega)\), it can be well approximated using PINN. The parameter \(\gamma\) can either be learned together with \(w\) or extracted from \(w\) via (3.6). Based on the principle of PDE residual minimization, the solution \(w^{*}\) of problem (3.8) and the exact parameter \(\gamma^{*}\) in the splitting (3.5) form a global minimizer of the following loss \[\mathcal{L}_{\mathbf{\sigma}}(w;\gamma)=\|\Delta w+f+\gamma\Delta(\eta_{\rho}s)\|_{L^{2}(\Omega)}^{2}+\sigma_{d}\|w\|_{L^{2}(\Gamma_{D})}^{2}+\sigma_{n}\left\|\partial_{n}w\right\|_{L^{2}(\Gamma_{N})}^{2}, \tag{3.9}\] where the penalty weights \(\mathbf{\sigma}=(\sigma_{d},\sigma_{n})\in\mathbb{R}_{+}^{2}\) are tunable. Following the PINN paradigm in Section 2.2, we employ a DNN \(w_{\theta}\in\mathcal{A}\) to approximate \(w^{*}\in H^{2}(\Omega)\), and treat the parameter \(\gamma\) as a trainable parameter and learn it along with the DNN parameters \(\theta\). This leads to the empirical loss \[\widehat{\mathcal{L}}_{\mathbf{\sigma}}(w_{\theta};\gamma)= \frac{|\Omega|}{N_{r}}\sum_{i=1}^{N_{r}}(\Delta w_{\theta}(X_{i})+f(X_{i})+\gamma\Delta(\eta_{\rho}s)(X_{i}))^{2}+\sigma_{d}\frac{|\Gamma_{D}|}{N_{d}}\sum_{j=1}^{N_{d}}w_{\theta}^{2}(Y_{j})+\sigma_{n}\frac{|\Gamma_{N}|}{N_{n}}\sum_{k=1}^{N_{n}}\left(\partial_{n}w_{\theta}(Z_{k})\right)^{2}, \tag{3.10}\] with i.i.d. sampling points \(\{X_{i}\}_{i=1}^{N_{r}}\sim U(\Omega)\), \(\{Y_{j}\}_{j=1}^{N_{d}}\sim U(\Gamma_{D})\) and \(\{Z_{k}\}_{k=1}^{N_{n}}\sim U(\Gamma_{N})\). Let \((\widehat{\theta}^{*},\widehat{\gamma}^{*})\) be a minimizer of the empirical loss \(\widehat{\mathcal{L}}_{\mathbf{\sigma}}(w_{\theta};\gamma)\). Then \(w_{\widehat{\theta}^{*}}\in\mathcal{A}\) is the DNN approximation of the regular part \(w^{*}\), and the approximation \(\widehat{u}\) to \(u\) is given by \(\widehat{u}=w_{\widehat{\theta}^{*}}+\widehat{\gamma}^{*}\eta_{\rho}s\).
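A minimal sketch of the interior residual in (3.10), with \(\gamma\) as a trainable scalar optimized jointly with the network parameters (Stage 1 of the procedure described below); it reuses the `FNN`, `laplacian`, `eta` and `singular_fn` helpers from the earlier sketches, and `rho`, `R`, `omega` are assumed problem constants. The boundary terms are assembled exactly as in the `pinn_loss` sketch above.

```python
import torch

w_net = FNN()                                  # DNN for the regular part w_theta
gamma = torch.nn.Parameter(torch.zeros(1))     # trainable stress intensity factor
optimiser = torch.optim.Adam(list(w_net.parameters()) + [gamma], lr=1e-3)

def sepinn_residual(x, f):
    """Interior residual of (3.8): Delta(w_theta) + f + gamma * Delta(eta_rho * s)."""
    x.requires_grad_(True)
    r = torch.hypot(x[:, 0], x[:, 1])          # distance to the singular vertex
    eta_s = eta(r, rho, R) * singular_fn(x[:, 0], x[:, 1], omega)
    return laplacian(w_net(x).squeeze(-1), x) + f(x) + gamma * laplacian(eta_s, x)
```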
\begin{table} \begin{tabular}{c|c|c} \hline \hline \(\Gamma_{j_{2}}\backslash\Gamma_{j_{1}}\) & Dirichlet & Neumann \\ \hline Dirichlet & \(s_{j,1}(r_{j},\theta_{j})=r_{j}^{\frac{\pi}{\omega_{j}}}\sin\frac{\pi\theta_{j}}{\omega_{j}}\), & \(s_{j,\frac{1}{2}}(r_{j},\theta_{j})=r_{j}^{\frac{\pi}{2\omega_{j}}}\cos\frac{\pi\theta_{j}}{2\omega_{j}}\), if \(\frac{\pi}{2}<\omega_{j}\leq\frac{3\pi}{2}\), \(\mathbb{I}_{j}=\left\{\frac{1}{2}\right\}\); \\ & \(\mathbb{I}_{j}=\left\{1\right\}\). & \(s_{j,\frac{1}{2}}\) and \(s_{j,\frac{3}{2}}(r_{j},\theta_{j})=r_{j}^{\frac{3\pi}{2\omega_{j}}}\cos\frac{3\pi\theta_{j}}{2\omega_{j}}\), if \(\frac{3\pi}{2}<\omega_{j}\leq 2\pi\), \(\mathbb{I}_{j}=\left\{\frac{1}{2},\frac{3}{2}\right\}\). \\ \hline Neumann & \(s_{j,\frac{1}{2}}(r_{j},\theta_{j})=r_{j}^{\frac{\pi}{2\omega_{j}}}\sin\frac{\pi\theta_{j}}{2\omega_{j}}\), if \(\frac{\pi}{2}<\omega_{j}\leq\frac{3\pi}{2}\), \(\mathbb{I}_{j}=\left\{\frac{1}{2}\right\}\); & \(s_{j,1}(r_{j},\theta_{j})=r_{j}^{\frac{\pi}{\omega_{j}}}\cos\frac{\pi\theta_{j}}{\omega_{j}}\), \\ & \(s_{j,\frac{1}{2}}\) and \(s_{j,\frac{3}{2}}(r_{j},\theta_{j})=r_{j}^{\frac{3\pi}{2\omega_{j}}}\sin\frac{3\pi\theta_{j}}{2\omega_{j}}\), if \(\frac{3\pi}{2}<\omega_{j}\leq 2\pi\), \(\mathbb{I}_{j}=\left\{\frac{1}{2},\frac{3}{2}\right\}\). & \(\mathbb{I}_{j}=\left\{1\right\}\). \\ \hline \hline \end{tabular} \end{table} Table 3.2: Singularity functions \(s_{j,i}\), depending on the boundary condition. The tuple \((r_{j},\theta_{j})\) refers to the local polar coordinates at the vertex \(\mathbf{v}_{j}\), and \(\mathbb{I}_{j}\) is the index set of leading singularities associated with \(\mathbf{v}_{j}\). Now we discuss the training of the loss \(\widehat{\mathcal{L}}_{\mathbf{\sigma}}(w_{\theta};\gamma)\). One can minimize \(\widehat{\mathcal{L}}_{\mathbf{\sigma}}(w_{\theta};\gamma)\) directly with respect to \(\theta\) and \(\gamma\), which works reasonably well. However, the DNN approximation \(\widehat{u}\) then tends to have larger errors on the boundary \(\partial\Omega\) than in the domain \(\Omega\), while the estimated \(\widehat{\gamma}^{*}\) is often accurate. Thus we adopt a two-stage training procedure. 1. At Stage 1, minimize the loss \(\widehat{\mathcal{L}}_{\mathbf{\sigma}}(w_{\theta};\gamma)\) for a fixed \(\mathbf{\sigma}\), and obtain the minimizer \((\widehat{\theta}^{*},\widehat{\gamma}^{*})\). 2. At Stage 2, fix \(\gamma\) in \(\widehat{\mathcal{L}}_{\mathbf{\sigma}}(w_{\theta};\gamma)\) at \(\widehat{\gamma}^{*}\), and learn \(\theta\) via a path-following strategy [45, 31]. Now we describe the path-following strategy for updating \(\mathbf{\sigma}\). We start with small values \(\mathbf{\sigma}^{(1)}=\mathbf{\sigma}\). After each loop (i.e., finding one minimizer \(\widehat{\theta}^{*}_{k}\)), we update \(\mathbf{\sigma}\) geometrically: \(\mathbf{\sigma}^{(k+1)}=q\mathbf{\sigma}^{(k)}\), with \(q>1\).
By updating \(\mathbf{\sigma}^{(k)}\), the minimizer \(\widehat{\theta}^{*}_{k}\) of the loss \(\widehat{\mathcal{L}}_{\mathbf{\sigma}^{(k)}}(w_{\theta};\widehat{\gamma}^{*})\) also approaches that of problem (3.9), and the path-following strategy enforces the boundary conditions progressively, which is beneficial for obtaining good approximations, since when \(\mathbf{\sigma}\) is large, the optimization problem is known to be stiff (and thus numerically challenging). Note that the minimizer \(\widehat{\theta}^{*}_{k+1}\) of the \(\mathbf{\sigma}^{(k+1)}\)-problem (i.e., minimizing \(\widehat{\mathcal{L}}_{\mathbf{\sigma}^{(k+1)}}(w_{\theta};\widehat{\gamma}^{*})\)) can be initialized to \(\widehat{\theta}^{*}_{k}\) of the \(\mathbf{\sigma}^{(k)}\)-problem to warm start the training process. Hence, for each fixed \(\mathbf{\sigma}^{(k)}\) (except \(\mathbf{\sigma}^{(1)}\)), the initial parameter configuration is close to the optimal one, and the training requires only a few iterations to reach convergence. The overall procedure is shown in Algorithm 1 for 2D problems with corner singularities and / or mixed boundary conditions. ``` 1:Set \(\mathbf{\sigma}^{(1)}\), and obtain the minimizer \((\widehat{\theta}^{*},\widehat{\gamma}^{*})\) of the loss \(\widehat{\mathcal{L}}_{\mathbf{\sigma}^{(1)}}(w_{\theta};\gamma)\). 2:Set \(k=1\), \(\widehat{\theta}^{*}_{0}=\widehat{\theta}^{*}\), and increasing factor \(q>1\). 3:while Stopping condition not met do 4: Find a minimizer \(\widehat{\theta}^{*}_{k}\) of the loss \(\widehat{\mathcal{L}}_{\mathbf{\sigma}^{(k)}}(w_{\theta};\widehat{\gamma}^{*})\) (initialized to \(\widehat{\theta}^{*}_{k-1}\)). 5: Update \(\mathbf{\sigma}\) by \(\mathbf{\sigma}^{(k+1)}=q\mathbf{\sigma}^{(k)}\), and \(k\gets k+1\). 6:Output the SEPINN approximation \(\widehat{u}=w_{\widehat{\theta}^{*}_{k-1}}+\widehat{\gamma}^{*}\eta_{\rho}s\). ``` **Algorithm 1** SEPINN for 2D problems ### Three-dimensional problem Now we develop SEPINN for the 3D Poisson problem with edge singularities. #### 3.2.1 Singular function representation There are several different types of geometric singularities in the 3D case, and each case has to be dealt with separately [19]. We study only edge singularities, which induce strong singular behavior in the solution. Indeed, the \(H^{2}(\Omega)\)-regularity of the solution \(u\) of problem (1.1) is not affected by the presence of conic points [38, 29, 40]. Now we state the precise setting. Let \(\Omega_{0}\subset\mathbb{R}^{2}\) be a polygonal domain as in Section 3.1, and \(\Omega=\Omega_{0}\times(0,l)\). Like before, let \(\mathbf{v}_{j},j=1,2,\cdots,M\), be the vertices of \(\Omega_{0}\) whose interior angles \(\omega_{j},j=1,2,\cdots,M\), satisfy (3.3), and let \(\Gamma_{z_{1}}=\Omega_{0}\times\{0\}\) and \(\Gamma_{z_{2}}=\Omega_{0}\times\{l\}\). We employ a complete orthogonal system \(\{Z_{j,n}\}_{n=0}^{\infty}\) of \(L^{2}(0,l)\), given in Table 3.3.
The functions \(u\in L^{2}(\Omega)\) and \(f\in L^{2}(\Omega)\) from problem (1.1) can then be represented by the following convergent Fourier series in the 3D wedge \(G_{j}=\mathbb{K}_{j}\times(0,l)\) (suppressing the subscript \(j\)): \[u(x,y,z) =\frac{1}{2}u_{0}(x,y)Z_{0}(z)+\sum_{n=1}^{\infty}u_{n}(x,y)Z_{n}(z), \tag{3.11}\] \[f(x,y,z) =\frac{1}{2}f_{0}(x,y)Z_{0}(z)+\sum_{n=1}^{\infty}f_{n}(x,y)Z_{n}(z), \tag{3.12}\] where the Fourier coefficients \(\{u_{n}\}_{n\in\mathbb{N}}\) and \(\{f_{n}\}_{n\in\mathbb{N}}\) are defined on the 2D domain \(\Omega_{0}\) by \[u_{n}(x,y)=\frac{2}{l}\int_{0}^{l}u(x,y,z)Z_{n}(z)\mathrm{d}z\quad\text{and}\quad f_{n}(x,y)=\frac{2}{l}\int_{0}^{l}f(x,y,z)Z_{n}(z)\mathrm{d}z.\] By substituting (3.11) and (3.12) into (1.1), we get countably many 2D elliptic problems: \[\begin{cases}-\Delta u_{n}+\xi_{j,n}^{2}u_{n}=f_{n},&\text{in }\Omega_{0},\\ u_{n}=0,&\text{on }\Gamma_{D},\\ \partial_{n}u_{n}=0,&\text{on }\Gamma_{N}.\end{cases} \tag{3.13}\] Then problem (1.1) can be analyzed via the 2D problems. Below we describe the edge behavior of the weak solution \(u\in H^{1}(\Omega)\) [52, Theorem 2.1]. The next theorem gives a crucial decomposition of \(u\in H^{1+\frac{\pi}{\omega^{*}}-\epsilon}(\Omega)\), for every \(\epsilon>0\). The functions \(\Psi_{j,i}\) are the so-called edge flux intensity functions. **Theorem 3.1**.: _For any fixed \(f\in L^{2}(\Omega)\), let \(u\in H^{1}(\Omega)\) be the unique weak solution to problem (1.1). Then there exist unique functions \(\Psi_{j,i}\in H^{1-\lambda_{j,i}}(0,l)\) of the variable \(z\) such that \(u\) can be split into a sum of a regular part \(w\in H^{2}(\Omega)\) and a singular part \(S\) with the following properties:_ \[u =w+S,\quad S=\sum_{j=1}^{M}\sum_{i\in\mathbb{I}_{j}}S_{j,i}(x_{j},y_{j},z), \tag{3.14a}\] \[S_{j,i}(x_{j},y_{j},z) =(T_{j}(r_{j},z)*\Psi_{j,i}(z))\eta_{\rho_{j}}(r_{j})s_{j,i}(r_{j},\theta_{j}), \tag{3.14b}\] _where the functions \(T_{j}\) are fixed Poisson kernels, and the symbol \(*\) denotes convolution in \(z\in(0,l)\). Moreover, there exists a constant \(C>0\) independent of \(f\in L^{2}(\Omega)\) such that \(\|w\|_{H^{2}(\Omega)}\leq C\|f\|_{L^{2}(\Omega)}\)._ Now we assume that near each edge \(\mathbf{v}_{j}\times(0,l)\), the domain \(\Omega\) coincides with a 3D wedge \(G_{j}\) defined by \(G_{j}=\{(x_{j},y_{j},z)\in\Omega:0<r_{j}<R_{j},0<\theta_{j}<\omega_{j},0<z<l\}\), where \((r_{j},\theta_{j})\) are local polar coordinates linked with the local Cartesian coordinates \((x_{j},y_{j})\). The explicit form of the function \(T_{j}(r_{j},z)*\Psi_{j,i}(z)\) and the formula for the coefficients \(\gamma_{j,i,n}\) are given below [52, Theorem 2.2][51, pp. 179-182].
**Theorem 3.2**.: _The coefficients \(\Phi_{j,i}(x_{j},y_{j},z)=T_{j}(r_{j},z)*\Psi_{j,i}(z)\) of the singularities in (3.14b) can be represented by Fourier series in \(z\) with respect to the orthogonal system \(\{Z_{n}(z)\}_{n=0}^{\infty}\):_ \[T_{j}(r_{j},z) =\frac{1}{2}Z_{0}(z)+\sum_{n=1}^{\infty}\mathrm{e}^{-\xi_{j,n}r_{j}}Z_{n}(z),\] \[\Psi_{j,i}(z) =\frac{1}{2}\gamma_{j,i,0}Z_{0}(z)+\sum_{n=1}^{\infty}\gamma_{j,i,n}Z_{n}(z),\] \[\Phi_{j,i}(x_{j},y_{j},z) =\frac{1}{2}\gamma_{j,i,0}Z_{0}(z)+\sum_{n=1}^{\infty}\gamma_{j,i,n}\mathrm{e}^{-\xi_{j,n}r_{j}}Z_{n}(z), \tag{3.15}\] _where the coefficients \(\gamma_{j,i,n}\) are given explicitly by_ \[\gamma_{j,i,n} =\frac{2}{l\omega_{j}\lambda_{j,i}}\int_{G_{j}}f_{j}^{*}\mathrm{e}^{\xi_{j,n}r_{j}}s_{j,-i}(r_{j},\theta_{j})Z_{n}(z)\mathrm{d}x\mathrm{d}y\mathrm{d}z, \tag{3.16}\] \[f_{j}^{*} =f\eta_{\rho_{j}}-u\left(\frac{\partial^{2}\eta_{\rho_{j}}}{\partial r_{j}^{2}}+\left(2\xi_{j,n}+\frac{1}{r_{j}}\right)\frac{\partial\eta_{\rho_{j}}}{\partial r_{j}}+\left(2\xi_{j,n}^{2}+\frac{\xi_{j,n}}{r_{j}}\right)\eta_{\rho_{j}}\right)-2\frac{\partial u}{\partial r_{j}}\left(\frac{\partial\eta_{\rho_{j}}}{\partial r_{j}}+\xi_{j,n}\eta_{\rho_{j}}\right).\] _Moreover, there exists a constant \(C>0\) independent of \(f\) such that_ \[|\gamma_{j,i,0}|^{2}+\sum_{n=1}^{\infty}\xi_{j,n}^{2(1-\lambda_{j,i})}|\gamma_{j,i,n}|^{2}\leq C\|f\|_{L^{2}(G_{j})}^{2}.\] Note that formula (3.16) involves a singular integral and is numerically inconvenient to evaluate. Below we discuss the case of only one edge with one singular function. Then the solution \(u\) can be split into \[u=w+\Phi\eta s. \tag{3.17}\] Since the functions \(\Phi\eta_{\rho}s\) and \(\partial_{n}(\Phi\eta_{\rho}s)\) vanish on \(\Gamma_{D}\) and \(\Gamma_{N}\), respectively, \(w\) solves \[\begin{cases}-\Delta w=f+\Delta(\Phi\eta_{\rho}s),&\text{in }\Omega,\\ w=0,&\text{on }\Gamma_{D},\\ \partial_{n}w=0,&\text{on }\Gamma_{N}.\end{cases} \tag{3.18}\] This forms the basis of SEPINN for 3D problems with edge singularities. Below we describe two strategies for constructing SEPINN approximations, i.e., SEPINN-C, based on a cutoff (truncation) of the infinite series, and SEPINN-N, based on an additional DNN approximation. #### 3.2.2 SEPINN - Cutoff approximation The expansion (3.15) of the singular function \(\Phi\) in (3.18) involves infinitely many unknown scalar coefficients \(\{\gamma_{j,i,n}\}_{n=0}^{\infty}\). In practice, it is infeasible to learn all of them. However, since the series is convergent, we may truncate it to a finite number of terms: the edge flux intensity function \(\Phi_{j,i}(x_{j},y_{j},z)\) from (3.14b) is approximated by the truncation \[\Phi_{j,i}^{N}(x_{j},y_{j},z)=\frac{1}{2}\gamma_{j,i,0}+\sum_{n=1}^{N}\gamma_{j,i,n}\mathrm{e}^{-\xi_{j,n}r_{j}}Z_{n}(z),\quad\text{with }N\in\mathbb{N}.\] The approximate singular function \(S_{j,i}^{N}\) is given by \[S_{j,i}^{N}(x_{j},y_{j},z)=\Phi_{j,i}^{N}(x_{j},y_{j},z)\eta_{\rho_{j}}(r_{j})s_{j,i}(r_{j},\theta_{j}). \tag{3.19}\] In view of the splitting (3.14a) and the truncation (3.19), the approximate regular part \(w\) satisfies \[\begin{cases}-\Delta w=f+\gamma_{0}\Delta(\eta_{\rho}s)+\sum_{n=1}^{N}\gamma_{n}\Delta(\mathrm{e}^{-\xi_{n}r}Z_{n}(z)\eta_{\rho}s),&\text{in }\Omega,\\ w=0,&\text{on }\Gamma_{D},\\ \partial_{n}w=0,&\text{on }\Gamma_{N},\end{cases} \tag{3.20}\] with \(\gamma_{i},i=0,1,\cdots,N\), being \(N+1\) unknown parameters.
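A minimal sketch of the truncated singular part (3.19) with the \(N+1\) trainable coefficients. One concrete realisation of the basis is assumed, namely \(Z_{n}(z)=\cos(n\pi z/l)\) with \(\xi_{n}=n\pi/l\) (homogeneous Neumann conditions in \(z\)); the `eta` and `singular_fn` helpers are reused from the 2D sketch, and the interior residual is then assembled with the `laplacian` helper as before.

```python
import torch

class TruncatedSingularPart(torch.nn.Module):
    """Truncated singular part S^N of (3.19) with trainable gamma_0..gamma_N.
    Assumes Z_n(z) = cos(n*pi*z/l) and xi_n = n*pi/l as one concrete basis."""
    def __init__(self, N, l, omega, rho, R):
        super().__init__()
        self.gamma = torch.nn.Parameter(torch.zeros(N + 1))
        self.l, self.omega, self.rho, self.R = l, omega, rho, R

    def forward(self, x, y, z):
        r = torch.sqrt(x**2 + y**2)
        n = torch.arange(1, self.gamma.numel(), device=x.device, dtype=x.dtype)
        xi = n * torch.pi / self.l                       # xi_n = n*pi/l
        phi = 0.5 * self.gamma[0] + (
            self.gamma[1:] * torch.exp(-torch.outer(r, xi))
            * torch.cos(torch.outer(z, xi))).sum(dim=1)  # truncated Phi^N
        return phi * eta(r, self.rho, self.R) * singular_fn(x, y, self.omega)
```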
Let \(w^{N}\) be the solution of (3.20) and \(w^{*}\) that of (3.18). The next result shows that when \(N\) is large enough, the error \(w^{N}-w^{*}\) can be made small, and so is the error between \(u^{N}\) and \(u^{*}\), which underpins the truncation method. **Theorem 3.3**.: _Let \(w^{N}\) be the solution of (3.20) and \(S^{N}\) be the truncated singular function. Let \(u^{N}=w^{N}+S^{N}\) and let \(u^{*}\) be the solution of (1.1). Then there holds_ \[\|u^{N}-u^{*}\|_{H^{1}(G)}\leq CN^{-1}\|f\|_{L^{2}(G)}.\] Proof.: By [52, Lemma 3.3], there exists a constant \(C>0\) independent of \(f\) such that \[\|S^{N}-S^{*}\|_{H^{1}(G)}\leq CN^{-1}\|f\|_{L^{2}(G)}. \tag{3.21}\] Since \(w^{N}\) solves (3.20) and \(w^{*}\) solves (3.18), \(w^{*}-w^{N}\) satisfies problem (1.1) with the source \(f^{N}=\sum_{n=N+1}^{\infty}\gamma_{n}\Delta(\mathrm{e}^{-\xi_{n}r}Z_{n}(z)\eta_{\rho}s)\). By elliptic regularity theory, we have \[\|w^{*}-w^{N}\|_{H^{1}(\Omega)}\leq C\|f^{N}\|_{H^{-1}_{\Gamma_{D}}(\Omega)}\leq C\|\Delta(S^{N}-S^{*})\|_{H^{-1}_{\Gamma_{D}}(\Omega)},\] where the notation \(H^{-1}_{\Gamma_{D}}(\Omega)\) denotes the dual space of \(H^{1}_{\Gamma_{D}}(\Omega)=\{v\in H^{1}(\Omega):v=0\text{ on }\Gamma_{D}\}\). Upon integration by parts, we get \[\|\Delta(S^{N}-S^{*})\|_{H^{-1}_{\Gamma_{D}}(\Omega)}=\sup_{v\in H^{1}_{\Gamma_{D}}(\Omega),\|v\|_{H^{1}(\Omega)}\leq 1}\left|\int_{\Omega}\Delta(S^{N}-S^{*})v\,\mathrm{d}x\right|=\sup_{v\in H^{1}_{\Gamma_{D}}(\Omega),\|v\|_{H^{1}(\Omega)}\leq 1}\left|\int_{\Omega}\nabla(S^{N}-S^{*})\cdot\nabla v\,\mathrm{d}x\right|\leq\|S^{N}-S^{*}\|_{H^{1}(\Omega)}.\] Combining this estimate with (3.21) yields the desired assertion. Following the PINN paradigm in Section 2.2, we approximate \(w^{N}\) by an element \(w_{\theta}\in\mathcal{A}\). Like in the 2D case, we view the parameters \(\boldsymbol{\gamma}_{N}:=(\gamma_{0},\dots,\gamma_{N})\) as trainable parameters and learn them along with the DNN parameters \(\theta\). The population loss \(\mathcal{L}_{\boldsymbol{\sigma}}(w_{\theta};\boldsymbol{\gamma}_{N})\) is given by \[\mathcal{L}_{\boldsymbol{\sigma}}(w_{\theta};\boldsymbol{\gamma}_{N})=\left\|\Delta w_{\theta}+f+\gamma_{0}\Delta(\eta_{\rho}s)+\sum_{n=1}^{N}\gamma_{n}\Delta(\mathrm{e}^{-\xi_{n}r}Z_{n}(z)\eta_{\rho}s)\right\|_{L^{2}(\Omega)}^{2}+\sigma_{d}\|w_{\theta}\|_{L^{2}(\Gamma_{D})}^{2}+\sigma_{n}\left\|\partial_{n}w_{\theta}\right\|_{L^{2}(\Gamma_{N})}^{2}.\] In practice, we employ the following empirical loss \[\widehat{\mathcal{L}}_{\boldsymbol{\sigma}}(w_{\theta};\boldsymbol{\gamma}_{N})= \frac{|\Omega|}{N_{r}}\sum_{i=1}^{N_{r}}\left(\Delta w_{\theta}(X_{i})+f(X_{i})+\gamma_{0}\Delta(\eta_{\rho}s)(X_{i})+\sum_{n=1}^{N}\gamma_{n}\Delta(\mathrm{e}^{-\xi_{n}r}Z_{n}(z)\eta_{\rho}s)(X_{i})\right)^{2}+\sigma_{d}\frac{|\Gamma_{D}|}{N_{d}}\sum_{j=1}^{N_{d}}w_{\theta}^{2}(Y_{j})+\sigma_{n}\frac{|\Gamma_{N}|}{N_{n}}\sum_{k=1}^{N_{n}}\left(\partial_{n}w_{\theta}(Z_{k})\right)^{2},\] with i.i.d. sampling points \(\{X_{i}\}_{i=1}^{N_{r}}\sim U(\Omega)\), \(\{Y_{j}\}_{j=1}^{N_{d}}\sim U(\Gamma_{D})\) and \(\{Z_{k}\}_{k=1}^{N_{n}}\sim U(\Gamma_{N})\). The empirical loss \(\widehat{\mathcal{L}}_{\boldsymbol{\sigma}}(w_{\theta};\boldsymbol{\gamma}_{N})\) can be minimized using the two-stage procedure as in the 2D case. #### 3.2.3 SEPINN - Neural network approximation There is actually a direct way to resolve the singular part \(S\), i.e., using a DNN to approximate \(\Phi\) in (3.14a). We term the resulting method SEPINN-N.
This strategy eliminates the necessity of explicitly knowing the expansion basis and relieves us from lengthy derivations. Thus, compared with SEPINN-C, it is more direct and simpler to implement. The downside is an increase in the number of parameters that need to be learned. Specifically, let \(\mathcal{B}\) be a DNN set with a fixed architecture (possibly different from \(\mathcal{A}\)) and \(\zeta\) its parameterization. Then a DNN \(\Phi_{\zeta}\in\mathcal{B}\) is employed to approximate \(\Phi\) in (3.17), where the DNN parameters \(\zeta\) are also learned. The splitting is then given by \[u=w+\Phi_{\zeta}\eta s.\] Since we cannot guarantee \(\Phi_{\zeta}=0\) on \(\Gamma_{D}\) or \(\partial_{n}\Phi_{\zeta}=0\) on \(\Gamma_{N}\), the boundary conditions of \(w\) have to be modified accordingly (noting \(\partial_{n}(\eta_{\rho}s)=0\)): \[\begin{cases}-\Delta w=f+\Delta(\Phi_{\zeta}\eta_{\rho}s),&\text{in }\Omega,\\ w=-\Phi_{\zeta}\eta_{\rho}s,&\text{on }\Gamma_{D},\\ \partial_{n}w=-\partial_{n}(\Phi_{\zeta})\eta_{\rho}s,&\text{on }\Gamma_{N}.\end{cases}\] Like before, we can obtain the following empirical loss \[\widehat{\mathcal{L}}_{\boldsymbol{\sigma}}(w_{\theta};\Phi_{\zeta})= \frac{|\Omega|}{N_{r}}\sum_{i=1}^{N_{r}}\left(\Delta w_{\theta}(X_{i})+f(X_{i})+\Delta(\Phi_{\zeta}\eta_{\rho}s)(X_{i})\right)^{2}+\sigma_{d}\frac{|\Gamma_{D}|}{N_{d}}\sum_{j=1}^{N_{d}}(w_{\theta}(Y_{j})+\Phi_{\zeta}\eta_{\rho}s(Y_{j}))^{2}+\sigma_{n}\frac{|\Gamma_{N}|}{N_{n}}\sum_{k=1}^{N_{n}}\left(\partial_{n}w_{\theta}(Z_{k})+\partial_{n}(\Phi_{\zeta})\eta_{\rho}s(Z_{k})\right)^{2},\] with i.i.d. sampling points \(\{X_{i}\}_{i=1}^{N_{r}}\sim U(\Omega)\), \(\{Y_{j}\}_{j=1}^{N_{d}}\sim U(\Gamma_{D})\) and \(\{Z_{k}\}_{k=1}^{N_{n}}\sim U(\Gamma_{N})\). The implementation of SEPINN-N is direct, since both DNNs \(w_{\theta}\) and \(\Phi_{\zeta}\) are learned, and the resulting optimization problem can be minimized using the path-following strategy. ### Modified Helmholtz equation Now we extend SEPINN to the modified Helmholtz equation equipped with mixed boundary conditions: \[\begin{cases}-\Delta u+A^{2}u=f,&\text{in }\Omega,\\ u=0,&\text{on }\Gamma_{D},\\ \partial_{n}u=0,&\text{on }\Gamma_{N},\end{cases} \tag{3.22}\] where \(A>0\) is the wave number. It is also known as the screened Poisson problem in the literature. #### 3.3.1 Two-dimensional case Following the discussion in Section 3.1, we use the basis \(\{\phi_{j,k}\}_{k=1}^{\infty}\) to expand the solution \(u\), find the singular term and split it from the solution \(u\) of (3.22). Since the local solutions can no longer be expressed in terms of elementary functions, the derivation of the singularity splitting differs from that for the Poisson equation. Nonetheless, a similar decomposition holds [20, Theorem 3.3]. **Theorem 3.4**.: _Let \(\mathbf{v}_{j},j=1,2,\cdots,M\), be the vertices of \(\Omega\) whose interior angles \(\omega_{j},j=1,2,\cdots,M\), satisfy (3.3). Then the unique weak solution \(u\in H^{1}(\Omega)\) of (3.22) can be decomposed into_ \[u=w+\sum_{j=1}^{M}\sum_{i\in\mathbb{I}_{j}}\gamma_{j,i}\eta_{\rho_{j}}(r_{j})\mathrm{e}^{-Ar_{j}}s_{j,i}(r_{j},\theta_{j}),\quad\text{with }w\in H^{2}(\Omega).\] _Moreover, there holds \(|w|_{H^{2}(\Omega)}+A|w|_{H^{1}(\Omega)}+A^{2}\|w\|_{L^{2}(\Omega)}\leq C\|f\|_{L^{2}(\Omega)}\)._ Once having the singularity functions \(s_{j,i}\), one can learn the parameters \(\gamma_{j,i}\) and the DNN parameters \(\theta\) of the regular part \(w_{\theta}\) as in Algorithm 1. Thus, SEPINN applies equally well to the (modified) Helmholtz case. In contrast, the FEM uses an integral formula and numerical integration when calculating the flux intensity factors [57, 51], which can be rather cumbersome. Further, such extraction formulas are still unavailable for the Helmholtz equation. In SEPINN, training the coefficients directly along with the DNN not only reduces manual effort, but also increases the computational efficiency.
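In code, the only change relative to the Poisson case is the damping factor \(\mathrm{e}^{-Ar_{j}}\) attached to each singular function in Theorem 3.4, e.g. (reusing the earlier `singular_fn` sketch):

```python
import torch

def singular_fn_helmholtz(x, y, omega, A):
    """Damped singular function e^(-A*r) * s(r, theta) from Theorem 3.4."""
    r = torch.sqrt(x**2 + y**2)
    return torch.exp(-A * r) * singular_fn(x, y, omega)
```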
Thus, SEPINN applies equally well to the (modified) Helmholtz case. In contrast, the FEM uses an integral formula and numerical integration when calculating the flux intensity factors [57, 51], which can be rather cumbersome. Further, such extraction formulas are still unavailable for the Helmholtz equation. In SEPINN, training the coefficients directly along with the DNN not only reduces manual effort, but also increases the computational efficiency. #### 3.3.2 Three-dimensional case Following the discussion in Section 3.2, we employ the orthogonal basis \(\{Z_{n}\}_{n=0}^{\infty}\) in Table 3.3 to expand \(u\) and \(f\), cf. (3.11) and (3.12), and substitute them into (3.22): \[\begin{cases}-\Delta u_{n}+(\xi_{j,n}^{2}+A^{2})u_{n}=f_{n},&\text{in }\Omega_{0},\\ u_{n}=0,&\text{on }\Gamma_{D},\\ \partial_{n}u_{n}=0,&\text{on }\Gamma_{N}.\end{cases} \tag{3.23}\] The only difference between (3.23) and (3.13) lies in the wave number. Upon letting \(\widetilde{A}_{j,n}=(\xi_{j,n}^{2}+A^{2})^{1/2}\), we can rewrite Theorems 3.1 and 3.2 with \(\xi_{j,n}\) replaced by \(\widetilde{A}_{j,n}\), and obtain the following result. **Theorem 3.5**.: _For each \(f\in L^{2}(\Omega)\), let \(u\in H^{1}(\Omega)\) be the unique weak solution to problem (3.22). Then \(u\) can be split into a sum of a regular part \(w\) and a singular part \(S\) as_ \[u=w+S,\quad\text{with }w\in H^{2}(\Omega),\quad S=\sum_{j=1}^{M} \sum_{i\in\mathbb{I}_{j}}S_{j,i}(x_{j},y_{j},z),\] \[S_{j,i}(x_{j},y_{j},z)=\widetilde{\Phi}_{j,i}(r_{j},z)\eta_{\rho _{j}}(r_{j})s_{j,i}(r_{j},\theta_{j}),\quad\text{with }\widetilde{\Phi}_{j,i}(r_{j},z)=\frac{1}{2}\gamma_{j,i,0}+\sum_{n=1}^{ \infty}\gamma_{j,i,n}\mathrm{e}^{-\widetilde{A}_{j,n}r_{j}}Z_{n}(z).\] Note that the expressions for the coefficients \(\gamma_{j,i,n}\) differ from those in Section 3.2, which however are not needed for implementing SEPINN and thus not further discussed. Nonetheless, we can still use both SEPINN-C and SEPINN-N to solve 3D Helmholtz problems. ## 4 Error analysis Now we discuss the error analysis of SEPINN developed in Section 3, following the strategies established in the recent works [33, 46, 31], in order to provide theoretical guarantees for SEPINN. We only analyze the 2D problem in Section 3.1. Let \((w^{*},\gamma^{*})\) be the global minimizer of the loss \(\mathcal{L}_{\boldsymbol{\sigma}}(w;\gamma)\), cf. (3.9), and the exact solution \(u^{*}\) to problem (1.1) be given by \(u^{*}=w^{*}+\gamma^{*}\eta_{\rho}s.\) Moreover, we assume that \(w^{*}\in H^{3}(\Omega)\) and \(|\gamma^{*}|\leq B\), cf. (3.7); otherwise, we can split out additional singular function(s); see Table 3.1 and the argument in Section 3.1.1. The following approximation property holds [30, Proposition 4.8]. **Lemma 4.1**.: _Let \(s\in\mathbb{N}\cup\{0\}\) and \(p\in[1,\infty]\) be fixed, and \(v\in W^{k,p}(\Omega)\) with \(k\geq s+1\). 
Then for any tolerance \(\epsilon>0\), there exists at least one \(v_{\theta}\) of depth \(\mathcal{O}\big{(}\log(d+k)\big{)}\), with \(|\theta|_{\ell^{0}}\) bounded by \(\mathcal{O}\big{(}\epsilon^{-\frac{d}{k-s-\mu(s=2)}}\big{)}\) and \(|\theta|_{\ell^{\infty}}\) bounded by \(\mathcal{O}\big{(}\epsilon^{-2-\frac{2(d/p+d+k+\mu(s=2))+d/p+d}{k-s-\mu(s=2)}}\big{)}\), where \(\mu>0\) is arbitrarily small and \(\mu(s=2)\) equals \(\mu\) if \(s=2\) and zero otherwise, such that_ \[\|v-v_{\theta}\|_{W^{s,p}(\Omega)}\leq\epsilon.\] For any \(\epsilon>0\), with \(d=2\), \(k=3\) and \(s=2\), Lemma 4.1 implies that there exists a DNN \[v_{\theta}\in\mathcal{N}(C,C\epsilon^{-\frac{2}{1-\mu}},C\epsilon^{-2- \frac{16+2\mu}{1-\mu}})=:\mathcal{W}_{\epsilon},\] such that \(\|w^{*}-v_{\theta}\|_{H^{2}(\Omega)}\leq\epsilon\). Also let \(I_{\gamma}=[-B,B]\). Let \((\widehat{w}_{\theta},\widehat{\gamma})\) be a minimizer of \(\widehat{\mathcal{L}}_{\boldsymbol{\sigma}}(w_{\theta};\gamma)\), cf. (3.10), over \(\mathcal{W}_{\epsilon}\times I_{\gamma}\), and set \(\widehat{u}=\widehat{w}_{\theta}+\widehat{\gamma}\eta_{\rho}s.\) The next lemma gives a decomposition of the error \(\|u^{*}-\widehat{u}\|_{L^{2}(\Omega)}\). **Lemma 4.2**.: _For any \(\epsilon>0\), let \((\widehat{w}_{\theta},\widehat{\gamma})\in\mathcal{W}_{\epsilon}\times I_{\gamma}\) be a minimizer to the loss \(\widehat{\mathcal{L}}_{\boldsymbol{\sigma}}(w_{\theta};\gamma)\). Then there exists a constant \(c(\boldsymbol{\sigma})\) such that_ \[\|u^{*}-\widehat{u}\|_{L^{2}(\Omega)}^{2}\leq c(\boldsymbol{\sigma})\mathcal{ L}_{\boldsymbol{\sigma}}(\widehat{w}_{\theta};\widehat{\gamma})\leq c( \boldsymbol{\sigma})\Big{(}\epsilon^{2}+\sup_{(w_{\theta},\gamma)\in\mathcal{ W}_{\epsilon}\times I_{\gamma}}|\mathcal{L}_{\boldsymbol{\sigma}}(w_{ \theta};\gamma)-\widehat{\mathcal{L}}_{\boldsymbol{\sigma}}(w_{\theta}; \gamma)|\Big{)}.\] Proof.: For any \((w_{\theta},\gamma)\in\mathcal{W}_{\epsilon}\times I_{\gamma}\), let \(u=w_{\theta}+\gamma\eta_{\rho}s\), and \(e=u^{*}-u\). By the trace theorem, we have \[\mathcal{L}_{\boldsymbol{\sigma}}(w_{\theta};\gamma)=\|\Delta e\|_{L^{2}( \Omega)}^{2}+\sigma_{d}\|e\|_{L^{2}(\Gamma_{D})}^{2}+\sigma_{n}\|\partial_{n}e \|_{L^{2}(\Gamma_{N})}^{2}\leq c(\boldsymbol{\sigma})\|e\|_{H^{2}(\Omega)}^{2}.\] To treat the nonzero boundary conditions of \(w_{\theta}\), we define the harmonic extension \(\zeta\) by \[\begin{cases}-\Delta\zeta=0,&\text{in }\Omega,\\ \quad\zeta=w_{\theta},&\text{on }\Gamma_{D},\\ \quad\partial_{n}\zeta=\partial_{n}w_{\theta},&\text{on }\Gamma_{N}.\end{cases}\] Then the following elliptic regularity estimate holds [9, Theorem 4.2, p. 870] \[\|\zeta\|_{L^{2}(\Omega)}\leq c\big{(}\|w_{\theta}\|_{L^{2}(\Gamma_{D})}+\| \partial_{n}w_{\theta}\|_{L^{2}(\Gamma_{N})}\big{)}. \tag{4.1}\] Let \(\tilde{e}=e+\zeta\). 
Then it satisfies \[\begin{cases}-\Delta\tilde{e}=-\Delta e,&\text{in }\Omega,\\ \quad\tilde{e}=0,&\text{on }\Gamma_{D},\\ \quad\partial_{n}\tilde{e}=0,&\text{on }\Gamma_{N}.\end{cases}\] Since \(\Delta e\in L^{2}(\Omega)\), the standard energy argument and Poincaré inequality imply \[\|\tilde{e}\|_{L^{2}(\Omega)}\leq c\|\nabla\tilde{e}\|_{L^{2}(\Omega)}\leq c \|\Delta e\|_{L^{2}(\Omega)}.\] This, the stability estimate (4.1) and the triangle inequality lead to \[\|e\|_{L^{2}(\Omega)}^{2}\leq c\big{(}\|\tilde{e}\|_{L^{2}(\Omega)}^{2}+\|\zeta\|_ {L^{2}(\Omega)}^{2}\big{)}\leq c\big{(}\|\Delta e\|_{L^{2}(\Omega)}^{2}+\|w_{ \theta}\|_{L^{2}(\Gamma_{D})}^{2}+\|\partial_{n}w_{\theta}\|_{L^{2}(\Gamma_{N })}^{2}\big{)}\leq c(\boldsymbol{\sigma})\mathcal{L}_{\boldsymbol{\sigma}}(w_{ \theta};\gamma).\] This proves the first inequality of the lemma. Next, by Lemma 4.1 and the assumption \(w^{*}\in H^{3}(\Omega)\), there exists \(w_{\bar{\theta}}\in\mathcal{W}_{\epsilon}\) such that \(\|w_{\bar{\theta}}-w^{*}\|_{H^{2}(\Omega)}\leq\epsilon\). Let \(\bar{u}=w_{\bar{\theta}}+\gamma^{*}\eta_{\rho}s\). Then we derive \[\mathcal{L}_{\boldsymbol{\sigma}}(w_{\bar{\theta}};\gamma^{*}) =\|\Delta(w_{\bar{\theta}}-w^{*})\|_{L^{2}(\Omega)}^{2}+\sigma_{d}\|w_{ \bar{\theta}}-w^{*}\|_{L^{2}(\Gamma_{D})}^{2}+\sigma_{n}\|\partial_{n}(w_{ \bar{\theta}}-w^{*})\|_{L^{2}(\Gamma_{N})}^{2}\] \[\leq c(\boldsymbol{\sigma})\|w_{\bar{\theta}}-w^{*}\|_{H^{2}( \Omega)}^{2}\leq c(\boldsymbol{\sigma})\epsilon^{2}.\] Thus by the minimizing property of \((\widehat{w}_{\theta},\widehat{\gamma})\), we arrive at \[\mathcal{L}_{\boldsymbol{\sigma}}(\widehat{w}_{\theta};\widehat{ \gamma}) =\mathcal{L}_{\boldsymbol{\sigma}}(\widehat{w}_{\theta};\widehat{ \gamma})-\widehat{\mathcal{L}}_{\boldsymbol{\sigma}}(\widehat{w}_{\theta}; \widehat{\gamma})+\widehat{\mathcal{L}}_{\boldsymbol{\sigma}}(\widehat{w}_{\theta}; \widehat{\gamma})-\widehat{\mathcal{L}}_{\boldsymbol{\sigma}}(w_{\bar{\theta}}; \gamma^{*})+\widehat{\mathcal{L}}_{\boldsymbol{\sigma}}(w_{\bar{\theta}}; \gamma^{*})-\mathcal{L}_{\boldsymbol{\sigma}}(w_{\bar{\theta}};\gamma^{*})+ \mathcal{L}_{\boldsymbol{\sigma}}(w_{\bar{\theta}};\gamma^{*})\] \[\leq\big{|}\mathcal{L}_{\boldsymbol{\sigma}}(\widehat{w}_{\theta}; \widehat{\gamma})-\widehat{\mathcal{L}}_{\boldsymbol{\sigma}}(\widehat{w}_{ \theta};\widehat{\gamma})\big{|}+\big{|}\widehat{\mathcal{L}}_{\boldsymbol{ \sigma}}(w_{\bar{\theta}};\gamma^{*})-\mathcal{L}_{\boldsymbol{\sigma}}(w_{\bar{ \theta}};\gamma^{*})\big{|}+\mathcal{L}_{\boldsymbol{\sigma}}(w_{\bar{\theta}}; \gamma^{*})\] \[\leq 2\sup_{(w_{\theta},\gamma)\in\mathcal{W}_{\epsilon}\times I_{ \gamma}}|\mathcal{L}_{\boldsymbol{\sigma}}(w_{\theta};\gamma)-\widehat{ \mathcal{L}}_{\boldsymbol{\sigma}}(w_{\theta};\gamma)|+c(\boldsymbol{\sigma}) \epsilon^{2}.\] This completes the proof of the lemma. Next, we bound the statistical error \(\mathcal{E}_{stat}=\sup_{(w_{\theta},\gamma)\in\mathcal{W}_{\epsilon}\times I_{\gamma}} \big{|}\mathcal{L}_{\boldsymbol{\sigma}}(w_{\theta};\gamma)-\widehat{\mathcal{L}}_{\boldsymbol{\sigma}}(w_{\theta};\gamma)\big{|}\), which arises from approximating the integrals by Monte Carlo. 
By the triangle inequality, we have the splitting \[\mathcal{E}_{stat}\leq \sup_{(w_{\theta},\gamma)\in\mathcal{W}_{\epsilon}\times I_{\gamma}} |\Omega|\Big{|}\frac{1}{N_{r}}\sum_{i=1}^{N_{r}}h_{r}(X_{i};w_{\theta},\gamma) -\mathbb{E}_{X}(h_{r}(X;w_{\theta},\gamma))\Big{|}\] \[+\sup_{w_{\theta}\in\mathcal{W}_{\epsilon}}\sigma_{d}|\Gamma_{D}| \Big{|}\frac{1}{N_{d}}\sum_{j=1}^{N_{d}}h_{d}(Y_{j};w_{\theta})-\mathbb{E}_{Y }(h_{d}(Y;w_{\theta}))\Big{|} \tag{4.2}\] \[+\sup_{w_{\theta}\in\mathcal{W}_{\epsilon}}\sigma_{n}|\Gamma_{N} |\Big{|}\frac{1}{N_{n}}\sum_{k=1}^{N_{n}}h_{n}(Z_{k};w_{\theta})-\mathbb{E}_{ Z}(h_{n}(Z;w_{\theta}))\Big{|},\] with \(h_{r}(x;w_{\theta},\gamma)=(\Delta w_{\theta}+f+\gamma\Delta(\eta_{\rho}s) )^{2}(x)\) for \(x\in\Omega\), \(h_{d}(y;w_{\theta})=|w_{\theta}(y)|^{2}\) for \(y\in\Gamma_{D}\) and \(h_{n}(z;w_{\theta})=|\partial_{n}w_{\theta}(z)|^{2}\) for \(z\in\Gamma_{N}\). Thus, we define the following three function classes \[\mathcal{H}_{r}=\{h_{r}(w_{\theta},\gamma):w_{\theta}\in\mathcal{W}_{\epsilon },\gamma\in I_{\gamma}\},\ \ \mathcal{H}_{d}=\{h_{d}(w_{\theta}):w_{\theta}\in\mathcal{W}_{\epsilon}\}\ \ \text{and}\ \ \mathcal{H}_{n}=\{h_{n}(w_{\theta}):w_{\theta}\in\mathcal{W}_{\epsilon}\}.\] To bound the errors in (4.2), we employ Rademacher complexity [1, 6], which measures the complexity of a collection of functions by the correlation between function values with Rademacher random variables. **Definition 4.1**.: _Let \(\mathcal{F}\) be a real-valued function class defined on the domain \(D\) and \(\xi=\{\xi_{j}\}_{j=1}^{n}\) be i.i.d. samples from the distribution \(\mathcal{U}(D)\). Then the Rademacher complexity \(\mathfrak{R}_{n}(\mathcal{F})\) of \(\mathcal{F}\) is defined by_ \[\mathfrak{R}_{n}(\mathcal{F})=\mathbb{E}_{\xi,\omega}\bigg{[}\sup_{f\in \mathcal{F}}\ n^{-1}\bigg{|}\ \sum_{j=1}^{n}\omega_{j}f(\xi_{j})\ \bigg{|}\bigg{]},\] _where \(\omega=\{\omega_{j}\}_{j=1}^{n}\) are i.i.d. Rademacher random variables with probability \(P(\omega_{j}=1)=P(\omega_{j}=-1)=\frac{1}{2}\)._ Then we have the following PAC-type generalization bound [50, Theorem 3.1]. **Lemma 4.3**.: _Let \(X_{1},\dots,X_{n}\) be a set of i.i.d. random variables, and let \(\mathcal{F}\) be a function class defined on \(D\) such that \(\sup_{f\in\mathcal{F}}\|f\|_{L^{\infty}(D)}\leq M_{\mathcal{F}}<\infty\). Then for any \(\tau\in(0,1)\), with probability at least \(1-\tau\):_ \[\sup_{f\in\mathcal{F}}\bigg{|}n^{-1}\sum_{j=1}^{n}f(X_{j})-\mathbb{E}[f(X)] \bigg{|}\leq 2\mathfrak{R}_{n}(\mathcal{F})+2M_{\mathcal{F}}\sqrt{\frac{\log \frac{1}{\tau}}{2n}}.\] To apply Lemma 4.3, we bound the Rademacher complexities of the function classes \(\mathcal{H}_{r}\), \(\mathcal{H}_{d}\) and \(\mathcal{H}_{n}\). This follows from Dudley's formula in Lemma 4.6. The next lemma gives useful boundedness and Lipschitz continuity properties of the DNN function class in terms of \(\theta\); see [34, Lemma 3.4 and Remark 3.3] and [35, Lemma 5.3]. The estimates also hold when \(L^{\infty}(\Omega)\) is replaced with \(L^{\infty}(\Gamma_{D})\) or \(L^{\infty}(\Gamma_{N})\). **Lemma 4.4**.: _Let \(L\), \(W\) and \(R\) be the depth, width and maximum weight bound of a DNN function class \(\mathcal{W}\), with \(N_{\theta}\) nonzero weights. 
Then for any \(v_{\theta}\in\mathcal{W}\), the following estimates hold_ * \(\|v_{\theta}\|_{L^{\infty}(\Omega)}\leq WR\)_,_ \(\|v_{\theta}-v_{\tilde{\theta}}\|_{L^{\infty}(\Omega)}\leq 2LW^{L}R^{L-1}| \theta-\tilde{\theta}|_{\ell^{\infty}}\)_;_ * \(\|\nabla v_{\theta}\|_{L^{\infty}(\Omega;\mathbb{R}^{d})}\leq\sqrt{2}W^{L-1}R^{L}\)_,_ \(\|\nabla(v_{\theta}-v_{\tilde{\theta}})\|_{L^{\infty}(\Omega;\mathbb{R}^{d})} \leq\sqrt{2}L^{2}W^{2L-2}R^{2L-2}|\theta-\tilde{\theta}|_{\ell^{\infty}}\)_;_ * \(\|\Delta v_{\theta}\|_{L^{\infty}(\Omega)}\leq 2LW^{2L-2}R^{2L}\)_,_ \(\|\Delta(v_{\theta}-v_{\tilde{\theta}})\|_{L^{\infty}(\Omega)}\leq 8N_{\theta}L^{2}W^{3L-3}R^{3L-3}| \theta-\tilde{\theta}|_{\ell^{\infty}}\)_._ Lemma 4.4 implies boundedness and Lipschitz continuity of functions in \(\mathcal{H}_{r}\), \(\mathcal{H}_{d}\) and \(\mathcal{H}_{n}\). **Lemma 4.5**.: _There exists a constant \(c\) depending on \(\|f\|_{L^{\infty}(\Omega)}\), \(\|\Delta(\eta_{\rho}s)\|_{L^{\infty}(\Omega)}\) and \(B\) such that_ \[\|h(w_{\theta},\gamma)\|_{L^{\infty}(\Omega)}\leq cL^{2}W^{4L-4}R^{4L},\quad\forall h\in\mathcal{H}_{r},\] \[\|h(w_{\theta})\|_{L^{\infty}(\Gamma_{D})}\leq cW^{2}R^{2},\quad\forall h\in\mathcal{H}_{d},\] \[\|h(w_{\theta})\|_{L^{\infty}(\Gamma_{N})}\leq cW^{2L-2}R^{2L},\quad\forall h\in\mathcal{H}_{n}.\] _Moreover, the following Lipschitz continuity estimates hold_ \[\|h(w_{\theta},\gamma)-\tilde{h}(w_{\tilde{\theta}},\tilde{\gamma})\|_{L^{ \infty}(\Omega)}\leq cN_{\theta}L^{3}W^{5L-5}R^{5L-3}(|\theta-\tilde{\theta}|_{ \ell^{\infty}}+|\gamma-\tilde{\gamma}|),\quad\forall h,\tilde{h}\in\mathcal{H}_{r},\] \[\|h(w_{\theta})-\tilde{h}(w_{\tilde{\theta}})\|_{L^{\infty}(\Gamma_{D})}\leq cLW^{L+1}R^{L}| \theta-\tilde{\theta}|_{\ell^{\infty}},\quad\forall h,\tilde{h}\in\mathcal{H}_{d},\] \[\|h(w_{\theta})-\tilde{h}(w_{\tilde{\theta}})\|_{L^{\infty}( \Gamma_{N})}\leq cL^{2}W^{3L-3}R^{3L-2}|\theta-\tilde{\theta}|_{\ell^{\infty}}, \quad\forall h,\tilde{h}\in\mathcal{H}_{n}.\] Proof.: The uniform bounds follow directly from Lemma 4.4. The Lipschitz estimates follow similarly, and we show it only for the set \(\mathcal{H}_{r}\). Indeed, for any \(h=h_{r}(w_{\theta},\gamma),\tilde{h}=h_{r}(w_{\tilde{\theta}},\tilde{\gamma}) \in\mathcal{H}_{r}\), we have \[\|h-\tilde{h}\|_{L^{\infty}(\Omega)}=\|(\Delta w_{\theta}+f+\gamma\Delta(\eta_{ \rho}s))^{2}-(\Delta w_{\tilde{\theta}}+f+\tilde{\gamma}\Delta(\eta_{\rho}s))^ {2}\|_{L^{\infty}(\Omega)}.\] By Lemma 4.4, we have \[\|(\Delta w_{\theta}+f+\gamma\Delta(\eta_{\rho}s))+(\Delta w_{ \tilde{\theta}}+f+\tilde{\gamma}\Delta(\eta_{\rho}s))\|_{L^{\infty}(\Omega)}\] \[\leq 2(2LW^{2L-2}R^{2L}+\|f\|_{L^{\infty}(\Omega)}+B\|\Delta(\eta_{ \rho}s)\|_{L^{\infty}(\Omega)})\leq cLW^{2L-2}R^{2L},\] \[\|\Delta(w_{\theta}-w_{\tilde{\theta}})\|_{L^{\infty}(\Omega)}+| \gamma-\tilde{\gamma}|\|\Delta(\eta_{\rho}s)\|_{L^{\infty}(\Omega)}\] \[\leq 8N_{\theta}L^{2}W^{3L-3}R^{3L-3}|\theta-\tilde{\theta}|_{\ell^{ \infty}}+|\gamma-\tilde{\gamma}|\|\Delta(\eta_{\rho}s)\|_{L^{\infty}(\Omega)}\] \[\leq cN_{\theta}L^{2}W^{3L-3}R^{3L-3}(|\theta-\tilde{\theta}|_{\ell ^{\infty}}+|\gamma-\tilde{\gamma}|).\] Combining the last three estimates yields the bound on \(\|h-\tilde{h}\|_{L^{\infty}(\Omega)}\). Next, we state Dudley's lemma ([47, Theorem 9], [59, Theorem 1.19]), which bounds Rademacher complexities using the covering number. **Definition 4.2**.: _Let \((\mathcal{M},\rho)\) be a metric space of real valued functions, and \(\mathcal{G}\subset\mathcal{M}\). 
A set \(\{x_{i}\}_{i=1}^{n}\subset\mathcal{G}\) is called an \(\epsilon\)-cover of \(\mathcal{G}\) if for any \(x\in\mathcal{G}\), there exists a \(x_{i}\) such that \(\rho(x,x_{i})\leq\epsilon\). The \(\epsilon\)-covering number \(\mathcal{C}(\mathcal{G},\rho,\epsilon)\) is the minimum cardinality among all \(\epsilon\)-covers of \(\mathcal{G}\) with respect to \(\rho\)._ **Lemma 4.6** (Dudley's lemma).: _Let \(M_{\mathcal{F}}:=\sup_{f\in\mathcal{F}}\|f\|_{L^{\infty}(\Omega)}\), and \(\mathcal{C}(\mathcal{F},\|\cdot\|_{L^{\infty}(\Omega)},\epsilon)\) be the covering number of \(\mathcal{F}\). Then the Rademacher complexity \(\mathfrak{R}_{n}(\mathcal{F})\) is bounded by_ \[\mathfrak{R}_{n}(\mathcal{F})\leq\inf_{0<s<M_{\mathcal{F}}}\left(4s\ +\ 12n^{-\frac{1}{2}}\int_{s}^{M_{\mathcal{F}}}\left(\log\mathcal{C}(\mathcal{F}, \|\cdot\|_{L^{\infty}(\Omega)},\epsilon)\right)^{\frac{1}{2}}\,\mathrm{d} \epsilon\right).\] **Lemma 4.7**.: _For any small \(\tau\), with probability at least \(1-3\tau\), there holds_ \[\sup_{(w_{\theta},\gamma)\in\mathcal{W}_{\epsilon}\times I_{\gamma}}|\mathcal{ L}_{\boldsymbol{\sigma}}(w_{\theta};\gamma)-\widehat{\mathcal{L}}_{\boldsymbol{\sigma}}(w_{ \theta};\gamma)|\leq c(e_{r}+\sigma_{d}e_{d}+\sigma_{n}e_{n}),\] _where \(c\) depends on \(\|f\|_{L^{\infty}(\Omega)}\) and \(\|\Delta(\eta_{\rho}s)\|_{L^{\infty}(\Omega)}\), and \(e_{r}\), \(e_{d}\) and \(e_{n}\) are respectively bounded by_ \[e_{r} \leq c\frac{L^{2}R^{4L}N_{\theta}^{4L-4}\big{(}N_{\theta}^{\frac{1}{2 }}\left(\log^{\frac{1}{2}}R+\log^{\frac{1}{2}}N_{\theta}+\log^{\frac{1}{2}}N_{r }\right)+\log^{\frac{1}{2}}\frac{1}{\tau}\big{)}}{\sqrt{N_{r}}},\] \[e_{d} \leq c\frac{R^{2}N_{\theta}^{2}\big{(}N_{\theta}^{\frac{1}{2}}\left( \log^{\frac{1}{2}}R+\log^{\frac{1}{2}}N_{\theta}+\log^{\frac{1}{2}}N_{d} \right)+\log^{\frac{1}{2}}\frac{1}{\tau}\big{)}}{\sqrt{N_{d}}},\] \[e_{n} \leq c\frac{R^{2L}N_{\theta}^{2L-2}\big{(}N_{\theta}^{\frac{1}{2}} \left(\log^{\frac{1}{2}}R+\log^{\frac{1}{2}}N_{\theta}+\log^{\frac{1}{2}}N_{n }\right)+\log^{\frac{1}{2}}\frac{1}{\tau}\big{)}}{\sqrt{N_{n}}}.\] Proof.: Fix \(m\in\mathbb{N}\), \(R\in[1,\infty)\), \(\epsilon\in(0,1)\), and \(B_{R}:=\{x\in\mathbb{R}^{m}:\ |x|_{\ell^{\infty}}\leq R\}\). Then by [16, Prop. 5], \[\log\mathcal{C}(B_{R},|\cdot|_{\ell^{\infty}},\epsilon)\leq m\log(4R\epsilon^{-1}).\] The Lipschitz continuity estimates in Lemmas 4.4 and 4.5 imply \[\log\mathcal{C}(\mathcal{H}_{r},\|\cdot\|_{L^{\infty}(\Omega)},\epsilon)\leq \log\mathcal{C}(\Theta\times I_{\gamma},\max(|\cdot|_{\ell^{\infty}},|\cdot|), \Lambda_{r}^{-1}\epsilon)\leq cN_{\theta}\log(4R\Lambda_{r}\epsilon^{-1}),\] with \(\Lambda_{r}=cN_{\theta}L^{3}W^{5L-5}R^{5L-3}\). Moreover, by Lemma 4.5, we have \(M_{r}\equiv M_{\mathcal{H}_{r}}=cL^{2}W^{4L-4}R^{4L}\). Then setting \(s=n^{-\frac{1}{2}}\) in Lemma 4.6 and using the estimates \(1\leq R\), \(1\leq W\leq N_{\theta}\) and \(1\leq L\leq c\log 5\) (with \(d=2\) and \(k=3\)), cf. 
Lemma 4.1, lead to \[\mathfrak{R}_{n}(\mathcal{H}_{r})\leq 4n^{-\frac{1}{2}}+12n^{-\frac{1}{2}}\int_{n^{-\frac{1}{2}}}^{M_{r}} \big{(}cN_{\theta}\log(4R\Lambda_{r}\epsilon^{-1})\big{)}^{\frac{1}{2}}\,\mathrm{d}\epsilon\] \[\leq 4n^{-\frac{1}{2}}+12n^{-\frac{1}{2}}M_{r}\big{(}cN_{\theta}\log( 4R\Lambda_{r}n^{\frac{1}{2}})\big{)}^{\frac{1}{2}}\] \[\leq 4n^{-\frac{1}{2}}+cn^{-\frac{1}{2}}W^{4L-4}R^{4L}N_{\theta}^{ \frac{1}{2}}\left(\log^{\frac{1}{2}}R+\log^{\frac{1}{2}}\Lambda_{r}+\log^{\frac{1} {2}}n\right)\] \[\leq cn^{-\frac{1}{2}}R^{4L}N_{\theta}^{4L-\frac{7}{2}}\big{(}\log^{\frac{1}{2}}R +\log^{\frac{1}{2}}N_{\theta}+\log^{\frac{1}{2}}n\big{)}.\] Similarly, repeating the preceding argument leads to \[\mathfrak{R}_{n}(\mathcal{H}_{d}) \leq cn^{-\frac{1}{2}}R^{2}N_{\theta}^{\frac{5}{2}}(\log^{\frac{1}{2 }}R+\log^{\frac{1}{2}}N_{\theta}+\log^{\frac{1}{2}}n),\] \[\mathfrak{R}_{n}(\mathcal{H}_{n}) \leq cn^{-\frac{1}{2}}R^{2L}N_{\theta}^{2L-\frac{3}{2}}(\log^{ \frac{1}{2}}R+\log^{\frac{1}{2}}N_{\theta}+\log^{\frac{1}{2}}n).\] Finally, the desired result follows from the PAC-type generalization bound in Lemma 4.3. Then combining Lemma 4.2 with Lemma 4.7 yields the following error estimate. Thus, by choosing the numbers \(N_{r}\), \(N_{d}\) and \(N_{n}\) of sampling points sufficiently large, the squared \(L^{2}(\Omega)\) error of the SEPINN approximation can be made about \(O(\epsilon^{2})\). **Theorem 4.1**.: _Fix a tolerance \(\epsilon>0\), and let \((\widehat{w}_{\theta},\widehat{\gamma})\in\mathcal{W}_{\epsilon}\times I_{\gamma}\) be a minimizer to the empirical loss \(\widehat{\mathcal{L}}_{\boldsymbol{\sigma}}(w_{\theta};\gamma)\) in (3.10). Then for any small \(\tau\), with the statistical errors \(e_{r}\), \(e_{d}\) and \(e_{n}\) from Lemma 4.7, the following error estimate holds with probability at least \(1-3\tau\):_ \[\|u^{*}-\widehat{u}\|_{L^{2}(\Omega)}^{2}\leq c(\boldsymbol{\sigma})\big{(} \epsilon^{2}+e_{r}+e_{d}+e_{n}\big{)}.\] ## 5 Numerical experiments Now we present numerical examples (with zero Dirichlet and/or Neumann boundary condition) to illustrate SEPINN, and compare it with existing PINN-type solvers. In the training, \(N_{r}=10,000\) points in the domain \(\Omega\) and \(N_{b}=800\) points on the boundary \(\partial\Omega\) are selected uniformly at random to form the empirical loss \(\widehat{\mathcal{L}}_{\boldsymbol{\sigma}}\), unless otherwise specified. In the path-following (PF) strategy, we take an increasing factor \(q=1.5\). All numerical experiments were conducted on a personal laptop (Windows 10, with RAM 8.0GB, Intel(R) Core(TM) i7-10510U CPU, 2.3 GHz), using Python 3.9.7 and the PyTorch framework. The gradient of the DNN output \(w_{\theta}(x)\) with respect to the input \(x\) (i.e., spatial derivative) and that of the loss \(\widehat{\mathcal{L}}_{\boldsymbol{\sigma}}\) with respect to \(\theta\) are computed via automatic differentiation [7], as implemented by torch.autograd. For SEPINN and SEPINN-C (based on cutoff), we minimize the loss \(\widehat{\mathcal{L}}_{\boldsymbol{\sigma}}\) in two stages: first determine the coefficients \(\boldsymbol{\gamma}_{N}\), and then reduce the boundary error and refine the DNN approximation \(w_{\theta}\) (of the regular part \(w\)) by the PF strategy. We use different optimizers at these two stages. 
First, we minimize the loss \(\widehat{\mathcal{L}}_{\boldsymbol{\sigma}}(w_{\theta};\boldsymbol{\gamma})\) in both \(\theta\) and \(\boldsymbol{\gamma}\) using Adam [37], with the default setting (tolerance: 1.0e-8, no box constraint, maximum iteration number: 1000); then, we minimize \(\widehat{\mathcal{L}}_{\boldsymbol{\sigma}}(w_{\theta};\widehat{\gamma}^{*})\) (with \(\boldsymbol{\gamma}\) fixed at \(\widehat{\gamma}^{*}\)) using limited memory BFGS (L-BFGS) [11], with the default setting (tolerance: 1.0e-9, no box constraint, maximum iteration number: 2500). For SEPINN-N, we employ only L-BFGS [11]. To measure the accuracy of an approximation \(\hat{w}\) of \(w^{*}\), we use the relative \(L^{2}(\Omega)\)-error \(e=\|w^{*}-\hat{w}\|_{L^{2}(\Omega)}/\|w^{*}\|_{L^{2}(\Omega)}\). The stopping condition of the PF strategy is set to \(e<1.00\)e-3 and \(\sigma_{d}^{(k)},\sigma_{n}^{(k)}\leq\sigma^{*}\), for some fixed \(\sigma^{*}>0\). The first condition ensures that \(\hat{w}\) can achieve the desired accuracy and the second one terminates the iteration after a finite number of loops. The detailed hyper-parameter setting of the PF strategy is listed in Table 5.1. The Python code for reproducing the numerical experiments will be made available at [https://github.com/hhjc-web/SEPINN.git](https://github.com/hhjc-web/SEPINN.git). First, we showcase the approach on an L-shaped domain [15, Example 5.2]. **Example 5.1**.: _The domain \(\Omega=(-1,1)^{2}\backslash\left([0,1)\times(-1,0]\right)\). Set \(\rho=1\) and \(R=\frac{1}{2}\) in (3.4), and take the source_ \[f=\begin{cases}\sin(2\pi x)\left[2\pi^{2}\left(y^{2}+2y\right)\left(y^{2}-1 \right)-\left(6y^{2}+6y-1\right)\right]-\Delta\left(\eta_{\rho}s\right),&-1 \leq y\leq 0,\\ \sin(2\pi x)\left[2\pi^{2}\left(-y^{2}+2y\right)\left(y^{2}-1\right)-\left(-6 y^{2}+6y+1\right)\right]-\Delta\left(\eta_{\rho}s\right),&0\leq y\leq 1,\end{cases}\] _with the singular function \(s=r^{\frac{2}{3}}\sin\left(\frac{2\theta}{3}\right)\), and \(\Gamma_{D}=\partial\Omega\). The exact solution \(u\) of the problem is given by \(u=w+\eta_{\rho}s\), with the regular part \(w\) given by_ \[w=\begin{cases}\sin(2\pi x)\left(\frac{1}{2}y^{2}+y\right)\left(y^{2}-1\right),& -1\leq y\leq 0,\\ \sin(2\pi x)\left(-\frac{1}{2}y^{2}+y\right)\left(y^{2}-1\right),&0\leq y\leq 1.\end{cases}\] Here the regular part \(w\) belongs to \(H^{2}(\Omega)\) but not to \(H^{3}(\Omega)\), and \(u\) lies in \(H^{1}(\Omega)\) but not in \(H^{2}(\Omega)\). In SEPINN, we employ a 2-20-20-20-1 DNN (3 hidden layers, each having 20 neurons). The first stage of the PF strategy gives an estimate \(\widehat{\gamma}^{*}=0.9958\), and the final prediction error \(e\) after the second stage is 3.43e-3. Fig. 1 shows that the pointwise error of the SEPINN approximation is small and the accuracy around the singularity at the reentrant corner is excellent. In contrast, applying PINN and DRM directly fails to yield satisfactory results near the reentrant corner because of the presence of the singular term \(r^{\frac{2}{3}}\sin\frac{2\theta}{3}\), consistent with the approximation theory of DNNs for singular functions [30]. DRM shows larger errors over the whole domain, not just in the vicinity of the corner. By adaptively adjusting the empirical loss, SAPINN and FIPINN can yield more accurate approximations than PINN, but the error around the singularity is still large. 
Numerically, FIPINN can adaptively add sampling points near the corner but does not achieve high concentration there, which limits the accuracy of the final DNN approximation. This shows the advantage of explicitly incorporating analytic insights into SEPINN. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline Example & Method & \(\sigma^{*}\) & \(\boldsymbol{\sigma}^{(1)}\) & \(\boldsymbol{\sigma}^{(K)}\) & \(\widehat{\gamma}^{*}\) & \(e\) \\ \hline 5.1 & SEPINN & 800 & 100 & 759.4 & 0.9958 & 3.43e-3 \\ \hline 5.2 & SEPINN & 800 & (100,100) & (759.4,759.4) & 1.0023 & 3.48e-3 \\ \hline 5.3 & SEPINN-C & 7000 & 2000 & 6750 & Table 5.2 & 3.93e-2 \\ & SEPINN-N & 7000 & 400 & 6834.4 & – – & 2.05e-2 \\ \hline 5.4 & SEPINN-C & 1000 & (100,100) & (759.4,759.4) & Table 5.4 & 2.08e-2 \\ & SEPINN-N & 4000 & (100,400) & (759.4,3037.5) & – – & 3.83e-2 \\ \hline 5.5 & SEPINN & 800 & 100 & 759.4 & 0.9939 & 1.02e-2 \\ & SEPINN-C & 2000 & 400 & 759.4 & Table 5.5 & 3.30e-2 \\ & SEPINN-N & 4000 & 400 & 3037.5 & – & 2.55e-2 \\ \hline 5.6 & SEPINN & 600 & 50 & 569.5 & Table 5.6 & – – \\ \hline \end{tabular} \end{table} Table 5.1: The hyper-parameters for the PF strategy for SEPINN (2D) and SEPINN-C and SEPINN-N (3D) for Examples 5.1–5.6. The notation \(\widehat{\gamma}^{*}\) denotes the estimated stress intensity factor, and \(e\) the prediction error. Figure 1: The numerical approximations for Example 5.1 by SEPINN, SAPINN, FIPINN, PINN and DRM. From top to bottom: exact solution, DNN approximation and pointwise error. To shed further insights into the methods, we show in Fig. 2 the training dynamics of the empirical loss \(\widehat{\mathcal{L}}\) and relative error \(e\), where \(i\) denotes the total iteration index along the PF loops. For all methods, the loss \(\widehat{\mathcal{L}}\) and error \(e\) both decay steadily as the iteration proceeds, indicating stable convergence of the optimizer, but SEPINN enjoys the fastest decay and smallest error \(e\), due to the improved regularity of \(w^{*}\). The final error \(e\) saturates at around \(10^{-3}\) for SEPINN and \(10^{-2}\) for PINN, SAPINN and FIPINN, but only \(10^{-1}\) for DRM. Indeed, the accuracy of neural PDE solvers tends to stagnate at a level of \(10^{-2}\sim 10^{-3}\)[53, 60, 62, 17], which differs markedly from more traditional solvers. We next investigate a mixed boundary value problem, adapted from [14, Example 1]. **Example 5.2**.: _The domain \(\Omega\) is the unit square \(\Omega=(0,1)^{2}\), \(\Gamma_{N}=\{(x,0):x\in(0,\frac{1}{2})\}\) and \(\Gamma_{D}=\partial\Omega\backslash\Gamma_{N}\). The singular function \(s=r^{\frac{1}{2}}\sin\frac{\theta}{2}\) in the local polar coordinates \((r,\theta)\) at \((\frac{1}{2},0)\). Set \(\rho=1\) and \(R=\frac{1}{4}\) in (3.4), the source \(f=-\sin(\pi x)(-\pi^{2}y^{2}(y-1)+6y-2)-\Delta(\eta_{\rho}s).\) The exact solution \(u=\sin(\pi x)y^{2}(y-1)+\eta_{\rho}s\), with the regular part \(w=\sin(\pi x)y^{2}(y-1)\) and stress intensity factor \(\gamma=1\)._ This problem has a geometric singularity at the point \((\frac{1}{2},0)\), where the boundary condition changes from Dirichlet to Neumann with an interior angle \(\omega=\pi\). We employ a 2-10-10-10-1 DNN (with 3 hidden layers, each having 10 neurons). The first stage of the PF strategy gives an estimate \(\widehat{\gamma}^{*}=1.0023\), and the prediction error \(e\) after the second stage is 3.48e-3. The singularity at the crack point is accurately resolved by SEPINN, cf. Fig. 
3, which also shows the approximations by DRM and other PINN techniques. The overall solution accuracy is very similar to Example 5.1, and SEPINN achieves the smallest error. SAPINN and FIPINN improve the standard PINN, but still suffer from large errors near the singularity. Figure 3: The numerical approximations of Example 5.2 by the proposed SEPINN (error: 3.48e-3), SAPINN (error: 4.40e-2), FIPINN (error: 3.65e-2), PINN (error: 7.33e-2) and DRM (error: 1.86e-1). From top to bottom: analytic solution, DNN approximation and pointwise error. Figure 2: Training dynamics for SEPINN and benchmark methods: (a) the decay of the empirical loss \(\widehat{\mathcal{L}}\) versus the iteration index \(i\) (counted along the path-following trajectory) and (b) the error \(e\) versus \(i\). The next example is a 3D Poisson equation adapted from [52, Example 1]. **Example 5.3**.: _Let \(\Omega_{0}=(-1,1)^{2}\backslash\ ([0,1)\times(-1,0])\), and the domain \(\Omega=\Omega_{0}\times(-1,1)\). Define_ \[\Phi(r,z)=-2\arctan\frac{\mathrm{e}^{-\pi r}\sin\pi z}{1+\mathrm{e}^{-\pi r} \cos\pi z}=2\sum_{n=1}^{\infty}\frac{(-1)^{n}\mathrm{e}^{-\pi nr}\sin n\pi z} {n}, \tag{5.1}\] _and set \(\rho=1\) and \(R=\frac{1}{2}\) in (3.4), the source \(f=6x(y-y^{3})(1-z^{2})+6y(x-x^{3})(1-z^{2})+2(y-y^{3})(x-x^{3})-\Delta(\Phi\eta _{\rho}s)\), with the singular function \(s=r^{\frac{2}{3}}\sin(\frac{2\theta}{3})\), and a zero Dirichlet boundary condition. The exact solution \(u\) is given by \(u=(x-x^{3})(y-y^{3})(1-z^{2})+\Phi\eta_{\rho}s\)._ It follows from (5.1) that the coefficients \(\gamma_{n}^{*}\) are given by \(\gamma_{0}^{*}=0\) and \(\gamma_{n}^{*}=(-1)^{n}\frac{2}{n}\) for \(n\in\mathbb{N}\). In SEPINN-C (cutoff), we take a truncation level \(N=20\) to approximate the first \(N+1\) coefficients in the series (5.1), and a 3-10-10-10-1 DNN to approximate \(w\). In the first stage, we employ a learning rate 1.0e-3 for the DNN parameters \(\theta\) and 8.0e-3 for the coefficients \(\boldsymbol{\gamma}_{N}\), and the prediction error \(e\) after the second stage is 3.93e-2. We present the slices at \(z=\frac{1}{2}\) and \(z=\frac{1}{4}\) in Fig. 4. The true and estimated values of \(\boldsymbol{\gamma}_{n}\) are given in Table 5.2: the first few terms of the expansion are well approximated, but as the index \(n\) gets larger, the approximations \(\widehat{\gamma}_{n}^{*}\) become less accurate. The error indeed decreases initially as the truncation level \(N\) increases, but it does not consistently decrease with \(N\), cf. Table 5.3. The difference between the empirical observation and the theoretical prediction in Theorem 3.3 is possibly due to the optimization error during the training. Next we present the SEPINN-N approximation. We use two 4-layer DNNs, both of size 3-10-10-10-1, for \(w\) and \(\Phi\). The PF strategy is run with a maximum of 2500 iterations for each fixed \(\boldsymbol{\sigma}\), and the final prediction error \(e\) is 2.05e-2. The maximum error of the SEPINN-N approximation is slightly smaller, and both can give excellent approximations. Fig. 5 compares the training dynamics for SEPINN-C and SEPINN-N. Fig. 5 (a) shows the convergence of the first few flux intensity factors \(\boldsymbol{\gamma}\), all initialized to \(1\). Within hundreds of iterations (\(\leq 1000\)), the iterates converge steadily to the exact values (and hence we set the maximum iteration number for \(\boldsymbol{\gamma}\) to \(1000\)). To accurately approximate \(w\), more iterations are needed. Fig. 
5 (b) shows the training dynamics for the DNN \(\Phi_{\zeta}\), where \(e_{\Phi}=\|\Phi^{*}-\Phi_{\zeta}\|_{L^{2}(G)}/\|\Phi^{*}\|_{L^{2}(G)}\) (\(G\) is the support of the cut-off function \(\eta_{\rho}\) and \(\Phi^{*}\) is the exact one). The error \(e_{\Phi}\) eventually decreases to \(10^{-2}\). The entire training process of the two methods is similar, cf. Figs. 5 (c) and (d). SEPINN-N takes more iterations than SEPINN-C, but SEPINN-C actually takes longer training time: SEPINN-C requires evaluating the coefficients \(\gamma_{n}\), which involves taking the Laplacian of the singular terms, whereas in SEPINN-N, all parameters are trained together. The next example involves a combination of four singularities. **Example 5.4**.: _Let the domain \(\Omega=(-\pi,\pi)^{3}\), \(\Gamma_{N}=\{(x,-\pi,z):x\in(-\pi,0),z\in(-\pi,\pi)\}\cup\{(-\pi,y,z):y\in(- \pi,0),z\in(-\pi,\pi)\}\cup\{(x,\pi,z):x\in(0,\pi),z\in(-\pi,\pi)\}\cup\{(\pi,y,z):y\in(0,\pi),z\in(-\pi,\pi)\}\) and \(\Gamma_{D}=\partial\Omega\backslash\Gamma_{N}\). Let the vertices \(\boldsymbol{v}_{1}:(0,-\pi)\), \(\boldsymbol{v}_{2}:(\pi,0)\), \(\boldsymbol{v}_{3}:(0,\pi)\) and \(\boldsymbol{v}_{4}:(-\pi,0)\). This problem has four geometric singularities at the boundary edges \(\boldsymbol{v}_{j}\times(-\pi,\pi)\), where the type of the boundary condition changes from Dirichlet to Neumann with interior angles \(\omega_{j}=\pi\), \(j=1,2,3,4\). Define_ \[\Phi_{j}(r_{j},z)=r_{j}-\ln(2\cosh r_{j}-2\cos z)=\sum_{n=1}^{\infty}\frac{2}{n} \mathrm{e}^{-nr_{j}}\cos{nz},\quad j=1,2,3,4, \tag{5.2}\] _and set \(\rho_{j}=1\) and \(R=\frac{1}{2}\) in (3.4), the source \(f=\big{(}(\frac{4}{\pi^{2}}-\frac{12x^{2}}{\pi^{4}})(1-\frac{y^{2}}{\pi^{2}})^{2}+ (\frac{4}{\pi^{2}}-\frac{12y^{2}}{\pi^{4}})(1-\frac{x^{2}}{\pi^{2}})^{2}+( 1-\frac{x^{2}}{\pi^{2}})^{2}(1-\frac{y^{2}}{\pi^{2}})^{2}\big{)}\cos{z}-\sum_{j=1}^{4 }\Delta(\Phi_{j}\eta_{\rho_{j}}s_{j})\), with the singular functions \(s_{j}=r_{j}^{\frac{1}{2}}\cos(\frac{\theta_{j}}{2})\) for \(j=1,3\) and \(s_{j}=r_{j}^{\frac{1}{2}}\sin(\frac{\theta_{j}}{2})\) for \(j=2,4\). The exact solution \(u\) is given by \(u=(1-\frac{x^{2}}{\pi^{2}})^{2}(1-\frac{y^{2}}{\pi^{2}})^{2}\cos{z}+\sum_{j=1} ^{4}\Phi_{j}\eta_{\rho_{j}}s_{j}\)._ This example requires learning a large number of parameters regardless of the method: for SEPINN-C, we have to expand the four singular functions and learn their coefficients, whereas for SEPINN-N, we employ five networks to approximate \(w\) and \(\Phi_{\zeta,j}\) (\(j=1,2,3,4\)). We take the number of sampling points \(N_{d}=800\) and \(N_{n}=1200\) on the boundaries \(\Gamma_{D}\) and \(\Gamma_{N}\), respectively, and \(N_{r}=10000\) in the domain \(\Omega\). First we present the SEPINN-C approximation. From (5.2), we have the explicit coefficients \(\gamma_{0}^{*}=0\) and \(\gamma_{n}^{*}=\frac{2}{n}\) for \(n\in\mathbb{N}\). In SEPINN-C, we take a truncation level \(N=15\) for all four singularities. In the first stage, we take a learning rate 2.0e-3 for \(\theta\), and learning rates \(r_{1}=r_{3}=1.1\)e-2 and \(r_{2}=r_{4}=7.0\)e-3 for the coefficients of the four singularities. The final prediction error \(e\) is 2.08e-2, and the estimated coefficients \(\boldsymbol{\widehat{\gamma}}^{*}\) are shown in Table 5.4. The first few coefficients are well approximated, but the high-order ones are less accurate. 
\begin{table} \begin{tabular}{c|c c c c} \hline \hline \(N\) & 5 & 10 & 15 & 20 \\ \hline \(e\) & 8.15e-2 & 5.96e-2 & 3.48e-2 & 3.93e-2 \\ \hline rate & & 0.45 & 1.32 & -0.42 \\ \hline \hline \end{tabular} \end{table} Table 5.3: Convergence of the SEPINN-C approximation with respect to the truncation level \(N\) for Example 5.3. Figure 5: The training dynamics of SEPINN-C and SEPINN-N: (a) the variation of the first few coefficients versus iteration index \(i\), (b) the error \(e_{\Phi}\) versus iteration index \(i\), (c) the decay of the loss \(\widehat{\mathcal{L}}\) versus iteration index \(i\), (d) the error \(e\) versus iteration index \(i\). Next we show the SEPINN-N approximation, obtained with five 4-layer 3-10-10-10-1 DNNs to approximate \(w\) and \(\Phi_{\zeta,j}\) (\(j=1,2,3,4\)) separately. The training process suffers from the following problem: using only L-BFGS tends to get trapped in a local minimum of the loss \(\widehat{\mathcal{L}}_{\boldsymbol{\sigma}}\), which persists even after extensive tuning of the hyper-parameters. Therefore, we take the following approach: we first train the DNNs with Adam (learning rate \(r=4.0\)e-3) for 1000 iterations and then switch to L-BFGS (learning rate \(r=0.2\)) for a maximum of 4000 iterations. This training strategy greatly improves the accuracy. The prediction error \(e\) after the second stage is 3.83e-2. The approximation is fairly accurate but slightly less accurate than that by SEPINN-C in Fig. 6, in both \(L^{\infty}(\Omega)\) and \(L^{2}(\Omega)\) norms. \begin{table} \begin{tabular}{c|c|c c c c} \hline \(n\backslash\gamma_{j,n}\) & exact & \(\gamma_{1,n}\) & \(\gamma_{2,n}\) & \(\gamma_{3,n}\) & \(\gamma_{4,n}\) \\ \hline 0 & 0.0000e0 & 2.4763e-4 & 2.9810e-4 & 1.1018e-3 & -1.0403e-4 \\ \hline 1 & 2.0000e0 & 1.9985e0 & 1.9932e0 & 1.9902e0 & 1.9968e0 \\ \hline 2 & 1.0000e0 & 1.0087e0 & 1.0020e0 & 9.9630e-1 & 1.0005e0 \\ \hline 3 & 6.6667e-1 & 6.7281e-1 & 6.6629e-1 & 6.5206e-1 & 6.5664e-1 \\ \hline 4 & 5.0000e-1 & 5.1538e-1 & 4.8951e-1 & 4.8902e-1 & 4.8838e-1 \\ \hline 5 & 4.0000e-1 & 4.0184e-1 & 3.9272e-1 & 3.9834e-1 & 3.7305e-1 \\ \hline 6 & 3.3333e-1 & 3.1959e-1 & 3.1089e-1 & 3.2622e-1 & 2.9855e-1 \\ \hline 7 & 2.8571e-1 & 2.5984e-1 & 2.7772e-1 & 2.3514e-1 & 2.4714e-1 \\ \hline 8 & 2.5000e-1 & 1.8928e-1 & 2.3612e-1 & 2.1324e-1 & 1.9534e-1 \\ \hline 9 & 2.2222e-1 & 1.3174e-1 & 1.6849e-1 & 3.0520e-1 & 1.5977e-1 \\ \hline 10 & 2.0000e-1 & 4.9766e-2 & 1.3174e-1 & 2.4587e-1 & 1.2393e-1 \\ \hline 11 & 1.8182e-1 & 1.7544e-3 & 1.4534e-1 & 2.0706e-1 & 1.1372e-1 \\ \hline 12 & 1.6667e-1 & -6.0320e-2 & 1.3719e-1 & 2.9289e-1 & 9.5344e-2 \\ \hline 13 & 1.5385e-1 & -8.6117e-2 & 1.0655e-1 & 2.1058e-1 & 5.5091e-2 \\ \hline 14 & 1.4286e-1 & -5.5869e-2 & 1.1616e-1 & 1.2455e-1 & 5.2139e-2 \\ \hline 15 & 1.3333e-1 & -6.2956e-2 & 6.1959e-2 & 2.2816e-1 & 1.4306e-2 \\ \hline \end{tabular} \end{table} Table 5.4: The comparison between true and estimated values of the parameters \(\gamma_{j,n}\) for Example 5.4, with 5 significant digits. Figure 6: The SEPINN-C and SEPINN-N approximations for Example 5.4, slices at \(z=\frac{\pi}{2}\) (top) and \(z=\pi\) (bottom). The next example illustrates SEPINN on the Helmholtz equation. **Example 5.5**.: _Consider the following problem in both 2D and 3D domains:_ \[-\Delta u+\pi^{2}u=f,\quad\text{in}\quad\Omega\] _with the source \(f\) taken to be \(f=-(4x^{3}+6x)y\mathrm{e}^{x^{2}-1}(\mathrm{e}^{y^{2}-1}-1)-(4y^{3}+6y)x \mathrm{e}^{y^{2}-1}(\mathrm{e}^{x^{2}-1}-1)+\pi^{2}xy(\mathrm{e}^{x^{2}-1}-1) (\mathrm{e}^{y^{2}-1}-1)-\Delta(\mathrm{e}^{-\pi r}\eta_{\rho}s)+\pi^{2} \mathrm{e}^{-\pi r}\eta_{\rho}s\) for the 2D domain \(\Omega=\Omega_{1}=(-1,1)^{2}\backslash([0,1)\times(-1,0])\), and \(f=-((4x^{3}+6x)y\mathrm{e}^{x^{2}-1}(\mathrm{e}^{y^{2}-1}-1)+(4y^{3}+6y)x \mathrm{e}^{y^{2}-1}(\mathrm{e}^{x^{2}-1}-1)-2\pi^{2}xy(\mathrm{e}^{x^{2}-1}-1 )(\mathrm{e}^{y^{2}-1}-1))\sin(\pi z)-\Delta(\mathrm{e}^{-\sqrt{2}\pi r}\eta_{ \rho}s\sin(\pi z))+\pi^{2}\mathrm{e}^{-\sqrt{2}\pi r}\eta_{\rho}s\sin(\pi z)\) for the 3D domain \(\Omega=\Omega_{2}=\Omega_{1}\times(-1,1)\), with \(\rho=1\) and \(R=\frac{1}{2}\) in (3.4), and \(s=r^{\frac{2}{3}}\sin(\frac{2\theta}{3})\), \(\Gamma_{D}=\partial\Omega\). The exact solution \(u\) is given by_ \[u=\begin{cases}xy(\mathrm{e}^{x^{2}-1}-1)(\mathrm{e}^{y^{2}-1}-1)+ \mathrm{e}^{-\pi r}\eta_{\rho}s,\quad\Omega=\Omega_{1},\\ xy(\mathrm{e}^{x^{2}-1}-1)(\mathrm{e}^{y^{2}-1}-1)\sin(\pi z)+\mathrm{e }^{-\sqrt{2}\pi r}\eta_{\rho}s\sin(\pi z),\quad\Omega=\Omega_{2}.\end{cases}\] For the 2D problem, we employ a 2-20-20-20-1 DNN. The first stage of the PF strategy yields an estimate \(\widehat{\gamma}^{*}=0.9939\), and the final prediction error \(e\) is 1.02e-2. This accuracy is comparable with that for the Poisson problem in Example 5.1. In the 3D case, we have the coefficients \(\gamma_{1}^{*}=1\) and \(\gamma_{n}^{*}=0\) otherwise. In SEPINN-C, we take \(N=10\) terms. In the first stage, we set the learning rate 2.0e-3 for the DNN parameters \(\theta\) and 4.0e-3 for the coefficients \(\boldsymbol{\gamma}\). The estimated \(\widehat{\boldsymbol{\gamma}}^{*}\) are shown in Table 5.5, which accurately approximate the exact coefficients, and the prediction error \(e\) after the second stage is 3.30e-2. In SEPINN-N, we use two 4-layer DNNs, of sizes 3-20-20-20-1 and 3-10-10-10-1, to approximate \(w\) and \(\Phi\), respectively, and optimize the loss with L-BFGS with a maximum of 2500 iterations. The final prediction error \(e\) is 2.55e-2. The results are shown in Fig. 7. SEPINN-C and SEPINN-N give very similar pointwise errors, and the singularity at the reentrant corner is accurately resolved, due to singularity enrichment. This shows clearly the flexibility of SEPINN for the screened Poisson equation with geometric singularities. Last, we illustrate SEPINN on the Laplacian eigenvalue problem in an L-shaped domain. **Example 5.6**.: _On the domain \(\Omega=(-1,1)^{2}\backslash([0,1)\times(-1,0])\), consider the eigenvalue problem:_ \[\begin{cases}-\Delta u=\mu u,&\text{in }\Omega,\\ u=0,&\text{on }\partial\Omega,\end{cases}\] _where \(\mu>0\) is the eigenvalue and \(u\not\equiv 0\) is the corresponding eigenfunction._ The eigenvalue problem on an L-shaped domain has been studied extensively [24, 44, 61]. The eigenfunctions may have singularities around the reentrant corner, but analytic forms appear to be unavailable: the first eigenfunction \(u_{1}\) has a leading singular term \(r^{\frac{2}{3}}\sin(\frac{2\theta}{3})\), the second one \(u_{2}\) has \(r^{\frac{4}{3}}\sin(\frac{4\theta}{3})\)[24], and the third \(u_{3}\) is analytic, given by \(u_{3}(x_{1},x_{2})=\sin(\pi x_{1})\sin(\pi x_{2})\). To illustrate the flexibility of SEPINN, we compute the first two leading eigenpairs \((\mu_{1},u_{1})\) and \((\mu_{2},u_{2})\). 
The preceding discussions indicate \(u_{1}\in H^{1}(\Omega)\) and \(u_{2}\in H^{2}(\Omega)\), and we split the leading singularity from both functions in order to benefit from SEPINN, i.e., \(u_{i}=w_{i}+\gamma_{i}\eta_{\rho}s\), with \(s=r^{\frac{2}{3}}\sin(\frac{2\theta}{3})\) and \(w_{i}\) approximated by a DNN. \begin{table} \begin{tabular}{c|c|c|c|c|c|c} \hline & \(\gamma_{0}\) & \(\gamma_{1}\) & \(\gamma_{2}\) & \(\gamma_{3}\) & \(\gamma_{4}\) & \(\gamma_{5}\) \\ \hline estimate & 5.6180e-4 & 9.8857e-1 & 5.8005e-3 & -2.3331e-3 & 3.3624e-3 & -2.3693e-3 \\ \hline & \(\gamma_{6}\) & \(\gamma_{7}\) & \(\gamma_{8}\) & \(\gamma_{9}\) & \(\gamma_{10}\) & \\ \hline estimate & 2.7073e-3 & -2.9761e-3 & -1.8276e-3 & -9.7370e-4 & 1.1867e-3 & \\ \hline \end{tabular} \end{table} Table 5.5: The estimated coefficients \(\gamma_{n}\) in the 3D case for Example 5.5, with five significant digits. Figure 7: 2D problem with SEPINN (top) and 3D problem with SEPINN-C and SEPINN-N (bottom) for Example 5.5, with slice at \(z=\frac{1}{2}\). Following the ideas in [8] and SEPINN, we employ the following loss \[\mathcal{L}_{\boldsymbol{\sigma}}(w_{1},w_{2};\gamma_{1},\gamma_{2})= \sum_{i=1}^{2}\Big{(}\|\Delta(w_{i}+\gamma_{i}\eta_{\rho}s)+R(u_{i} )(w_{i}+\gamma_{i}\eta_{\rho}s)\|_{L^{2}(\Omega)}^{2}+\sigma_{d}\|w_{i}\|_{L^{ 2}(\partial\Omega)}^{2} \tag{5.3}\] \[+\alpha\big{|}\|w_{i}+\gamma_{i}\eta_{\rho}s\|_{L^{2}(\Omega)}^{2 }-1\big{|}+\nu_{i}R(u_{i})\Big{)}+\beta\left|(w_{1}+\gamma_{1}\eta_{\rho}s,w_{ 2}+\gamma_{2}\eta_{\rho}s)_{L^{2}(\Omega)}\right|,\] where \(\alpha\), \(\nu_{i}\) (\(i=1,2\)) and \(\beta\) are hyper-parameters and \(R(u_{i})\) is the Rayleigh quotient: \[R(u_{i})=\|\nabla u_{i}\|_{L^{2}(\Omega)}^{2}/\|u_{i}\|_{L^{2}( \Omega)}^{2},\quad i=1,2, \tag{5.4}\] which estimates the eigenvalue \(\mu_{i}\) using the eigenfunction \(u_{i}\), by Rayleigh's principle. We employ an alternating iteration method: we first approximate the eigenfunction \(u_{i}\) by minimizing the loss (5.3) and then update the eigenvalue \(\mu_{i}\) by (5.4), which is then substituted back into (5.3). These two steps are repeated until convergence. In SEPINN, we employ two 2-10-10-10-10-10-10-1 DNNs to approximate the regular parts \(w_{1}\) and \(w_{2}\), and take \(\alpha=100\), \(\beta=135\), \(\nu_{1}=0.02\) and \(\nu_{2}=0.01\), and determine the parameter \(\sigma_{d}\) by the PF strategy. We use Adam with a learning rate 2e-3 for all DNN parameters and \(\boldsymbol{\gamma}\). Table 5.6 shows that singularity enrichment helps solve the eigenvalue problem. Indeed, we can approximate \(u_{1}\) better and get more accurate eigenvalue estimates. Note that during the training process of the standard PINN, the DNN approximation actually directly approaches \(u_{2}\) and cannot capture \(u_{1}\), since the latter cannot be resolved accurately. Even a larger penalty \(\nu_{1}\) for the first eigenvalue does not improve the accuracy, while \(u_{2}\) can be approximated well since \(u_{2}\in H^{2}(\Omega)\). This is also seen from the estimated stress intensity factors in Table 5.6, which are much larger for \(u_{1}\) than for \(u_{2}\), indicating the stronger singularity of \(u_{1}\). ## 6 Conclusions In this work, we have developed a family of neural solvers for solving boundary value problems with geometric singularities, e.g., corner singularities and mixed boundary conditions in the 2D case, and edge singularities in the 3D case. 
The basic idea is to enrich the ansatz space spanned by deep neural networks with specially designed singular functions, which capture the leading singularities of the exact solution. In so doing, we can achieve a significantly improved convergence rate. Additionally, we provide preliminary theoretical guarantees for the approach. We present several variants of the method, depending on the specific scenario, and discuss extensions to the Helmholtz equation and eigenvalue problems. To the best of our knowledge, this is the first work systematically exploring the use of singularity enrichment in neural PDE solvers. The extensive numerical experiments indicate that the approach is highly effective and flexible, and works for a broad range of problem settings.
2305.19725
Direct Learning-Based Deep Spiking Neural Networks: A Review
The spiking neural network (SNN), as a promising brain-inspired computational model with binary spike information transmission mechanism, rich spatially-temporal dynamics, and event-driven characteristics, has received extensive attention. However, its intricately discontinuous spike mechanism brings difficulty to the optimization of the deep SNN. Since the surrogate gradient method can greatly mitigate the optimization difficulty and shows great potential in directly training deep SNNs, a variety of direct learning-based deep SNN works have been proposed and achieved satisfying progress in recent years. In this paper, we present a comprehensive survey of these direct learning-based deep SNN works, mainly categorized into accuracy improvement methods, efficiency improvement methods, and temporal dynamics utilization methods. In addition, we also divide these categorizations into finer granularities further to better organize and introduce them. Finally, the challenges and trends that may be faced in future research are prospected.
Yufei Guo, Xuhui Huang, Zhe Ma
2023-05-31T10:32:16Z
http://arxiv.org/abs/2305.19725v4
# Direct Learning-Based Deep Spiking Neural Networks: A Review ###### Abstract The spiking neural network (SNN), as a promising brain-inspired computational model with binary spike information transmission mechanism, rich spatially-temporal dynamics, and event-driven characteristics, has received extensive attention. However, its intricately discontinuous spike mechanism brings difficulty to the optimization of the deep SNN. Since the surrogate gradient method can greatly mitigate the optimization difficulty and shows great potential in directly training deep SNNs, a variety of direct learning-based deep SNN works have been proposed and achieved satisfying progress in recent years. In this paper, we present a comprehensive survey of these direct learning-based deep SNN works, mainly categorized into accuracy improvement methods, efficiency improvement methods, and temporal dynamics utilization methods. In addition, we also divide these categorizations into finer granularities further to better organize and introduce them. Finally, the challenges and trends that may be faced in future research are prospected. Spiking Neural Network, Brain-inspired Computation, Direct Learning, Deep Neural Network, Energy Efficiency, Spatial-temporal Processing ## 1 Introduction The Spiking Neural Network (SNN) has been recognized as one of the brain-inspired neural networks due to its bio-mimicry of brain neurons. It transmits information by firing binary spikes and can process information in a spatial-temporal manner (Fang et al., 2021; Wu et al., 2019; Zhang et al., 2020; Wu et al., 2019; Zhang et al., 2020). This event-driven, spatial-temporal processing manner makes the SNN very efficient and well suited to handling temporal signals, and it has thus received a lot of research attention, especially recently. Despite the energy efficiency and spatial-temporal processing advantages, it is challenging to train deep SNNs because the firing process of the SNN is non-differentiable, which makes it impossible to train SNNs directly via gradient-based optimization methods. At first, many works leveraged the spike-timing-dependent plasticity (STDP) approach (Lobov et al., 2020), which is inspired by biology, to update the SNN weights. However, STDP cannot yet train large-scale networks, thus limiting the practical applications of the SNN. There are two widely used effective pathways to obtain deep SNNs up to now. First, the ANN-SNN conversion approach (Han and Roy, 2020; Li et al., 2021; Bu et al., 2022; Liu et al., 2022; Li and Zeng, 2022; Wang 
Previously related surveys (Wang et al., 2020; Zhang et al., 2022; Roy et al., 2019; Tavanaei et al., 2019; Ponulak and Kasinski, 2011; Yamazaki et al., 2022) have begun to classify existing works mainly based on the key components of SNNs: biological neurons, encoding methods, SNN structures, SNN learning mechanisms, software and hardware frameworks, datasets, and applications. Though such classification is intuitive to general readers, it is difficult for them to grasp the challenges and the landmark work involved. While in this survey, we provide a new perspective to summarize these related works, _i.e._, starting from analyzing the characteristics and difficulties of the SNN, and then classify them into i) accuracy improvement methods, ii) efficiency improvement methods, and iii) temporal dynamics utilization methods, based on the solutions for corresponding problems or the utilization of SNNs' advantages. Further, these categories are divided into finer granularities: i) accuracy improvement methods are subdivided as improving representative capabilities and relieving training difficulties; ii) efficiency improvement methods are subdivided as network compression techniques and sparse SNNs; iii) temporal dynamics utilization methods are subdivided as sequential learning and cooperating with neuromorphic cameras. In addition to the classification by using strengths or overcoming weaknesses of SNNs, these recent methods can also be divided into the neuron level, network structure level, and training technique level, according to where these methods actually work. The classifications and main techniques of these methods are listed in Table 1 and Table 2. Finally, some promising future research directions are provided. The organization of the remaining part is given as follows, section 2 introduces the preliminary for spiking neural networks. The characteristics and difficulties of the SNN are also analyzed in section 2. section 3 presents the recent advances falling into different categories. section 4 points out future research trends and concludes the review. ## 2 Preliminary Since the neuron models are not the focus of the paper, here, we briefly introduce the commonly used discretized Leaky Integrate-and-Fire (LIF) spiking neurons to show the basic characteristic and difficulties in SNNs, which can be formulated by \[U_{l}^{t}=\tau U_{l}^{t-1}+\mathbf{W}_{l}O_{l-1}^{t},\qquad U_{l}^{t}<V_{\rm th}, \tag{1}\] where \(U_{l}^{t}\) is the membrane potential at \(t\)-th time-step for \(l\)-th layer, \(O_{l-1}^{t}\) is the spike output from the previous layer, \(\mathbf{W}_{l}\) is the weight matrix at \(l\)-th layer, \(V_{th}\) is the firing threshold, and \(\tau\) is a time leak constant for the membrane potential, which is in \((0,1]\). When \(\tau\) is \(1\), the above equation will degenerate to the Integrate-and-Fire (IF) spiking neuron. **Characteristic 1**.: _Rich spatially-temporal dynamics. Seen from Equation 1, different from ANNs, SNNs enjoy the unique spatial-temporal dynamic in the spiking neuron model._ Then, when the membrane potential exceeds the firing threshold, it will fire a spike and then fall to resting potential, given by \[O_{l}^{t}=\left\{\begin{array}{ll}1,&\text{if }U_{l}^{t}\geq V_{\rm th}\\ 0,&\text{otherwise}\end{array}\right.. \tag{2}\] **Characteristic 2**.: _Efficiency. Since the output is a binary tensor, the multiplications of activations and weights can be replaced by additions, thus enjoying high energy efficiency. 
Furthermore, when there is no spike output generated, the neuron will keep silent. This event-driven mechanism can further save energy when implemented in neuromorphic hardware._ **Characteristic 3**.: _Limited representative ability. Obviously, transmitting information by quantizing the real-valued membrane potentials into binary output spikes will introduce quantization error in SNNs, thus causing information loss (Guo et al., 2022b; Wang et al., 2023). Furthermore, the binary spike feature map from a time-step cannot carry as much information as the real-valued one in ANNs (Guo et al., 2022d)._ These two problems limit the representative ability of the SNN to some extent. **Characteristic 4**.: _Non-differentiability. Another thorny problem in SNNs is the non-differentiability of the firing function._ To demonstrate this problem, we formulate the gradient at layer \(l\) by the chain rule, given by \[\frac{\partial L}{\partial\mathbf{W}_{l}}=\sum_{t}(\frac{\partial L}{\partial O _{l}^{t}}\frac{\partial O_{l}^{t}}{\partial U_{l}^{t}}+\frac{\partial L}{ \partial U_{l}^{t+1}}\frac{\partial U_{l}^{t+1}}{\partial U_{l}^{t}})\frac{ \partial U_{l}^{t}}{\partial\mathbf{W}_{l}}, \tag{3}\] where \(\frac{\partial O_{l}^{t}}{\partial U_{l}^{t}}\) is the gradient of the firing function at the \(t\)-th time-step for the \(l\)-th layer, which is \(0\) almost everywhere, while infinity at \(V_{\rm th}\). As a consequence, the gradient descent \((\mathbf{W}_{l}\leftarrow\mathbf{W}_{l}-\eta\frac{\partial L}{\partial\mathbf{ W}_{l}})\) either freezes or updates to infinity. Most existing direct learning-based SNN works focus on overcoming these difficulties or utilizing the advantages of SNNs. Boosting the representative ability and mitigating the non-differentiability can both improve the SNN's accuracy. From this perspective, we organize the recent advances in the SNN field as accuracy improvement methods, efficiency improvement methods, and temporal dynamics utilization methods. ## 3 Recent Advances In recent years, a variety of direct learning-based deep spiking neural networks have been proposed. Most of these methods aim at overcoming the intrinsic disadvantages of SNNs or exploiting their advantages. Based on this, in this section, we classify these methods into accuracy improvement methods, efficiency improvement methods, and temporal dynamics utilization methods. In addition, these classifications are also organized in different aspects with a comprehensive analysis. Table 1 and Table 2 summarize the surveyed SNN methods in different categories. Note that the direct learning methods can be divided into time-based methods and activation-based methods based on whether the gradient represents spike timing (time-based) or spike scale (activation-based) (Zhu et al., 2022c). In time-based methods, the gradients represent the direction in which the timing of a spike should be moved, _i.e._, moved leftward or rightward on the time axis. SpikeProp (Bohte et al., 2002) and its variants (Booij and tat Nguyen, 2005; Hong et al., 2019; Xu et al., 2013) all belong to this kind of method, and they adopt the negative inverse of the time derivative of the membrane potential function to approximate the derivative of the spike timing with respect to the membrane potential. Since most of the time-based methods would restrict each neuron to fire at most once, in (Zhou et al., 2021), the spike time is directly taken as the state of a neuron. Thus the relation of neurons can be modeled by the spike time and the SNN can be trained similarly to an ANN. 
Though the time-based methods enjoy less computation cost than the activation-based methods, and many works (Zhu et al., 2022; Zhang and Li, 2020) have greatly improved the accuracy in this line, it is still difficult to train deep time-based SNN models and to apply them to large-scale datasets, _e.g._, ImageNet. Considering these limits and our goal of summarizing recent deep SNNs, we mainly focus on activation-based methods in this paper.

Table 1: Overview of Direct Learning-Based Deep Spiking Neural Networks: Part I (the level at which each method operates, i.e., neuron level, network structure level, or training technique level, is discussed in the text).

| Type | Method | Key Technology |
| --- | --- | --- |
| Improving representative capabilities | LSNN (Bellec et al., 2018) | Adaptive threshold |
| | LTMD (Wang et al., 2022a) | Adaptive threshold |
| | BDETT (Ding et al., 2022) | Dynamic threshold |
| | PLIF (Fang et al., 2021b) | Learnable leak constant |
| | Plastic Synaptic Delays (Yu et al., 2022c) | Learnable leak constant |
| | Diet-SNN (Rathi and Roy, 2020) | Learnable leak constant & threshold |
| | DS-ResNet (Feng et al., 2022) | Multi-firing & act-before-ADD ResNet |
| | SNN-MLP (Li et al., 2022a) | Group LIF |
| | GLIF (Yao et al., 2022) | Unified gated LIF |
| | Augmented Spikes (Yu et al., 2022d) | Augmented spikes |
| | LIFB (Shen et al., 2023) | Leaky Integrate and Fire or Burst |
| | MT-SNN (Wang et al., 2023) | Multiple threshold approach |
| | SEW-ResNet (Fang et al., 2021a) | Act-before-ADD form-based ResNet |
| | MS-ResNet (Hu et al., 2021) | Pre-activation form-based ResNet |
| | AutoSNN (Na et al., 2022) | Neural architecture search |
| | SNASNet (Kim et al., 2022a) | Neural architecture search |
| | TA-SNN (Yao et al., 2021) | Attention mechanism |
| | STSC-SNN (Yu et al., 2022a) | Attention mechanism |
| | TCJA-SNN (Zhu et al., 2022b) | Attention mechanism |
| | Real Spike (Guo et al., 2022d) | Training-inference decoupled structure |
| | IM-Loss (Guo et al., 2022a) | Information maximization loss |
| | RecDis-SNN (Guo et al., 2022c) | Membrane potential distribution loss |
| | Distilling spikes (Kushawaha et al., 2021) | Knowledge distillation |
| | Local Tandem Learning (Yang et al., 2022) | Tandem learning |
| | sparse-KD (Xu et al., 2023a) | Knowledge distillation |
| | KDSNN (Xu et al., 2023b) | Knowledge distillation |
| | SNN distillation (Takuya et al., 2021) | Knowledge distillation |
| Relieving training difficulties | SuperSpike (Zenke and Ganguli, 2018) | Fixed surrogate gradient |
| | LISNN (Cheng et al., 2020) | Fixed surrogate gradient |
| | IM-Loss (Guo et al., 2022a) | Dynamic surrogate gradient |
| | Gradual surrogate gradient (Chen et al., 2022) | Dynamic surrogate gradient |
| | Differentiable Spike (Li et al., 2021b) | Learnable surrogate gradient |
| | SpikeDHS (Leng et al., 2022) | Differentiable surrogate gradient search |
| | DSR (Meng et al., 2022) | Differentiation on spike representation |
| | NSNN (Ma et al., 2023) | Noise-driven learning rule |
| | ReL-PSP (Zhang et al., 2022c) | Rectified postsynaptic potential function |
| | SEW-ResNet (Fang et al., 2021a) | Act-before-ADD form-based ResNet |
| | MS-ResNet (Hu et al., 2021) | Pre-activation form-based ResNet |
| | NeuNorm (Wu et al., 2019c) | Constructing auxiliary feature maps |
| | tdBN (Zheng et al., 2021) | Threshold-dependent batch normalization |
| | BNTT (Kim and Panda, 2021) | Temporal batch normalization through time |
| | PSP-BN (Ikegawa et al., 2022) | Postsynaptic potential normalization |
| | TEBN (Duan et al., 2022) | Temporal effective batch normalization |
| | RecDis-SNN (Guo et al., 2022c) | Membrane potential distribution loss |
| | TET (Deng et al., 2022) | Temporal regularization loss |
| | Tandem learning (Wu et al., 2021a) | Tandem learning |
| | Progressive tandem learning (Wu et al., 2021b) | Progressive tandem learning |
| | Joint A-SNN (Guo et al., 2023) | Joint training of ANN and SNN |

### Accuracy Improvement Methods

As aforementioned, the limited information capacity and the non-differentiability of the firing activity of the SNN cause accuracy loss on a wide range of tasks. Therefore, to mitigate this accuracy loss, a great number of methods devoted to improving the representative capabilities and relieving the training difficulties of SNNs have been proposed and have achieved successful improvements in the past few years.

#### 3.1.1 Improving representative capabilities

Two problems reduce the representative ability of the SNN: the firing activity induces information loss, which has been proved in (Guo et al., 2022b), and binary spike maps suffer from a limited information capacity, which has been proved in (Guo et al., 2022d). These problems can be mitigated on the neuron level, network structure level, and training technique level.

**On the neuron level.** A common way to boost the representative capability of the SNN is to make some hyper-parameters of the spiking neuron learnable. In LSNN (Bellec et al., 2018) and LTMD (Wang et al., 2022a), adaptive threshold spiking neurons were proposed to enhance the computing and learning capabilities of SNNs. Further, a novel bio-inspired dynamic energy-temporal threshold, which can be adjusted dynamically according to the input data, was introduced in BDETT (Ding et al., 2022). Some works adopted a learnable membrane time constant in spiking neurons (Yin et al., 2020; Zimmer et al., 2019; Fang et al., 2021b; Luo et al., 2022; Yu et al., 2022c). Combining these two manners, Diet-SNN (Rathi and Roy, 2020) simultaneously adopted a learnable membrane leak and firing threshold. There are also some works focusing on embedding more factors in the spiking neuron to improve its diversity. A multi-level firing (MLF) unit, which contains multiple LIF neurons with different-level thresholds and thus can generate more quantization spikes, was proposed in DS-ResNet (Feng et al., 2022). A full-precision LIF to communicate between patches in the Multi-Layer Perceptron (MLP), including horizontal LIF and vertical LIF in different directions, was proposed in SNN-MLP (Li et al., 2022a). SNN-MLP used group LIF to extract better local features. In GLIF (Yao et al., 2022), to enlarge the representation space of spiking neurons, a unified gated leaky integrate-and-fire neuron was proposed to fuse different bio-features of different neuronal behaviors via embedded gating factors. In augmented spikes (Yu et al., 2022d), a special spiking neuron model was proposed to process augmented spikes, where additional information can be carried by the spike strength and latency.
This neuron model extends the computation with an additional dimension and thus could be of great significance for the representative ability of the SNN. In LIFB (Shen et al., 2023), a new spiking neuron model called the Leaky Integrate and Fire or Burst neuron was proposed. The neuron model exhibits three modes, including resting, regular spike, and burst spike, which significantly enriches the representative capability. Similar to LIFB, MT-SNN (Wang et al., 2023) proposed a multiple-threshold approach firing different spike modes to alleviate the quantization error, such that it can reach a high accuracy with fewer timesteps. Different from these works, InfLoR-SNN (Guo et al., 2022b) proposed a membrane potential rectifier (MPR), which adjusts the membrane potential, before the firing activity, to a new value closer to the quantization spikes than the original one. MPR directly handles the quantization error problem in SNNs, thus improving the representative ability. **On the network structure level.** To increase the SNN diversity, some works advocate improving the SNN architecture. In SEW-ResNet (Fang et al., 2021a) and DS-ResNet (Feng et al., 2022), the widely used standard ResNet backbone is replaced by an activation-before-addition form of ResNet. In this way, the blocks in the network fire positive integer spikes. The representation capability is no doubt increased; however, the event-driven and multiplication-addition-transform advantages of SNNs are lost in the meantime. To solve this problem, MS-ResNet (Hu et al., 2021) adopted the pre-activation form of ResNet, whereby spike-based convolution can be retained. The difference between these methods is shown in Figure 1.

Figure 1: Different SNN ResNet architectures.

However, these SNN architectures are all manually designed. For designing well-performing SNN models automatically, AutoSNN (Na et al., 2022) and SNASNet (Kim et al., 2022) used the Neural Architecture Search (NAS) approach to find better SNN architectures. And TA-SNN (Yao et al., 2021), STSC-SNN (Yu et al., 2022), and TCJA-SNN (Zhu et al., 2022) leveraged learnable attention mechanisms to improve SNN performance. Different from changing the network topology, Real Spike (Guo et al., 2022) provides a training-inference decoupled structure. This method enhances the representation capacity of the SNN by learning real-valued spikes during training. In the inference phase, the rich representation capacity is transferred from the spiking neurons to the convolutions by a re-parameterization technique, and meanwhile the real-valued spikes are transformed into binary spikes, thus maintaining the event-driven and multiplication-addition-transform advantages of SNNs. Besides, increasing the timestep of the SNN will undoubtedly improve its accuracy too, as proved in many works (Fang et al., 2021; Wu et al., 2018, 2019). To some extent, increasing the timestep is equivalent to increasing the neuron output bits along the temporal dimension, which increases the representation capability of the feature maps (Feng et al., 2022). However, using more timesteps achieves better performance at the cost of increased inference time. **On the training technique level.** Some works attempted to improve the representative capability of the SNN on the training technique level; these can be categorized as regularization and distillation.
Regularization is a technique that introduces an extra loss term to explicitly regularize the membrane potential or spike distribution, so as to retain more useful information in the network, which indirectly helps train the network, as follows:

\[\mathcal{L}_{Total}=\mathcal{L}_{CE}+\lambda\mathcal{L}_{DL} \tag{4}\]

where \(\mathcal{L}_{CE}\) is the common cross-entropy loss, \(\mathcal{L}_{DL}\) is the distribution loss for learning a proper membrane potential or spike distribution, and \(\lambda\) is a coefficient balancing the effect of the two types of losses. IM-Loss (Guo et al., 2022) argues that increasing the activation information entropy can reduce the quantization error, and proposed an information maximization loss function that maximizes the activation information entropy. In RecDis-SNN (Guo et al., 2022), a loss on the membrane potential distribution that explicitly penalizes three undesired shifts was proposed. Though the work was not designed for reducing the quantization error specifically, it still results in a bimodal membrane potential distribution, which has been proven to mitigate the quantization error problem. The distillation methodology aims to help train a small student model by transferring knowledge from a rather large trained teacher model, based on the consensus that the representative ability of a teacher model is better than that of the student model. Recently, some interesting works introducing the distillation method in the SNN domain were proposed. In (Kushawaha et al., 2021), a big teacher SNN model is used to guide the learning of a small SNN counterpart, while in (Yang et al., 2022; Takuya et al., 2021; Xu et al., 2023, 2023), an ANN teacher is used to guide the learning of an SNN student. Specifically, Local Tandem Learning (Yang et al., 2022) uses the intermediate feature representations of the ANN to supervise the learning of the SNN, while in sparse-KD (Xu et al., 2023), the logit output of the ANN was adopted to guide the learning of the SNN. Furthermore, KDSNN (Xu et al., 2023) and SNN distillation (Takuya et al., 2021) used both feature-based and logit-based information to distill the SNN. #### 3.1.2 Relieving training difficulties The non-differentiability of the firing function impedes the direct training of deep SNNs. To handle this problem, using a surrogate gradient (SG) function for spiking neurons has recently received much attention. The SG method utilizes a differentiable surrogate function to replace the non-differentiable firing activity when calculating the gradient in the back-propagation (Fang et al., 2021; Rathi and Roy, 2020; Wu et al., 2019; Neftci et al., 2019). Though the SG method can alleviate the non-differentiability problem, there exists an obvious gradient mismatch between the gradient of the firing function and the surrogate gradient, and this problem easily leads to under-optimized SNNs with severe performance degradation. Intuitively, an elaborately designed surrogate gradient can help relieve the gradient mismatch in the backward propagation. As a consequence, some works focus on designing better surrogate gradients. In addition, the gradient explosion/vanishing problem in SNNs is more severe than in ANNs, due to the adoption of tanh-like functions in most SG methods. There are also some works focusing on handling the gradient explosion/vanishing problem.
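As a concrete illustration of the SG idea, the following PyTorch sketch implements a Heaviside firing function whose backward pass uses a sigmoid-derivative surrogate, one common fixed-SG choice rather than the specific function of any cited work; the threshold and steepness values are illustrative assumptions.

```python
import torch

V_TH = 1.0     # firing threshold (illustrative)
ALPHA = 4.0    # surrogate steepness (illustrative)

class SurrogateSpike(torch.autograd.Function):
    @staticmethod
    def forward(ctx, membrane):
        ctx.save_for_backward(membrane)
        return (membrane >= V_TH).float()   # non-differentiable Heaviside, Eq. (2)

    @staticmethod
    def backward(ctx, grad_output):
        (membrane,) = ctx.saved_tensors
        sig = torch.sigmoid(ALPHA * (membrane - V_TH))
        # smooth, everywhere-finite stand-in for the Dirac-like true gradient
        return grad_output * ALPHA * sig * (1.0 - sig)

spike_fn = SurrogateSpike.apply
u = torch.randn(8, requires_grad=True)
spike_fn(u).sum().backward()
print(u.grad)  # nonzero near V_TH, vanishing far from it
```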
Note that the methods in this section can also be classified as improvements on the neuron level, network structure level, and training technique level, as can be seen in Table 1. Nevertheless, to better introduce these works, we organize them as designing better surrogate gradients and relieving the gradient explosion/vanishing problem. **Designing better surrogate gradients (SG).** Most earlier works adopt fixed SG-based methods to handle the non-differentiability problem. For example, the derivative of a truncated quadratic function, the derivative of a sigmoid, and a rectangular function were respectively adopted in (Bohte, 2011), (Zenke and Ganguli, 2018), and (Cheng et al., 2020). However, such a strategy limits the learning capacity of the network. To this end, a dynamic SG method was proposed in (Guo et al., 2022; Chen et al., 2022), where the SG changes along with the epochs as follows,

\[\varphi(x)=\frac{1}{2}\mathrm{tanh}(K(i)(x-V_{\mathrm{th}}))+\frac{1}{2} \tag{5}\]

where \(\varphi(x)\) is the backward approximation function for the firing activity and \(K(i)\) is a dynamic coefficient that changes along with the training epoch as follows,

\[K(i)=\frac{(10^{\frac{i}{N}}-10^{0})K_{\max}+(10^{1}-10^{\frac{i}{N}})K_{\min}}{9} \tag{6}\]

where \(K_{\min}\) and \(K_{\max}\) are the lower and upper bounds of \(K\), and \(i\) is the index of the epoch, running from \(0\) to \(N-1\). The function \(\varphi(x)\) and its gradient can be seen in Figure 2; a small numerical sketch of this schedule is given at the end of this subsection.

Figure 2: The approximation function (left) under different values of the coefficient \(k\) and its corresponding gradient (right). The blue curves represent the firing function (left) and its true gradient (right).

Driven by \(K(i)\), the approximation gradually evolves toward the firing function, thus ensuring sufficient weight updates at the beginning and accurate gradients at the end of the training. Nevertheless, the above SG methods are still designed manually. To find an optimal solution, the Differentiable Spike method, which can adaptively evolve during training to find the optimal shape and smoothness for gradient estimation based on the finite difference technique, was proposed in (Li et al., 2021). Then, in (Leng et al., 2022), combined with the NAS technique, a differentiable SG search (DGS) method to find optimized SGs for SNNs was proposed. Different from designing a better SG for the firing function, DSR (Meng et al., 2022) showed that the spiking dynamics of spiking neuron models can be represented as a sub-differentiable mapping and trained the SNNs by the gradients of this mapping, thus avoiding the non-differentiability problem in SNN training. And NSNN (Ma et al., 2023) presented the noisy spiking neural network and a noise-driven learning rule (NDL) for the surrogate gradient. **Relieving the gradient explosion/vanishing problem.** The gradient explosion or vanishing problem is still severe in SG-only methods. There are three kinds of methods to solve this problem: using improved neurons or architectures, improved batch normalizations, and regularization. In (Zhang et al., 2022), a simple yet efficient rectified linear postsynaptic potential function (ReL-PSP) for spiking neurons, which helps handle the gradient explosion problem, was proposed. On the network architecture level, SEW-ResNet (Fang et al., 2021) showed that the standard spiking ResNet is unable to achieve identity mapping and overcome the vanishing/explosion gradient problems, and advised using a ResNet with the activation-before-addition form. Recently, the pre-activation form of ResNet was explored in MS-ResNet (Hu et al., 2021).
This network topology can simultaneously handle the gradient explosion/vanishing problem and retain the advantages of the SNN. Normalization approaches are widely used in ANNs to train well-performing models, and such approaches have also been introduced in the SNN field to handle the vanishing/explosion gradient problems. For example, NeuNorm (Wu et al., 2019c) normalized the data along the channel dimension, like BN in ANNs, through constructing auxiliary feature maps. Threshold-dependent batch normalization (tdBN) (Zheng et al., 2021) considers SNN normalization from a temporal perspective and extends the scope of BN to the additional temporal dimension. Furthermore, some works (Kim and Panda, 2021; Ikegawa et al., 2022; Duan et al., 2022) argued that the distributions at different timesteps vary wildly, thus bringing a negative impact when using shared parameters. Subsequently, temporal Batch Normalization Through Time (BNTT), postsynaptic potential normalization (PSP-BN), and temporal effective batch normalization (TEBN), which regulate the spike flows by utilizing separate sets of BN parameters at different timesteps, were proposed. Though adopting temporal BN parameters at different timesteps can yield better-performing SNN models, this kind of BN technique cannot fold the BN parameters into the weights and thus increases the computations and running time in the inference stage, which should also be noticed. Using a regularization loss can also mitigate the gradient explosion/vanishing problem. In RecDis-SNN (Guo et al., 2022c), a new perspective that further classifies the gradient explosion/vanishing difficulty of SNNs into three undesired shifts of the membrane potential distribution was presented. To avoid these undesired shifts, a membrane potential regularization loss was proposed in RecDis-SNN; this loss introduces no additional operations in the SNN inference phase. In TET (Deng et al., 2022), an extra temporal regularization loss to compensate for the loss of momentum in the gradient descent with SG methods was proposed. With this loss, TET can converge into flatter minima with better generalizability. Since ANNs are fully differentiable and can be trained with gradient descent, there are also some works utilizing an ANN to guide the SNN's optimization (Wu et al., 2021b, a; Guo et al., 2023). In (Wu et al., 2021a), a tandem learning framework was proposed, which consists of an SNN and an ANN sharing the same weights. In this framework, the spike count, as the discrete neural representation in the SNN, is presented to the coupled ANN activation function in the forward phase, and in the backward phase, the error back-propagation is performed on the ANN to update the shared weights for both the SNN and the ANN. Furthermore, in (Wu et al., 2021b), a progressive tandem learning framework introducing a layer-wise learning method to fine-tune the shared network weights was proposed. Considering the differences between the ANN and the SNN, Joint A-SNN (Guo et al., 2023) developed a partial weight-sharing regime for the joint training of a weight-shared ANN and SNN, which applies the Singular Value Decomposition (SVD) to the weight parameters and keeps the same eigenvectors while separating the eigenvalues for the ANN and the SNN.
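The following short sketch, as referenced above, evaluates the dynamic coefficient of Equation 6 and the evolving surrogate of Equation 5; the bounds \(K_{\min}=1\), \(K_{\max}=10\) and the epoch count are illustrative assumptions.

```python
import numpy as np

def K(i, N, k_min=1.0, k_max=10.0):
    """Epoch-dependent coefficient of Eq. (6); k_min/k_max are illustrative."""
    return ((10 ** (i / N) - 10 ** 0) * k_max
            + (10 ** 1 - 10 ** (i / N)) * k_min) / 9

def phi(x, k, v_th=1.0):
    """Backward approximation of the firing function, Eq. (5)."""
    return 0.5 * np.tanh(k * (x - v_th)) + 0.5

N = 100                       # total training epochs (illustrative)
for i in (0, N // 2, N - 1):  # phi sharpens toward a step as training proceeds
    print(i, phi(np.array([0.9, 1.0, 1.1]), K(i, N)))
```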
### Efficiency Improvement Methods An important reason why SNNs have received extensive attention recently is that they are seen as more energy efficient than ANNs, due to their event-driven computation mechanism and the replacement of energy-consuming weight multiplications with additions. Further exploring the efficiency advantages of SNNs, so that they can be applied to energy-constrained devices, is also a hot topic in the SNN field. Such methods can be mainly categorized into network compression techniques and sparse SNNs. #### 3.2.1 Network compression techniques Network compression techniques have been widely used in ANNs, and some works apply these techniques to SNNs. In the literature, approaches for compressing deep SNNs can be classified into three categories: parameter pruning, NAS, and knowledge distillation. **Parameter pruning**. Parameter pruning mainly focuses on eliminating the redundant parameters in the model by removing the uncritical ones. Unlike their non-spiking counterparts, SNNs include a temporal dimension. Taking this temporal information into account, a spatial and temporal pruning of SNNs is proposed in [15]. Generally speaking, pruning causes accuracy degradation to some extent. To avoid this, SD-SNN [16] and Grad R [17] proposed pruning-regeneration methods for removing the redundancy in SNNs, inspired by the brain development plasticity mechanism. With synaptic regeneration, these works can effectively prevent and repair over-pruning. Recently, an interesting temporal pruning, which is specific to SNNs, was proposed in [15]. This method starts with an SNN of \(T\) timesteps and reduces \(T\) at every iteration of training, which results in a continuum of accurate and efficient SNNs from \(T\) timesteps down to \(1\) timestep. **Neural Architecture Search (NAS)**. Obviously, a carefully designed compact network can reduce the storage and computation complexity of SNNs. However, limited by prior human knowledge, it is difficult to step outside the established design paradigms and hand-craft an optimal compact model. Therefore, some works use NAS techniques to let the algorithm automatically design a compact neural architecture [14, 16]. Furthermore, in [16], the lottery ticket hypothesis was investigated for SNNs; it shows that dense SNN networks contain smaller SNN subnetworks, _i.e._, winning tickets, which can achieve performance comparable to the dense ones, and the smaller compact subnetwork is picked as the network to be used.
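As a simple illustration of the pruning idea above, the following sketch applies a generic magnitude criterion to a weight matrix. This is a minimal, generic example rather than the criterion of any cited work (which may, for instance, additionally regenerate pruned synapses); the 90% sparsity level is an illustrative assumption.

```python
import numpy as np

def magnitude_prune(W, sparsity=0.9):
    """Zero out the smallest-magnitude fraction of weights."""
    threshold = np.quantile(np.abs(W), sparsity)   # cut-off magnitude
    mask = (np.abs(W) >= threshold).astype(W.dtype)
    return W * mask, mask

W = np.random.randn(128, 128)
W_pruned, mask = magnitude_prune(W)
print(f"kept {mask.mean():.1%} of the weights")
```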
Table 2: Overview of Direct Learning-Based Deep Spiking Neural Networks: Part II (the level at which each method operates, i.e., neuron level, network structure level, or training technique level, is discussed in the text).

| Type | Method | Key Technology |
| --- | --- | --- |
| Network compression | Spatio-Temporal Pruning [15] | Spatio-temporal pruning |
| | SD-SNN [16] | Pruning-regeneration method |
| | Grad R [17] | Pruning-regeneration method |
| | Temporal pruning [15] | Temporal pruning |
| | AutoSNN [14] | Neural architecture search |
| | SNASNet [16] | Neural architecture search |
| | Lottery ticket hypothesis [16] | Lottery ticket hypothesis |
| | Distilling spikes [14] | Knowledge distillation |
| | Local Tandem Learning [15] | Tandem learning |
| | sparse-KD [14] | Knowledge distillation |
| | KDSNN [14] | Knowledge distillation |
| | SNN distillation [14] | Knowledge distillation |
| Sparse SNNs | ASNN [16] | A group of adaptive spiking neurons |
| | Correlation-based regularization [16] | Correlation-based regularizer |
| | SuperSpike [16] | Heterosynaptic regularization term |
| | RecDis-SNN [17] | Membrane potential distribution loss |
| | Low-activity SNN [17] | Regularization term |
| Sequential learning | Sequence approximation [14] | Dual-search-space optimization |
| | Sequential learning [15] | Improved recurrence dynamics |
| | SNN_HAR [14] | Spatio-temporal extraction |
| | Robust SNN [16] | Temporal penalty settings |
| | Tandem learning-based SNN [16] | Tandem learning |
| | SG-based SNN [16] | Surrogate gradient method |
| | Combination-based SNN [18] | Combination of many techniques |
| | Low-activity SNN [17] | Regularization term |
| | SNNCNN [14] | Combination of CNNs and SNNs |
| | RSNNs [16] | Offline supervised learning rule |
| Cooperating with neuromorphic cameras | Adaptive-SpikeNet [14] | Learning neuronal dynamics |
| | StereoSpike [16] | Modified U-Net-like architecture |
| | SuperFast [17] | Event-enhanced frame interpolation |
| | E-SAI [17] | Synthetic aperture imaging method |
| | EVSNN [16] | Potential-assisted SNN |
| | Spiking-Fer [1] | Deep convolutional SNN |
| | Automotive Detection [18] | PLIF & SG & event encoding |
| | STNet [16] | Spiking transformer network |
| | LaneSNNs [16] | Offline supervised learning rule |
| | HALSIE [14] | Hybrid approach |
| | SpikeMS [15] | Spatio-temporal loss |
| | Event-based Pose Tracking [17] | Spiking Spatiotemporal Transformer |

**Knowledge distillation**. Knowledge distillation methods aim at obtaining a compact model from a large model. In [21], a larger teacher SNN model is used to distill a smaller SNN model, and in [22, 23, 24], an ANN teacher with the same architecture is used to distill the SNN student.

#### 3.2.2 Sparse SNNs

Different from ANNs, SNNs transmit information by spike events, and computation occurs only when a neuron receives spike events. Benefiting from this event-driven computation mechanism, SNNs can greatly save energy and run efficiently when implemented on neuromorphic hardware. Hence, limiting the firing rate of spiking neurons to achieve a sparse SNN is also a widely used way to improve SNN efficiency. Such methods limit the firing rate of the SNN on both the neuron level and the training technique level.

**On the neuron level.** In ASNN [25], an adaptive SNN based on a group of adaptive spiking neurons was proposed. These adaptive spiking neurons can efficiently optimize their firing rate using asynchronous pulsed Sigma-Delta coding.

**On the training technique level.** In [12], a correlation-based regularizer, incorporated into the loss function, was proposed to minimize the redundancies between the features at each layer for structural sparsity; this method is obviously beneficial for energy efficiency. SuperSpike [26] added a heterosynaptic regularization term to the learning rule of the hidden-layer weights to avoid pathologically high firing rates. RecDis-SNN [14] incorporated a membrane potential loss into the SNN to regulate the membrane potential distribution to an appropriate range and thereby avoid high firing rates. In [17], to enforce sparse spiking activity, an \(l_{1}\) or \(l_{2}\) regularization on the total number of spikes emitted by each layer was applied; a minimal sketch of such a regularizer is given below.
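Here is a minimal sketch of an \(l_{1}\)-style spike-count penalty of the kind just described, to be added to the task loss during training; the penalty weight and tensor shapes are illustrative assumptions, not taken from any cited work.

```python
import torch

def spike_sparsity_loss(spikes_per_layer, l1=1e-4):
    """L1 penalty on the total spike count of each layer."""
    return l1 * sum(s.sum() for s in spikes_per_layer)

# usage: binary spike maps collected from two layers of a toy network
spikes = [torch.rand(32, 256).round(), torch.rand(32, 128).round()]
task_loss = torch.tensor(0.7)   # placeholder for, e.g., a cross-entropy loss
total_loss = task_loss + spike_sparsity_loss(spikes)
```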
### Temporal Dynamics Utilization Methods

Different from ANNs, SNNs enjoy rich temporal dynamics, which makes them more suitable for certain temporal tasks and for vision sensors with high temporal resolution, _e.g._, neuromorphic cameras, which, inspired by the way eyes process information, capture temporally rich information asynchronously. Given these characteristics, a great number of methods falling into sequential learning and cooperation with neuromorphic cameras have been proposed for SNNs.

#### 3.3.1 Sequential learning

As aforementioned in Section 2, SNNs maintain a dynamic state in the neuron memory. In [18], the usefulness of the inherent recurrence dynamics of the SNN for sequential learning was demonstrated: it can retain important information. Thus, in many works, SNNs show better performance on sequential learning than ANNs of similar scale. In [27], a function approximation theoretical basis was developed, showing that any spike-sequence-to-spike-sequence mapping function can be approximated by an SNN with one neuron per layer using skip-layer connections; based on this result, a suitable SNN model for the classification of spatio-temporal data was designed. In [10], SNNs were leveraged to study the Human Activity Recognition (HAR) task. Since SNNs allow spatio-temporal feature extraction and enjoy low-power computation with binary spikes, they can reduce energy consumption by up to 94% while achieving better accuracy compared with homogeneous ANN counterparts. In [19], an interesting phenomenon was found: SNNs trained with appropriate temporal penalty settings are more robust against adversarial images than ANNs. Speech being a common sequential signal, many preliminary works on speech recognition systems based on spiking neural networks have been explored (Tavanaei and Maida, 2017, 2018; Hao et al., 2020; Wu et al., 2020; Zhang et al., 2019; Wu et al., 2018, 2018, 2019b). In (Wu et al., 2020), a deep spiking neural network was trained by the tandem learning method to handle the large-vocabulary automatic speech recognition task. The experimental results demonstrated that the trained deep SNN could compete with its ANN counterpart while requiring as few as 0.68 times the total synaptic operations of its ANN counterpart. There are also some works training deep SNNs directly with SG methods for speech tasks. In (Ponghiran and Roy, 2022), inspired by the LSTM, a custom version of SNNs combining a forget gate with multi-bit outputs instead of binary spikes was defined, yielding better accuracy than LSTMs but with 2\(\times\) fewer parameters. In (Bittar and Garner, 2022), spiking neural networks trained like recurrent neural networks, using only the standard surrogate gradient method, achieved promising results on speech recognition tasks, which shows the suitability of SNNs for this kind of task. The same work proposed a combination of adaptation, recurrence, and surrogate gradient techniques for spiking neural networks; with these improvements, light spiking architectures were obtained that are not only able to compete with ANN solutions but also retain a high degree of compatibility with them. In (Pellegrini et al., 2021), dilated convolution spiking layers and a new regularization term penalizing the averaged number of spikes were used to train low-activity supervised convolutional spiking neural networks. The results showed that the SNN models can reach an error rate very close to that of standard DNNs while being very energy efficient for speech tasks. In Sadovsky et al. (2023), a new technique for speech recognition that combines convolutional neural networks with spiking neural networks was presented to create an SNNCNN model. The results showed that the combination of CNNs and SNNs outperforms both MLPs and ANNs, providing a new route to further improvements in the field. In (Yin et al., 2021), an activity-regularizing surrogate gradient method combined with recurrent networks of tunable and adaptive spiking neurons was proposed, and the method performed well on the speech recognition task.

#### 3.3.2 Cooperating with neuromorphic cameras

Neuromorphic cameras, also called event-based cameras, have recently shown great potential for high-speed motion estimation owing to their ability to capture temporally rich information asynchronously. SNNs, with their spatio-temporal and event-driven processing mechanisms, are very suitable for handling such asynchronous data. Many excellent works combine SNNs and neuromorphic cameras to solve real-world large-scale problems. In (Hagenaars et al., 2021; Kosta and Roy, 2022), event-based optical flow estimation methods were presented. In StereoSpike (Rancon et al., 2021), a depth estimation method was provided. SuperFast (Gao et al., 2022) leveraged an SNN and an event camera to present an event-enhanced high-speed video frame interpolation method.
SuperFast can generate a very high frame rate (up to 5000 FPS) video from a low frame rate (25 FPS) input video. Furthermore, based on a hybrid network composed of SNNs and ANNs, E-SAI (Yu et al., 2022) provided a novel synthetic aperture imaging method, which can see through dense occlusions and extreme lighting conditions using event data. And in EVSNN (Zhu et al., 2022), a novel event-based video reconstruction framework was proposed. To fully use the information from different modalities, HALSIE (Biswas et al., 2022) proposed a hybrid approach for semantic segmentation that simultaneously leverages image and event modalities, comprising dual encoders: an SNN branch providing rich temporal cues from asynchronous events, and an ANN branch extracting spatial information from regular frame data. There are also some works applying this technique to autonomous driving. In (Cordone et al., 2022), fast and efficient automotive object detection with spiking neural networks on automotive event data was proposed. In (Zhang et al., 2022), a spiking transformer network, STNet, which can dynamically extract and fuse information from both temporal and spatial domains, was proposed for single object tracking using event data. Besides, since event cameras enjoy extremely low latency and high dynamic range, they can also be used to handle harsh environments, _i.e._, extreme lighting conditions or dense occlusions. LaneSNNs (Viale et al., 2022) presented an SNN-based approach for detecting the lanes marked on streets using event-based camera input. The experimental results show a very low power consumption of about 1 W, which can significantly increase the lifetime and autonomy of battery-driven systems. Based on event-based cameras and SNNs, some works attempted to assist behavioral recognition research. For example, Spiking-Fer (Barchid et al., 2023) proposed a new end-to-end deep convolutional SNN method to predict facial expressions. SpikeMS (Parameshwara et al., 2021) proposed a deep encoder-decoder SNN architecture and a novel spatio-temporal loss for motion segmentation using the event-based DVS camera as input. In (Zou et al., 2023), a dedicated end-to-end sparse deep SNN consisting of the Spike-Element-Wise (SEW) ResNet and a novel Spiking Spatiotemporal Transformer was proposed for event-based pose tracking. This method achieves a significant computation reduction of 80% in FLOPs, demonstrating the superior advantage of SNNs in this kind of task. ## 4 Future Trends and Conclusions Spiking neural networks, born from mimicking the information processing of brain neurons, enjoy many specific characteristics and show great potential in many tasks, but meanwhile suffer from many weaknesses. As a consequence, a number of direct learning-based deep SNN solutions for handling these disadvantages or utilizing the advantages of SNNs have been proposed recently. As summarized in this survey, these methods can be roughly categorized into i) accuracy improvement methods, ii) efficiency improvement methods, and iii) temporal dynamics utilization methods. Though successful milestones and progress have been achieved through these works, there are still many challenges in the field. On the accuracy improvement aspect, the SNN still faces serious performance loss, especially for large networks and datasets.
The main reasons might include:

* _Lack of measurement of information capacity:_ it is still unclear how to precisely calculate the information capacity of the spike maps, and what kind of neuron types or network topology is suitable for preserving information while it passes through the network, even after the firing function. We believe SNN neurons and architectures should not be copied wholesale from brains or ANNs; specific designs that address the characteristics of SNNs and preserve information should be explored. For instance, to increase the representative ability of the spiking neuron, the binary spike {0, 1}, which mimics the activation or silence in the brain, can be replaced by a ternary spike {-1, 0, 1}: the information capacity of the spiking neuron is boosted, while the event-driven and multiplication-free operation advantages of the binary spike are still preserved. And, as aforementioned, the standard ResNet backbone widely used in ANNs is not suitable for SNNs. The PreAct ResNet backbone performs better, since the membrane potential in the neurons before the firing function is added to the next block, so that complete information is transmitted simultaneously, whereas for the standard ResNet backbone, only quantized information is transmitted. To further preserve the information, adding the shortcut layer by layer in the PreAct ResNet backbone proved better in our experiments; this differs considerably from the architectures used in ANNs and is a promising direction of exploration.

* _Inherent optimization difficulties:_ it is still a difficult problem to optimize the SNN in a discrete space. Even though many novel gradient estimators or approximation functions have been proposed, some huge obstacles remain in the field, such as the gradient explosion/vanishing problem: with increasing timesteps, this problem, along with the gradient errors, becomes more severe and makes the network hard to converge. Thus, how to completely eliminate the impact of this problem and directly train an SNN with large timesteps is still under exploration. We believe more theoretical studies and practical tricks will emerge to answer this question in the future.

It is also worth noting that accuracy is not the only criterion for SNNs; versatility is another key criterion, measuring whether a method can be used in practice. Some methods proposed in prior works are very versatile, such as the learnable spike factors proposed in Real Spike (Guo et al., 2022d), the membrane potential rectifier proposed in InfLoR-SNN (Guo et al., 2022b), the temporal regularization loss proposed in TET (Deng et al., 2022), _etc_. These methods enjoy simple implementation and low coupling, and have thus become common, widely used practices to improve the accuracy of SNNs. Other methods improve the accuracy of SNNs by designing complex spiking neurons or specific architectures. Such improvements usually show a stronger ability to increase performance. However, as pointed out before, some of them suffer from complicated computation and even lose the energy-efficiency advantage, which violates the original intention of SNNs. Therefore, purely pursuing high accuracy without considering versatility has limited significance in practice. The balance between accuracy and versatility is an essential criterion for SNN research that should be considered in future works.
On the efficiency improvement aspect, some prior works ignore an important fact: the event-driven paradigm and the friendliness to neuromorphic hardware make SNNs much different from ANNs. When implemented on neuromorphic hardware, computation in the SNN occurs only if a spiking neuron receives spike events. Hence, the direct way to improve the efficiency of the SNN is to reduce the number of fired spikes, not to reduce the network size. Some methods intending to improve the efficiency of SNNs by pruning inactive neurons, as is done in ANNs, may not make sense in this situation. We even think that, provided the SNN network size does not exceed the capacity of the neuromorphic hardware, enlarging the network size while limiting the number of fired spikes may be a potential route to improving accuracy and efficiency simultaneously. In this way, different weights of the SNN may respond to different data, which is equivalent to improving the representative capabilities of the SNN. However, a more systematic study needs to be done in the future. On the temporal dynamics utilization aspect, a great number of interesting methods have been proposed and have shown wide success. We think this is a very promising direction in the SNN field. Some studies related to explainable machine learning indicate that different network types follow different patterns and enjoy different advantages. In this sense, it might be more meaningful to dive deeply into the temporal dynamics of the SNN than to pursue higher accuracy as with ANNs. Meanwhile, considering their respective advantages, using ANNs and SNNs together needs to be studied further. Last but not least, more dedicated applications for SNNs should also be explored. SNNs have been used widely in many fields, including the neuromorphic camera, HAR task, speech recognition, and autonomous driving applications mentioned above, as well as object detection (Kim et al., 2020; Zhou et al., 2020), object tracking (Luo et al., 2020), image segmentation (Patel et al., 2021), robotics (Dupeyroux et al., 2021; Stagsted et al., 2020), _etc_., where some remarkable studies have recently applied SNNs. Yet, compared to ANNs, their real-world applications are still very limited. Considering the unique efficiency advantage of SNNs, we think there is a great opportunity for applying SNNs in Green Artificial Intelligence (GAI), which has become an important subfield of Artificial Intelligence with notable practical value. We believe many studies focusing on using SNNs for GAI will emerge soon. ## Conflict of Interest Statement The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. ## Author Contributions Yufei Guo and Xuhui Huang wrote the paper, with Zhe Ma being an active contributor toward editing and revising the paper as well as supervising the project. All authors contributed to the article and approved the submitted version. ## Funding This work is supported by grants from the National Natural Science Foundation of China under contracts No.12202412 and No.12202413.
2309.17032
Refined Kolmogorov Complexity of Analog, Evolving and Stochastic Recurrent Neural Networks
We provide a refined characterization of the super-Turing computational power of analog, evolving, and stochastic neural networks based on the Kolmogorov complexity of their real weights, evolving weights, and real probabilities, respectively. First, we retrieve an infinite hierarchy of classes of analog networks defined in terms of the Kolmogorov complexity of their underlying real weights. This hierarchy is located between the complexity classes $\mathbf{P}$ and $\mathbf{P/poly}$. Then, we generalize this result to the case of evolving networks. A similar hierarchy of Kolmogorov-based complexity classes of evolving networks is obtained. This hierarchy also lies between $\mathbf{P}$ and $\mathbf{P/poly}$. Finally, we extend these results to the case of stochastic networks employing real probabilities as a source of randomness. An infinite hierarchy of stochastic networks based on the Kolmogorov complexity of their probabilities is therefore achieved. In this case, the hierarchy bridges the gap between $\mathbf{BPP}$ and $\mathbf{BPP/log^*}$. Beyond proving the existence and providing examples of such hierarchies, we describe a generic way of constructing them based on classes of functions of increasing complexity. For the sake of clarity, this study is formulated within the framework of echo state networks. Overall, this paper intends to fill the missing results and provide a unified view about the refined capabilities of analog, evolving and stochastic neural networks.
Jérémie Cabessa, Yann Strozecki
2023-09-29T07:38:50Z
http://arxiv.org/abs/2309.17032v1
# Refined Kolmogorov Complexity of Analog, Evolving and Stochastic Recurrent Neural Networks ###### Abstract We provide a refined characterization of the super-Turing computational power of analog, evolving, and stochastic neural networks based on the Kolmogorov complexity of their real weights, evolving weights, and real probabilities, respectively. First, we retrieve an infinite hierarchy of classes of analog networks defined in terms of the Kolmogorov complexity of their underlying real weights. This hierarchy is located between the complexity classes \(\mathbf{P}\) and \(\mathbf{P/poly}\). Then, we generalize this result to the case of evolving networks. A similar hierarchy of Kolmogorov-based complexity classes of evolving networks is obtained. This hierarchy also lies between \(\mathbf{P}\) and \(\mathbf{P/poly}\). Finally, we extend these results to the case of stochastic networks employing real probabilities as a source of randomness. An infinite hierarchy of complexity classes of stochastic networks based on the Kolmogorov complexity of their real probabilities is therefore achieved. In this case, the hierarchy bridges the gap between \(\mathbf{BPP}\) and \(\mathbf{BPP/log^{*}}\). Beyond proving the existence and providing examples of such hierarchies, we describe a generic way of constructing them based on classes of functions of increasing complexity. For the sake of clarity, this study is formulated within the framework of echo state networks. Overall, this paper intends to fill the missing results and provide a unified view about the refined capabilities of analog, evolving and stochastic neural networks. keywords: Recurrent Neural Networks; Echo state networks; Computational Power; Computability Theory; Analog Computation; Stochastic Computation; Kolmogorov Complexity. ## 1 Introduction Philosophical considerations aside, it can reasonably be claimed that several brain processes are of a computational nature. "The idea that brains are computational in nature has spawned a range of explanatory hypotheses in theoretical neurobiology" [20]. In this regard, the question of the computational capabilities of neural networks naturally arises, among many others. Since the early 1940s, the theoretical approach to neural computation has been focused on comparing the computational powers of neural network models and abstract computing machines. In 1943, McCulloch and Pitts proposed a modeling of the nervous system as a finite interconnection of logical devices and studied the computational power of "nets of neurons" from a logical perspective [62]. Along these lines, Kleene and Minsky proved that recurrent neural networks composed of McCulloch and Pitts (i.e., Boolean) cells are computationally equivalent to finite state automata [44; 64]. These results paved the way for a future stream of research motivated by the expectation of implementing abstract machines on parallel hardware architectures (see for instance [1; 21; 34; 38; 45; 69; 83]). In 1948, Turing introduced the B-type unorganized machine, a kind of neural network composed of interconnected NAND neuronal-like units [89]. He suggested that sufficiently large B-type unorganized machines could simulate the behavior of a universal Turing machine with limited memory. The Turing universality of neural networks involving infinitely many Boolean neurons has been further investigated (see for instance [24; 25; 32; 71; 80]).
Besides, Turing brilliantly anticipated the concepts of "learning" and "training" that would later become central to machine learning. These concepts took shape with the introduction of the _perceptron_, a formal neuron that can be trained to discriminate inputs using Hebbian-like learning [33; 77; 78]. But the computational limitations of the perceptron dampened the enthusiasm for artificial neural networks [65]. The ensuing winter of neural networks lasted until the 1980s, when the popularization of the backpropagation algorithm, among other factors, paved the way for the great success of deep learning [79; 81]. Besides, in the late 1950s, von Neumann proposed an alternative approach to brain information processing from the hybrid perspective of digital and analog computations [68]. Along these lines, Siegelmann and Sontag studied the capabilities of _sigmoidal neural networks_ (instead of Boolean ones). They showed that recurrent neural networks composed of linear-sigmoid cells and rational synaptic weights are Turing complete [37; 67; 88]. This result has been generalized to a broad class of sigmoidal networks [43]. Following the developments in analog computation [84], Siegelmann and Sontag argued that the variables appearing in the underlying chemical and physical phenomena could be modeled by continuous rather than discrete (rational) numbers. Accordingly, they introduced the concept of an _analog neural network_ - a sigmoidal recurrent neural net equipped with real instead of rational weights. They proved that analog neural networks are computationally equivalent to Turing machines with advice, and hence decide the complexity class \(\mathbf{P/poly}\) in polynomial time of computation [86; 87]. Analog networks thus possess _super-Turing_ capabilities and could capture chaotic dynamical features that cannot be described by Turing machines [82]. Based on these considerations, Siegelmann and Sontag formulated the so-called Thesis of Analog Computation - an analogue of the Church-Turing thesis in the realm of analog computation - stating that no reasonable abstract analog device can be more powerful than first-order analog recurrent neural networks [84; 87]. Inspired by the learning process of neural networks, Cabessa and Siegelmann studied the computational capabilities of evolving neural networks [10; 12]. In summary, evolving neural networks using either rational, real, or binary evolving weights are all equivalent to analog neural networks. They also decide the class \(\mathbf{P/poly}\) in polynomial time of computation. The computational power of stochastic neural networks has also been investigated in detail. For rational-weighted networks, the addition of a discrete source of stochasticity increases the computational power from \(\mathbf{P}\) to \(\mathbf{BPP/log^{*}}\), while for real-weighted networks, the capabilities remain unchanged at the \(\mathbf{P/poly}\) level [85]. On the other hand, the presence of analog noise would strongly reduce the computational power of the systems to that of finite state automata, or even below [4; 57; 61]. Based on these considerations, a refined approach to the computational power of recurrent neural networks has been undertaken. On the one hand, the sub-Turing capabilities of Boolean rational-weighted networks containing 0, 1, 2 or 3 additional sigmoidal cells have been investigated [90; 91].
On the other hand, a refinement of the super-Turing computational power of analog neural networks has been described in terms of the Kolmogorov complexity of the underlying real weights [3]. The capabilities of analog networks with weights of increasing Kolmogorov complexity stratify the gap between the complexity classes \(\mathbf{P}\) and \(\mathbf{P/poly}\). The capabilities of analog and evolving neural networks have been generalized to the context of infinite computation, in connection with the attractor dynamics of the networks [6, 7, 11, 13, 14, 15, 16, 17, 18]. In this framework, the expressive power of the networks is characterized in terms of topological classes from the Cantor space (the space of infinite bit streams). A refinement of the computational power of the networks based on the complexity of the underlying real and evolving weights has also been described in this context [8, 9]. The computational capabilities of _spiking neural networks_ (instead of sigmoidal ones) have also been extensively studied [54, 55]. In this approach, the computational states are encoded into the temporal differences between spikes rather than within the activation values of the cells. Maass proved that single spiking neurons are strictly more powerful than single threshold gates [59, 60]. He also characterized lower and upper bounds on the complexity of networks composed of classical and noisy spiking neurons (see [47, 49, 51, 52, 56, 58] and [48, 50], respectively). He further showed that networks of spiking neurons are capable of simulating analog recurrent neural networks [53]. In the 2000s, Paun introduced the concept of a _P system_ - a highly parallel abstract model of computation inspired by the membrane-like structure of the biological cell [70, 72]. His work led to the emergence of a highly active field of research. The capabilities of various models of so-called _neural P systems_ have been studied (see for instance [73, 74, 75, 39, 76]). In particular, neural P systems provided with a bio-inspired source of acceleration were shown to possess hypercomputational capabilities, spanning all levels of the arithmetical hierarchy [19, 26]. In terms of practical applications, recurrent neural networks are natural candidates for sequential tasks, involving time series or textual data for instance. Classical recurrent architectures, like LSTM and GRU, have been applied with great success in many situations [29]. A 3-level formal hierarchy of the sub-Turing expressive capabilities of these architectures, based on the notions of space complexity and rational recurrence, has been established [63]. Echo state networks are another kind of recurrent neural network enjoying an increasing popularity due to their training efficiency [40, 41, 42, 46]. The computational capabilities of echo state networks have been studied from the alternative perspective of universal approximation theorems [35, 36]. In this context, echo state networks are shown to be universal, in the sense of being capable of approximating different classes of filters of infinite discrete time signals [27, 28, 30, 31]. These works fit within the field of functional analysis rather than computability theory. In this paper, we extend the refined Kolmogorov-based complexity of analog neural networks [3] to the cases of evolving and stochastic neural networks [12, 85].
More specifically, we provide a refined characterization of the super-Turing computational power of analog, evolving, and stochastic neural networks based on the Kolmogorov complexity of their real weights, evolving weights, and real probabilities, respectively. First, we retrieve an infinite hierarchy of complexity classes of analog networks defined in terms of the Kolmogorov complexity of their underlying real weights. This hierarchy is located between the complexity classes \(\mathbf{P}\) and \(\mathbf{P/poly}\). Using a natural identification between real numbers and infinite sequences of bits, we generalize this result to the case of evolving networks. Accordingly, a similar hierarchy of Kolmogorov-based complexity classes of evolving networks is obtained. This hierarchy also lies between \(\mathbf{P}\) and \(\mathbf{P/poly}\). Finally, we extend these results to the case of stochastic networks employing real probabilities as a source of randomness. An infinite hierarchy of complexity classes of stochastic networks based on the Kolmogorov complexity of their real probabilities is therefore achieved. In this case, the hierarchy bridges the gap between \(\mathbf{BPP}\) and \(\mathbf{BPP/log^{*}}\). Beyond proving the existence and providing examples of such hierarchies, we describe a generic way of constructing them based on classes of functions of increasing complexity. Technically speaking, the separability between non-uniform complexity classes is achieved by means of a generic diagonalization technique, a result of interest per se which improves upon the previous approach [3]. For the sake of clarity, this study is formulated within the framework of echo state networks. Overall, this paper intends to fill the missing results and provide a unified view about the refined capabilities of analog, evolving and stochastic neural networks. This paper is organized as follows. Section 2 describes the related works. Section 3 provides the mathematical notions necessary to this study. Section 4 presents recurrent neural networks within the formalism of echo state networks. Section 5 introduces the different models of analog, evolving and stochastic recurrent neural networks, and establishes their tight relations to non-uniform complexity classes defined in terms of Turing machines with advice. Section 6 provides the hierarchy theorems, which, in turn, lead to the descriptions of strict hierarchies of classes of analog, evolving and stochastic neural networks. Section 7 offers some discussion and concluding remarks. ## 2 Related Works Kleene and Minsky showed the equivalence between Boolean recurrent neural networks and finite state automata [44; 64]. Siegelmann and Sontag proved the Turing universality of rational-weighted neural networks [88]. Kilian and Siegelmann generalized the result to a broader class of sigmoidal neural networks [43]. In connection with analog computation, Siegelmann and Sontag characterized the super-Turing capabilities of real-weighted neural networks [82; 86; 88]. Cabessa and Siegelmann extended the result to evolving neural networks [12]. The computational power of various kinds of stochastic and noisy neural networks has been characterized [4; 57; 61; 84]. Sima graded the sub-Turing capabilities of Boolean networks containing 0, 1, 2 or 3 additional sigmoidal cells [90; 91]. Balcazar et al. hierarchized the super-Turing computational power of analog networks in terms of the Kolmogorov complexity of their underlying real weights [3]. Cabessa et al.
pursued the study of the computational capabilities of analog and evolving neural networks from the perspective of infinite computation [6; 7; 11; 13; 14; 15; 16; 17; 18]. Besides, the computational power of spiking neural networks has been extensively studied by Maass [47; 48; 49; 50; 51; 52; 53; 54; 55; 56; 58; 59; 60]. In addition, since the 2000s, the field of P systems, which involves neural P systems in particular, has been booming (see for instance [70; 72; 73; 74; 39; 75; 76]). The countless variations of proposed models are generally Turing complete. Regarding modern architectures, a hierarchy of the sub-Turing expressive power of LSTM and GRU neural networks has been established [63]. Furthermore, the universality of echo state networks has been studied from the perspective of universal approximation theorems [27; 28; 30; 31].

## 3 Preliminaries

The binary alphabet is denoted by \(\Sigma=\{0,1\}\), and the sets of finite words, finite words of length \(n\), infinite words, and finite or infinite words over \(\Sigma\) are denoted by \(\Sigma^{*}\), \(\Sigma^{n}\), \(\Sigma^{\omega}\), and \(\Sigma^{\leq\omega}\), respectively. Given some finite or infinite word \(w\in\Sigma^{\leq\omega}\), the \(i\)-th bit of \(w\) is denoted by \(w_{i}\), the sub-word from index \(i\) to index \(j\) is \(w[i:j]\), and the length of \(w\) is \(|w|\), with \(|w|=\infty\) if \(w\in\Sigma^{\omega}\).

A _Turing machine (TM)_ is defined in the usual way. A _Turing machine with advice (TM/A)_ is a TM provided with an additional advice tape and function \(\alpha:\mathbb{N}\to\Sigma^{*}\). On every input \(w\in\Sigma^{n}\) of length \(n\), the machine first queries its advice function \(\alpha(n)\), writes this word on its advice tape, and then continues its computation according to its finite program. The advice \(\alpha\) is called _prefix_ if \(m\leq n\) implies that \(\alpha(m)\) is a prefix of \(\alpha(n)\), for all \(m,n\in\mathbb{N}\). The advice \(\alpha\) is called _unbounded_ if the length of the successive advice words tends to infinity, i.e., if \(\lim_{n\to\infty}|\alpha(n)|=\infty\).1 In this work, we assume that every advice \(\alpha\) is prefix and unbounded, which ensures that \(\lim_{n\to\infty}\alpha(n)\in\Sigma^{\omega}\) is well-defined. For any non-decreasing function \(f:\mathbb{N}\to\mathbb{N}\), the advice \(\alpha\) is said to be of size \(f\) if \(|\alpha(n)|=f(n)\), for all \(n\in\mathbb{N}\). We let poly be the set of univariate polynomials with integer coefficients and log be the set of functions of the form \(n\mapsto C\log(n)\), where \(C\in\mathbb{N}\). The advice \(\alpha\) is called _polynomial_ or _logarithmic_ if it is of size \(f\in\text{poly}\) or \(f\in\text{log}\), respectively. A TM/A \(\mathcal{M}\) equipped with some prefix unbounded advice is assumed to satisfy the following additional consistency property: for any input \(w\in\Sigma^{n}\), \(\mathcal{M}\) accepts \(w\) using advice \(\alpha(n)\) iff \(\mathcal{M}\) accepts \(w\) using advice \(\alpha(n^{\prime})\), for all \(n^{\prime}\geq n\).

Footnote 1: Note that if \(\alpha\) is not unbounded, then it can be encoded into the program of a TM, and thus doesn’t add any computational power to the TM model.

The class of languages decidable in polynomial time by some TM is \(\mathbf{P}\). The class of languages decidable in time \(t:\mathbb{N}\to\mathbb{N}\) by some TM/A with advice \(\alpha\) is denoted by \(\mathbf{TMA}[\alpha,t]\).
Given some class of advice functions \(\mathcal{A}\subseteq(\Sigma^{*})^{\mathbb{N}}\) and some class of time functions \(\mathcal{T}\subseteq\mathbb{N}^{\mathbb{N}}\), we naturally define

\[\mathbf{TMA}\left[\mathcal{A},\mathcal{T}\right]=\bigcup_{\alpha\in\mathcal{A}}\bigcup_{t\in\mathcal{T}}\mathbf{TMA}\left[\alpha,t\right].\]

The classes of languages decidable in polynomial time by some TM/A with polynomial prefix and non-prefix advice are \(\mathbf{P/poly}^{*}\) and \(\mathbf{P/poly}\), respectively. It can be noticed that \(\mathbf{P/poly}^{*}=\mathbf{P/poly}\).

A _probabilistic Turing machine (PTM)_ is a TM with two transition functions. At each computational step, the machine chooses one or the other transition function with probability \(\frac{1}{2}\), independently from all previous choices, and updates its state, tapes' contents, and heads accordingly. A PTM \(\mathcal{M}\) is assumed to be a decider, meaning that for any input \(w\), all possible computations of \(\mathcal{M}\) end up either in an accepting or in a rejecting state. Accordingly, the random variable corresponding to the decision (0 or 1) that \(\mathcal{M}\) makes at the end of its computation over \(w\) is denoted by \(\mathcal{M}(w)\). Given some language \(L\subseteq\Sigma^{*}\), we say that the PTM \(\mathcal{M}\) decides \(L\) in time \(t:\mathbb{N}\to\mathbb{N}\) if, for every \(w\in\Sigma^{*}\), \(\mathcal{M}\) halts in \(t(|w|)\) steps regardless of its random choices, and \(Pr[\mathcal{M}(w)=1]\geq\frac{2}{3}\) if \(w\in L\) and \(Pr[\mathcal{M}(w)=0]\geq\frac{2}{3}\) if \(w\not\in L\). The class of languages decidable in polynomial time by some PTM is \(\mathbf{BPP}\).

A _probabilistic Turing machine with advice (PTM/A)_ is a PTM provided with an additional advice tape and function \(\alpha:\mathbb{N}\to\Sigma^{*}\). The class of languages decided in time \(t\) by some PTM/A with advice \(\alpha\) is denoted by \(\mathbf{PTMA}[\alpha,t]\). Given some class of advice functions \(\mathcal{A}\subseteq(\Sigma^{*})^{\mathbb{N}}\) and some class of time functions \(\mathcal{T}\subseteq\mathbb{N}^{\mathbb{N}}\), we also define

\[\mathbf{PTMA}\left[\mathcal{A},\mathcal{T}\right]=\bigcup_{\alpha\in\mathcal{A}}\bigcup_{t\in\mathcal{T}}\mathbf{PTMA}\left[\alpha,t\right].\]

The classes of languages decidable in polynomial time by some PTM/A with logarithmic prefix and non-prefix advice are \(\mathbf{BPP}/\mathbf{log}^{*}\) and \(\mathbf{BPP}/\mathbf{log}\), respectively. In this probabilistic case, however, it can be shown that \(\mathbf{BPP}/\mathbf{log}^{*}\subsetneq\mathbf{BPP}/\mathbf{log}\).

In the sequel, we will be interested in the size of the advice functions. Hence, we define the following _non-uniform complexity classes_.2 Given a class of languages (or associated machines) \(\mathcal{C}\subseteq 2^{\Sigma^{*}}\) and a function \(f:\mathbb{N}\to\mathbb{N}\), we say that \(L\in\mathcal{C}/f^{*}\) if there exist some \(L^{\prime}\in\mathcal{C}\) and some prefix advice function \(\alpha:\mathbb{N}\to\Sigma^{*}\) such that, for all \(n\in\mathbb{N}\) and for all \(w\in\Sigma^{n}\), the following properties hold:

Footnote 2: This definition is non-standard. Usually, non-uniform complexity classes are defined with respect to a class of advice functions \(\mathcal{H}\subseteq(\Sigma^{*})^{\mathbb{N}}\) instead of a class of advice functions’ sizes \(\mathcal{F}\subseteq\mathbb{N}^{\mathbb{N}}\).

1. \(|\alpha(n)|=f(n)\), for all \(n\geq 0\);
2. \(w\in L\Leftrightarrow\langle w,\alpha(n)\rangle\in L^{\prime}\);
3. \(\langle w,\alpha(n)\rangle\in L^{\prime}\Leftrightarrow\langle w,\alpha(k)\rangle\in L^{\prime}\), for all \(k\geq n\).

Given a class of functions \(\mathcal{F}\subseteq\mathbb{N}^{\mathbb{N}}\), we naturally set

\[\mathcal{C}/\mathcal{F}^{*}=\bigcup_{f\in\mathcal{F}}\mathcal{C}/f^{*}.\]

The non-starred complexity classes \(\mathcal{C}/f\) and \(\mathcal{C}/\mathcal{F}\) are defined analogously, except that the prefix property of \(\alpha\) and the last condition are not required. For instance, the class of languages decidable in polynomial time by some Turing machine (resp. some probabilistic Turing machine) with prefix advice of size \(f\) is \(\mathbf{P}/f^{*}\) (resp. \(\mathbf{BPP}/f^{*}\)).

Besides, for any \(w=w_{0}w_{1}w_{2}\cdots\in\Sigma^{\leq\omega}\), we consider the _base-2_ and _base-4 encoding_ functions \(\delta_{2}:\Sigma^{\leq\omega}\to[0,1]\) and \(\delta_{4}:\Sigma^{\leq\omega}\to[0,1]\) respectively defined by

\[\delta_{2}(w)=\sum_{i=0}^{|w|-1}\frac{w_{i}+1}{2^{i+1}}\ \ \text{and}\ \ \delta_{4}(w)=\sum_{i=0}^{|w|-1}\frac{2w_{i}+1}{4^{i+1}}.\]

The use of base 4 ensures that \(\delta_{4}\) is an injection. Setting \(\Delta:=\delta_{4}(\Sigma^{\omega})\subseteq[0,1]\) ensures that the restriction \(\delta_{4}:\Sigma^{\omega}\to\Delta\) is bijective, and thus that \(\delta_{4}^{-1}\) is well-defined on the domain \(\Delta\). In the sequel, for any real \(r\in\Delta\), its base-4 expansion will generally be denoted by \(\bar{r}=r_{0}r_{1}r_{2}\cdots=\delta_{4}^{-1}(r)\in\Sigma^{\omega}\). For any \(R\subseteq\Delta\), we thus define \(\bar{R}=\{\bar{r}:r\in R\}\).

Finally, for every probability space \(\Omega\) and all events \(A_{1},\ldots,A_{n}\subseteq\Omega\), the probability that at least one event \(A_{i}\) occurs can be bounded by the _union bound_:

\[\Pr\left(\bigcup_{i=1}^{n}A_{i}\right)\leq\sum_{i=1}^{n}\Pr\left(A_{i}\right).\]

## 4 Recurrent Neural Networks

We consider a specific model of recurrent neural networks complying with the _echo state networks_ architecture [40; 41; 42; 46]. More specifically, a recurrent neural network is composed of an input layer, a pool of interconnected neurons sometimes referred to as the _reservoir_, and an output layer, as illustrated in Figure 1. The networks read and accept or reject finite words over the alphabet \(\Sigma=\{0,1\}\) using an input-output encoding described below. Accordingly, they are capable of deciding formal languages.
**Definition 1**.: _A rational-weighted recurrent neural network (RNN) is a tuple_

\[\mathcal{N}=\left(\mathbf{x},\mathbf{h},\mathbf{y},\mathbf{W_{in}},\mathbf{W_{res}},\mathbf{W_{out}},\mathbf{h}^{0}\right)\]

_where_

* \(\mathbf{x}=(x_{0},x_{1})\) _is a sequence of two input cells, the data input_ \(x_{0}\) _and the validation input_ \(x_{1}\)_;_
* \(\mathbf{h}=(h_{0},\ldots,h_{K-1})\) _is a sequence of_ \(K\) _hidden cells, sometimes referred to as the reservoir;_
* \(\mathbf{y}=(y_{0},y_{1})\) _is a sequence of two output cells, the data output_ \(y_{0}\) _and the validation output_ \(y_{1}\)_;_
* \(\mathbf{W_{in}}\in\mathbb{Q}^{K\times(2+1)}\) _is a matrix of input weights and biases, where_ \(w_{ij}\) _is the weight from input_ \(x_{j}\) _to cell_ \(h_{i}\)_, for_ \(j\neq 2\)_, and_ \(w_{i2}\) _is the bias of_ \(h_{i}\)_;_
* \(\mathbf{W_{res}}\in\mathbb{Q}^{K\times K}\) _is a matrix of internal weights, where_ \(w_{ij}\) _is the weight from cell_ \(h_{j}\) _to cell_ \(h_{i}\)_;_
* \(\mathbf{W_{out}}\in\mathbb{Q}^{2\times K}\) _is a matrix of output weights, where_ \(w_{ij}\) _is the weight from cell_ \(h_{j}\) _to output_ \(y_{i}\)_;_
* \(\mathbf{h}^{0}=(h_{0}^{0},\ldots,h_{K-1}^{0})\in[0,1]^{K}\) _is the initial state of_ \(\mathcal{N}\)_, where each component_ \(h_{i}^{0}\) _is the initial activation value of cell_ \(h_{i}\)_._

The _activation values_ of the cells \(x_{i}\), \(h_{j}\) and \(y_{k}\) at time \(t\) are denoted by \(x_{i}^{t}\in\{0,1\}\), \(h_{j}^{t}\in[0,1]\) and \(y_{k}^{t}\in\{0,1\}\), respectively. Note that the activation values of input and output cells are Boolean, as opposed to those of the hidden cells. The _input_, _output_ and _(hidden) state_ of \(\mathcal{N}\) at time \(t\) are the vectors

\[\mathbf{x}^{t}=(x_{0}^{t},x_{1}^{t})\in\mathbb{B}^{2},\quad\mathbf{y}^{t}=(y_{0}^{t},y_{1}^{t})\in\mathbb{B}^{2}\ \ \text{and}\ \ \mathbf{h}^{t}=(h_{0}^{t},\ldots,h_{K-1}^{t})\in[0,1]^{K}\,,\]

respectively. Given some input \(\mathbf{x}^{t}\) and state \(\mathbf{h}^{t}\) at time \(t\), the state \(\mathbf{h}^{t+1}\) and the output \(\mathbf{y}^{t+1}\) at time \(t+1\) are computed by the following equations:

\[\mathbf{h}^{t+1} = \sigma\left(\mathbf{W_{in}}(\mathbf{x}^{t}:1)+\mathbf{W_{res}}\mathbf{h}^{t}\right) \tag{1}\]
\[\mathbf{y}^{t+1} = \theta\left(\mathbf{W_{out}}\mathbf{h}^{t+1}\right) \tag{2}\]

where \((\mathbf{x}^{t}:1)\) denotes the vector \((x_{0}^{t},x_{1}^{t},1)\), and \(\sigma\) and \(\theta\) are the _linear sigmoid_ and the _hard-threshold_ functions respectively given by

\[\sigma(x)=\begin{cases}0&\text{if }x<0\\ x&\text{if }0\leq x\leq 1\\ 1&\text{if }x>1\end{cases}\quad\text{and}\quad\theta(x)=\begin{cases}0&\text{if }x<1\\ 1&\text{if }x\geq 1\end{cases}.\]

The constant value \(1\) in the input vector \((\mathbf{x}^{t}:1)\) ensures that the hidden cells \(\mathbf{h}\) receive the last column of \(\mathbf{W_{in}}\in\mathbb{Q}^{K\times(2+1)}\) as biases at each time step \(t\geq 0\). In the sequel, the bias of cell \(h_{i}\) will be denoted by \(w_{i2}\).
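To make the dynamics concrete, here is a minimal simulation sketch in Python (with NumPy); the network size, the weight values and the input word below are arbitrary illustrative choices, not values prescribed by the model.

```python
# Minimal sketch of the dynamics of Equations (1) and (2).
# All weights here are illustrative rationals; any rational values would do.
import numpy as np

def sigma(x):
    # linear sigmoid: 0 below 0, identity on [0,1], 1 above 1
    return np.clip(x, 0.0, 1.0)

def theta(x):
    # hard threshold: output fires iff the pre-activation reaches 1
    return (x >= 1).astype(int)

K = 4                                        # number of hidden cells
rng = np.random.default_rng(0)
W_in  = rng.integers(-2, 3, (K, 3)) / 4      # columns: data, validation, bias
W_res = rng.integers(-2, 3, (K, K)) / 4
W_out = rng.integers(-2, 3, (2, K)) / 4

def step(x, h):
    """One time step: x = (x0, x1) Boolean input, h = current hidden state."""
    h_next = sigma(W_in @ np.array([x[0], x[1], 1.0]) + W_res @ h)
    y_next = theta(W_out @ h_next)           # Boolean output (data, validation)
    return h_next, y_next

h = np.zeros(K)                              # initial state h^0
for bit in [1, 0, 1, 0, 0]:                  # feed the word 10100
    h, y = step((bit, 1), h)
```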
An _input_ \(\mathbf{x}\) of length \(n\) for the network \(\mathcal{N}\) is an infinite sequence of inputs at successive time steps \(t=0,1,2,\dots\), such that the \(n\) first validation bits are equal to \(1\), while the remaining data and validation bits are equal to \(0\), i.e.,

\[\mathbf{x}=\mathbf{x}^{0}\mathbf{x}^{1}\cdots\mathbf{x}^{n-1}\mathbf{0}^{\omega}\in\left(\mathbb{B}^{2}\right)^{\omega}\]

where \(\mathbf{x}^{i}=(x_{0}^{i},1)\) and \(x_{0}^{i}\in\{0,1\}\), for \(i=0,\dots,n-1\), and \(\mathbf{0}=(0,0)\). Suppose that the network \(\mathcal{N}\) is in the initial state \(\mathbf{h}^{0}\) and that input \(\mathbf{x}\) is presented to \(\mathcal{N}\) step by step. The dynamics given by Equations (1) and (2) ensures that \(\mathcal{N}\) will generate the sequences of states and outputs

\[\mathbf{h}=\mathbf{h}^{0}\mathbf{h}^{1}\mathbf{h}^{2}\cdots\in\left([0,1]^{K}\right)^{\omega}\ \ \text{and}\ \ \mathbf{y}=\mathbf{y}^{1}\mathbf{y}^{2}\mathbf{y}^{3}\cdots\in\left(\mathbb{B}^{2}\right)^{\omega}\]

step by step, where \(\mathbf{y}\coloneqq\mathcal{N}(\mathbf{x})\) is the _output_ of \(\mathcal{N}\) associated with input \(\mathbf{x}\).

Figure 1: A recurrent neural network. The network is composed of two Boolean input cells (data and validation), two Boolean output cells (data and validation) and a set of \(K\) hidden cells, the reservoir, that are recurrently interconnected. The weight matrices \(\mathbf{W_{in}}\), \(\mathbf{W_{res}}\), and \(\mathbf{W_{out}}\) labeling the connections between these layers are represented. In this illustration, the network reads the finite word \(10100\) by means of its data and validation input cells (blue), and eventually rejects it, as shown by the pattern of the data and validation output cells (red).

Now, let \(w=w_{0}w_{1}\cdots w_{n-1}\in\Sigma^{*}\) be some finite word of length \(n\), let \(\tau\in\mathbb{N}\) be some integer, and let \(f:\mathbb{N}\to\mathbb{N}\) be some non-decreasing function. The word \(w\in\Sigma^{*}\) can naturally be associated with the input

\[\mathbf{w}=\mathbf{w}^{0}\mathbf{w}^{1}\cdots\mathbf{w}^{n-1}\mathbf{0}^{\omega}\in\left(\mathbb{B}^{2}\right)^{\omega}\]

defined by \(\mathbf{w}^{i}=(x_{0}^{i},x_{1}^{i})=(w_{i},1)\), for \(i=0,\ldots,n-1\). The input pattern is thus given by

\[\begin{array}{lcl}x_{0}^{0}\ x_{0}^{1}\ x_{0}^{2}\ \cdots&=&w_{0}\ w_{1}\ \cdots\ w_{n-1}\ 0\ 0\ 0\ \cdots\\ x_{1}^{0}\ x_{1}^{1}\ x_{1}^{2}\ \cdots&=&1\ \ \ 1\ \ \cdots\ \ \ 1\ \ \ \ 0\ 0\ 0\ \cdots\end{array}\]

where the validation bits indicate whether an input is actively being processed or not, and the corresponding data bits represent the successive values of the input (see Figure 1). The word \(w\) is said to be _accepted_ or _rejected by \(\mathcal{N}\) in time \(\tau\)_ if the output

\[\mathcal{N}(\mathbf{w})=\mathbf{y}=\mathbf{y}^{1}\mathbf{y}^{2}\cdots\mathbf{y}^{\tau}\mathbf{0}^{\omega}\in\left(\mathbb{B}^{2}\right)^{\omega}\]

is such that \(\mathbf{y}^{i}=(0,0)\), for \(i=1,\ldots,\tau-1\), and \(\mathbf{y}^{\tau}=(1,1)\) or \(\mathbf{y}^{\tau}=(0,1)\), respectively. The output pattern is thus given by

\[\begin{array}{lcl}y_{0}^{1}\ y_{0}^{2}\ \cdots&=&0\ 0\ \cdots\ y_{0}^{\tau}\ 0\ 0\ \cdots\\ y_{1}^{1}\ y_{1}^{2}\ \cdots&=&0\ 0\ \cdots\ \ 1\ \ \ 0\ 0\ \cdots\end{array}\]

where \(y_{0}^{\tau}=1\) or \(y_{0}^{\tau}=0\), respectively.
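This input convention is easy to mechanize; the following Python fragment is a sketch in which a finite horizon stands in for the infinite suffix \(\mathbf{0}^{\omega}\).

```python
# Sketch of the input encoding: data bits paired with validation bit 1,
# followed by (0, 0) forever (truncated here to a finite horizon).
def encode_word(w: str, horizon: int):
    stream = [(int(b), 1) for b in w]            # n steps of (w_i, 1)
    stream += [(0, 0)] * (horizon - len(w))      # padding standing for 0^omega
    return stream

print(encode_word("10100", 8))
# [(1, 1), (0, 1), (1, 1), (0, 1), (0, 1), (0, 0), (0, 0), (0, 0)]
```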
In addition, the word \(w\) is said to be _accepted_ or _rejected by \(\mathcal{N}\) in time \(f\)_ if it is accepted or rejected in time \(\tau\leq f(n)\), respectively.3 A language \(L\subseteq\Sigma^{*}\) is _decided by \(\mathcal{N}\) in time \(f\)_ if for every word \(w\in\Sigma^{*}\),

Footnote 3: The choice of the letter \(f\) (instead of \(t\)) for referring to a computation time is deliberate, since the computation time of the networks will later be linked to the advice length of the Turing machines.

\[\begin{array}{l}w\in L\mbox{ implies that }\mathbf{w}\mbox{ is accepted by }\mathcal{N}\mbox{ in time }f\mbox{ and}\\ w\not\in L\mbox{ implies that }\mathbf{w}\mbox{ is rejected by }\mathcal{N}\mbox{ in time }f.\end{array}\]

A language \(L\subseteq\Sigma^{*}\) is _decided by \(\mathcal{N}\) in polynomial time_ if there exists a polynomial \(f\) such that \(L\) is decided by \(\mathcal{N}\) in time \(f\). If it exists, the _language decided by \(\mathcal{N}\) in time \(f\)_ is denoted by \(L_{f}(\mathcal{N})\). Besides, a network \(\mathcal{N}\) is said to be a _decider_ if any finite word is eventually accepted or rejected by it. In this case, the _language decided by \(\mathcal{N}\)_ is unique and is denoted by \(L(\mathcal{N})\). We will assume that all the networks that we consider are deciders.

Recurrent neural networks with rational weights have been shown to be computationally equivalent to Turing machines [88].

**Theorem 1**.: _Let \(L\subseteq\Sigma^{*}\) be some language. The following conditions are equivalent:_

1. \(L\) _is decidable by some TM;_
2. \(L\) _is decidable by some RNN._

Proof.: (sketch) (ii) \(\rightarrow\) (i): The dynamics of any RNN \(\mathcal{N}\) is governed by Equations (1) and (2), which involve only rational weights, and thus can clearly be simulated by some Turing machine \(\mathcal{M}\).

(i) \(\rightarrow\) (ii): We provide a sketch of the original proof of this result [88]. This proof is based on the fact that any finite (and infinite) binary word can be encoded into the activation value of a neuron, and decoded from this activation value bit by bit. This idea will be reused in some forthcoming proofs.

First of all, recall that any TM \(\mathcal{M}\) is computationally equivalent to, and can be simulated in real time by, some \(p\)-stack machine \(\mathcal{S}\) with \(p\geq 2\). We thus show that any \(p\)-stack machine \(\mathcal{S}\) can be simulated by some RNN \(\mathcal{N}\). Towards this purpose, we encode every stack content

\[w=w_{0}\cdots w_{n-1}\in\{0,1\}^{*}\]

as the rational number

\[q_{w}=\delta_{4}(w)=\sum_{i=0}^{n-1}\frac{2w_{i}+1}{4^{i+1}}\in[0,1].\]

For instance, \(w=1110\) is encoded into \(q_{w}=\frac{3}{4}+\frac{3}{16}+\frac{3}{64}+\frac{1}{256}\).
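A minimal Python sketch of this base-4 encoding, together with the \(\sigma\)-based stack operations listed just below, may help; exact rational arithmetic keeps the encoding lossless, and the assertions check the worked example \(w=1110\).

```python
# Sketch of the base-4 stack encoding and of the stack operations
# implemented with the linear-sigmoid function sigma.
from fractions import Fraction

def sigma(x):
    return max(Fraction(0), min(Fraction(1), x))      # linear sigmoid

def delta4(w: str) -> Fraction:
    return sum(Fraction(2 * int(b) + 1, 4 ** (i + 1)) for i, b in enumerate(w))

def top(q):      return sigma(4 * q - 2)              # 1 iff the top bit is 1
def push0(q):    return sigma(q / 4 + Fraction(1, 4))
def push1(q):    return sigma(q / 4 + Fraction(3, 4))
def pop(q):      return sigma(4 * q - (2 * top(q) + 1))
def nonempty(q): return sigma(4 * q)                  # 1 iff stack is non-empty

q = delta4("1110")                  # = 3/4 + 3/16 + 3/64 + 1/256 = 253/256
assert top(q) == 1                  # the top of the stack is 1
assert pop(q) == delta4("110")      # popping removes the top bit
assert push0(q) == delta4("01110")  # pushing 0 prepends a 0
assert nonempty(delta4("")) == 0    # the empty stack encodes to 0
```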
With this base-4 encoding, the required stack operations can be performed by simple functions involving the sigmoid-linear function \(\sigma\), as described below:

* Reading the top of the stack: \(\mathrm{top}(q_{w})=\sigma(4q_{w}-2)\)
* Pushing \(0\) into the stack: \(\mathrm{push}_{0}(q_{w})=\sigma(\frac{1}{4}q_{w}+\frac{1}{4})\)
* Pushing \(1\) into the stack: \(\mathrm{push}_{1}(q_{w})=\sigma(\frac{1}{4}q_{w}+\frac{3}{4})\)
* Popping the stack: \(\mathrm{pop}(q_{w})=\sigma(4q_{w}-(2\mathrm{top}(q_{w})+1))\)
* Testing the non-emptiness of the stack: \(\mathrm{nonempty}(q_{w})=\sigma(4q_{w})\)

Hence, the content \(w\) of each stack can be encoded into the rational activation value \(q_{w}\) of a stack neuron, and the stack operations (reading the top, pushing \(0\) or \(1\), popping and testing the emptiness) can be performed by simple neural circuits implementing the functions described above. Based on these considerations, we can design an RNN \(\mathcal{N}\) which correctly simulates the \(p\)-stack machine \(\mathcal{S}\). The network \(\mathcal{N}\) contains \(3\) neurons per stack: one for storing the encoded content of the stack, one for reading the top element of the stack, and one for storing the answer of the emptiness test of the stack. Moreover, \(\mathcal{N}\) contains two pools of neurons implementing the computational states and transition function of \(\mathcal{S}\), respectively. For any computational state, input bit, and contents of the stacks, \(\mathcal{N}\) computes the next computational state and updates the stacks' contents in accordance with the transition function of \(\mathcal{S}\). In this way, the network \(\mathcal{N}\) simulates the behavior of the \(p\)-stack machine \(\mathcal{S}\) correctly. The network \(\mathcal{N}\) is illustrated in Figure 2.

Figure 2: Construction of an RNN that simulates a \(p\)-stack machine.

It can be noticed that the simulation process described in the proof of Theorem 1 is performed in real time. More precisely, if a language \(L\subseteq\Sigma^{*}\) is decided by some TM in time \(f(n)\), then \(L\) is decided by some RNN in time \(f(n)+O(n)\). Hence, when restricted to polynomial time of computation, RNNs decide the complexity class \(\mathbf{P}\).

**Corollary 2**.: _Let \(L\subseteq\Sigma^{*}\) be some language. The following conditions are equivalent:_

1. \(L\in\mathbf{P}\)_;_
2. \(L\) _is decidable by some RNN in polynomial time._

## 5 Analog, Evolving and Stochastic Recurrent Neural Networks

We now introduce analog, evolving and stochastic recurrent neural networks, which are all variants of the RNN model. In polynomial time, these models capture the complexity classes \(\mathbf{P/poly}\), \(\mathbf{P/poly}\) and \(\mathbf{BPP/log}^{*}\), respectively, which all strictly contain the class \(\mathbf{P}\) and include non-recursive languages. According to these considerations, these augmented models have been qualified as _super-Turing_. For each model, a tight characterization in terms of Turing machines with specific kinds of advice is provided.

### Analog networks

An _analog recurrent neural network (ANN)_ is an RNN as defined in Definition 1, except that the weight matrices are real instead of rational [87].
Formally, an ANN is an RNN

\[\mathcal{N}=\left(\mathbf{x},\mathbf{h},\mathbf{y},\mathbf{W_{in}},\mathbf{W_{res}},\mathbf{W_{out}},\mathbf{h}^{0}\right)\]

such that

\[\mathbf{W_{in}}\in\mathbb{R}^{K\times(2+1)},\ \ \mathbf{W_{res}}\in\mathbb{R}^{K\times K}\ \ \text{and}\ \ \mathbf{W_{out}}\in\mathbb{R}^{2\times K}.\]

The definitions of acceptance and rejection of words as well as of decision of languages are the same as for RNNs.

It can be shown that any ANN \(\mathcal{N}\) containing the irrational weights \(r_{1},\ldots,r_{k}\in\mathbb{R}\setminus\mathbb{Q}\) is computationally equivalent to some ANN \(\mathcal{N}^{\prime}\) using only a single irrational weight \(r\in\mathbb{R}\setminus\mathbb{Q}\) such that \(r\in\Delta\subseteq[0,1]\) and \(r\) is the bias \(w_{02}\) of the hidden cell \(h_{0}\)[87]. Hence, without loss of generality, we restrict our attention to such networks. Let \(r\in\Delta\) and \(R\subseteq\Delta\).

* \(\mathrm{ANN}[r]\) denotes the class of ANNs such that all weights but \(w_{02}\) are rational and \(w_{02}=r\).
* \(\mathrm{ANN}[R]\) denotes the class of ANNs such that all weights but \(w_{02}\) are rational and \(w_{02}\in R\).

In this definition, \(r\) is allowed to be a rational number. In this case, an \(\mathrm{ANN}[r]\) is just a specific RNN.

In exponential time of computation, analog recurrent neural networks can decide any possible language. In fact, any language \(L\subseteq\Sigma^{*}\) can be encoded into the infinite word \(\bar{r}_{L}\in\Sigma^{\omega}\), where the \(i\)-th bit of \(\bar{r}_{L}\) equals \(1\) iff the \(i\)-th word of \(\Sigma^{*}\) belongs to \(L\), according to some enumeration of \(\Sigma^{*}\). Hence, we can build some ANN containing the real weight \(r_{L}=\delta_{4}(\bar{r}_{L})\), which, for every input \(w\), decides whether \(w\in L\) or \(w\not\in L\) by decoding \(\bar{r}_{L}\) and reading the suitable bit. In polynomial time of computation, however, the ANNs decide the complexity class \(\mathbf{P/poly}\), and hence, are computationally equivalent to Turing machines with polynomial advice (TM/poly(A)). The following result holds [87]:

**Theorem 3**.: _Let \(L\subseteq\Sigma^{*}\) be some language. The following conditions are equivalent:_

1. \(L\in\mathbf{P/poly}\)_;_
2. \(L\) _is decidable by some ANN in polynomial time._

Given some ANN \(\mathcal{N}\) and some \(q\in\mathbb{N}\), the _truncated network_ \(\mathcal{N}|_{q}\) is defined as the network \(\mathcal{N}\) in which all weights and activation values are truncated after \(q\) precision bits at each step of the computation. The following result shows that, up to time \(q\), the network \(\mathcal{N}\) can limit itself to \(O(q)\) precision bits without affecting the result of its computation [87].

**Lemma 4**.: _Let \(\mathcal{N}\) be some ANN computing in time \(f:\mathbb{N}\rightarrow\mathbb{N}\). Then, there exists some constant \(c>0\) such that, for every \(n\in\mathbb{N}\) and every input \(w\in\Sigma^{n}\), the networks \(\mathcal{N}\) and \(\mathcal{N}|_{cf(n)}\) produce the same outputs up to time \(f(n)\)._

The computational relationship between analog neural networks and Turing machines with advice can actually be strengthened.
Towards this purpose, for any non-decreasing function \(f:\mathbb{N}\rightarrow\mathbb{N}\) and any class of such functions \(\mathcal{F}\), we define the following classes of languages decided by analog neural networks in time \(f\) and \(\mathcal{F}\), respectively:

\[\mathbf{ANN}\left[r,f\right] = \left\{L\subseteq\Sigma^{*}:L=L_{f}(\mathcal{N})\text{ for some }\mathcal{N}\in\text{ANN}[r]\right\}\]
\[\mathbf{ANN}\left[R,\mathcal{F}\right] = \bigcup_{r\in R}\bigcup_{f\in\mathcal{F}}\mathbf{ANN}\left[r,f\right].\]

In addition, for any real \(r\in\Delta\) and any function \(f:\mathbb{N}\rightarrow\mathbb{N}\), the prefix advice \(\alpha(\bar{r},f):\mathbb{N}\rightarrow\Sigma^{*}\) of length \(f\) associated with \(r\) is defined by

\[\alpha(\bar{r},f)(n)=r_{0}r_{1}\cdots r_{f(n)-1}\]

for all \(n\in\mathbb{N}\). For any set of reals \(R\subseteq\Delta\) and any class of functions \(\mathcal{F}\subseteq\mathbb{N}^{\mathbb{N}}\), we naturally set

\[\alpha(\bar{R},\mathcal{F})=\bigcup_{\bar{r}\in\bar{R}}\bigcup_{f\in\mathcal{F}}\left\{\alpha(\bar{r},f)\right\}.\]

Conversely, note that any prefix unbounded advice \(\alpha:\mathbb{N}\rightarrow\Sigma^{*}\) is of the form \(\alpha(\bar{r},f)\), where \(\bar{r}=\lim_{n\rightarrow\infty}\alpha(n)\in\Sigma^{\omega}\) and \(f:\mathbb{N}\rightarrow\mathbb{N}\) is defined by \(f(n)=|\alpha(n)|\).

The following result clarifies the tight relationship between analog neural networks using real weights and Turing machines using related advice. Note that the real weights of the networks correspond precisely to the advice of the machines, and the computation time of the networks is related to the advice length of the machines.

**Proposition 5**.: _Let \(r\in\Delta\) be some real weight and \(f:\mathbb{N}\rightarrow\mathbb{N}\) be some non-decreasing function._

1. \(\mathbf{ANN}\left[r,f\right]\subseteq\mathbf{TMA}\left[\alpha(\bar{r},cf),O(f^{3})\right]\)_, for some_ \(c>0\)_._
2. \(\mathbf{TMA}\left[\alpha(\bar{r},f),f\right]\subseteq\mathbf{ANN}\left[r,O(f)\right]\)_._

Proof.: (i) Let \(L\in\mathbf{ANN}\left[r,f\right]\). Then, there exists some \(\text{ANN}[r]\) \(\mathcal{N}\) such that \(L_{f}(\mathcal{N})=L\). By Lemma 4, there exists some constant \(c>0\) such that the network \(\mathcal{N}\) and the truncated network \(\mathcal{N}|_{cf(n)}\) produce the same outputs up to time step \(f(n)\), for all \(n\in\mathbb{N}\). Now, consider Procedure 1 below. In this procedure, all instructions except the advice query are recursive. Besides, the simulation of each step of \(\mathcal{N}|_{cf(n)}\) involves a constant number of multiplications and additions of rational numbers, all representable by \(cf(n)\) bits, and can thus be performed in time \(O(f^{2}(n))\) (for the products). Consequently, the simulation of the \(f(n)\) steps of \(\mathcal{N}|_{cf(n)}\) can be performed in time \(O(f^{3}(n))\). Hence, Procedure 1 can be simulated by some TM/A \(\mathcal{M}\) using advice \(\alpha(\bar{r},cf)\) in time \(O(f^{3}(n))\). In addition, Lemma 4 ensures that \(w\) is accepted by \(\mathcal{M}\) iff \(w\) is accepted by \(\mathcal{N}\), for all \(w\in\Sigma^{*}\). Hence, \(L(\mathcal{M})=L(\mathcal{N})=L\), and therefore \(L\in\mathbf{TMA}\left[\alpha(\bar{r},cf),O(f^{3})\right]\).
```
Input: input \(w\in\Sigma^{n}\)
1  Query the advice \(\alpha(\bar{r},cf)(n)=r_{0}r_{1}\cdots r_{cf(n)-1}\)
2  for \(t=0,1,\ldots,f(n)-1\) do
3      Simulate one step of the truncated network \(\mathcal{N}|_{cf(n)}\), which uses the rational approximation \(\tilde{r}=\delta_{4}(r_{0}r_{1}\cdots r_{cf(n)-1})\) of \(r\) as its weight
4  return Output of \(\mathcal{N}|_{cf(n)}\) over \(w\) at time step \(f(n)\)
```
**Procedure 1**

(ii) Let \(L\in\mathbf{TMA}\left[\alpha(\bar{r},f),f\right]\). Then, there exists some TM/A \(\mathcal{M}\) with advice \(\alpha(\bar{r},f)\) such that \(L_{f}(\mathcal{M})=L\). We show that \(\mathcal{M}\) can be simulated by some analog neural network \(\mathcal{N}\) with real weight \(r\). The network \(\mathcal{N}\) simulates the advice tape of \(\mathcal{M}\) as described in the proof of Theorem 1: the left and right contents of the tape are encoded and stored into two stack neurons \(x_{l}\) and \(x_{r}\), respectively, and the tape operations are simulated using appropriate neural circuits.

On every input \(w\in\Sigma^{n}\), the network \(\mathcal{N}\) works as follows. First, \(\mathcal{N}\) copies its real bias \(r=\delta_{4}(r_{0}r_{1}r_{2}\cdots)\) into some neuron \(x_{a}\). Every time \(\mathcal{M}\) reads some new advice bit \(r_{i}\), then \(\mathcal{N}\) first pops \(r_{i}\) from its neuron \(x_{a}\), which thus takes the updated activation value \(\delta_{4}(r_{i+1}r_{i+2}\cdots)\), and next pushes \(r_{i}\) into neuron \(x_{r}\). This process is performed in constant time. At this point, neurons \(x_{l}\) and \(x_{r}\) contain the encoding of the bits \(r_{0}r_{1}\cdots r_{i}\). Hence, \(\mathcal{N}\) can simulate the recursive instructions of \(\mathcal{M}\) in the usual way, in real time, until the next bit \(r_{i+1}\) is read [88]. Overall, \(\mathcal{N}\) simulates the behavior of \(\mathcal{M}\) in time \(O(f(n))\).

We now show that \(\mathcal{M}\) and \(\mathcal{N}\) output the same decision for input \(w\). If \(\mathcal{M}\) does not reach the end of its advice word \(\alpha(n)\), the behaviors of \(\mathcal{M}\) and \(\mathcal{N}\) are identical, and so are their outputs. If at some point \(\mathcal{M}\) reaches the end of \(\alpha(n)\) and reads successive blank symbols, then \(\mathcal{N}\) continues to pop the successive bits \(r_{|\alpha(n)|}r_{|\alpha(n)|+1}\cdots\) from neuron \(x_{a}\), to push them into neuron \(x_{r}\), and to simulate the behavior of \(\mathcal{M}\). In this case, \(\mathcal{N}\) simulates the behavior of \(\mathcal{M}\) working with some extension of the advice \(\alpha(n)\), which, by the consistency property of \(\mathcal{M}\) (cf. Section 3), produces the same output as if working with advice \(\alpha(n)\). In this way, \(w\) is accepted by \(\mathcal{M}\) iff \(w\) is accepted by \(\mathcal{N}\), and thus \(L(\mathcal{M})=L(\mathcal{N})\). Therefore, \(L\in\mathbf{ANN}\left[r,O(f)\right]\).

The following corollary shows that the classes of languages decided in polynomial time by analog networks with real weights and by Turing machines with related advice are the same.

**Corollary 6**.: _Let \(r\in\Delta\) be some real weight and \(R\subseteq\Delta\) be some set of real weights._

1. \(\mathbf{ANN}\left[r,\mathrm{poly}\right]=\mathbf{TMA}\left[\alpha(\bar{r},\mathrm{poly}),\mathrm{poly}\right]\)_._
2. \(\mathbf{ANN}\left[R,\mathrm{poly}\right]=\mathbf{TMA}\left[\alpha(\bar{R},\mathrm{poly}),\mathrm{poly}\right]\)_._

Proof.: (i) Let \(L\in\mathbf{ANN}\left[r,\mathrm{poly}\right]\).
Then, there exists \(f\in\mathrm{poly}\) such that \(L\in\mathbf{ANN}\left[r,f\right]\). By Proposition 5-(i), \(L\in\mathbf{TMA}\left[\alpha(\bar{r},cf),O(f^{3})\right]\), for some \(c>0\). Thus \(L\in\mathbf{TMA}\left[\alpha(\bar{r},\mathrm{poly}),\mathrm{poly}\right]\). Conversely, let \(L\in\mathbf{TMA}\left[\alpha(\bar{r},\mathrm{poly}),\mathrm{poly}\right]\). Then, there exist \(f,f^{\prime}\in\mathrm{poly}\) such that \(L\in\mathbf{TMA}\left[\alpha(\bar{r},f^{\prime}),f\right]\). By the consistency property of the TM/A, we can assume without loss of generality that \(f^{\prime}=f\). By Proposition 5-(ii), \(L\in\mathbf{ANN}\left[r,O(f)\right]\), and hence \(L\in\mathbf{ANN}\left[r,\mathrm{poly}\right]\).

(ii) This point follows directly from point (i) by taking the union over all \(r\in R\).

### Evolving networks

An _evolving recurrent neural network (ENN)_ is an RNN where the weight matrices can evolve over time inside a bounded space instead of staying static [12]. Formally, an ENN is a tuple

\[\mathcal{N}=\left(\mathbf{x},\mathbf{h},\mathbf{y},\left(\mathbf{W}_{\mathbf{in}}^{t}\right)_{t\in\mathbb{N}},\left(\mathbf{W}_{\mathbf{res}}^{t}\right)_{t\in\mathbb{N}},\left(\mathbf{W}_{\mathbf{out}}^{t}\right)_{t\in\mathbb{N}},\mathbf{h}^{0}\right)\]

where \(\mathbf{x},\mathbf{h},\mathbf{y},\mathbf{h}^{0}\) are defined as in Definition 1, and

\[\mathbf{W}_{\mathbf{in}}^{t}\in\mathbb{Q}^{K\times(2+1)},\ \ \mathbf{W}_{\mathbf{res}}^{t}\in\mathbb{Q}^{K\times K}\ \ \text{and}\ \ \mathbf{W}_{\mathbf{out}}^{t}\in\mathbb{Q}^{2\times K}\]

are the input, reservoir and output weight matrices at time \(t\), such that \(\|\mathbf{W}_{\mathbf{in}}^{t}\|_{\max},\|\mathbf{W}_{\mathbf{res}}^{t}\|_{\max},\|\mathbf{W}_{\mathbf{out}}^{t}\|_{\max}\leq C\) for some constant \(C>1\) and for all \(t\in\mathbb{N}\). The boundedness condition expresses the fact that the synaptic weights are confined into a certain range of values imposed by the biological constitution of the neurons. The successive values of an evolving weight \(w_{ij}\) are denoted by \((w_{ij}^{t})_{t\in\mathbb{N}}\). The dynamics of an ENN is given by the following adapted equations:

\[\mathbf{h}^{t+1} = \sigma\left(\mathbf{W}_{\mathbf{in}}^{t}(\mathbf{x}^{t}:1)+\mathbf{W}_{\mathbf{res}}^{t}\mathbf{h}^{t}\right) \tag{3}\]
\[\mathbf{y}^{t+1} = \theta\left(\mathbf{W}_{\mathbf{out}}^{t}\mathbf{h}^{t+1}\right). \tag{4}\]

The definitions of acceptance and rejection of words and decision of languages are the same as for RNNs.

In this case also, it can be shown that any ENN \(\mathcal{N}\) containing the evolving weights \(\bar{e}_{1},\ldots,\bar{e}_{n}\in[-C,C]^{\omega}\) is computationally equivalent to some ENN \(\mathcal{N}^{\prime}\) containing only one evolving weight \(\bar{e}\in[-C,C]^{\omega}\), such that \(\bar{e}\) evolves only among the binary values \(0\) and \(1\), i.e. \(\bar{e}\in\Sigma^{\omega}\), and \(\bar{e}\) is the evolving bias \((w_{02}^{t})_{t\in\mathbb{N}}\) of the hidden cell \(h_{0}\)[12]. Hence, without loss of generality, we restrict our attention to such networks. Let \(\bar{e}\in\Sigma^{\omega}\) be some binary evolving weight and \(\bar{E}\subseteq\Sigma^{\omega}\).

* ENN\([\bar{e}]\) denotes the class of ENNs such that all weights but \(w_{02}\) are static, and \((w_{02}^{t})_{t\in\mathbb{N}}=\bar{e}\).
* ENN\([\bar{E}]\) denotes the class of ENNs such that all weights but \(w_{02}\) are static, and \((w_{02}^{t})_{t\in\mathbb{N}}\in\bar{E}\).
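As a complement, here is a minimal Python sketch of Equations (3) and (4) in this normal form, where only the bias \(w_{02}\) evolves along a prescribed binary sequence; all numerical values are arbitrary illustrative choices.

```python
# Sketch of the evolving dynamics: same update as Equations (1)-(2),
# except that the bias w_02 of cell h_0 takes a fresh value from {0, 1}
# at every time step, following a fixed evolving sequence e.
import numpy as np

def sigma(x):
    return np.clip(x, 0.0, 1.0)

K = 4
rng = np.random.default_rng(1)
W_in  = rng.integers(-2, 3, (K, 3)) / 4       # static rational weights
W_res = rng.integers(-2, 3, (K, K)) / 4

def evolving_step(x, h, bias_t):
    W = W_in.copy()
    W[0, 2] = bias_t                          # evolving bias w_02^t in {0, 1}
    return sigma(W @ np.array([x[0], x[1], 1.0]) + W_res @ h)

e = [0, 1, 1, 0, 1]                           # prefix of the evolving weight
h = np.zeros(K)
for t, bit in enumerate([1, 0, 1, 0, 0]):     # feed the word 10100
    h = evolving_step((bit, 1), h, e[t])
```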
Like analog networks, evolving recurrent neural networks can decide any possible language in exponential time of computation. In polynomial time, they decide the complexity class \(\mathbf{P/poly}\), and thus are computationally equivalent to Turing machines with polynomial advice (TM/poly(A)). The following result holds [12]:

**Theorem 7**.: _Let \(L\subseteq\Sigma^{*}\) be some language. The following conditions are equivalent:_

1. \(L\in\mathbf{P/poly}\)_;_
2. \(L\) _is decidable by some ENN in polynomial time._

An analogous version of Lemma 4 holds for the case of evolving networks [12]. Note that the boundedness condition on the weights is involved in this result.

**Lemma 8**.: _Let \(\mathcal{N}\) be some ENN computing in time \(f:\mathbb{N}\rightarrow\mathbb{N}\). Then, there exists some constant \(c\) such that, for every \(n\in\mathbb{N}\) and every input \(w\in\Sigma^{n}\), the networks \(\mathcal{N}\) and \(\mathcal{N}|_{cf(n)}\) produce the same outputs up to time \(f(n)\)._

Once again, the computational relationship between evolving neural networks and Turing machines with advice can be strengthened. For this purpose, we define the following classes of languages decided by evolving neural networks in time \(f\) and \(\mathcal{F}\), respectively:

\[\mathbf{ENN}\left[\bar{e},f\right] = \left\{L\subseteq\Sigma^{*}:L=L_{f}(\mathcal{N})\text{ for some }\mathcal{N}\in\text{ENN}[\bar{e}]\right\}\]
\[\mathbf{ENN}\left[\bar{E},\mathcal{F}\right] = \bigcup_{\bar{e}\in\bar{E}}\bigcup_{f\in\mathcal{F}}\mathbf{ENN}\left[\bar{e},f\right].\]

For any \(\bar{e}\in\Sigma^{\omega}\) and any function \(f:\mathbb{N}\rightarrow\mathbb{N}\), we consider the prefix advice \(\alpha(\bar{e},f):\mathbb{N}\rightarrow\Sigma^{*}\) associated with \(\bar{e}\) and \(f\) defined by

\[\alpha(\bar{e},f)(n)=e_{0}e_{1}\cdots e_{f(n)-1}\]

for all \(n\in\mathbb{N}\). Conversely, any prefix advice \(\alpha:\mathbb{N}\rightarrow\Sigma^{*}\) is clearly of the form \(\alpha(\bar{e},f)\), where \(\bar{e}=\lim_{n\rightarrow\infty}\alpha(n)\in\Sigma^{\omega}\) and \(f(n)=|\alpha(n)|\) for all \(n\in\mathbb{N}\). The following relationships between neural networks with evolving weights and Turing machines with related advice hold:

**Proposition 9**.: _Let \(\bar{e}\in\Sigma^{\omega}\) be some binary evolving weight and \(f:\mathbb{N}\rightarrow\mathbb{N}\) be some non-decreasing function._

1. \(\mathbf{ENN}\left[\bar{e},f\right]\subseteq\mathbf{TMA}\left[\alpha(\bar{e},cf),O(f^{3})\right]\)_, for some_ \(c>0\)_._
2. \(\mathbf{TMA}\left[\alpha(\bar{e},f),f\right]\subseteq\mathbf{ENN}\left[\bar{e},O(f)\right]\)_._

Proof.: The proof is very similar to that of Proposition 5.

(i) Let \(L\in\mathbf{ENN}\left[\bar{e},f\right]\). Then, there exists some \(\text{ENN}[\bar{e}]\) \(\mathcal{N}\) such that \(L_{f}(\mathcal{N})=L\). By Lemma 8, there exists some constant \(c>0\) such that the network \(\mathcal{N}\) and the truncated network \(\mathcal{N}|_{cf(n)}\) produce the same outputs up to time step \(f(n)\), for all \(n\in\mathbb{N}\). Now, consider Procedure 2, the analogue of Procedure 1 in which the advice \(\alpha(\bar{e},cf)(n)=e_{0}e_{1}\cdots e_{cf(n)-1}\) is queried and the truncated network \(\mathcal{N}|_{cf(n)}\) is simulated for \(f(n)\) steps. In this procedure, all instructions except the query one are recursive. Procedure 2 can be simulated by some TM/A \(\mathcal{M}\) using advice \(\alpha(\bar{e},cf)\) in time \(O(f^{3}(n))\), as described in the proof of Proposition 5. In addition, \(\mathcal{M}\) and \(\mathcal{N}\) decide the same language \(L\), and therefore \(L\in\mathbf{TMA}\left[\alpha(\bar{e},cf),O(f^{3})\right]\).

(ii) Let \(L\in\mathbf{TMA}\left[\alpha(\bar{e},f),f\right]\).
Then, there exists some TM/A \(\mathcal{M}\) with advice \(\alpha(\bar{e},f)\) such that \(L_{f}(\mathcal{M})=L\). The machine \(\mathcal{M}\) can be simulated by the network \(\mathcal{N}\) with evolving weight \(\bar{e}=e_{0}e_{1}e_{2}\cdots\) as follows. First, \(\mathcal{N}\) simultaneously counts and pushes into a stack neuron \(x_{a}\) the successive bits of \(\bar{e}\) as they arrive. Then, for \(k=1,2,\dots\) and until it produces a decision, \(\mathcal{N}\) proceeds as follows. If necessary, \(\mathcal{N}\) waits for \(x_{a}\) to contain more than \(2^{k}\) bits, copies the content \(e_{0}e_{1}\cdots e_{2^{k}}\cdots\) of \(x_{a}\) in reverse order into another stack neuron \(x_{a^{\prime}}\), and simulates \(\mathcal{M}\) with advice \(e_{0}e_{1}\cdots e_{2^{k}}\cdots\) in real time. Every time \(\mathcal{M}\) reads a new advice bit, \(\mathcal{N}\) tries to access it from its stack \(x_{a^{\prime}}\). If \(x_{a^{\prime}}\) does not contain this bit, then \(\mathcal{N}\) restarts the whole process with \(k+1\). When \(k=\log(f(n))\), the stack \(x_{a}\) contains \(2^{k}=f(n)\) bits, which ensures that \(\mathcal{N}\) properly simulates \(\mathcal{M}\) with advice \(e_{0}e_{1}\cdots e_{f(n)-1}\). Hence, the whole simulation process is achieved in time \(O(\sum_{k=1}^{\log(f(n))}2^{k})=O(2^{\log(f(n))+1})=O(f(n))\). In this way, \(\mathcal{M}\) and \(\mathcal{N}\) decide the same language \(L\), and \(\mathcal{M}\) is simulated by \(\mathcal{N}\) in time \(O(f)\). Therefore, \(L\in\mathbf{ENN}\left[\bar{e},O(f)\right]\).

The classes of languages decided in polynomial time by evolving networks and Turing machines using related evolving weights and advice are the same.

**Corollary 10**.: _Let \(\bar{e}\in\Sigma^{\omega}\) be some binary evolving weight and \(\bar{E}\subseteq\Sigma^{\omega}\) be some set of binary evolving weights._

1. \(\mathbf{ENN}\left[\bar{e},\mathrm{poly}\right]=\mathbf{TMA}\left[\alpha(\bar{e},\mathrm{poly}),\mathrm{poly}\right]\)_._
2. \(\mathbf{ENN}\left[\bar{E},\mathrm{poly}\right]=\mathbf{TMA}\left[\alpha(\bar{E},\mathrm{poly}),\mathrm{poly}\right]\)_._

Proof.: The proof is similar to that of Corollary 6.

### Stochastic networks

A _stochastic recurrent neural network (SNN)_ is an RNN as defined in Definition 1, except that the network contains additional stochastic cells as inputs [84]. Formally, an SNN is an RNN

\[\mathcal{N}=\left(\mathbf{x},\mathbf{h},\mathbf{y},\mathbf{W_{in}},\mathbf{W_{res}},\mathbf{W_{out}},\mathbf{h}^{0}\right)\]

such that \(\mathbf{x}=(x_{0},x_{1},x_{2},\cdots,x_{k+1})\), where \(x_{0}\) and \(x_{1}\) are the data and validation input cells, respectively, and \(x_{2},\ldots,x_{k+1}\) are \(k\) additional stochastic cells. The dimension of the input weight matrix is adapted accordingly, namely \(\mathbf{W_{in}}\in\mathbb{Q}^{K\times((k+2)+1)}\). Each stochastic cell \(x_{i}\) is associated with a probability \(p_{i}\in[0,1]\): at each time step \(t\geq 0\), the activation \(x_{i}^{t}\) of the cell \(x_{i}\) takes value \(1\) with probability \(p_{i}\), and value \(0\) with probability \(1-p_{i}\). The dynamics of an SNN is then governed by Equations (1) and (2), but with the adapted inputs \(\mathbf{x}^{t}=(x_{0}^{t},x_{1}^{t},x_{2}^{t},\cdots,x_{k+1}^{t})\in\mathbb{B}^{k+2}\) for all \(t\geq 0\).
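For concreteness, a minimal Python sketch of a stochastic cell: at every step, \(x_{2}\) fires an independent Bernoulli bit of (possibly real) probability \(p\); the value \(p=0.3\) is an arbitrary illustrative choice.

```python
# Sketch of the stochastic input vector: data, validation, one stochastic
# cell x_2 emitting i.i.d. Bernoulli(p) bits, and the constant bias input.
import numpy as np

rng = np.random.default_rng(2)
p = 0.3                                    # illustrative probability of x_2

def stochastic_input(x0: int, x1: int) -> np.ndarray:
    x2 = int(rng.random() < p)             # Bernoulli(p) activation of x_2
    return np.array([x0, x1, x2, 1.0])     # fed to W_in at each time step
```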
For some SNN \(\mathcal{N}\), we assume that any input \(w=w_{0}w_{1}\cdots w_{n-1}\in\Sigma^{*}\) is decided by \(\mathcal{N}\) in the same amount of time \(\tau(n)\), regardless of the random pattern of the stochastic cells \(x_{i}^{t}\in\{0,1\}\), for all \(i=2,\ldots,k+1\). Hence, the number of possible computations of \(\mathcal{N}\) over \(w\) is finite. The input \(w\) is _accepted_ (resp. _rejected_) by \(\mathcal{N}\) if the number of accepting (resp. rejecting) computations over the total number of computations on \(w\) is greater than or equal to \(2/3\). This means that the error probability of \(\mathcal{N}\) is bounded by \(1/3\). If \(f:\mathbb{N}\rightarrow\mathbb{N}\) is a non-decreasing function, we say that \(w\) is _accepted_ or _rejected_ by \(\mathcal{N}\) _in time \(f\)_ if it is accepted or rejected in time \(\tau(n)\leq f(n)\), respectively. We assume that any SNN is a decider. The definition of decision of languages is the same as in the case of RNNs.

Once again, any SNN is computationally equivalent to some SNN with only one stochastic cell \(x_{2}\) associated with a real probability \(p\in\Delta\)[84]. Without loss of generality, we restrict our attention to such networks. Let \(p\in\Delta\) be some probability and \(P\subseteq\Delta\).

* SNN\([p]\) denotes the class of SNNs such that the probability of the stochastic cell \(x_{2}\) is equal to \(p\).
* SNN\([P]\) denotes the class of SNNs such that the probability of the stochastic cell \(x_{2}\) is equal to some \(p\in P\).

In polynomial time of computation, the SNNs with rational probabilities decide the complexity class \(\mathbf{BPP}\). By contrast, the SNNs with real probabilities decide the complexity class \(\mathbf{BPP}/\mathbf{log}^{*}\), and hence, are computationally equivalent to probabilistic Turing machines with logarithmic advice (PTM/log(A)). The following result holds [84]:

**Theorem 11**.: _Let \(L\subseteq\Sigma^{*}\) be some language. The following conditions are equivalent:_

1. \(L\in\mathbf{BPP/log}^{*}\)_;_
2. \(L\) _is decidable by some SNN in polynomial time._

As for the two previous models, the computational relationship between stochastic neural networks and Turing machines with advice can be made precise. We define the following classes of languages decided by stochastic neural networks in time \(f\) and \(\mathcal{F}\), respectively:

\[\mathbf{SNN}\left[p,f\right] = \left\{L\subseteq\Sigma^{*}:L=L_{f}(\mathcal{N})\text{ for some }\mathcal{N}\in\text{SNN}[p]\right\}\]
\[\mathbf{SNN}\left[P,\mathcal{F}\right] = \bigcup_{p\in P}\bigcup_{f\in\mathcal{F}}\mathbf{SNN}\left[p,f\right].\]

The tight relationships between stochastic neural networks using real probabilities and Turing machines using related advice can now be described. In this case, however, the advice lengths of the machines are logarithmically related to the computation times of the networks.

**Proposition 12**.: _Let \(p\in\Delta\) be some real probability and \(f:\mathbb{N}\rightarrow\mathbb{N}\) be some non-decreasing function._

1. \(\mathbf{SNN}[p,f]\subseteq\mathbf{PTMA}\left[\alpha(\bar{p},\log(5f)),O(f^{3})\right]\)_._
2. \(\mathbf{PTMA}\left[\alpha(\bar{p},\log(f)),f\right]\subseteq\mathbf{SNN}\left[p,O(f^{2})\right]\)_._

Proof.: (i) Let \(L\in\mathbf{SNN}\left[p,f\right]\). Then, there exists some SNN\([p]\) \(\mathcal{N}\) deciding \(L\) in time \(f\). Note that the stochastic network \(\mathcal{N}\) can be considered as a classical rational-weighted RNN with an additional input cell \(x_{2}\).
Since \(\mathcal{N}\) has rational weights, it can be noticed that up to time \(f(n)\), the activation values of its neurons are always representable by rationals of \(O(f(n))\) bits. Now, consider Procedure 3, in which, on input \(w\in\Sigma^{n}\), the advice \(\bar{p}_{\mathcal{M}}=\alpha(\bar{p},\log(5f))(n)\) is first queried and then, at each time step \(t\), a sequence \(\bar{b}\) of \(\log(5f(n))\) fair random bits is drawn, the random choice \(c_{t}\) is set to \(1\) iff \(\bar{b}<_{lex}\bar{p}_{\mathcal{M}}\), and one step of \(\mathcal{N}\) is simulated with the stochastic bit \(x_{2}^{t}=c_{t}\). This procedure can be simulated by some PTM/A \(\mathcal{M}\) using advice \(\alpha(\bar{p},\log(5f))\) in time \(O(f^{3})\), as described in the proof of Proposition 5.

It remains to show that \(\mathcal{N}\) and \(\mathcal{M}\) decide the same language \(L\). For this purpose, consider a hypothetical device \(\mathcal{M}^{\prime}\) working as follows: at each time \(t\), \(\mathcal{M}^{\prime}\) takes the sequence of bits \(\bar{b}\) generated by Procedure 3 and concatenates it with some infinite sequence of bits \(\bar{b}^{\prime}\in\Sigma^{\omega}\) drawn independently with probability \(\frac{1}{2}\), thus producing the infinite sequence \(\bar{b}\bar{b}^{\prime}\in\Sigma^{\omega}\). Then, \(\mathcal{M}^{\prime}\) generates the bit \(c^{\prime}_{t}=1\) iff \(\bar{b}\bar{b}^{\prime}<_{lex}\bar{p}\), which happens precisely with probability \(p\), since \(p=\delta_{2}(\bar{p})\)[2]. Finally, \(\mathcal{M}^{\prime}\) simulates the behavior of \(\mathcal{N}\) at time \(t\) using the stochastic bit \(x_{2}^{t}=c_{t}^{\prime}\). Clearly, \(\mathcal{M}^{\prime}\) and \(\mathcal{N}\) produce random bits with the same probability \(p\), behave in the same way, and thus decide the same language \(L\).

We now evaluate the error probability of \(\mathcal{M}\) at deciding \(L\), by comparing the behaviors of \(\mathcal{M}\) and \(\mathcal{M}^{\prime}\). Let \(w\in\Sigma^{n}\) be some input and let

\[\bar{p}_{\mathcal{M}}=\alpha(\bar{p},\log(5f))(n)=p_{0}p_{1}\cdots p_{\log(5f(n))-1}\ \ \mbox{and}\ \ p_{\mathcal{M}}=\delta_{2}(\bar{p}_{\mathcal{M}}).\]

According to Procedure 3, at each time step \(t\), the machine \(\mathcal{M}\) generates \(c_{t}=1\) iff \(\bar{b}<_{lex}\bar{p}_{\mathcal{M}}\), which happens precisely with probability \(p_{\mathcal{M}}\), since \(p_{\mathcal{M}}=\delta_{2}(\bar{p}_{\mathcal{M}})\)[2]. On the other hand, \(\mathcal{M}^{\prime}\) generates \(c_{t}^{\prime}=1\) with probability \(p\), showing that \(\mathcal{M}\) and \(\mathcal{M}^{\prime}\) might differ in their decisions. Since \(\bar{p}_{\mathcal{M}}\) is a prefix of \(\bar{p}\), it follows that \(p_{\mathcal{M}}\leq p\) and

\[p-p_{\mathcal{M}}=\sum_{i=\log(5f(n))}^{\infty}\frac{p_{i}}{2^{i+1}}\leq\frac{1}{2^{\log(5f(n))}}=\frac{1}{5f(n)}.\]

In addition, the bits \(c_{t}\) and \(c_{t}^{\prime}\) are generated by \(\mathcal{M}\) and \(\mathcal{M}^{\prime}\) based on the sequences \(\bar{b}\) and \(\bar{b}\bar{b}^{\prime}\), respectively, the former being a prefix of the latter. Hence,

\[\Pr(c_{t}\neq c_{t}^{\prime})=\Pr(\bar{p}_{\mathcal{M}}<_{lex}\bar{b}\bar{b}^{\prime}<_{lex}\bar{p})=p-p_{\mathcal{M}}\leq\frac{1}{5f(n)}.\]

By a union bound argument, the probability that the sequences \(\bar{c}=c_{0}c_{1}\cdots c_{f(n)-1}\) and \(\bar{c}^{\prime}=c_{0}^{\prime}c_{1}^{\prime}\cdots c_{f(n)-1}^{\prime}\) generated by \(\mathcal{M}\) and \(\mathcal{M}^{\prime}\) differ satisfies

\[\Pr(\bar{c}\neq\bar{c}^{\prime})\leq\frac{1}{5f(n)}\cdot f(n)=\frac{1}{5}\ \ \mbox{and thus}\ \ \Pr(\bar{c}=\bar{c}^{\prime})\geq 1-\frac{1}{5}.\]

Since \(\mathcal{M}^{\prime}\) classifies \(w\) correctly with probability at least \(\frac{2}{3}\), it follows that \(\mathcal{M}\) classifies \(w\) correctly with probability at least \((1-\frac{1}{5})\frac{2}{3}=\frac{8}{15}>\frac{1}{2}\).
This probability can be increased above \(\frac{2}{3}\) by repeating Procedure 3 a constant number of times and taking the majority of the decisions as output [2]. Consequently, the devices \(\mathcal{M}\), \(\mathcal{M}^{\prime}\) and \(\mathcal{N}\) all decide the same language \(L\), and therefore, \(L\in\mathbf{PTMA}\left[\alpha(\bar{p},\log(5f)),O(f^{3})\right]\).

(ii) Let \(L\in\mathbf{PTMA}[\alpha(\bar{p},\log(f)),f]\). Then, there exists some PTM/log(A) \(\mathcal{M}\) with logarithmic advice \(\alpha(\bar{p},\log(f))\) deciding \(L\) in time \(f\). For simplicity purposes, let the advice of \(\mathcal{M}\) be denoted by \(\bar{p}=p_{0}\cdots p_{\log(f(n))-1}\) (from now on, \(\bar{p}\) no longer denotes the binary expansion of \(p\)). Now, consider Procedure 4 below. The first for loop computes an estimation \(\bar{p}^{\prime}\) of the advice \(\bar{p}\) defined by

\[\bar{p}^{\prime}=\delta_{2}^{-1}(p^{\prime})[0:\log(f(n))-1]=p_{0}^{\prime}\cdots p_{\log(f(n))-1}^{\prime}\]

where

\[p^{\prime}=\frac{1}{k(n)}\sum_{i=0}^{k(n)-1}b_{i}\ \ \text{and}\ \ k(n)=\lceil 10p(1-p)f^{2}(n)\rceil\]

and the \(b_{i}\) are drawn according to a Bernoulli distribution of parameter \(p\). The second for loop computes a sequence of random choices

\[\bar{c}=c_{0}\cdots c_{f(n)-1}\]

using von Neumann's trick to simulate a fair coin with a biased one [2]. The third loop simulates the behavior of the PTM/log(A) \(\mathcal{M}\) using the alternative advice \(\bar{p}^{\prime}\) and the sequence of random choices \(\bar{c}\). This procedure can clearly be simulated by some SNN\([p]\) \(\mathcal{N}\) in time \(O(k(n)+2f(n))=O(f^{2}(n))\), where the random samples of bits are given by the stochastic cell and the remaining recursive instructions are simulated by a rational-weighted sub-network.

It remains to show that \(\mathcal{M}\) and \(\mathcal{N}\) decide the same language \(L\). For this purpose, we estimate the error probability of \(\mathcal{N}\) at deciding language \(L\). First, we show that \(\bar{p}^{\prime}\) is a good approximation of the advice \(\bar{p}\) of \(\mathcal{M}\). Note that \(\bar{p}^{\prime}\neq\bar{p}\) iff \(|p^{\prime}-p|>\frac{1}{2^{\log(f(n))}}=\frac{1}{f(n)}\). Note also that by definition, \(p^{\prime}=\frac{\#1}{k(n)}\), where \(\#1\sim\mathcal{B}(k(n),p)\) is a binomial random variable of parameters \(k(n)\) and \(p\) with \(\mathrm{E}(\#1)=k(n)p\) and \(\mathrm{Var}(\#1)=k(n)p(1-p)\). It follows that

\[\Pr\left(\bar{p}^{\prime}\neq\bar{p}\right) = \Pr\left(|p^{\prime}-p|>\frac{1}{f(n)}\right) = \Pr\left(|k(n)p^{\prime}-k(n)p|>\frac{k(n)}{f(n)}\right) = \Pr\left(|\#1-\mathrm{E}(\#1)|>\frac{k(n)}{f(n)}\right).\]

Chebyshev's inequality ensures that

\[\Pr\left(\bar{p}^{\prime}\neq\bar{p}\right)\leq\frac{\mathrm{Var}(\#1)f^{2}(n)}{k^{2}(n)}=\frac{p(1-p)f^{2}(n)}{k(n)}<\frac{1}{10}\]

since \(k(n)>10p(1-p)f^{2}(n)\).

We now estimate the source of error coming from the simulation of a fair coin by a biased one in Procedure 4 (loop of Line 6). Note that at each step \(i\), if the two bits \(bb^{\prime}\) are different (01 or 10), then \(c_{t}\) is drawn with fair probability \(\frac{1}{2}\), like in the case of the machine \(\mathcal{M}\). Hence, the sampling processes of \(\mathcal{N}\) and \(\mathcal{M}\) differ in probability precisely when all of the \(K\) draws produce identical bits \(bb^{\prime}\) (00 or 11).
The probability that the two bits \(bb^{\prime}\) are identical at step \(i\) is \(p^{2}+(1-p)^{2}\), and hence, the probability that the \(K=\lceil\frac{-4-\log(f(n))}{\log(p^{2}+(1-p)^{2})}\rceil\) independent draws all produce identical bits \(bb^{\prime}\) satisfies

\[\left(p^{2}+(1-p)^{2}\right)^{K}\leq 2^{-4-\log(f(n))}=\frac{1}{16f(n)},\]

by using the fact that \(x^{1/\log(x)}=2\) for all \(x\in(0,1)\). By a union bound argument, the probability that some \(c_{t}\) in the sequence \(c_{0}\cdots c_{f(n)-1}\) is not drawn with a fair probability \(\frac{1}{2}\) is bounded by \(\frac{1}{16}\). Equivalently, the probability that all random bits \(c_{t}\) of the sequence \(c_{0}\cdots c_{f(n)-1}\) are drawn with fair probability \(\frac{1}{2}\) is at least \(\frac{15}{16}\).

To safely estimate the error probability of \(\mathcal{N}\), we restrict ourselves to situations when \(\mathcal{M}\) and \(\mathcal{N}\) behave the same, and assume that \(\mathcal{N}\) always makes errors otherwise. These situations happen when \(\mathcal{M}\) and \(\mathcal{N}\) use the same advice as well as the same fair probability for their random processes. These two events are independent and of probability at least \(\frac{9}{10}\) and at least \(\frac{15}{16}\), respectively. Hence, \(\mathcal{M}\) and \(\mathcal{N}\) agree on any input \(w\) with probability at least \(\frac{9}{10}\cdot\frac{15}{16}>\frac{4}{5}\). Consequently, the probability that \(\mathcal{N}\) decides correctly whether \(w\in L\) or not is at least \(\frac{4}{5}\cdot\frac{2}{3}=\frac{8}{15}>\frac{1}{2}\). As before, this probability can be made larger than \(\frac{2}{3}\) by repeating Procedure 4 a constant number of times and taking the majority of the decisions as output [2]. This shows that \(L(\mathcal{N})=L(\mathcal{M})=L\), and therefore \(L\in\mathbf{SNN}\left[p,O(f^{2})\right]\).

```
Input: input \(w\in\Sigma^{n}\)
1   for \(i=0,\ldots,k(n)-1\), where \(k(n):=\lceil 10p(1-p)f^{2}(n)\rceil\), do
2       Draw a random bit \(b_{i}\) with probability \(p\)
3   Compute the estimation \(\bar{p}^{\prime}=p_{0}^{\prime}\cdots p_{\log(f(n))-1}^{\prime}=\delta_{2}^{-1}(\frac{1}{k(n)}\sum_{i=0}^{k(n)-1}b_{i})[0:\log(f(n))-1]\) of the advice of \(\mathcal{M}\)
4   for \(t=0,\ldots,f(n)-1\) do
5       \(c_{t}=0\)
6       for \(i=0,\ldots,\lceil\frac{-4-\log(f(n))}{\log(p^{2}+(1-p)^{2})}\rceil\) do
7           Draw 2 random bits \(b\) and \(b^{\prime}\) with probability \(p\)
8           if \(bb^{\prime}=01\) then \(c_{t}=0\); break
9           if \(bb^{\prime}=10\) then \(c_{t}=1\); break
10  for \(t=0,\ldots,f(n)-1\) do
11      Simulate the PTM/log(A) \(\mathcal{M}\) using the advice \(\bar{p}^{\prime}=p_{0}^{\prime}\cdots p_{\log(f(n))-1}^{\prime}\) and the sequence of random choices \(\bar{c}=c_{0}\cdots c_{f(n)-1}\)
12  return Output of \(\mathcal{M}\) over \(w\) at time step \(f(n)\)
```
**Procedure 4**

The classes of languages decided in polynomial time by stochastic networks using real probabilities and Turing machines using related advice are the same. In this case, however, the length of the advice is logarithmic instead of polynomial.

**Corollary 13**.: _Let \(p\in\Delta\) be some real probability and \(P\subseteq\Delta\) be some set of real probabilities._

1. \(\mathbf{SNN}[p,\mathrm{poly}]=\mathbf{PTMA}[\alpha(\bar{p},\log),\mathrm{poly}]\)_._
2. \(\mathbf{SNN}[P,\mathrm{poly}]=\mathbf{PTMA}[\alpha(\bar{P},\log),\mathrm{poly}]\)_._

Proof.: The proof is similar to that of Corollary 6.
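The two random devices at work in Procedure 4 are classical and easy to sketch in Python: estimating \(p\) by an empirical mean, and von Neumann's trick for extracting fair bits from a biased coin. The parameters below are illustrative.

```python
# Sketch of the two sampling devices used in Procedure 4: empirical
# estimation of p (first loop) and von Neumann's fair-coin trick (second loop).
import random

p = 0.3                                     # illustrative biased-coin parameter

def biased_bit() -> int:
    return int(random.random() < p)

def estimate_p(k: int) -> float:
    # empirical mean of k Bernoulli(p) draws, as in the first loop
    return sum(biased_bit() for _ in range(k)) / k

def fair_bit(max_rounds: int) -> int:
    # von Neumann's trick: draw pairs until they differ; 01 -> 0, 10 -> 1.
    # Each round succeeds with probability 2p(1-p), independently.
    for _ in range(max_rounds):
        b, b2 = biased_bit(), biased_bit()
        if b != b2:
            return b
    return 0        # default if all rounds fail, as c_t stays 0 in Procedure 4

print(estimate_p(10_000), [fair_bit(64) for _ in range(10)])
```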
## 6 Hierarchies

In this section, we provide a refined characterization of the super-Turing computational power of analog, evolving, and stochastic neural networks based on the Kolmogorov complexity of their real weights, evolving weights, and real probabilities, respectively. More specifically, we show the existence of infinite hierarchies of classes of analog and evolving neural networks located between \(\mathbf{P}\) and \(\mathbf{P/poly}\). We also establish the existence of an infinite hierarchy of classes of stochastic neural networks between \(\mathbf{BPP}\) and \(\mathbf{BPP/log^{*}}\). Beyond proving the existence and providing examples of such hierarchies, we describe a generic way of constructing them based on classes of functions of increasing complexity.

Towards this purpose, we define the _Kolmogorov complexity_ of a real number as stated in a related work [3]. Let \(\mathcal{M}_{U}\) be a universal Turing machine, \(f,g:\mathbb{N}\rightarrow\mathbb{N}\) be two functions, and \(\alpha\in\Sigma^{\omega}\) be some infinite word. We say that \(\alpha\in\bar{K}_{g}^{f}\) if there exists \(\beta\in\Sigma^{\omega}\) such that, for all but finitely many \(n\), the machine \(\mathcal{M}_{U}\) with inputs \(\beta[0:m-1]\) and \(n\) will output \(\alpha[0:n-1]\) in time \(g(n)\), for all \(m\geq f(n)\). In other words, \(\alpha\in\bar{K}_{g}^{f}\) if its \(n\) first bits can be recovered from the \(f(n)\) first bits of some \(\beta\) in time \(g(n)\). The notion is of interest when \(f(n)\leq n\), in which case \(\alpha\in\bar{K}_{g}^{f}\) means that every \(n\)-long prefix of \(\alpha\) can be compressed into and recovered from a smaller \(f(n)\)-long prefix of \(\beta\). Given two classes of functions \(\mathcal{F}\) and \(\mathcal{G}\), we define \(\bar{K}_{\mathcal{G}}^{\mathcal{F}}=\bigcup_{f\in\mathcal{F}}\bigcup_{g\in\mathcal{G}}\bar{K}_{g}^{f}\). Finally, for any real number \(r\in\Delta\) with associated expansion \(\bar{r}=\delta_{4}^{-1}(r)\in\Sigma^{\omega}\), we say that \(r\in K_{g}^{f}\) (resp. \(r\in K_{\mathcal{G}}^{\mathcal{F}}\)) iff \(\bar{r}\in\bar{K}_{g}^{f}\) (resp. \(\bar{r}\in\bar{K}_{\mathcal{G}}^{\mathcal{F}}\)).

Given some set of functions \(\mathcal{F}\subseteq\mathbb{N}^{\mathbb{N}}\), we say that \(\mathcal{F}\) is a class of _reasonable advice bounds_ if the following conditions hold:

* Sub-linearity: for all \(f\in\mathcal{F}\), \(f(n)\leq n\) for all \(n\in\mathbb{N}\).
* Dominance by a polynomially computable function: for all \(f\in\mathcal{F}\), there exists \(g\in\mathcal{F}\) such that \(f\leq g\) and \(g\) is computable in polynomial time.
* Closure by polynomial composition on the right: for all \(f\in\mathcal{F}\) and for all \(p\in\text{poly}\), there exists \(g\in\mathcal{F}\) such that \(f\circ p\leq g\).

For instance, log is a class of reasonable advice bounds. All properties in this definition are necessary for our separation theorems. The first and second conditions are necessary to define Kolmogorov reals associated to advice of bounded size. The third condition comes from the fact that RNNs can access any polynomial number of bits from their weights during polynomial time of computation. Note that our definition is slightly weaker than that of Balcazar et al., who further assume that the class should be closed under \(O(\cdot)\)[3].
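One way to check the claim about log: the functions \(n\mapsto C\log(n)\) are clearly computable in polynomial time; for any \(p\in\text{poly}\) of degree \(d\), we have \(C\log(p(n))\leq C^{\prime}\log(n)\) for all sufficiently large \(n\) and some constant \(C^{\prime}\) depending on \(C\), \(d\) and the coefficients of \(p\), so closure by polynomial composition on the right holds; and \(C\log(n)\leq n\) holds for all sufficiently large \(n\), the finitely many initial exceptions being harmless (they can be accommodated, for instance, by replacing \(f\) with \(n\mapsto\min(f(n),n)\)).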
The following theorem relates non-uniform complexity classes, based on polynomial time of computation \(\mathbf{P}\) and reasonable advice bounds \(\mathcal{F}\), with classes of analog and evolving networks using weights inside \(K_{\mathrm{poly}}^{\mathcal{F}}\) and \(\bar{K}_{\mathrm{poly}}^{\mathcal{F}}\), respectively.

**Theorem 14**.: _Let \(\mathcal{F}\) be a class of reasonable advice bounds, and let \(K_{\mathrm{poly}}^{\mathcal{F}}\subseteq\Delta\) and \(\bar{K}_{\mathrm{poly}}^{\mathcal{F}}\subseteq\Sigma^{\omega}\) be the sets of Kolmogorov reals associated with \(\mathcal{F}\). Then_ \[\mathbf{P}/\mathcal{F}^{*}=\mathbf{ANN}\left[K_{\mathrm{poly}}^{\mathcal{F}},\mathrm{poly}\right]=\mathbf{ENN}\left[\bar{K}_{\mathrm{poly}}^{\mathcal{F}},\mathrm{poly}\right].\]

Proof.: We prove the first equality. By definition, \(\mathbf{P}/\mathcal{F}^{*}\) is the class of languages decided in polynomial time by some TM/A using any possible prefix advice of length \(f\in\mathcal{F}\), namely, \[\mathbf{P}/\mathcal{F}^{*}=\mathbf{TMA}\left[\alpha\big{(}\Sigma^{\omega},\mathcal{F}\big{)},\mathrm{poly}\right].\] In addition, Corollary 6 ensures that \[\mathbf{ANN}\left[K_{\mathrm{poly}}^{\mathcal{F}},\mathrm{poly}\right]=\mathbf{TMA}\left[\alpha\big{(}\bar{K}_{\mathrm{poly}}^{\mathcal{F}},\mathrm{poly}\big{)},\mathrm{poly}\right].\] Hence, we need to show that \[\mathbf{TMA}\left[\alpha\big{(}\Sigma^{\omega},\mathcal{F}\big{)},\mathrm{poly}\right]=\mathbf{TMA}\left[\alpha\big{(}\bar{K}_{\mathrm{poly}}^{\mathcal{F}},\mathrm{poly}\big{)},\mathrm{poly}\right]. \tag{5}\] Equation 5 can be understood as follows: in polynomial time of computation, the TM/As using small advices (of size \(\mathcal{F}\)) are equivalent to those using larger but compressible advices (of size \(\mathrm{poly}\) and inside \(\bar{K}_{\mathrm{poly}}^{\mathcal{F}}\)). For the sake of simplicity, we assume that the polynomial computation time of the TM/As is clear from the context, and introduce the following abbreviations: \[\mathbf{TMA}\left[\alpha\big{(}\Sigma^{\omega},\mathcal{F}\big{)},\mathrm{poly}\right]:=\mathbf{TMA}\left[\alpha\big{(}\Sigma^{\omega},\mathcal{F}\big{)}\right]\] \[\mathbf{TMA}\left[\alpha\big{(}\bar{K}_{\mathrm{poly}}^{\mathcal{F}},\mathrm{poly}\big{)},\mathrm{poly}\right]:=\mathbf{TMA}\left[\alpha\big{(}\bar{K}_{\mathrm{poly}}^{\mathcal{F}},\mathrm{poly}\big{)}\right].\] We show the backward inclusion of Eq. (5). Let \[L\in\mathbf{TMA}\left[\alpha\big{(}\bar{K}_{\mathrm{poly}}^{\mathcal{F}},\mathrm{poly}\big{)}\right].\] Then, there exists some TM/A \(\mathcal{M}\) using advice \(\alpha(\bar{r},p_{1})\), where \(\bar{r}\in\bar{K}_{\mathrm{poly}}^{\mathcal{F}}\) and \(p_{1}\in\text{poly}\), deciding \(L\) in time \(p_{2}\in\text{poly}\). Since \(\bar{r}\in\bar{K}_{\text{poly}}^{\mathcal{F}}\), there exist \(\beta\in\Sigma^{\omega}\), \(f\in\mathcal{F}\) and \(p_{3}\in\text{poly}\) such that the \(p_{1}(n)\) bits of \(\bar{r}\) can be computed from the \(f(p_{1}(n))\) bits of \(\beta\) in time \(p_{3}(p_{1}(n))\). Hence, the TM/A \(\mathcal{M}\) can be simulated by the TM/A \(\mathcal{M}^{\prime}\) with advice \(\alpha(\beta,f\circ p_{1})\) working as follows: on every input \(w\in\Sigma^{n}\), \(\mathcal{M}^{\prime}\) first queries its advice string \(\beta_{0}\beta_{1}\cdots\beta_{f(p_{1}(n))-1}\), then reconstructs the advice \(r_{0}r_{1}\ldots r_{p_{1}(n)-1}\) in time \(p_{3}(p_{1}(n))\), and finally simulates the behavior of \(\mathcal{M}\) over input \(w\) in real time.
Clearly, \(L(\mathcal{M}^{\prime})=L(\mathcal{M})=L\). In addition, \(p_{3}\circ p_{1}\in\text{poly}\), and since \(\mathcal{F}\) is a class of reasonable advice bounds, there is \(g\in\mathcal{F}\) such that \(f\circ p_{1}\leq g\). Therefore, \[L\in\mathbf{TMA}\left[\alpha(\beta,g)\right]\subseteq\mathbf{TMA}\left[\alpha\big{(}\Sigma^{\omega},\mathcal{F}\big{)}\right].\] We now prove the forward inclusion of Eq. (5). Let \[L\in\mathbf{TMA}\left[\alpha\big{(}\Sigma^{\omega},\mathcal{F}\big{)}\right].\] Then, there exists some TM/A \(\mathcal{M}\) with advice \(\alpha(\bar{r},f)\), with \(\bar{r}\in\Sigma^{\omega}\) and \(f\in\mathcal{F}\), deciding \(L\) in time \(p_{1}\in\text{poly}\). Since \(\mathcal{F}\) is a class of reasonable advice bounds, there exists \(g\in\mathcal{F}\) such that \(f\leq g\) and \(g\) is computable in polynomial time. We now define \(\bar{s}\in\bar{K}_{\text{poly}}^{\mathcal{F}}\) using \(\bar{r}\) and \(g\) as follows: for each \(i\geq 0\), let \(\bar{r}_{i}\) be the sub-word of \(\bar{r}\) defined by \[\bar{r}_{i}=\begin{cases}r_{0}r_{1}\cdots r_{g(0)-1}&\text{if }i=0\\ r_{g(i-1)}r_{g(i-1)+1}\cdots r_{g(i)-1}&\text{if }i>0\end{cases}\] and let \[\bar{s}=\bar{r}_{0}0\bar{r}_{1}0\bar{r}_{2}0\cdots.\] Given the \(g(n)\) first bits of \(\bar{r}\), we can build the \(g(n)+n\geq n\) first bits of \(\bar{s}\) by computing \(g(i)\) and the corresponding block \(\bar{r}_{i}\) (which can be empty) for all \(i\leq n\), and then intertwining those with \(0\)'s. This process can be done in polynomial time, since \(g\) is computable in polynomial time. Therefore, \(\bar{s}\in\bar{K}_{\text{poly}}^{\mathcal{F}}\). Let \(q(n)=2n\). Since \(\mathcal{F}\) is a class of reasonable advice bounds, \(g(n)\leq n\), and thus \(q(n)=2n\geq g(n)+n\). Now, consider the TM/A \(\mathcal{M}^{\prime}\) with advice \(\alpha(\bar{s},q)\) working as follows. On every input \(w\in\Sigma^{n}\), the machine \(\mathcal{M}^{\prime}\) first queries its advice \(\alpha(\bar{s},q)(n)=s_{0}s_{1}\cdots s_{q(n)-1}\). Then, \(\mathcal{M}^{\prime}\) reconstructs the string \(r_{0}r_{1}\cdots r_{g(n)-1}\) by computing \(g(i)\) and then removing the \(n\) \(0\)'s of \(\alpha(\bar{s},q)(n)\) located at positions \(g(i)+i\), for all \(i\leq n\). This is done in polynomial time, since \(g\) is computable in polynomial time. Finally, \(\mathcal{M}^{\prime}\) simulates \(\mathcal{M}\) with advice \(r_{0}r_{1}\cdots r_{g(n)-1}\) in real time. Clearly, \(L(\mathcal{M}^{\prime})=L(\mathcal{M})=L\). Therefore, \[L\in\mathbf{TMA}\left[\alpha(\bar{s},q)\right]\subseteq\mathbf{TMA}\left[\alpha\big{(}\bar{K}_{\mathrm{poly}}^{\mathcal{F}},\mathrm{poly}\big{)}\right].\] The property that has just been established, together with Corollary 10, proves the second equality. We now prove the analogue of Theorem 14 for the case of probabilistic complexity classes and machines. In this case, however, the class of advice bounds no longer corresponds exactly to the Kolmogorov space bounds of the real probabilities. Instead, a logarithmic correcting factor needs to be introduced. Given some class of functions \(\mathcal{F}\), we let \(\mathcal{F}\circ\log\) denote the set \(\{f\circ\log\mid f\in\mathcal{F}\}\).
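Before moving on, note that the padding construction \(\bar{s}=\bar{r}_{0}0\bar{r}_{1}0\bar{r}_{2}0\cdots\) used in the proof above is purely mechanical. The following C++ sketch (our own illustration; the bound `g` is passed as a callable and assumed non-decreasing with \(g(n)\leq n\)) builds the padded prefix and recovers the original one.

```cpp
#include <functional>
#include <string>

// Build the padded word s = r_0 0 r_1 0 r_2 0 ... from a prefix of r,
// where block r_i covers positions [g(i-1), g(i)) of r, with g(-1) := 0.
// Assumes r holds at least g(n) characters.
std::string pad(const std::string &r, std::function<int(int)> g, int n) {
    std::string s;
    int prev = 0;
    for (int i = 0; i <= n; ++i) {
        int cur = g(i);
        s += r.substr(prev, cur - prev); // block r_i (possibly empty)
        s += '0';                        // separator at position g(i) + i
        prev = cur;
    }
    return s;
}

// Recover r[0 : g(n)-1] from s by dropping the separator after each block.
std::string unpad(const std::string &s, std::function<int(int)> g, int n) {
    std::string r;
    int pos = 0, prev = 0;
    for (int i = 0; i <= n; ++i) {
        int len = g(i) - prev;
        r += s.substr(pos, len); // copy block r_i
        pos += len + 1;          // skip the 0 separator
        prev = g(i);
    }
    return r;
}
```

Both directions run in time polynomial in \(n\) whenever \(g\) itself is computable in polynomial time, which is exactly what the membership \(\bar{s}\in\bar{K}_{\text{poly}}^{\mathcal{F}}\) requires.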
**Theorem 15**.: _Let \(\mathcal{F}\) be a class of reasonable advice bounds, then_ \[\mathbf{BPP}/(\mathcal{F}\circ\log)^{*}=\mathbf{SNN}\left[K_{\mathrm{poly}}^{ \mathcal{F}},\mathrm{poly}\right].\] Proof.: By definition, \(\mathbf{BPP}/(\mathcal{F}\circ\log)^{*}\) is the class of languages decided by PTM/A using any possible prefix advice of length \(f\in\mathcal{F}\circ\log\): \[\mathbf{BPP}/(\mathcal{F}\circ\log)^{*}=\mathbf{PTMA}\left[\alpha\big{(} \Sigma^{\omega},\mathcal{F}\circ\log\big{)},\mathrm{poly}\right].\] Moreover, Corollary 13 ensures that \[\mathbf{SNN}\left[K_{\mathrm{poly}}^{\mathcal{F}},\mathrm{poly}\right]= \mathbf{PTMA}\left[\alpha\big{(}\bar{K}_{\mathrm{poly}}^{\mathcal{F}},\log \big{)},\mathrm{poly}\right].\] Hence, we need to prove the following equality: \[\mathbf{PTMA}\left[\alpha\big{(}\bar{K}_{\mathrm{poly}}^{\mathcal{F}},\log \big{)},\mathrm{poly}\right]=\mathbf{PTMA}\left[\alpha\big{(}\Sigma^{\omega}, \mathcal{F}\circ\log\big{)},\mathrm{poly}\right] \tag{6}\] We first prove the forward inclusion of Eq. 6. Let \[L\in\mathbf{PTMA}\left[\alpha\big{(}\bar{K}_{\mathrm{poly}}^{\mathcal{F}}, \log\big{)},\mathrm{poly}\right].\] Then, there exists some PTM/A \(\mathcal{M}\) using advice \(\alpha(\bar{r},c\log)\), where \(\bar{r}\in\bar{K}_{\mathrm{poly}}^{\mathcal{F}}\) and \(c>0\), that decides \(L\) in time \(p_{1}\in\mathrm{poly}\). Since \(\bar{r}\in\bar{K}^{\mathcal{F}}_{\mathrm{poly}}\), there exist \(\beta\in\Sigma^{\omega}\) and \(f\in\mathcal{F}\) such that \(\bar{r}[0:n-1]\) can be computed from \(\beta[0:f(n)-1]\) in time \(p_{2}(n)\in\mathrm{poly}\), for all \(n\geq 0\). Consider the PTM/A \(\mathcal{M}^{\prime}\) with advice \(\alpha(\beta,f\circ c\log)\) working as follows. First, \(\mathcal{M}^{\prime}\) queries its advice \(\beta[0:f(c\log(n))-1]\), then it computes \(\bar{r}[0:c\log(n)-1]\) from this advice in time \(p_{2}(\log(n))\), and finally it simulates \(\mathcal{M}\) with advice \(\bar{r}[0:c\log(n)-1]\) in real time. Consequently, \(\mathcal{M}^{\prime}\) decides the same language \(L\) as \(\mathcal{M}\), and works in time \(O(p_{1}+p_{2}\circ\log)\in\mathrm{poly}\). Therefore, \[L\in\mathbf{PTMA}\left[\alpha\big{(}\Sigma^{\omega},\mathcal{F}\circ\log \big{)},\mathrm{poly}\right].\] We now prove the backward inclusion of Eq. 6. Let \[L\in\mathbf{PTMA}\left[\alpha\big{(}\Sigma^{\omega},\mathcal{F}\circ\log \big{)},\mathrm{poly}\right].\] Then, there exists some PTM/A \(\mathcal{M}\) using advice \(\alpha(\bar{r},f\circ c\log)\), where \(\bar{r}\in\Sigma^{\omega}\), \(f\in\mathcal{F}\) and \(c>0\), that decides \(L\) in time \(p_{1}\in\mathrm{poly}\). Using the same argument as in the proof of Theorem 14, there exist \(\bar{s}\in\bar{K}^{\mathcal{F}}_{\mathrm{poly}}\) and \(g\in\mathcal{F}\) such that \(f\leq g\) and the smaller word \(\bar{r}[0:g(n)-1]\) can be retrieved from the larger one \(\bar{s}[0:2n-1]\) in time \(p_{2}(n)\in\mathrm{poly}\), for all \(n\geq 0\). Now, consider the PTM/A \(\mathcal{M}^{\prime}\) using advice \(\alpha(\bar{s},2c\log)\) and working as follows. First, \(\mathcal{M}^{\prime}\) queries its advice \(\bar{s}[0:2c\log(n)-1]\), then it reconstructs \(\bar{r}[0:g(c\log(n))-1]\) from this advice in time \(O(p_{2}(\log(n)))\), and finally, it simulates \(\mathcal{M}\) with advice \(\bar{r}[0:g(c\log(n))-1]\) in real time. Since \(\bar{r}[0:g(c\log(n))-1]\) extends \(\bar{r}[0:f(c\log(n))-1]\), \(\mathcal{M}^{\prime}\) and \(\mathcal{M}\) decide the same language \(L\). 
In addition, \(\mathcal{M}^{\prime}\) works in time \(O(p_{1}+p_{2}\circ\log)\in\mathrm{poly}\). Therefore, \[L\in\mathbf{PTMA}\left[\alpha\big{(}\bar{K}^{\mathcal{F}}_{\mathrm{poly}},\log\big{)},\mathrm{poly}\right].\] We now prove the separation of non-uniform complexity classes of the form \(\mathcal{C}/f\). Towards this purpose, we assume that each class of languages \(\mathcal{C}\) is defined on the basis of a set of machines that decide these languages. For the sake of simplicity, we naturally identify \(\mathcal{C}\) with its associated class of machines. For instance, \(\mathbf{P}\) and \(\mathbf{BPP}\) are identified with the set of Turing machines and probabilistic Turing machines working in polynomial time, respectively. In this context, we show that, as soon as the advice is increased by a single bit, the capabilities of the corresponding machines are also increased. To achieve this result, the following two weak conditions are required. First, \(\mathcal{C}\) must contain the machines capable of reading their full inputs (of length \(n\)) and advices (of length \(f(n)\)), for otherwise any additional advice bit would not change anything. Hence, \(\mathcal{C}\) must at least include the machines working in time \(O(n+f(n))\). Secondly, the advice length \(f(n)\) should be smaller than \(2^{n}\), for otherwise the advice could encode any possible language, and the corresponding machine would have full power. The following result is proven for machines with general (i.e., non-prefix) advices, before being stated for the particular case of machines with prefix advices.

**Theorem 16**.: _Let \(f,g:\mathbb{N}\rightarrow\mathbb{N}\) be two increasing functions such that \(f(n)<g(n)\leq 2^{n}\), for all \(n\in\mathbb{N}\). Let \(\mathcal{C}\) be a set of machines containing the Turing machines working in time \(O(n+f(n))\). Then \(\mathcal{C}/f\subsetneq\mathcal{C}/g\)._

Proof.: Any Turing machine \(M\) with advice \(\alpha\) of size \(f\) can be simulated by some Turing machine \(M^{\prime}\) with an advice \(\alpha^{\prime}\) of size \(g>f\). Indeed, take \(\alpha^{\prime}(n)=(1)^{g(n)-f(n)-1}0\alpha(n)\). Then, on any input \(w\in\Sigma^{n}\), the machine \(M^{\prime}\) queries its advice \(\alpha^{\prime}(n)\), skips all \(1\)'s up to and including the first encountered \(0\), and then simulates \(M\) with advice \(\alpha(n)\). Clearly, \(M\) and \(M^{\prime}\) decide the same language. To prove the strictness of the inclusion, we proceed by diagonalization. Recall that the set of (probabilistic) Turing machines is countable. Let \(M_{0},M_{1},M_{2},\ldots\) be an enumeration of the machines in \(\mathcal{C}\). For any \(M_{k}\in\mathcal{C}\) and any advice \(\alpha:\mathbb{N}\rightarrow\Sigma^{*}\) of size \(f\), let \(M_{k}/\alpha\) be the associated (probabilistic) machine with advice, and let \(L(M_{k}/\alpha)\) be its associated language. The language \(L(M_{k}/\alpha)\) can be written as the union of its sub-languages of words of length \(n\), i.e.
\[L(M_{k}/\alpha)=\bigcup_{n\in\mathbb{N}}L(M_{k}/\alpha)^{n}.\] For each \(k,n\in\mathbb{N}\), consider the set of sub-languages of words of length \(n\) decided by \(M_{k}/\alpha\), for all possible advices \(\alpha\) of size \(f\), i.e.: \[\mathcal{L}_{k}^{n}=\big{\{}L(M_{k}/\alpha)^{n}:\alpha\text{ is an advice of size }f\big{\}}.\] Since there are at most \(2^{f(n)}\) advice strings of length \(f(n)\), it follows that \(|\mathcal{L}_{k}^{n}|\leq 2^{f(n)}\), for all \(k\in\mathbb{N}\), and in particular, that \(|\mathcal{L}_{n}^{n}|\leq 2^{f(n)}\). By working on the diagonal \(\mathcal{D}=\big{(}\mathcal{L}_{n}^{n}\big{)}_{n\in\mathbb{N}}\) of the sequence \(\big{(}\mathcal{L}_{k}^{n}\big{)}_{k,n\in\mathbb{N}}\) (illustrated in Table 1), we will build a language \(A=\bigcup_{n\in\mathbb{N}}A^{n}\) that cannot be decided by any Turing machine in \(\mathcal{C}\) with advice of size \(f\), but can be decided by some Turing machine in \(\mathcal{C}\) with advice of size \(g\). It follows that \(A\in(\mathcal{C}/g)\setminus(\mathcal{C}/f)\), and therefore, \(\mathcal{C}/f\subsetneq\mathcal{C}/g\). Let \(n\in\mathbb{N}\). For each \(i<2^{n}\), let \(b(i)\in\Sigma^{n}\) be the binary representation of \(i\) over \(n\) bits. For any subset \(\mathcal{L}\subseteq\mathcal{L}_{n}^{n}\), let \[\mathcal{L}\big{(}b(i)\big{)}=\left\{L\in\mathcal{L}:b(i)\in L\right\}\ \ \text{and}\ \ \bar{\mathcal{L}}\big{(}b(i)\big{)}=\left\{L\in\mathcal{L}:b(i)\not\in L\right\}.\] Consider the sequence \((\mathcal{L}_{n,0}^{n},\ldots,\mathcal{L}_{n,f(n)+1}^{n})\) of decreasing subsets of \(\mathcal{L}_{n}^{n}\) and the sequence \((A_{0}^{n},\ldots,A_{f(n)+1}^{n})\) of sub-languages of words of length \(n\) defined by induction for every \(0\leq i\leq f(n)\) as follows \[\mathcal{L}_{n,0}^{n}=\mathcal{L}_{n}^{n}\ \ \text{and}\ \ \mathcal{L}_{n,i+1}^{n}=\begin{cases}\mathcal{L}_{n,i}^{n}\big{(}b(i)\big{)}&\text{if }|\mathcal{L}_{n,i}^{n}\big{(}b(i)\big{)}|<|\bar{\mathcal{L}}_{n,i}^{n}\big{(}b(i)\big{)}|\\ \bar{\mathcal{L}}_{n,i}^{n}\big{(}b(i)\big{)}&\text{otherwise}\end{cases}\] \[A_{0}^{n}=\emptyset\ \ \text{and}\ \ A_{i+1}^{n}=\begin{cases}A_{i}^{n}\cup\{b(i)\}&\text{if }|\mathcal{L}_{n,i}^{n}\big{(}b(i)\big{)}|<|\bar{\mathcal{L}}_{n,i}^{n}\big{(}b(i)\big{)}|\\ A_{i}^{n}&\text{otherwise}.\end{cases}\] This construction is illustrated in Figure 3. Note that the \(n\)-bit representation \(b(i)\) of \(i\) is well-defined, since \(0\leq i\leq f(n)<2^{n}\). In addition, the construction ensures that \(|\mathcal{L}_{n,i+1}^{n}|\leq\frac{1}{2}|\mathcal{L}_{n,i}^{n}|\), and since \(|\mathcal{L}_{n,0}^{n}|=|\mathcal{L}_{n}^{n}|\leq 2^{f(n)}\), it follows that \(|\mathcal{L}_{n,f(n)+1}^{n}|=0\), meaning that \(\mathcal{L}_{n,f(n)+1}^{n}=\emptyset\). Furthermore, the construction also ensures that \(\mathcal{L}_{n,i+1}^{n}\subseteq\mathcal{L}_{n,i}^{n}\), and that the membership of \(b(i)\) in \(A_{f(n)+1}^{n}\) always matches the half of \(\mathcal{L}_{n,i}^{n}\) that is kept; in particular, if \(A_{f(n)+1}^{n}\in\mathcal{L}_{n,i}^{n}\), then \(A_{f(n)+1}^{n}\in\mathcal{L}_{n,i+1}^{n}\), for all \(0\leq i\leq f(n)\). Now, towards a contradiction, suppose that \(A^{n}_{f(n)+1}\in\mathcal{L}^{n}_{n}\). The previous properties imply that \[A^{n}_{f(n)+1}\in\bigcap_{0\leq i\leq f(n)+1}\mathcal{L}^{n}_{n,i}=\mathcal{L}^{n}_{n,f(n)+1}=\emptyset,\] which is a contradiction. Therefore, \(A^{n}_{f(n)+1}\not\in\mathcal{L}^{n}_{n}\), for all \(n\in\mathbb{N}\). Now, consider the language \[A=\bigcup_{n\in\mathbb{N}}A^{n}_{f(n)+1}.\] By construction, \(A^{n}_{f(n)+1}\) is the set of words of length \(n\) of \(A\), meaning that \(A^{n}_{f(n)+1}=A^{n}\), for all \(n\in\mathbb{N}\).
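The halving construction above is effective and easy to animate in code. The following self-contained C++ sketch (our own illustration with toy parameters; candidate languages are represented extensionally as sets of \(n\)-bit words) builds a sub-language that differs from every candidate.

```cpp
#include <set>
#include <string>
#include <vector>

// Given candidate languages L_0, ..., L_{m-1} over words of length n,
// build A that differs from every candidate by deciding the membership
// of the words b(0), b(1), ... and keeping the smaller half each time.
// Assumes m < 2^n and n <= 30, so that b(i) is always well-defined.
std::set<std::string> diagonalize(std::vector<std::set<std::string>> cand,
                                  int n) {
    std::set<std::string> A;
    for (int i = 0; !cand.empty(); ++i) {
        std::string b(n, '0');           // b(i): i in binary over n bits
        for (int k = 0; k < n; ++k)
            if (i & (1 << (n - 1 - k))) b[k] = '1';
        std::vector<std::set<std::string>> in, out;
        for (auto &L : cand)             // split on membership of b(i)
            (L.count(b) ? in : out).push_back(L);
        if (in.size() < out.size()) { A.insert(b); cand = in; }
        else                        {              cand = out; }
    }
    return A;
}
```

Each iteration at least halves the number of surviving candidates, and every discarded candidate disagrees with the returned set on the word at which it was discarded, mirroring the proof that \(A_{f(n)+1}^{n}\not\in\mathcal{L}_{n}^{n}\).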
We now show that \(A\) cannot be decided by any machine in \(\mathcal{C}\) with advice of size \(f\). Towards a contradiction, suppose that \(A\in\mathcal{C}/f\). Then, there exist \(M_{k}\in\mathcal{C}\) and \(\alpha:\mathbb{N}\to\Sigma^{*}\) of size \(f\) such that \(L(M_{k}/\alpha)=A\). On the one hand, the definition of \(\mathcal{L}^{k}_{k}\) ensures that \(L(M_{k}/\alpha)^{k}\in\mathcal{L}^{k}_{k}\). On the other hand, \(L(M_{k}/\alpha)^{k}=A^{k}=A^{k}_{f(k)+1}\not\in\mathcal{L}^{k}_{k}\), which is a contradiction. Therefore, \(A\not\in\mathcal{C}/f\). We now show that \(A\in\mathcal{C}/g\). Consider the advice function \(\alpha:\mathbb{N}\to\Sigma^{*}\) of size \(g=f+1\) given by \(\alpha(n)=\alpha_{0}^{n}\alpha_{1}^{n}\cdots\alpha_{f(n)}^{n}\), where \[\alpha_{i}^{n}=\begin{cases}1&\text{if }b(i)\in A^{n}\\ 0&\text{otherwise},\end{cases}\] for all \(0\leq i\leq f(n)\). Note that the advice string \(\alpha(n)\) encodes the sub-language \(A^{n}\), for all \(n\in\mathbb{N}\), since the latter is a subset of \(\{b(i):i\leq f(n)\}\) by definition. Consider the Turing machine with advice \(M/\alpha\) which, on every input \(w=w_{0}w_{1}\cdots w_{n-1}\) of length \(n\), moves its advice head up to the \(i\)-th bit \(\alpha_{i}^{n}\) of \(\alpha(n)\), where \(i=b^{-1}(w)\), if this bit exists (note that \(i<2^{n}\) and \(|\alpha(n)|=f(n)+1\)), and accepts \(w\) if and only if this bit exists and \(\alpha_{i}^{n}=1\). Note that these instructions can be computed in time \(O(f(n)+n)\). In particular, moving the advice head up to the \(i\)-th bit of \(\alpha(n)\) does not require computing \(i=b^{-1}(w)\) explicitly, but can be achieved by moving simultaneously the input head from the end of the input to the beginning and the advice head from left to right, in a suitable way. It follows that \[w\in L(M/\alpha)^{n}\ \ \text{iff}\ \ \alpha_{b^{-1}(w)}^{n}=1\ \ \text{iff}\ \ b(b^{-1}(w))\in A^{n}\ \ \text{iff}\ \ w\in A^{n}.\] Hence, \(L(M/\alpha)^{n}=A^{n}\), for all \(n\in\mathbb{N}\), and thus \[L(M/\alpha)=\bigcup_{n\in\mathbb{N}}L(M/\alpha)^{n}=\bigcup_{n\in\mathbb{N}}A^{n}=A.\] Therefore, \(A\in\mathcal{C}/g\). The argument can be generalized in a straightforward way to any advice size \(g\) such that \(f(n)+1\leq g(n)\leq 2^{n}\). Finally, the two properties \(A\not\in\mathcal{C}/f\) and \(A\in\mathcal{C}/g\) imply that \(\mathcal{C}/f\subsetneq\mathcal{C}/g\). We now prove the analogue of Theorem 16 for the case of machines with prefix advice. In this case, however, a stronger condition on the advice lengths \(f\) and \(g\) is required: \(f\in o(g)\) instead of \(f\leq g\).

**Theorem 17**.: _Let \(f,g:\mathbb{N}\to\mathbb{N}\) be two increasing functions such that \(f\in o(g)\) and \(\lim_{n\to\infty}g(n)=+\infty\). Let \(\mathcal{C}\) be a set of machines containing the Turing machines working in time \(O(n+f(n))\). Then \(\mathcal{C}/f^{*}\subsetneq\mathcal{C}/g^{*}\)._

Proof.: The proof is similar to that of Theorem 16, except that we will construct the language \(A\) on the basis of a sequence of integers \((n_{i})_{i\in\mathbb{N}}\). Consider the sequence \((n_{i})_{i\in\mathbb{N}}\) defined for all \(i\geq 0\) as follows \[n_{0}=\min\left\{n\in\mathbb{N}:2(f(n)+1)\leq g(n)\right\}\] \[n_{i+1}=\min\Big{\{}n\in\mathbb{N}:2\sum_{j=0}^{i}(f(n_{j})+1)+2(f(n)+1)\leq g(n)\Big{\}}.\] We show by induction that the sequence \((n_{i})_{i\in\mathbb{N}}\) is well-defined.
Since \(f\in o(g)\) and \(\lim_{n\to\infty}g(n)=+\infty\), the following limits hold \[\lim_{n\to\infty}\frac{2(f(n)+1)}{g(n)}=2\lim_{n\to\infty}\frac{f(n)}{g(n)}+\lim_{n\to\infty}\frac{2}{g(n)}=0\] \[\lim_{n\to\infty}\frac{2\sum_{j=0}^{i}(f(n_{j})+1)+2(f(n)+1)}{g(n)}=2\lim_{n\to\infty}\frac{f(n)}{g(n)}+\lim_{n\to\infty}\frac{C+2}{g(n)}=0\] where \(C=2\sum_{j=0}^{i}(f(n_{j})+1)\), which ensure that \(n_{0}\) and \(n_{i+1}\) are well-defined. For each \(k,i\in\mathbb{N}\), consider the set of sub-languages of words of length \(n_{i}\) decided by the machine \(M_{k}\in\mathcal{C}\) using any possible advice \(\alpha\) of size \(f\), i.e., \[\mathcal{L}_{k}^{n_{i}}=\left\{L(M_{k}/\alpha)^{n_{i}}:\text{$\alpha$ is an advice of size $f$}\right\}.\] Consider the diagonal \(\mathcal{D}=\left(\mathcal{L}_{i}^{n_{i}}\right)_{i\in\mathbb{N}}\) of the set \(\left(\mathcal{L}_{k}^{n_{i}}\right)_{k,n_{i}\in\mathbb{N}}\). Since there are at most \(2^{f(n_{i})}\) advice strings of length \(f(n_{i})\), it follows that \(|\mathcal{L}_{i}^{n_{i}}|\leq 2^{f(n_{i})}\). Using a similar construction as in the proof of Theorem 16, we can define by induction a sub-language \(A_{f(n_{i})+1}^{n_{i}}\subseteq\Sigma^{n_{i}}\) such that \(A_{f(n_{i})+1}^{n_{i}}\not\in\mathcal{L}_{i}^{n_{i}}\). Then, consider the language \[A=\bigcup_{i\in\mathbb{N}}A_{f(n_{i})+1}^{n_{i}}=\bigcup_{i\in\mathbb{N}}A^{n_{i}}.\] Once again, a similar argument as in the proof of Theorem 16 ensures that \(A\not\in\mathcal{C}/f\). Since \(\mathcal{C}/f^{*}\subseteq\mathcal{C}/f\), it follows that \(A\not\in\mathcal{C}/f^{*}\). We now show that \(A\in\mathcal{C}/g^{*}\). Recall that, by construction, \(A^{n_{i}}\subseteq\{b(j):0\leq j\leq f(n_{i})\}\). Consider the word homomorphism \(h:\Sigma^{*}\to\Sigma^{*}\) induced by the mapping \(0\mapsto 00\) and \(1\mapsto 11\), and define the symbol \(\#=01\). For each \(i\in\mathbb{N}\), consider the encoding \(\beta_{0}^{n_{i}}\beta_{1}^{n_{i}}\cdots\beta_{f(n_{i})}^{n_{i}}\) of \(A^{n_{i}}\) given by \[\beta_{j}^{n_{i}}=\begin{cases}1&\text{if $b(j)\in A^{n_{i}}$},\\ 0&\text{otherwise},\end{cases}\] for all \(0\leq j\leq f(n_{i})\), and let \(\beta(n_{i})=h(\beta_{0}^{n_{i}}\beta_{1}^{n_{i}}\cdots\beta_{f(n_{i})}^{n_{i}})\). Note that \(|\beta(n_{i})|=2(f(n_{i})+1)\). Now, consider the advice function \(\alpha:\mathbb{N}\to\Sigma^{*}\) given by the concatenation of the encodings of the successive \(A^{n_{i}}\) separated by symbols \(\#\). Formally, \[\alpha(n)=\begin{cases}\beta(n_{0})\#\beta(n_{1})\#\cdots\#\beta(n_{i})&\text{if $n=n_{i}$ for some $i\geq 0$}\\ \beta(n_{0})\#\beta(n_{1})\#\cdots\#\beta(n_{i})\#&\text{if $n_{i}<n<n_{i+1}$.}\end{cases}\] Note that, up to the \(\#\) separators, \(|\alpha(n)|=2\sum_{j=0}^{i}(f(n_{j})+1)\leq g(n_{i})\leq g(n)\) for \(n_{i}\leq n<n_{i+1}\), and that \(\alpha\) satisfies the prefix property: \(m\leq n\) implies that \(\alpha(m)\) is a prefix of \(\alpha(n)\). If necessary, the advice strings can be extended by dummy symbols \(10\) in order to achieve the equality \(|\alpha(n)|=g(n)\), for all \(n\geq 0\) (assuming without loss of generality that \(g(n)\) is even). Now, consider the machine with advice \(M/\alpha\) which, on every input \(w\) of length \(n\), first reads its advice string \(\alpha(n)\) up to the end. If the last symbol of \(\alpha(n)\) is \(\#\), then it means that \(|w|\neq n_{i}\) for all \(i\geq 0\), and the machine rejects \(w\). Otherwise, the input is of length \(n_{i}\) for some \(i\geq 0\).
Hence, the machine moves its advice head back up to the last \(\#\) symbol, and then moves one step to the right. At this point, the advice head points at the beginning of the advice substring \(\beta(n_{i})\). Then, the machine decodes \(\beta_{0}^{n_{i}}\beta_{1}^{n_{i}}\cdots\beta_{f(n_{i})}^{n_{i}}\) from \(\beta(n_{i})\) by keeping one bit out of every doubled pair. Next, as in the proof of Theorem 16, the machine moves its advice head up to the \(j\)-th bit \(\beta_{j}^{n_{i}}\), where \(j=b^{-1}(w)\), if this bit exists (note that \(j<2^{n_{i}}\) and that the decoded string \(\beta_{0}^{n_{i}}\cdots\beta_{f(n_{i})}^{n_{i}}\) has \(f(n_{i})+1\) bits), and accepts \(w\) if and only if this bit exists and \(\beta_{j}^{n_{i}}=1\). These instructions can be computed in time \(O(2\sum_{j=0}^{i}(f(n_{j})+1)+n_{i})\). It follows that \(w\in L(M/\alpha)^{n_{i}}\) iff \(w\in A^{n_{i}}\). Thus \(L(M/\alpha)^{n_{i}}=A^{n_{i}}\), for all \(i\in\mathbb{N}\), and hence \[L(M/\alpha)=\bigcup_{i\in\mathbb{N}}L(M/\alpha)^{n_{i}}=\bigcup_{i\in\mathbb{N}}A^{n_{i}}=A.\] Therefore, \(A\in\mathcal{C}/g^{*}\). Finally, the two properties \(A\not\in\mathcal{C}/f^{*}\) and \(A\in\mathcal{C}/g^{*}\) imply that \(\mathcal{C}/f^{*}\subsetneq\mathcal{C}/g^{*}\). The separability between classes of analog, evolving, and stochastic recurrent neural networks using real weights, evolving weights, and probabilities of different Kolmogorov complexities, respectively, can now be obtained.

**Corollary 18**.: _Let \(\mathcal{F}\) and \(\mathcal{G}\) be two classes of reasonable advice bounds such that there exists \(g\in\mathcal{G}\) with \(\lim_{n\rightarrow\infty}g(n)=+\infty\) and \(f\in o(g)\) for every \(f\in\mathcal{F}\). Then_

1. \(\mathbf{ANN}\left[K^{\mathcal{F}}_{\mathrm{poly}},\mathrm{poly}\right]\subsetneq\mathbf{ANN}\left[K^{\mathcal{G}}_{\mathrm{poly}},\mathrm{poly}\right]\)
2. \(\mathbf{ENN}\left[\bar{K}^{\mathcal{F}}_{\mathrm{poly}},\mathrm{poly}\right]\subsetneq\mathbf{ENN}\left[\bar{K}^{\mathcal{G}}_{\mathrm{poly}},\mathrm{poly}\right]\)
3. \(\mathbf{SNN}\left[K^{\mathcal{F}}_{\mathrm{poly}},\mathrm{poly}\right]\subsetneq\mathbf{SNN}\left[K^{\mathcal{G}}_{\mathrm{poly}},\mathrm{poly}\right]\)

Proof.: (i) and (ii): Theorem 14 shows that \[\mathbf{P}/\mathcal{F}^{*}=\mathbf{ANN}\left[K^{\mathcal{F}}_{\mathrm{poly}},\mathrm{poly}\right]=\mathbf{ENN}\left[\bar{K}^{\mathcal{F}}_{\mathrm{poly}},\mathrm{poly}\right]\ \ \text{and}\] \[\mathbf{P}/\mathcal{G}^{*}=\mathbf{ANN}\left[K^{\mathcal{G}}_{\mathrm{poly}},\mathrm{poly}\right]=\mathbf{ENN}\left[\bar{K}^{\mathcal{G}}_{\mathrm{poly}},\mathrm{poly}\right].\] In addition, Theorem 17 ensures that \[\mathbf{P}/\mathcal{F}^{*}\subsetneq\mathbf{P}/\mathcal{G}^{*}.\] The strict inclusions of Points (i) and (ii) directly follow. (iii): Theorem 15 states that \[\mathbf{BPP}/(\mathcal{F}\circ\log)^{*}=\mathbf{SNN}\left[K^{\mathcal{F}}_{\mathrm{poly}},\mathrm{poly}\right]\] \[\mathbf{BPP}/(\mathcal{G}\circ\log)^{*}=\mathbf{SNN}\left[K^{\mathcal{G}}_{\mathrm{poly}},\mathrm{poly}\right].\] In addition, note that if \(f\in\mathcal{F}\) and \(g\in\mathcal{G}\) satisfy the hypotheses of Theorem 17, then so do \(f\circ l\in\mathcal{F}\circ\log\) and \(g\circ l\in\mathcal{G}\circ\log\), for all \(l\in\log\). Hence, Theorem 17 ensures that \[\mathbf{BPP}/(\mathcal{F}\circ\log)^{*}\subsetneq\mathbf{BPP}/(\mathcal{G}\circ\log)^{*}.\] The strict inclusion of Point (iii) ensues.
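The self-delimiting encoding used in the proof of Theorem 17 (doubling each data bit and reserving the pair 01 as the separator \(\#\)) is a standard prefix-coding trick. The C++ sketch below (our own illustration) shows why it is decodable: aligned pairs inside a block are always 00 or 11, so the separator can never be confused with data.

```cpp
#include <string>
#include <vector>

// Encode blocks beta(n_0), beta(n_1), ... into one advice string:
// each data bit is doubled (0 -> "00", 1 -> "11") and blocks are
// separated by the reserved pair "01" (the symbol '#').
std::string encode_advice(const std::vector<std::string> &blocks) {
    std::string adv;
    for (size_t i = 0; i < blocks.size(); ++i) {
        if (i > 0) adv += "01";              // separator '#'
        for (char c : blocks[i])
            adv += (c == '1') ? "11" : "00"; // doubled data bit
    }
    return adv;
}

// Decode the last block: find the final aligned "01" pair, then
// undouble the remaining pairs (keep one bit out of two).
std::string decode_last_block(const std::string &adv) {
    size_t start = 0;
    for (size_t i = 0; i + 1 < adv.size(); i += 2)
        if (adv[i] == '0' && adv[i + 1] == '1') start = i + 2;
    std::string block;
    for (size_t i = start; i + 1 < adv.size(); i += 2)
        block += adv[i];
    return block;
}
```

An empty result signals that the advice ends with \(\#\), which is exactly the case \(n_{i}<n<n_{i+1}\) in which the machine rejects.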
Finally, Corollary 18 provides a way to construct infinite hierarchies of classes of analog, evolving and stochastic neural networks based on the Kolmogorov complexity of their underlying weights and probabilities, respectively. The hierarchies of analog and evolving networks are located between \(\mathbf{P}\) and \(\mathbf{P}/\mathbf{poly}\). Those of stochastic networks lie between \(\mathbf{BPP}\) and \(\mathbf{BPP}/\mathbf{log}^{*}\). For instance, define \(\mathcal{F}_{i}=O\left(\log(n)^{i}\right)\), for all \(i\in\mathbb{N}\). Each \(\mathcal{F}_{i}\) satisfies the three conditions for being a class of reasonable advice bounds (note that the sub-linearity of \(\log(n)^{i}\) is satisfied for \(n\) sufficiently large). By Corollary 18, the sequence of classes \((\mathcal{F}_{i})_{i\in\mathbb{N}}\) induces the following infinite strict hierarchies of classes of neural networks: \[\mathbf{ANN}\left[K_{\mathrm{poly}}^{\mathcal{F}_{0}},\mathrm{poly}\right]\subsetneq\mathbf{ANN}\left[K_{\mathrm{poly}}^{\mathcal{F}_{1}},\mathrm{poly}\right]\subsetneq\mathbf{ANN}\left[K_{\mathrm{poly}}^{\mathcal{F}_{2}},\mathrm{poly}\right]\subsetneq\cdots\] \[\mathbf{ENN}\left[\bar{K}_{\mathrm{poly}}^{\mathcal{F}_{0}},\mathrm{poly}\right]\subsetneq\mathbf{ENN}\left[\bar{K}_{\mathrm{poly}}^{\mathcal{F}_{1}},\mathrm{poly}\right]\subsetneq\mathbf{ENN}\left[\bar{K}_{\mathrm{poly}}^{\mathcal{F}_{2}},\mathrm{poly}\right]\subsetneq\cdots\] \[\mathbf{SNN}\left[K_{\mathrm{poly}}^{\mathcal{F}_{0}},\mathrm{poly}\right]\subsetneq\mathbf{SNN}\left[K_{\mathrm{poly}}^{\mathcal{F}_{1}},\mathrm{poly}\right]\subsetneq\mathbf{SNN}\left[K_{\mathrm{poly}}^{\mathcal{F}_{2}},\mathrm{poly}\right]\subsetneq\cdots\] We provide another example of hierarchy for stochastic networks only. In this case, it can be noticed that the third condition for being a class of reasonable advice bounds can be relaxed: we only need that \(\mathcal{F}\circ\log\) is a class of reasonable advice bounds and that the functions of \(\mathcal{F}\) are bounded by \(n\). Accordingly, consider some infinite sequence of rational numbers \((q_{i})_{i\in\mathbb{N}}\) such that \(0<q_{i}<1\) and \(q_{i}<q_{i+1}\), for all \(i\in\mathbb{N}\), and define \(\mathcal{F}_{i}=O(n^{q_{i}})\), for all \(i\in\mathbb{N}\). Each \(\mathcal{F}_{i}\) satisfies the required conditions. By Corollary 18, the sequence of classes \((\mathcal{F}_{i})_{i\in\mathbb{N}}\) induces the following infinite strict hierarchy of classes of neural networks: \[\mathbf{SNN}\left[K_{\mathrm{poly}}^{\mathcal{F}_{0}},\mathrm{poly}\right]\subsetneq\mathbf{SNN}\left[K_{\mathrm{poly}}^{\mathcal{F}_{1}},\mathrm{poly}\right]\subsetneq\mathbf{SNN}\left[K_{\mathrm{poly}}^{\mathcal{F}_{2}},\mathrm{poly}\right]\subsetneq\cdots\]

## 7 Conclusion

We provided a refined characterization of the super-Turing computational power of analog, evolving, and stochastic recurrent neural networks based on the Kolmogorov complexity of their underlying real weights, evolving weights, and real probabilities, respectively. For the two former models, infinite hierarchies of classes of analog and evolving networks lying between \(\mathbf{P}\) and \(\mathbf{P/poly}\) have been obtained. For the latter model, an infinite hierarchy of classes of stochastic networks located between \(\mathbf{BPP}\) and \(\mathbf{BPP/log}^{*}\) has been achieved. Beyond proving the existence and providing examples of such hierarchies, Corollary 18 establishes a generic way of constructing them based on classes of functions satisfying the reasonable advice bounds conditions. This work is an extension of the study from Balcazar et al. [3] about a Kolmogorov-based hierarchization of analog neural networks. In particular, the proof of Theorem 14 draws heavily on their Theorem 6.2 [3]. In our paper, however, we adopted a relatively different approach, guided by the intention of keeping the computational relationships between recurrent neural networks and Turing machines with advice as explicit as possible.
In this regard, Propositions 5, 9 and 12 characterize precisely the connections between the real weights, evolving weights, or real probabilities of the networks, and the advices of different lengths of the corresponding Turing machines. On the contrary, the study of Balcazar et al. keeps these relationships somewhat hidden, by referring to an alternative model of computation: the Turing machines with tally oracles. Another difference between the two works is that our separability results (Theorems 16 and 17) are achieved by means of a diagonalization argument that holds for any non-uniform complexity classes, which is a result of independent interest. In particular, our method does not rely on the existence of reals of high Kolmogorov complexity. A last difference is that our conditions for classes of reasonable advice bounds are slightly weaker than theirs. The computational equivalence between stochastic neural networks and Turing machines with advice relies on the results from Siegelmann [85]. Our Proposition 12 is largely inspired by their Lemma 6.3 [85]. Yet once again, while the latter lemma concerns the computational equivalence between two models of bounded-error Turing machines, our Proposition 12 describes the explicit relationship between stochastic neural networks and Turing machines with logarithmic advice. In addition, our use of union bound arguments allows for technical simplifications of the arguments presented in their lemma. The main message conveyed by our study is twofold: (1) the complexity of the real or evolving weights does matter for the computational power of analog and evolving neural networks; (2) the complexity of the source of randomness also plays a role in the capabilities of stochastic neural networks. These theoretical considerations contrast with the practical research path of approximate computing, which concerns the plethora of approximation techniques, among which precision scaling, that can sometimes lead to disproportionate gains in efficiency of the models [66]. In our context, the less compressible the weights or the source of stochasticity, the more information they contain, and in turn, the more powerful the neural networks employing them. For future work, hierarchies based on different notions than the Kolmogorov complexity could be envisioned. In addition, the computational universality of echo state networks could be studied from a probabilistic perspective. Given some echo state network \(\mathcal{N}\) complying with well-suited conditions on its reservoir, and given some computable function \(f\), is it possible to find output weights such that \(\mathcal{N}\) computes \(f\) with high probability? Finally, the proposed study intends to bridge some gaps and present a unified view of the refined capabilities of analog, evolving and stochastic recurrent neural networks. The debatable question of the exploitability of the super-Turing computational power of neural networks lies beyond the scope of this paper, and fits within the philosophical approach to hypercomputation [22; 23; 84]. Nevertheless, we believe that the proposed study could contribute to the progress of analog computation [5].

## Acknowledgements

This research was partially supported by the Czech Science Foundation, grant AppNeCo #GA22-02067S, institutional support RVO: 67985807.
2309.15631
Design and Optimization of Residual Neural Network Accelerators for Low-Power FPGAs Using High-Level Synthesis
Residual neural networks are widely used in computer vision tasks. They enable the construction of deeper and more accurate models by mitigating the vanishing gradient problem. Their main innovation is the residual block, which allows the output of one layer to bypass one or more intermediate layers and be added to the output of a later layer. Their complex structure and the buffering required by the residual block make them difficult to implement on resource-constrained platforms. We present a novel design flow for implementing deep learning models for field programmable gate arrays optimized for ResNets, using a strategy to reduce their buffering overhead to obtain a resource-efficient implementation of the residual layer. Our high-level synthesis (HLS)-based flow encompasses a thorough set of design principles and optimization strategies, exploiting in novel ways standard techniques such as temporal reuse and loop merging to efficiently map ResNet models, and potentially other skip connection-based NN architectures, into FPGA. The models are quantized to 8-bit integers for both weights and activations, 16-bit for biases, and 32-bit for accumulations. The experimental results are obtained on the CIFAR-10 dataset using ResNet8 and ResNet20 implemented with Xilinx FPGAs using HLS on the Ultra96-V2 and Kria KV260 boards. Compared to the state-of-the-art on the Kria KV260 board, our ResNet20 implementation achieves 2.88X speedup with 0.5% higher accuracy of 91.3%, while ResNet8 accuracy improves by 2.8% to 88.7%. The throughputs of ResNet8 and ResNet20 are 12971 FPS and 3254 FPS on the Ultra96 board, and 30153 FPS and 7601 FPS on the Kria KV260, respectively. They Pareto-dominate state-of-the-art solutions concerning accuracy, throughput, and energy.
Filippo Minnella, Teodoro Urso, Mihai T. Lazarescu, Luciano Lavagno
2023-09-27T13:02:14Z
http://arxiv.org/abs/2309.15631v2
Design and Optimization of Residual Neural Network Accelerators for Low-Power FPGAs Using High-Level Synthesis

###### Abstract

Residual neural networks (ResNets) are widely used in computer vision tasks. They enable the construction of deeper and more accurate models by mitigating the vanishing gradient problem. Their main innovation is the _residual block_, which allows the output of one layer to bypass one or more intermediate layers and be added to the output of a later layer. Their complex structure and the buffering required by the residual block make them difficult to implement on resource-constrained platforms. We present a novel design flow for implementing deep learning models for field-programmable gate arrays (FPGAs) optimized for ResNets, using a strategy to reduce their buffering overhead to obtain a resource-efficient implementation of the residual layer. Current implementations of residual networks suffer from reduced performance and increased computational latency due to the way residual blocks are implemented. Our high-level synthesis (HLS) based flow encompasses a thorough set of design principles and optimization strategies, exploiting in novel ways standard techniques such as _temporal reuse_ and _loop merging_ to efficiently map ResNet models, and potentially other skip connection-based NN architectures, into FPGA. The models are quantized to 8-bit integers for both weights and activations, 16 bits for biases, and 32 bits for accumulations. The experimental results are obtained on the CIFAR-10 dataset using ResNet8 and ResNet20 implemented with Xilinx FPGAs using HLS on the Ultra96-V2 and Kria KV260 boards. Compared to the state-of-the-art on the Kria KV260 board, our ResNet20 implementation achieves \(2.88\times\) speedup with \(0.5\,\mathrm{\char 37}\) higher accuracy of \(91.3\,\mathrm{\char 37}\), while ResNet8 accuracy improves by \(2.8\,\mathrm{\char 37}\) to \(88.7\,\mathrm{\char 37}\). The throughputs of ResNet8 and ResNet20 are \(12\,971\,\mathrm{FPS}\) and \(3254\,\mathrm{FPS}\) on the Ultra96 board, and \(30\,153\,\mathrm{FPS}\) and \(7601\,\mathrm{FPS}\) on the Kria KV260, respectively. They Pareto-dominate state-of-the-art solutions with respect to accuracy, throughput, and energy.

## 1 Introduction

Convolutional neural networks (CNNs) have consistently achieved state-of-the-art results in many tasks, including computer vision and speech recognition [20]. Their success is based on high accuracy and performance due to the improved computational intensity of convolutional layers compared to previous approaches, requiring less memory bandwidth than fully connected (FC) layers [25]. The choice of hardware for implementing convolutional layers profoundly impacts their applicability. Central processing units (CPUs) are versatile and easy to program, but their architecture makes them relatively inefficient. Graphical processing units (GPUs) are designed to handle massive parallelism, allowing them to process multiple computations simultaneously. This aligns well with the inherently parallel nature of CNNs, but their energy consumption is notably higher [6]. Application-specific integrated circuits (ASICs) and field-programmable gate arrays (FPGAs) offer different tradeoffs of cost and flexibility for algorithm acceleration [40]. The latter are less performance- and energy-efficient due to their reprogrammability, but they have much lower design cost and can be more easily customized for a specific application.
Neural networks (NNs) optimized for embedded applications [16, 41] are designed to run efficiently on devices with limited processing power, memory, and energy. They can perform very well on small datasets, such as CIFAR-10 [19] and MNIST [5], and are often used in real-time contexts where response timeliness and low latency are critical. Residual neural networks (ResNets) [15] use residual blocks (see Fig. 1) to mitigate the vanishing gradient problem for deep networks through _skip connections_. They allow intermediate feature maps to be reprocessed at different points in the network computation, increasing accuracy. However, state-of-the-art implementations of _skip connections_ require significant on-chip buffering resources, which significantly reduce the benefits of streaming-based FPGA implementations. Thus, recent work has focused on optimizing and shrinking the residual structure of NNs [33] and on finding quantization strategies that improve the efficiency of their hardware design [8]. Deep networks have many parameters and require extensive quantization to reduce their size to fit into the FPGA on-chip memory [22]. For this reason, widely used tools such as FINN [3] focus on low-bit quantization (such as 1-bit [28] or 2-bit [4]) and make suboptimal use of resources for higher-bit quantization, as we will show later. However, low-bit quantizations degrade NN accuracy and may not be suitable for accurate inference for complex problems [30]. We propose an efficient residual block architecture for a concurrent dataflow FPGA implementation, in which each layer is a process and activations are streamed between processes, with the following main contributions: * An optimized architecture for CNNs that supports residual networks and minimizes buffering resources, allowing on-chip storage of the parameters and activations using 8 bits, which have been shown to achieve good accuracy for our target NN architectures [21]. * A custom high-level synthesis (HLS) code generation flow from the Python model to the FPGA bitstream using Vitis HLS [37]. * The validation of the architecture and the implementation flow using the ResNet8 and ResNet20 residual NNs on CIFAR-10 targeting the Ultra96-V2 and Kria KV260 FPGA boards, to demonstrate the advantages of the proposed solution. The rest of the paper is organized as follows. Section 2 presents the background and motivation for this work. Section 3 discusses training and quantization, and describes the accelerator architecture with a special focus on skip connection management. Section 4 presents the experimental setup and discusses the results. Section 5 concludes the paper. ## 2 Related Work The field of FPGA-based acceleration for deep neural networks has gained significant attention due to its potential to achieve high-performance and energy-efficient inference. Several approaches and architectures have been proposed in the literature to address this challenge [10]. In systolic array overlay-based architectures, each processing element (PE) is a single instruction multiple data (SIMD) vector accumulation module, which receives activation inputs and weights in each cycle from the horizontally and vertically adjacent PEs. Pipelined groups of PEs with short local communication and regular architectures can achieve high clock frequencies and efficient global data transfers [32]. The overlay architecture [39] performs the computation of the convolution layers in sequence over a systolic array. 
However, despite its flexibility, it has high latency due to frequent transfers between external memory [double data rate (DDR) or high bandwidth memory (HBM)] and on-chip memory. An alternative approach is to implement a _custom dataflow architecture where each layer is associated with a compute unit_. This structure can be pipelined, and activations and weights can be stored in on-chip memory (OCM), reducing latency and increasing throughput. The main limitation of this type of architecture is the number of digital signal processor blocks (DSPs) and lookup tables (LUTs) required to implement the convolutional layers, as well as the size of on-chip buffers for weight storage [27], while activations are streamed from layer to layer. Since streaming tasks have well-defined pipelining rates with static data flows, a customized approach can lead to optimized processing, resulting in improved performance and resource savings. Widely recognized as one of the leading frameworks for deploying deep neural networks (DNNs) on FPGAs, Xilinx Vitis AI [1] provides a comprehensive set of tools specifically designed for optimizing and deploying DNNs on Xilinx FPGAs. With support for popular frameworks such as TensorFlow [7], PyTorch [11], and Caffe [9], it incorporates various optimization techniques such as pruning, quantization, and kernel fusion to improve performance and reduce memory consumption. The deep learning processor unit (DPU) is the accelerator core used in Vitis AI and consists of several key modules, including a high-performance scheduler module, a hybrid computing array module, an instruction fetch unit module, and a global memory pool module [36]. The DPU is responsible for executing the microcode of the specified DNN model, known as the _xmodel_.

Figure 1: Basic residual block with a long branch with two convolutional layers, and the skip connection (red branch) that must store its input activations until the output activation is generated, requiring much memory.

Vitis AI uses an overlay-based architecture where model weights and biases are stored in DDR memory and cached in the on-chip weight buffer during inference. Input and output data of the PE array are also cached in OCM. This architecture scales very well for DNNs, but may have higher resource utilization and less performance compared to custom dataflow accelerators due to its general-purpose nature and the overhead associated with off-chip memory accesses. Another widely used tool is FINN [3], an open-source framework developed by Xilinx that allows the generation of highly optimized DNNs for FPGA acceleration, with emphasis on dataflow-style architectures. FINN uses HLS to convert trained DNN models into hardware intellectual property (IP) blocks that can be easily integrated into FPGA-based systems. While FINN offers significant customization capabilities, it is primarily designed for low-bitwidth quantization schemes, such as binarized networks. Achieving high performance with FINN often leads to lower accuracy and/or higher resources, particularly when using the 8-bit quantization that has been shown to be the best compromise between resources and accuracy. [31] evaluates the accuracy, power, throughput, and design time of three different CNNs implemented on FPGAs and compares these metrics to their GPU equivalents. A comparison was also made between a custom implementation of two DNNs using SystemVerilog and an implementation using the Xilinx tools FINN and Vitis AI [24].
In addition, [14] reports a comparison between FINN and Vitis AI using a widely used set of ResNet model configurations. We propose an optimized pipelined dataflow architecture tailored for better resource management. Our solution specifically targets residual NNs and allows using more bits during quantization than e.g. FINN, to improve the trade-off between accuracy, throughput, and memory usage [35]. Its effectiveness is compared with Vitis AI, FINN, and custom resource-efficient approaches [42], demonstrating its potential to efficiently implement complex residual NNs with state-of-the-art performance in terms of latency, energy, accuracy, and resource consumption.

## 3 Methodology

Fig. 2 shows the high-level view of the flow that we use to generate C++ code from the quantized NN model, which includes the following main steps: * Use Brevitas [26] for NN quantization and extract its graph in QONNX format, which provides an easy-to-parse description of the network, including information such as layer type, input and output quantization, and layer connections (see Section 3.1); * Generate the C++ of the top function that instantiates all the layers of the network (see Section 3.7 and Section 3.2); * Generate the register-transfer level (RTL) code using Vitis HLS and a set of model-independent synthesizable C++ libraries that we wrote to implement the optimized layers (see Section 3.3); * Import the RTL code as a block into a Vivado design and generate the bitstream for FPGA programming.

### Quantization

Quantization is done using the Brevitas framework [26]. NNs are trained using PyTorch, and both the weights (\(\mathit{bw_{\text{w}}}\)) and the activations (\(\mathit{bw_{\text{x}}}\)) are represented as 8-bit integers, because for a variety of applications this is a good compromise between memory usage and accuracy [21], while the biases (\(\mathit{bw_{\text{b}}}\)) are represented as 16-bit integers for the same reason. NN training uses floating-point calculations and models quantization by clamping and rounding. Back-propagation uses dequantized inputs and weights to improve convergence and accuracy, while loss evaluation uses quantization to match the results of the hardware implementation. Inference on the FPGA uses multiplications of operands quantized to different sizes, while the results are accumulated in 32-bit registers to avoid overflows and efficiently map to FPGA resources such as DSPs, as discussed in Section 3.3.

Figure 2: Implementation flow

The quantization \(Q(\cdot)\) of a value \(b\) on \(bw\) bits \[a=Q(b)=\text{clip}(\text{round}(b\cdot 2^{bw-s}),a_{\text{min}},a_{\text{max}})\cdot 2^{s}\quad s\in\mathbb{N}\] (1) \[a_{\text{min}}=\mathit{act}_{\text{min}}(s)=\begin{cases}0&\text{if unsigned}\\ -2^{bw-1-s}&\text{if signed}\end{cases}\] (2) \[a_{\text{max}}=\mathit{act}_{\text{max}}(s)=\begin{cases}2^{bw-s}-1&\text{if unsigned}\\ 2^{bw-1-s}-1&\text{if signed}\end{cases}\] (3) uses \(s\) as the scaling exponent, \(a_{\text{min}}\) as the lower clipping bound, and \(a_{\text{max}}\) as the upper one. All _zero points_ are set to zero and omitted in the expressions above, while the _scaling factors_ are set to powers of two to map alignment operations between weights and activations into hardware-friendly bit shifts [18]. The bias scaling factor \(s_{\text{b}}\) is calculated as the sum of the input scaling factor \(s_{\text{x}}\) and the weight scaling factor \(s_{\text{w}}\).
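Power-of-two quantization in the spirit of Eq. (1) is straightforward to prototype in software. The C++ sketch below is our own illustration: the exact placement of the exponent \(s\) follows one common convention and should be checked against the Brevitas export, so treat the scaling details as assumptions rather than the flow's exact arithmetic.

```cpp
#include <algorithm>
#include <cmath>

// Sketch of power-of-two quantization in the spirit of Eq. (1):
// scale, round, clip to the bw-bit integer range, and rescale.
// The sign convention of the exponent s is an assumption.
double quantize(double b, int bw, int s, bool is_signed) {
    long q = std::lround(b * std::ldexp(1.0, -s)); // round(b * 2^{-s})
    long qmin = is_signed ? -(1L << (bw - 1)) : 0L;
    long qmax = is_signed ? (1L << (bw - 1)) - 1 : (1L << bw) - 1;
    q = std::clamp(q, qmin, qmax);                 // clip to bw bits
    return q * std::ldexp(1.0, s);                 // dequantize: q * 2^s
}
```

Because all scaling factors are powers of two, the rescaling steps reduce to bit shifts in hardware, and the bias scale follows as \(s_{\text{b}}=s_{\text{x}}+s_{\text{w}}\), since a bias is added to products of inputs and weights.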
After the training, to avoid the hardware implementation of floating-point operations, the batch normalization layers are merged with the quantized convolution layers [17] and retrained to calibrate and tune the quantization parameters. The final model is exported to the QONNX format [12, 29].

### Accelerator architecture

The _code generation_ step in Fig. 2 works on the optimized QONNX graph, i.e. after ReLU and batch normalization were merged with convolutional layers, and provides a C++ _top function_ that instantiates all the tasks (also known as dataflow processes) needed to implement network inference: * _Computation task_: one for each convolution or pooling node to implement the layer computations (see Section 3.3). * _Parameter task_: one for each convolution to correctly provide the convolution parameters to the computation task. One additional task is added to load parameters from _off-chip memory_ if UltraRAM (URAM) storage is used (see Section 3.4). * _Window buffer tasks_: multiple tasks for each convolution or pooling node, to format the input data to the computation tasks (see Section 3.6). All tasks are coded in a reusable, templated C++ library that can adapt to different activation and weight tensor dimensions, data types, and computational parallelism. Each task is composed of a main loop that performs multiple operations. To increase the accelerator throughput, pipelining is enabled at two different levels: * Inter-task: concurrent layer execution is achieved using the _dataflow_ pragma in the body of the top function. There is one computation task and, possibly, multiple window buffer tasks running for each of the network layers. The latency of the slowest task determines the overall accelerator throughput. * Intra-task: concurrent operation execution is used to reduce and balance task latencies. _Computation tasks_ are the most computationally intensive. Thus each top loop inside them is partially unrolled by a factor computed at compile time and based on the complexity of the corresponding computation. This effectively allocates a number of PEs, one for each unrolled iteration, for each _computation task_. Each PE performs one or more multiply-accumulate (MAC) operations per clock cycle. See Section 3.3 and Section 3.5 about how low-level DSP packing is used to increase the number of MACs executed by a DSP unit in a clock cycle. If the _computation task_ belongs to a convolution, the related _parameter task_ main loop is unrolled by the same factor, to provide enough data to support the increased computation parallelism. An integer linear programming (ILP) model described in Section 3.5 is used to globally optimize the unroll factors (number of PEs) that maximize NN throughput under DSP resource constraints (DSPs are the most critical FPGA resource for the NN architectures that we considered in this paper). Network inputs and outputs are implemented as _streams_ of data. Direct memory access (DMA) blocks read/write input/output tensors to/from the off-chip memory. Streams also transfer data between tasks in order to minimize the memory requirements for on-chip activation storage (see Section 3.6). The _data-driven_ execution approach is chosen to process the frames sequentially and as a continuous stream. This is achieved in Vitis HLS by using the ap_ctrl_none pragma in the top function that models the entire NN. Each task then operates as soon as input data are available.
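This task-level organization maps directly onto Vitis HLS constructs. The sketch below is our own minimal illustration of the shape of the generated top function; the frame size, the stream depth, and the pass-through task bodies are placeholders, not the code emitted by the flow.

```cpp
#include <ap_int.h>
#include <hls_stream.h>

typedef ap_int<8> act_t;       // 8-bit quantized activations
const int FRAME = 32 * 32 * 3; // placeholder CIFAR-10 frame size

// Placeholder tasks: a real window buffer reformats the stream into
// fh x fw windows, and a real computation task holds the PE array.
void window_buffer(hls::stream<act_t> &in, hls::stream<act_t> &win) {
    for (int i = 0; i < FRAME; ++i) {
#pragma HLS pipeline II=1
        win.write(in.read());
    }
}

void conv_compute(hls::stream<act_t> &win, hls::stream<act_t> &out) {
    for (int i = 0; i < FRAME; ++i) {
#pragma HLS pipeline II=1
        out.write(win.read());
    }
}

// Top function: tasks run concurrently (dataflow) and start as soon
// as data arrive (ap_ctrl_none), processing frames as a stream.
void network_top(hls::stream<act_t> &in, hls::stream<act_t> &out) {
#pragma HLS interface ap_ctrl_none port=return
#pragma HLS dataflow
    hls::stream<act_t> win("win");
#pragma HLS stream variable=win depth=16
    window_buffer(in, win);
    conv_compute(win, out);
}
```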
Inference begins as soon as the DMA attached to the input port of the top-level interface is enabled to download input images. Each task is pipelined; the first stage reads the input stream, while the others process the data and write the output stream. As a further, tool-specific, implementation detail, intra-task pipelines are not flushable, which would consume more resources, but stall with auto-rewind disabled, to both save resources and avoid deadlocks. Auto-rewind would start a new loop execution while the pipeline is processing the last iterations of the old one, but with data-driven ap_ctrl_none dataflow it would cause deadlocks at runtime. Performance is largely unaffected because the _intra-task pipeline_ latency is very small, just a few cycles, compared to the task latency, which is proportional to the number of iterations of the _intra-task pipeline_.

### Convolution computation task

Each convolution _computation task_ receives a window of input activations from a _window buffer task_. Fig. 4 shows the pseudo-code for the convolution computation and examples of how the computation pipeline receives input data and computes the partial results. The PARFOR pseudo-code is implemented as an unrolled loop in synthesizable C++ (a simplified sketch is given below). Input tensors are mostly provided in depth-first order to each convolution \(i\), as discussed below. The innermost loops are completely unrolled over the filter dimensions (\(\mathit{fh}_{i},\mathit{fw}_{i}\)) and part of the output channel (\(\mathit{och}_{i}\)) dimension. This unroll factor \(\mathit{och}_{i}^{\mathit{par}}\), where "par" means that the execution will be fully data-parallel, defines the number of PEs, as discussed above. It is chosen by the algorithm described in Section 3.5. The \(\mathit{och}_{i}^{\mathit{par}}\) unroll factor is limited by on-chip memory bandwidth and the number of arithmetic units that can be used simultaneously. Increasing the number of output channels computed in parallel per clock cycle requires the corresponding filter parameters to be provided in parallel, i.e., higher memory bandwidth and potentially more BlockRAM (BRAM) resources. Another optimization changes the order in which the windows are given to the datapath, away from channel-first order, and unrolls the loop along the output tensor width (\(\mathit{ow}_{i}\)) by a factor (\(\mathit{ow}_{i}^{\mathit{par}}\)). Unrolling along the tensor width allows us to reduce the computation time without requiring more memory bandwidth for the filter parameters, at the cost of more partitioning of the input activation window buffer, and hence of potentially more BRAM resources. This also allows the weights to be reused within an output-stationary dataflow and can be exploited in future work where larger networks are considered and the off-chip memory is used to store network parameters. We now discuss how we exploit the DSP packing method described in [38] to reduce the hardware overhead of computing quantized data, by performing multiple operations on a single DSP block. Unlike \(\mathit{och}_{i}^{\mathit{par}}\), which is resource dependent, \(\mathit{ow}_{i}^{\mathit{par}}\) depends on the activation quantization bits. Even though the number of operations packed in a DSP depends on the number of bits, this work only used the configuration described in [38], which presents a method for \(\mathit{bw}_{i}=8\) for both parameters and activations. Fig. 5 shows two examples of calculation pipelines, with different values of \(\mathit{ow}_{i}^{\mathit{par}}\).
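The following C++ sketch is our own simplified, single-window illustration of the PARFOR of Fig. 4: the template parameters, array shapes, and pragmas are placeholders for the values chosen by the ILP model, not the library code itself.

```cpp
#include <ap_int.h>

// Simplified kernel for one output pixel: OCH_PAR output channels are
// computed in parallel (one PE each), accumulating the fh x fw x ich
// reduction into 32-bit registers, seeded with the 16-bit bias.
template <int OCH_PAR, int ICH, int FH, int FW>
void conv_pixel(const ap_int<8> win[FH][FW][ICH],
                const ap_int<8> wgt[OCH_PAR][FH][FW][ICH],
                const ap_int<16> bias[OCH_PAR],
                ap_int<32> acc[OCH_PAR]) {
    for (int c = 0; c < ICH; ++c) {       // input channels, sequential
        for (int o = 0; o < OCH_PAR; ++o) {
#pragma HLS unroll                        // one PE per output channel
            if (c == 0) acc[o] = bias[o]; // bias seeds the accumulation
            for (int fh = 0; fh < FH; ++fh)
                for (int fw = 0; fw < FW; ++fw) {
#pragma HLS unroll                        // filter window fully unrolled
                    acc[o] += win[fh][fw][c] * wgt[o][fh][fw][c];
                }
        }
    }
}
```

With \(\mathit{ow}_{i}^{\mathit{par}}=2\), two such windows would be processed per weight fetch, which is the packing opportunity described next.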
Each gray box is a PE that receives: * _Input activations:_ \(\mathit{ow}_{i}^{\mathit{par}}\) inputs. These values change at each iteration of the \(\mathit{och}_{i}^{\mathit{groups}}\) loop. The input activations are multiplied in parallel by the PE input weight and are provided by the corresponding _window buffer tasks_. * _Input weight_: one input. This value is updated at each clock cycle. The input weight is provided by the corresponding _parameter task_. * _Partial accumulation_: one input. This value is updated at each clock cycle. The partial accumulation is provided by the previous pipeline stage. \begin{table} \begin{tabular}{l l} \hline \hline Symbol & Description \\ \hline \(\mathit{ich}_{i}\) & Input tensor channels \\ \(\mathit{ih}_{i}\) & Input tensor height \\ \(\mathit{iw}_{i}\) & Input tensor width \\ \(\mathit{och}_{i}\) & Output tensor channels \\ \(\mathit{oh}_{i}\) & Output tensor height \\ \(\mathit{ow}_{i}\) & Output tensor width \\ \(\mathit{fh}_{i}\) & Filter tensor height \\ \(\mathit{fw}_{i}\) & Filter tensor width \\ \(\mathit{s}_{i}\) & Convolution stride \\ \hline \hline \end{tabular} \end{table} Table 1: Symbol definitions for layer \(i\) Figure 3: Accelerator architecture with direct memory access (DMA) blocks for memory transfers (grey box) and concurrent tasks [computation (yellow), buffer (red), and parameter (green)] communicating through data streams. Parameter loading from off-chip memory to URAMs (dashed) can be enabled on platforms supporting it. Each PE receives an input weight every clock cycle, so sufficient OCM bandwidth must be provided (see Section 3.4). The two pipelines in Fig. 5 highlight how \(\mathit{och}_{i}^{\mathit{par}}\) allocates multiple PEs per pipeline stage (horizontal unroll), and how \(\mathit{ow}_{i}^{\mathit{par}}=2\) modifies the mapping of the input activations to the different stages of the pipelines, thus increasing the number of computations for each PE. The partial accumulation entering each PE comes from the previous pipeline stage. The only exception is the first stage, which receives as value to accumulate the _bias_ of the convolution. Each MAC calculation, for the case \(\mathit{ow}_{i}^{\mathit{par}}=1\), is done using a DSP from the _Xilinx_ architecture. If \(\mathit{ow}_{i}^{\mathit{par}}=2\), the two MACs have reduced resource usage thanks to the technique described in [38]. As shown by the pipeline in Fig. 5 with \(\mathit{ow}_{i}^{\mathit{par}}=2\), the operation packing is done by multiplying 2 activations (\(A\), \(D\)) by 1 parameter (\(B\)) and accumulating to a partial result (\(P_{i-1}\)). The output (\(P_{i}\)) is passed to the next pipeline stage. The multiplier of the DSPs receives one \(27\,\mathrm{bit}\) and one \(18\,\mathrm{bit}\) input: the former packs the two activations, while the latter contains the sign-extended parameter. Figure 4: Convolution architecture: The data flow is output stationary, and for \(\mathit{och}_{i}^{\mathit{par}}\) output channels the contribution of the input window \(\mathit{fh}_{i},\mathit{fw}_{i}\) is evaluated every clock cycle. Data is written to the output after all input channels have been processed. The dataflow setup for \(\mathit{ow}_{i}^{\mathit{par}}=1\) and \(\mathit{ow}_{i}^{\mathit{par}}=2\) is shown in the two schematics. The input activations are loaded simultaneously, along orange lines, into each gray box, which is a PE. Each PE performs a _MAC_ operation and the partial results move through the pipeline from top to bottom along the green lines. 
The two operands are multiplied into a \(36\,\mathrm{bit}\) word (\(M\)) that contains both partial products. The DSP also adds the partial products to the accumulation coming from the previous pipeline stage, \(P_{i-1}\). At the end of the chain, a restore stage ensures that the sign of the lower partial product does not create errors in the final result. Note that in our specific use case, where activations and parameters are quantized on \(8\,\mathrm{bits}\), we can chain at most **7** packed DSPs, because of the limited padding between the two words, namely \(2\,\mathrm{bits}\), and the restore mechanism which corrects a \(1\,\mathrm{bit}\) overflow. However, for convolution filters with \(\textit{fh}_{i}=3\) and \(\textit{fw}_{i}=3\), the DSP chain should have a length of 9. Hence we split the chain into \(2\) subparts that respect the maximum length condition. The partial accumulations coming from the different chains are then added together in an additional stage, and the result coming from the DSP pipeline is finally added to the accumulation register. Registers keeping the partial results of the MAC between multiple iterations are sized in order to avoid overflows in the computations. Considering models with \(\textit{bw}_{i}\) bit quantization, and using the same bit-width for activation and parameter quantization, each product between an activation and a parameter has a width equal to \(2\cdot\textit{bw}_{i}\). For each convolution, the number of accumulations (\(N_{i}^{acc}\)) performed to compute each output value is \[N_{i}^{acc}=\textit{och}_{i}\cdot\textit{ich}_{i}\cdot\textit{fh}_{i}\cdot \textit{fw}_{i}. \tag{4}\] Since the addend has \(2\textit{bw}_{i}\) bits, the final accumulation register must have a width equal to \[\textit{bw}_{i}^{acc}=\lceil\log_{2}(N_{i}^{acc})\rceil+2\textit{bw}_{i}. \tag{5}\] Considering the worst case for _Resnet8_ and _Resnet20_ with \(8\,\mathrm{bit}\) quantization, the required bitwidth is \[N_{i}^{acc}=32\cdot 32\cdot 3\cdot 3=9216 \tag{6}\] \[\textit{bw}_{i}^{acc}=\lceil\log_{2}9216\rceil+2\cdot 8=14+16=30. \tag{7}\] The accumulation register size is chosen to be \(32\,\mathrm{bit}\) because it ensures no overflow, and using standard C++ types improves C simulation speed. ### Parameter task Each convolution layer of the QONNX network graph has a _parameter task_ in the top function, feeding the computation pipeline with data from on-chip memory. Depending on the target FPGA, parameters may be stored in: * _BRAMs_: they can store up to \(4\,\mathrm{KB}\) each and can be initialized by the bitstream. The parameters for each convolution are stored in separate arrays, one for each weight and bias of the convolutions, because each is accessed by a specific _parameter task_. * _UltraRAMs (URAMs)_: they can store \(32\,\mathrm{KB}\) of data each (allowing higher performance) but require dedicated hardware for initialization (a DMA-driven input stream). The parameters for each convolution are packed into a single array stored in off-chip DRAM (also accessible by the host) and transferred by DMA once at power-up. A concurrent task in the dataflow architecture splits and distributes the input parameter stream to the tasks that handle the parameters of each convolution. Each _parameter task_ provides the filter data to the computation pipeline and caches it in URAMs at the first iteration for reuse (hence subsequent URAM accesses are read-only). The Ultra96 board lacks URAM, so BRAM is used; the Kria KV260 board uses URAM. 
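Stepping back to the DSP packing of Section 3.3, a short sketch in plain Python can make the packing and restore mechanism concrete. Bit positions follow the 27x18-bit multiplier description above; the function name and lane extraction are illustrative, not the accelerator's HLS code.

```python
def packed_mac(a, d, b):
    """Compute (a*b, d*b) with a single wide multiplication, as in [38]."""
    packed = (a << 18) + d   # activation a in the upper lane, d in the lower
    m = packed * b           # one multiply instead of two
    low = m & 0x3FFFF        # 18-bit lower lane holds d*b
    high = m >> 18           # upper lane holds a*b (possibly minus a borrow)
    if low & 0x20000:        # lower product is negative:
        low -= 0x40000       #   sign-extend it ...
        high += 1            #   ... and restore the borrowed 1 (restore stage)
    return high, low

assert packed_mac(-3, 5, 7) == (-21, 35)
assert packed_mac(2, -1, 3) == (6, -3)
```

The narrow padding between the two lanes is what limits the chain to 7 packed DSPs, as noted above.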
As discussed in Section 3.3, the main loop of each convolution's _computation task_ consumes \(\textit{cw}_{i}=\textit{och}_{i}^{par}\cdot\textit{fh}_{i}\cdot\textit{fw}_{i}\) filter data per clock cycle. The \(\textit{ow}_{i}^{par}\) unroll factor does not contribute because each parameter is used for multiplication with \(\textit{ow}_{i}^{par}\) activations. To avoid stalling the computation pipeline, the _parameter task_ must write \(\textit{cw}_{i}\) weights every clock cycle and read the same amount of data from the BRAMs or URAMs. Arrays are then reshaped by a factor equal to \(\textit{cw}_{i}\), using the array_reshape pragma, to achieve the required memory bandwidth. ### Throughput optimization To avoid stalling, all streams are sized appropriately by our configuration _Python_ script based on their type, as follows. Streams created by _parameter tasks_ supply _computation tasks_ with a token size equal to the computational parallelism of the consuming convolution, \(\textit{och}_{i}^{par}\), every clock cycle. Since the producer and consumer write and read one token per clock cycle, the stream size is 2. The sizes of the streams produced by _window buffer tasks_ are discussed in Section 3.6. The output stream from _computation tasks_ must consider \(\textit{och}_{i}^{par}\) and \(\textit{ow}_{i}^{par}\). The pseudocode in Fig. 4 shows that computation tasks_ write a burst of \(\mathit{och}_{i}\cdot\mathit{ow}_{i}^{par}\) output activations, grouped into tokens of size \(\mathit{och}_{i}^{par}\) to not stall the pipeline. When packing is applied, the output stream is split into \(\mathit{ow}_{i}^{par}\) parallel channels to ensure enough bandwidth. Each channel is implemented by a first in first out (FIFO) of size \(\mathit{och}_{i}^{groups}=\mathit{och}_{i}/\mathit{och}_{i}^{par}\) to store the burst transactions completely. As mentioned above, using the _dataflow_ paradigm and assuming optimal stream sizing to avoid stalling, accelerator throughput is limited by the slowest concurrent process. Therefore, the throughput \(\mathit{Th}\) of each layer unit must be balanced for optimal performance. The latency of each module depends on the number of computations for each input frame \(c\) and the computational parallelism \(\mathit{cp}\) required for each block \(i\). The number of computations for a convolutional layer is \[c_{i}=\mathit{oh}_{i}\cdot\mathit{ow}_{i}\cdot\mathit{och}_{i}\cdot\mathit{ ich}_{i}\cdot\mathit{fh}_{i}\cdot\mathit{fw}_{i}. \tag{8}\] Since the parameter \(c_{i}\) is fixed and depends on the chosen network architecture, the throughput per layer is set by the number of compute units allocated to each _computation task_ implementing a layer. As shown in the pseudocode in Fig. 4, computation parallelism \(\mathit{cp}_{i}\) is \[\mathit{cp}_{i} =k_{i}\cdot\mathit{och}_{i}^{par}\cdot\mathit{ow}_{i}^{par}, \tag{9}\] \[\text{with}\quad k_{i} =\mathit{fh}_{i}\cdot\mathit{fw}_{i},\mathit{och}_{i}^{par}, \mathit{ow}_{i}^{par}\in\mathbb{N}. \tag{10}\] Since the filter size \(k_{i}\) is defined by the model and \(\mathit{ow}_{i}^{par}=2\), because for this work we consider all the quantization bit-widths equal to \(8\,\mathrm{bit}\), the variable to optimize is \(\mathit{och}_{i}^{par}\), i.e. \(\mathit{cp}_{i}\) is an integer multiple of the filter size. 
The throughput of each task, \(\mathit{Th}_{i}\) frame per second (FPS), depends on the variable \(\mathit{och}_{i}^{par}\) \[\mathit{Th}_{i}=\mathit{Th}\left(\mathit{och}_{i}^{par}\right)=\frac{\mathit{ cp}_{i}}{c_{i}}=\frac{k_{i}\cdot\mathit{och}_{i}^{par}\cdot\mathit{ow}_{i}^{par}}{c_{i}}. \tag{11}\] Considering a network with \(N\) convolutional layers, Algorithm 1 shows an ILP formulation of throughput optimization. If \(i_{\max}\in[1,N]\) is the index of the layer with the highest \(c_{i}\), then the goal is to balance the throughput of all layers \[\forall i\in[1,N]\quad\mathit{Th}\left(\mathit{och}_{i_{\max}}^{par}\right)= \mathit{Th}\left(x_{i}\right)\implies\mathit{cp}_{i}=\mathit{cp}_{i_{\max}}r_ {i} \tag{14}\] with \(r_{i}=c_{i}/c_{i_{\max}}\). Then the number of resources needed for each layer can be calculated, given the resources allocated for layer \(i_{\max}\). The total number of parallel computations allocated is \[\mathit{cp}_{\mathrm{tot}}=\sum_{i=1}^{N}\mathit{cp}_{i}=\sum_{i=1}^{N} \mathit{cp}_{i_{\max}}r_{i}. \tag{15}\] From (13), \(N_{\mathrm{PAR}}\) limits the maximum number of computations that can be done in parallel and depends on the platform. The FPGAs on the Ultra96 and KRIA KV260 boards that we are considering have \(360\) and \(1248\) DSPs, respectively. During hardware generation, \(N_{\mathrm{PAR}}\) is set to the number of DSPs on the target board. The ILP can then maximize the throughput of the network by optimizing the parameters for the \(i_{\max}\) layer (12) and automatically configuring the template parameters of the tasks. ### Window generation Given a convolution input tensor, we only need to store on-chip enough data to provide the input window to the _intra-task_ pipeline of each computational task. For example, Fig. 6 shows an input tensor and the input window mapping for a convolution with \(\mathit{fh}_{i}=3,\mathit{fw}_{i}=3\). It is important to highlight that the activations are produced using _depth-first_ order by the convolution that creates the input tensor (Fig. 4), while the input window is distributed over one channel and multiple lines. It is thus necessary to store all the lines needed to generate an input window, so each window buffer (also called line buffer in the literature) should be sized to accommodate the required activations. The portion of the input tensor (\(B_{i}\)) that the buffer must retain to create an input window is highlighted in Fig. 6 \[B_{i}=[(\mathit{fh}_{i}-1)\cdot\mathit{iw}_{i}+\mathit{fw}_{i}-1]\cdot\mathit{ ich}_{i}. \tag{16}\] This size is constant over time because each time that the buffer reads an activation and generates a window, it disc Figure 6: Input window mapped on the input tensor, \(\mathit{ow}^{par}{}_{i}=1\). Fig. 4 shows how the window elements map to the computation pipeline. The _window buffer tasks_ retrieve from the input buffer the \(B_{i}\) activations required for the windows. At the maximum unroll factor, \(\mathit{och}_{i}^{\mathit{par}}=\mathit{och}_{i}\), each intra-task pipeline of the _computation task_ processes one input window per clock cycle. The data read by the _window buffer tasks_ from the input activation buffer is \(\mathit{fh}_{i}\cdot\mathit{fw}_{i}\), i.e. a convolution window. The data needed for the input window is not contiguous and cannot be read by directly addressing the buffer, because it is stored sequentially in a FIFO with only one read port available. 
To provide the necessary bandwidth, the FIFO must be partitioned into \(\mathit{fh}_{i}\cdot\mathit{fw}_{i}\) parts, connected sequentially as shown in Fig. 7. Optimizing the window buffer to reduce the required partitioning in cases that allow a lower window generation rate is left for future work. The size of each FIFO, \(S_{1}\), \(S_{2}\), is the distance, in number of activations, between successive values of the same input window, considering that the tensor is processed in _depth-first_ order. \(S_{1}\) represents the distance between two activations within the same row of the input window, and it is equal to the number of channels \(\mathit{ich}_{i}\) in the tensor. In contrast, \(S_{2}\) covers the gap between two activations in different rows of the input window, and it is directly proportional to one row (\(\mathit{ich}_{i}\cdot\mathit{iw}_{i}\)) of the input tensor, minus the filter width \(\mathit{fw}_{i}\). Each _task_ \(T_{i}\) reads from a FIFO the data for an input window position, \(i\), and provides the input for the next FIFO slice, \(i+1\). The _padding_ task, if enabled, reads at each cycle the data from _task_ \(T_{i}\) for positions \(i\) that do not require padding, and replaces with \(0\) the positions of the input window that must be padded. Thanks to the concurrent execution and padding-aware control of the _window buffer tasks_ and _padding task_, the first buffer slices can be initialized with the tensor values of the next frame, while the last ones generate the final windows of the previous frame, avoiding the latency overhead caused by initializing the FIFOs. The structure of the _window buffer_ depends on the unroll factor \(\mathit{ow}_{i}^{\mathit{par}}\). Fig. 8 shows the input window mapped to the input tensor with \(\mathit{ow}_{i}^{\mathit{par}}=2\), for which all considerations made before about \(\mathit{ow}_{i}^{\mathit{par}}=1\) apply. The input buffer size is \[B_{i}=[(\mathit{fh}_{i}-1)\cdot\mathit{iw}_{i}+\mathit{fw}_{i}]\cdot\mathit{ich}_{i} \tag{17}\] so the overhead with respect to (16) is minimal. The buffer must be partitioned to ensure the required window production rate. With \(\mathit{ow}_{i}^{\mathit{par}}=2\), the elements of the input window are \((\mathit{fw}_{i}+\mathit{ow}_{i}^{\mathit{par}}-1)\cdot\mathit{fh}_{i}\). Fig. 9 shows how the buffer is partitioned according to the required bandwidth. The main difference between Fig. 7 and Fig. 9 is how the activations flow in the different FIFO slices. Given an input filter of size \(\mathit{fh}_{i}\cdot\mathit{fw}_{i}\), an input activation is multiplied, at different times, by a value in each position of the filter window. If \(\mathit{ow}_{i}^{\mathit{par}}=1\), there is a one-to-one correspondence between the positions of the input window and those of the filter window. This means that the activation must pass through all FIFO slices, because each of them represents a position \(i\) of the input/filter windows. If \(\mathit{ow}_{i}^{\mathit{par}}=2\), the input window keeps two windows that are multiplied by the parameter window, i.e. part of the activations are evaluated in two adjacent positions for each input window (\(i,i+1\)). Thus, the output of \(T_{i}\) must be connected to the input of the FIFO slice \(i+2\) to ensure correct data flow. Figure 8: Input window mapped on the input tensor, \(\mathit{ow}_{i}^{\mathit{par}}=2\), retaining two computation windows (red and blue). Fig. 4 shows how the window elements map to the computation pipeline. 
Figure 7: Buffer partitioning, \(\mathit{ow}_{i}^{\mathit{par}}=1\). Yellow boxes are the FIFOs with their sizes. Orange boxes are tasks that read and write the FIFOs. Padding is applied before generating the window for the convolution. ### Graph Optimization The main contribution of this paper is to provide a structured methodology to efficiently implement a residual block in a dataflow accelerator with concurrent processes. The same considerations from Section 3.6 can be extended to network graphs with multiple nodes processing the same input tensor, i.e. residual blocks, as shown in Fig. 10, to provide a more general solution. Considering a tensor processed by multiple convolutions, multiple branches start from the convolution that generates the tensor. In residual networks the branches are then merged by an _add_ layer. Fig. 10 shows _Resnet20_ and _Resnet8_ residual block topologies with _2 branches_ per input tensor and _0 or 1 convolutions_ on the skip connection (the branch crossing fewer convolutions). The _add_ layer adds the values from the _2 branches_. Because of the dataflow architecture, the operation starts as soon as both input streams have data. However, the time required to fill each stream is different. The skip connection stream that reaches the _add_ node is filled in parallel with the _conv0_ input stream in the case without downsampling, or after \(ich_{i}\) cycles in the case with downsampling. The input stream from the long branch is filled as soon as _conv1_ provides its first output activation. As shown by Fig. 6, _conv1_ starts processing data as soon as its input buffer is full. The amount of data buffered for skip connections, \(B_{\text{sc}}\), is equal to the amount to be processed by _conv0_ and is sufficient to start _conv1_ operations. To calculate this value we use the _receptive field_ [2], which is the portion of the feature maps of successive layers that contributes to producing an activation. Fig. 11 shows the _receptive field_ of the _conv1_ window with respect to the _conv0_ that generates it. \(B_{\text{sc}}\) is the buffering of all receptive fields projected from the activations in the _conv1_ line buffer at the moment it starts computing. From [2], as shown in Fig. 11, the data to store for each receptive field \(B_{\text{r}}\) is \[rh_{0}=fh_{1}+fh_{0}-1 \tag{18}\] \[rw_{0}=fw_{1}+fw_{0}-1 \tag{19}\] \[B_{r}=rh_{0}\cdot rw_{0}. \tag{20}\] Sliding the receptive field window over \(ich_{i},iw_{i}\), the obtained buffering \(B_{\text{sc}}\) is \[B_{\text{sc}}=[iw_{0}\left(rh_{0}-1\right)+rw_{0}]\,ich_{0}. \tag{21}\] In the dataflow architecture used in the final implementation, the "bypass" branch must store its input activation data from the previous stage until the first output activation is generated by the convolution and the merged output can be generated. In previous dataflow implementations of CNNs, this buffering consumed large amounts of memory ([34]). Figure 11: Receptive field of the _conv1_ window. For clarity, the \(ich\) dimension is omitted and _conv1_ stride is assumed to be 1 (\(s_{1}=1\)). Figure 10: _Resnet20_ and _Resnet8_ residual blocks Figure 9: Buffer partitioning, \(\textit{ow}_{i}^{\text{par}}=2\). The output of each _task_ \(T_{i}\) is connected as input to the FIFO slice at position \(i+2\) because of activation reuse. 
To efficiently support _residual_ blocks, the multiple endpoints of the input tensor and the increased buffering caused by the different number of convolutions (and thus different computation delays) per branch must be handled differently. The combination of the following optimization steps for the dataflow architecture, _proposed for the first time in this paper_, can avoid this buffering overhead, e.g., in CNNs such as Resnet8 and Resnet20. The following two transformations (see Fig. 12) show how to solve the problem of multiple endpoints, reducing the length of the skip connection and the required buffering: * Loop merge: if the residual block has a downsample layer, i.e., a pointwise convolution in the short branch of the skip connection, the algorithm merges the two convolution loops. Both computations are performed by the same task, which provides the tensor produced by the merged layer as an additional output. * Temporal reuse: to avoid buffering the same tensor twice, if the residual block does not have a downsample layer, the graph optimization uses the window buffer as input to the convolution. The values are forwarded, using a second output stream, to the next layer once they have been completely used. Thanks to these transformations, the two input streams of the _add_ merge layer are produced simultaneously, and computation tasks never stall. _Conv0_ writes the skip connection stream as soon as the convolution computation starts and at the same rate as the convolution output tensor. The last transformation, shown in Fig. 13, removes the addition of the value coming from the short branch by connecting it as an additional contribution to the second convolution of the long branch. The value from the skip branch is used to initialize the accumulator register, and the addition operation is removed from the network graph. The producer and consumer of the two branch streams are now the same (_conv0_ and _conv1_), and they produce/consume at the same rate. The required buffering of the skip connection (\(B_{\mathrm{sc}}\)) is now equal to the _conv1_ window buffer size \[B_{\mathrm{sc}}=B_{1}=[(\textit{fh}_{1}-1)\cdot\textit{iw}_{1}+\textit{fw}_{1}-1]\cdot\textit{ich}_{1}. \tag{22}\] The dimensions of the first residual block without downsample of _Resnet20_ are: \(\textit{iw}_{0}=\textit{iw}_{1}=32\), \(\textit{ich}_{0}=\textit{ich}_{1}=16\), \(\textit{fh}_{0}=\textit{fh}_{1}=3\), \(\textit{fw}_{0}=\textit{fw}_{1}=3\). The dimensions of the first residual block with downsample of _Resnet20_ are: \(\textit{iw}_{0}=32\), \(\textit{iw}_{1}=16\), \(\textit{ich}_{0}=16\), \(\textit{ich}_{1}=32\), \(\textit{fh}_{0}=\textit{fh}_{1}=3\), \(\textit{fw}_{0}=\textit{fw}_{1}=3\). The skip connection buffering, \(B_{\mathrm{sc}}\), is then reduced to \(R_{\mathrm{sc}}\), in both cases \[R_{\mathrm{sc}}=\frac{[(\textit{fh}_{1}-1)\,\textit{iw}_{1}+\textit{fw}_{1}-1]\,\textit{ich}_{1}}{[(\textit{rh}_{0}-1)\,\textit{iw}_{0}+\textit{rw}_{0}]\,\textit{ich}_{0}}=0.5. \tag{23}\] The same calculated gain holds for all residual blocks in _Resnet20_ because the product \(\textit{iw}_{i}\cdot\textit{ich}_{i}\) remains constant. The same considerations apply to _Resnet8_, since the structure of its residual blocks is identical to those already analyzed. From the network graph of a residual block with and without downsampling, Fig. 14 shows the initial and final representations after applying the previously described optimizations. 
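As a quick numeric check of this reduction, the following hedged Python sketch evaluates the buffer-size formulas (16), (18)-(21), and the ratio (23) for the first Resnet20 residual block without downsampling; the function names are illustrative.

```python
def window_buffer(fh, fw, iw, ich):
    """Window buffer size of Eqs. (16)/(22)."""
    return ((fh - 1) * iw + fw - 1) * ich

def receptive_field_buffer(fh0, fw0, fh1, fw1, iw0, ich0):
    """Skip-connection buffering B_sc of Eqs. (18)-(21)."""
    rh0 = fh1 + fh0 - 1
    rw0 = fw1 + fw0 - 1
    return (iw0 * (rh0 - 1) + rw0) * ich0

# First Resnet20 residual block without downsampling (values from the text).
before = receptive_field_buffer(3, 3, 3, 3, iw0=32, ich0=16)  # 2128 activations
after = window_buffer(3, 3, iw=32, ich=16)                    # 1056 activations
print(before, after, after / before)  # ratio close to the 0.5 of Eq. (23)
```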
## 4 Experimental Results Our architecture is evaluated on the CIFAR-10 dataset, which consists of \(32\)x\(32\) RGB images. The same preprocessing and data augmentation used in [15] are applied for both training and testing. The model is trained for \(400\) epochs with a batch size of \(256\), using the stochastic gradient descent (SGD) optimizer and cosine annealing as the learning rate scheduler. Synthesizable C++ is used for both the hand-written layer process library and the Python-generated top-level dataflow code that calls the process functions. The implementation flow uses Xilinx Vitis HLS for RTL code generation and Vivado for implementation on the Ultra96-v2 and Kria KV260 boards. Figure 12: Multiple endpoint graph optimizations: (a) input forwarding without downsampling, (b) layer merging when there is a downsample pointwise convolution. Figure 13: The addition is optimized as initialization of the convolution accumulator register. Table 2 shows the available resources for the two boards. The obtained throughputs (FPS, \(\mathrm{Gops/s}\)) and the latency (\(\mathrm{ms}\)) are shown in Table 3. The final resource utilization is summarized in Table 4. Our proposed architecture is first compared with a ResNet20 implementation and the derived AdderNet described in [42], which are the most efficient CNN implementations on FPGAs to date in terms of DSP packing and model architecture. Our implementation achieves speedups (\(\mathrm{Gops/s}\)) of \(2.88\)x and \(1.94\)x with \(0.5\,\%\) and \(1.4\,\%\) higher accuracy, with respect to the ResNet20 and AdderNet in [42], using the Kria KV260 as a reference platform. Also, the latency is reduced by \(3.84\)x and \(1.96\)x respectively. We then compare our results with the implementations of the ResNet8 model by Vitis AI and FINN described in [14]. Our solution achieves speedups of \(6.8\)x and \(2.2\)x with a latency improvement of \(28.1\)x and \(3.35\)x respectively. Vitis AI achieved better accuracy by \(0.5\,\%\), probably because it executes _batch normalization_ in hardware, while our implementation outperformed a 4-bit FINN implementation by \(2.8\,\%\). ## 5 Conclusion This work presents a design flow for CNNs specifically optimized for residual networks. It supports the most commonly used operations for classic CNNs, including convolutions, fully connected (linear) layers, batch normalization, ReLU activation functions, max/average pooling, and skip connections. It is also fairly platform-independent, since it is based on heavily templatized layer models and comes with an ILP-based optimization method to maximize throughput under resource constraints. This allows it to be used with various FPGA platforms, including embedded ones. A dataflow pipelined architecture minimizes buffering resources for networks with skip connections. The design is validated by experiments on ResNet8 and ResNet20 using the CIFAR-10 dataset. Both activations and weights are quantized in INT8 format using power-of-two scaling factors. The design uses PyTorch and Brevitas for training and quantization, and Vitis HLS and Vivado for hardware implementation on the Kria KV260 and Ultra96-v2 boards. The solution achieves an accuracy of \(88.7\,\%\) for ResNet8 and \(91.3\,\%\) for ResNet20, with throughputs of \(12\,971\) FPS and \(3254\) FPS on the Ultra96, and \(30\,153\) FPS and \(7601\) FPS on the Kria KV260. 
Compared to the state-of-the-art for CNN residual network acceleration on FPGAs [42], it achieves a \(2.88\)x speedup with \(0.5\,\%\) higher accuracy for ResNet20, and a \(2.2\)x speedup with \(2.8\,\%\) higher accuracy for ResNet8 [14]. Compared to a residual network with packed adders [42], it achieves a \(1.94\)x speedup with \(1.4\,\%\) higher accuracy and a latency improvement of \(1.96\)x. Considering state-of-the-art frameworks, the comparison shows that the resource-efficient implementation of the residual layer achieves Pareto-optimal results for accuracy, throughput, and latency. Since the boards are the same and all approaches utilize most resources of each FPGA, lower latency also means lower energy than the state of the art. In summary, the proposed design architecture shows potential as an alternative to the commonly used residual network accelerators on platforms with limited resources. It delivers greater throughput and energy efficiency than the state-of-the-art without increasing hardware costs. \begin{table} \begin{tabular}{c c c c c c c} \hline \hline **Board** & **FPGA part** & **LUT** & **FF** & **BRAM** & **DSP** & **URAM** \\ \hline Ultra96 & xczu3eg & 70560 & 141120 & 216 & 360 & 0 \\ Kria KV260 & xczu5ev & 117120 & 234240 & 144 & 1248 & 64 \\ \hline \hline \end{tabular} \end{table} Table 2: Resources of the Ultra96 and Kria KV260 boards Figure 14: Graph optimization for the residual blocks of Resnet8 and Resnet20 networks. The skip connection goes through the first convolution layer, conv0, into the second convolution layer, conv1, reducing buffering requirements with and without downsampling.
2309.08853
Computational Enhancement for Day-Ahead Energy Scheduling with Sparse Neural Network-based Battery Degradation Model
Battery energy storage systems (BESS) play a pivotal role in future power systems as they contribute to achieving the net-zero carbon emission objectives. The BESS systems, predominantly employing lithium-ion batteries, have been extensively deployed. The degradation of these batteries significantly affects system efficiency. Deep neural networks can accurately quantify the battery degradation, however, the model complexity hinders their applications in energy scheduling for various power systems at different scales. To address this issue, this paper presents a novel approach, introducing a linearized sparse neural network-based battery degradation model (SNNBD), specifically tailored to quantify battery degradation based on the scheduled BESS daily operational profiles. By leveraging sparse neural networks, this approach achieves accurate degradation prediction while substantially reducing the complexity associated with a dense neural network model. The computational burden of integrating battery degradation into day-ahead energy scheduling is thus substantially alleviated. Case studies, conducted on both microgrids and bulk power grids, demonstrated the efficiency and suitability of the proposed SNNBD-integrated scheduling model that can effectively address battery degradation concerns while optimizing day-ahead energy scheduling operations.
Cunzhi Zhao, Xingpeng Li
2023-09-16T03:11:05Z
http://arxiv.org/abs/2309.08853v1
Computational Enhancement for Day-Ahead Energy Scheduling with Sparse Neural Network-based Battery Degradation Model ###### Abstract Battery energy storage systems (BESS) play a pivotal role in future power systems as they contribute to achieving the net-zero carbon emission objectives. The BESS systems, predominantly employing lithium-ion batteries, have been extensively deployed. The degradation of these batteries significantly affects system efficiency. Deep neural networks can accurately quantify the battery degradation, however, the model complexity hinders their applications in energy scheduling for various power systems at different scales. To address this issue, this paper presents a novel approach, introducing a linearized sparse neural network-based battery degradation model (SNNBD), specifically tailored to quantify battery degradation based on the scheduled BESS daily operational profiles. By leveraging sparse neural networks, this approach achieves accurate degradation prediction while substantially reducing the complexity associated with a dense neural network model. The computational burden of integrating battery degradation into day-ahead energy scheduling is thus substantially alleviated. Case studies, conducted on both microgrids and bulk power grids, demonstrated the efficiency and suitability of the proposed SNNBD-integrated scheduling model that can effectively address battery degradation concerns while optimizing day-ahead energy scheduling operations. Battery degradation modeling, Bulk power grids, Day-ahead scheduling, Energy management, Machine learning, Microgrids, Optimization, Sparse neural network. ## Nomenclature _Indices:_ \(g\): Generator index. \(s\): Battery energy storage system index. \(k\): Transmission line index. \(l\): Load index. \(wt\): Wind turbine index. \(pv\): Photovoltaic index. _Sets:_ \(T\): Set of time intervals. \(G\): Set of controllable micro generators. \(S\): Set of energy storage systems. \(WT\): Set of wind turbines. \(PV\): Set of PV systems. _Parameters:_ \(c_{g}\): Linear cost for controllable unit \(g\). \(c_{g}^{NL}\): No-load cost for controllable unit \(g\). \(c_{g}^{SU}\): Start-up cost for controllable unit \(g\). \(\Delta T\): Length of a single dispatch interval. \(R_{prcnt}\): Ratio of the backup power to the total power. \(E_{s}^{Max}\): Maximum energy capacity of ESS \(s\). \(E_{s}^{Min}\): Minimum energy capacity of ESS \(s\). \(c_{t}^{Buy}\): Wholesale electricity purchase price in time interval \(t\). \(c_{t}^{Sell}\): Wholesale electricity sell price in time interval \(t\). \(P_{g}^{Max}\): Maximum capacity of generator \(g\). \(P_{g}^{Min}\): Minimum capacity of generator \(g\). \(P_{k}^{Max}\): Maximum thermal limit of transmission line \(k\). \(b_{k}\): Susceptance, inverse of impedance, of branch \(k\). \(P_{tie}^{Max}\): Maximum thermal limit of the tie-line between the main grid and the microgrid. \(P_{g}^{Ramp}\): Ramping limit of diesel generator \(g\). \(P_{s}^{Max}\): Maximum charge/discharge power of BESS \(s\). \(P_{s}^{Min}\): Minimum charge/discharge power of BESS \(s\). \(\eta_{s}^{Disc}\): Discharge efficiency of BESS \(s\). \(\eta_{s}^{Char}\): Charge efficiency of BESS \(s\). _Variables:_ \(U_{t}^{Buy}\): Status of buying power from the main grid in time interval \(t\). \(U_{t}^{Sell}\): Status of selling power to the main grid in time interval \(t\). \(U_{s,t}^{Char}\): Charging status of energy storage system \(s\) in time interval \(t\). It is 1 if charging; otherwise 0. 
\(U_{s,t}^{Disc}\): Discharging status of energy storage system \(s\) in time interval \(t\). It is 1 if discharging; otherwise 0. \(U_{g,t}\): Status of generator \(g\) in time interval \(t\). It is 1 if on; otherwise 0. \(V_{g,t}\): Startup indicator of generator \(g\) in time interval \(t\). It is 1 if unit \(g\) starts up; otherwise 0. \(P_{g,t}\): Output of generator \(g\) in time interval \(t\). \(\theta_{n(k)}^{t}\): Phase angle of sending bus \(n\) of branch \(k\). \(\theta_{m(k)}^{t}\): Phase angle of receiving bus \(m\) of branch \(k\). \(P_{k,t}\): Line flow at transmission line \(k\) in time period \(t\). \(P_{t}^{Buy}\): Amount of power purchased from the main grid in time interval \(t\). \(P_{t}^{Sell}\): Amount of power sold to the main grid in time interval \(t\). \(P_{l,t}\): Demand of load \(l\) in time interval \(t\). \(P_{s,t}^{Disc}\): Discharging power of energy storage system \(s\) at time \(t\). \(P_{s,t}^{Char}\): Charging power of energy storage system \(s\) at time \(t\). ## I Introduction Renewable energy sources (RES) have emerged as a pivotal component of the future power system, due to their environmentally friendly attributes in contrast to conventional fossil fuels. By producing clean, sustainable, and inexhaustible electric energy, RES plays a transformative role in reducing greenhouse gas emissions in the electricity sector and thus mitigating climate change [1]. Nonetheless, the escalating utilization of RES for power generation has introduced inherent stability challenges in the system, primarily due to the unpredictable and intermittent nature of deeply integrated RES [2]-[4]. In response to this challenge, battery energy storage systems (BESS) are being extensively adopted as an effective and practically viable solution [5]. BESS effectively addresses the variability and uncertainty inherent in RES by efficiently storing excess renewable energy during peak periods of renewable generation and releasing it during off-peak periods [6]. This capability not only promotes a seamless integration of renewable energy in the grid but also reinforces the resilience of the system. Furthermore, BESS plays a pivotal role in providing essential ancillary services such as frequency regulation, voltage control, and peak shaving, thereby enhancing the stability and efficiency of the overall power system [7]-[8]. Numerous studies have demonstrated the successful integration of BESS into both bulk power systems and microgrids, particularly those integrating high penetrations of RES. For instance, [9]-[10] demonstrate the microgrid's ability to support the main grid with integrated BESS. Moreover, [11] highlights the significant benefits of incorporating BESS into the power system. Another notable example is the offshore BESS system presented in [12], which effectively reduces carbon emissions. Various other models have been proposed that incorporate BESS to mitigate fluctuations caused by renewable energy sources, as presented in [13]-[16]. In summary, the deployment of BESS is indispensable for the successful integration of renewable energy into the power system. It not only improves the system's stability and efficiency but also paves the way for a cleaner and more sustainable energy future. The primary technology employed in commercially available BESS is the lithium-ion battery [17]. 
However, the chemical characteristics of these batteries make them susceptible to degradation over cycling, ultimately resulting in a negative impact on their performance and efficiency. The degradation of lithium-ion batteries is primarily attributed to the depletion of Li-ions and electrolyte, and the escalation of internal resistance. These changes increase the internal resistance and decrease the overall available energy capacity during daily cycling [19]-[20]. Multiple factors contribute to battery aging, including ambient temperature, charging/discharging rate, state of charge (SOC), state of health (SOH), and depth of discharge (DOD), each playing a pivotal role in the degradation process over battery cycling [21]-[22]. Nevertheless, accurately assessing the internal state of the battery remains a difficult challenge. This complexity is particularly amplified by the escalating significance of batteries functioning as energy storage systems in both microgrid systems and bulk power systems. Thus, accurately quantifying battery degradation is an urgent task, particularly when BESS operates in diverse conditions and environments in the power system. Previous studies have extensively developed battery degradation models. However, these models fail to comprehensively address battery degradation across diverse operational conditions. One widely used approach is the DOD-based battery degradation model. Papers [23]-[27] proposed battery degradation models that depend on the DOD of each individual cycle. While this approach may be effective under consistent operating environments, it falls short when applied to the complex and diverse daily cycling scenarios of BESS. The DOD-based model omits various factors, which can lead to substantial degradation prediction errors. Another frequently employed model is the linear degradation model. As discussed in [28], this model incorporates a linear degradation cost based on power usage or energy consumption. However, similar to the DOD-based model, it can only offer a rough estimation of battery degradation and is not suitable for accurate predictions in daily day-ahead scheduling problems due to its limited accuracy. Therefore, despite the existence of previous research on battery degradation models, none of these approaches adequately addresses the battery aging factors in BESS operations comprehensively. In our previous research work [29], we applied a data-driven approach that utilized a neural network to accurately quantify the battery degradation value. Distinct from the DOD-based and linear degradation models, our neural network-based battery degradation (NNBD) model takes into account various factors such as SOC, DOD, ambient temperature, charge or discharge rate, and SOH for each cycle, resulting in more precise degradation quantification. However, the highly non-linear and non-convex nature of the NNBD model poses challenges when seeking direct solutions to the day-ahead scheduling optimization problem. To address this challenge, we proposed a neural network and optimization decoupled heuristic algorithm in our previous work, which solves the complex neural network-embedded optimization problem iteratively. While the proposed iterative methodology exhibited commendable efficiency on simple problems, its performance faltered when confronted with the complexities of a multi-BESS day-ahead scheduling optimization problem. 
The iterative method failed to converge when applied to a system with multiple integrated BESSs. To overcome the non-linearity of the neural network-based day-ahead scheduling problem, we present a piecewise linear model in this paper. This model enables us to directly solve the optimization problem without relying on the iterative method used in our previous work. The non-linearity within the NNBD model arises from the adoption of the rectified linear unit (ReLU) activation function in the hidden layer neurons. The linearization, based on the big-M method, replaces each ReLU activation function with four linear constraints and an auxiliary binary variable. This allows for the direct solution of the NNBD-integrated day-ahead scheduling problem. However, when multiple BESSs are present in the system, a severe computational challenge arises. As the number of BESS units escalates, the computational complexity rises exponentially due to the corresponding increase in the number of constraints and binary variables. This escalation makes the optimization problem much more challenging to solve, and obtaining a feasible solution may take an impractically long time. Thus, the research gap lies in finding methods to reduce the computational burden associated with neural network-integrated optimization problems. Heuristic methods have been proposed to reduce the complexity of neural network models. For instance, in [30], a low-complexity neural belief propagation decoder is constructed using network pruning techniques. This approach involves removing unimportant edges from the network structure. However, it should be noted that these techniques may inadvertently decrease the training accuracy. Another approach to reducing complexity is the utilization of a sparse feature learning model [31]. This model focuses on learning useful representations and decreasing the complexity of hierarchical deep neural network models. In [32], the effectiveness of sparse convolutional neural networks for single-instruction-multiple-data (SIMD)-like accelerators is demonstrated. This technique helps alleviate the computational burden by applying pruning methods to eliminate unnecessary connections within fully connected structures, as exemplified in [33] for wideband power amplifiers' neural network models. Similarly, pruning techniques are also employed in [34] to compress deep neural networks for SCADA applications. Furthermore, [35]-[36] suggest that modern deep neural networks often suffer from overparameterization, with a large number of learnable parameters. One potential solution to this issue is model compression through the use of sparse connections. These approaches contribute to reducing the complexity and computational burden associated with neural network models, enabling more efficient and streamlined implementations. Since sparsity and pruning techniques have proved effective in reducing the complexity of neural networks in many other applications, they may be a fitting solution for obtaining a low-complexity battery degradation prediction model. Thus, we propose a sparse neural network-based battery degradation model (SNNBD) to quantify the battery degradation in BESS daily operations. SNNBD is designed to be significantly less complex than a traditional fully connected dense neural network model and thereby to reduce the computational burden induced by the ReLU activation functions. 
Achieving this entails a strategic process of pruning during training, whereby a predetermined percentage of neurons is systematically pruned. The sparsity percentage is defined as the ratio of pruned neurons to the total neurons in the neural network. A higher percentage of sparsity may decrease the computation complexity significantly, but the accuracy of the battery degradation prediction may decrease as compared with a less-sparse or dense model. There is thus a trade-off between sparsity and training accuracy. Compared to the NNBD model [29], the proposed SNNBD model contains only a fraction of the NNBD's neurons, which may reduce the computational burden significantly while maintaining accurate battery degradation prediction. The main contributions of this paper are as follows: * _Refined Battery Degradation Modeling_: The proposed SNNBD model significantly refines the existing NNBD model, elevating its proficiency in quantifying battery degradation within the day-ahead scheduling model. * _Computational Augmentation with SNNBD_: To efficiently address the day-ahead scheduling optimization challenge, this paper proposes an innovative SNNBD-assisted computational enhancement model. Capitalizing on the capabilities of the SNNBD model, this enhancement substantially improves the computational efficiency of the optimization process. This, in turn, translates into more responsive and informed decision-making procedures. * _Linearization Technique for Practicality:_ The integration of the SNNBD model into the day-ahead scheduling framework is accompanied by a pertinent linearization technique. This technique simplifies the model's analysis and evaluation, making it more practical and feasible for real-world application scenarios. * _Versatile Performance Evaluation_: This paper showcases the SNNBD model's efficacy across various levels of sparsity, and highlights its adaptability in capturing battery degradation under diverse operational scenarios. The day-ahead scheduling, enriched by the SNNBD model, is rigorously assessed on both expansive bulk power systems and local microgrid systems. These validation trials substantiate the SNNBD model's robustness and effectiveness in real-world power system environments. * _In-depth Economic Insights:_ This paper provides an insightful market analysis. By comparing locational marginal prices (LMPs) across three scenarios: (1) zero-degradation BESS, (2) degraded BESS, and (3) no BESS integration, the economic implications and advantages of incorporating BESS into the power system and capturing its degradation are explored. This analysis provides a comprehension of the economic landscape, enriching decision-making processes within the energy market. The rest of the paper is organized as follows. Section II describes the sparse neural network model and training strategy. Section III presents the traditional day-ahead scheduling model. Section IV presents the SNNBD-integrated day-ahead scheduling model. Section V presents case studies and Section VI concludes the paper. ## II Sparse Neural Network Based Battery Degradation This section outlines the training process for the proposed SNNBD model. We propose two training schemes: (i) Warm Start, which trains the SNNBD based on the pre-trained NNBD model, and (ii) Cold Start, which trains the SNNBD model directly with random initial weights. Both models consist of 5 input neurons, 20 neurons in hidden layer 1, 10 neurons in hidden layer 2, and 1 neuron in the output layer. 
The hidden layers utilize the ReLU as the activation function for each neuron. ### _Warm Start_ The training process for Warm Start is described below. Initially, the weights derived from the trained NNBD model are utilized as the initial weights for the SNNBD model. During the training of the SNNBD model, a pruning mask is generated based on a predetermined sparsity percentage. This mask is then applied to prune the weights after each training epoch to achieve the desired sparsity. The pruning masks are binary matrices that indicate which neurons are pruned (set to zero) in order to achieve sparsity throughout the entire structure. ### _Cold Start_ Cold Start offers a simpler approach compared to Warm Start. Instead of training the neural network based on the fine-tuned NNBD weights, Cold Start directly trains a sparse neural network using random initial weights. In essence, the key difference between Warm Start and Cold Start lies in the choice of initial weights. However, all other training techniques remain consistent between the two options. The performance and efficiency of both Warm Start and Cold Start will be evaluated and compared. ### _NNBD Model_ The training of deep neural networks requires a substantial amount of data. In our study, we utilize MATLAB Simulink [37] to perform battery aging tests by implementing a battery model. By employing a battery cycle generator, we simulate charging and discharging cycles at predefined rates and replicate various battery types, conditions, operating profiles, ambient temperature effects, and internal resistance. These battery aging tests are conducted at different initial SOC and DOD levels, as well as under different ambient temperatures and charging/discharging rates. In order to enhance the training efficiency and accuracy of the model, the battery degradation data collected from Simulink needs to be normalized before being fed into the training process. The original data consists of SOC, DOD, temperature, charging/discharging rate, and SOH; the C rate denotes the speed at which a battery is charged or discharged. SOH data is collected at the end of each charging/discharging cycle when the battery returns to its pre-set SOC value. Each cycle represents the process of discharging a battery from a specific SOC to a lower SOC and then recharging it back to the original SOC. ### _Pruning Method_ Pruning is a technique employed in neural networks to reduce the size and complexity of a model by eliminating unnecessary connections or neurons [38]. The objective of pruning is to enhance the efficiency of the training model, minimize memory requirements, and potentially improve its generalization capabilities. During the pruning process, a pruning mask is applied to identify and eliminate neurons that contribute less to the overall network performance, as depicted in Fig. 1. The pruning mask is regenerated for each epoch, and each regenerated mask is identical, which also helps the robustness of the proposed SNNBD model. These pruning masks enable a compact representation of the sparse neural network. Instead of storing and computing all connection weights, only the active connections are considered, resulting in reduced memory usage and computational demands. 
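As an illustration of this masking scheme, here is a hedged numpy sketch on the paper's 5-20-10-1 layout; the magnitude-based criterion used below to rank neurons is an illustrative assumption, not necessarily the paper's selection rule.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.standard_normal((5, 20))   # input -> hidden layer 1
W2 = rng.standard_normal((20, 10))  # hidden layer 1 -> hidden layer 2

sparsity = 0.5                       # fraction of hidden-1 neurons to prune
scores = np.abs(W1).sum(axis=0)      # rank neurons by incoming weight mass
n_keep = int(round((1 - sparsity) * W1.shape[1]))
mask = np.zeros(W1.shape[1])
mask[np.argsort(scores)[-n_keep:]] = 1.0  # keep the strongest neurons

W1 *= mask             # zero all connections into pruned neurons
W2 *= mask[:, None]    # and all connections out of them
# During retraining, gradients are masked the same way, so pruned neurons
# stay at zero while the surviving connections are fine-tuned.
print(f"active hidden-1 neurons: {int(mask.sum())} / {mask.size}")
```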
By incorporating pruning masks, sparse neural networks strike a balance between model complexity and efficiency, making them a valuable approach for various applications, particularly in scenarios with limited computational resources or deployment constraints. ### _Fine Tuning and Setup_ After the pruning stage, the network undergoes retraining to restore and fine-tune its performance in the next epoch. During retraining, the pruning mask plays a crucial role in removing the pruned connections, effectively fixing their weights at zero. Only the remaining active connections are updated during the retraining process. This allows the network to relearn and redistribute its capacity among the remaining connections, compensating for the pruned ones. During the training phase, the sparse neural network is trained using the mini-batch gradient descent strategy. The performance of the deep neural network is assessed based on its capacity to make precise predictions, which is evaluated using the mean squared error metric as shown in equation (1). The mean squared error is computed by averaging the squared differences between the actual and predicted values across all the training data points, serving as the loss function throughout the training process. \[\textit{Mean Square Error}=\frac{1}{n}\sum_{t=1}^{n}(y_{t}-\bar{y}_{t})^{2} \tag{1}\] ## III Traditional Day-Ahead Scheduling Model This section presents the day-ahead scheduling problem for both the bulk power system and the microgrid system. The bulk power system is inherently more complex than the microgrid system due to the presence of multiple buses and transmission lines. It is important to note that neither of the models presented in this section considers battery degradation. ### _Bulk Power System Energy Scheduling Model_ The day-ahead scheduling problem of the bulk power system is represented by the traditional security constrained unit commitment (SCUC) model. The objective of the traditional SCUC model is to minimize the total operating cost of the system as defined in equation (2). _Objective function_: \[f^{cost}=\sum_{t\in T}\sum_{g\in G}\left(P_{g,t}c_{g}+U_{g,t}c_{g}^{NL}+V_{g,t}c_{g}^{SU}\right) \tag{2}\] _Constraints_: \[\sum_{g\in S_{G}}P_{g,t}+\sum_{wt\in S_{WT}}P_{wt,t}+\sum_{pv\in S_{PV}}P_{pv,t}+\sum_{s\in S_{S}}P_{s,t}^{Disc}+\sum_{k\in S_{n-}}P_{k,t}=\sum_{k\in S_{n+}}P_{k,t}+\sum_{l\in S_{L}}P_{l,t}+\sum_{s\in S_{S}}P_{s,t}^{Char},\forall n,t \tag{3}\] \[P_{g}^{Min}\leq P_{g,t}\leq P_{g}^{Max},\forall g,t \tag{4}\] \[P_{g,t+1}-P_{g,t}\leq\Delta T\cdot P_{g}^{Ramp},\forall g,t \tag{5}\] \[P_{g,t}-P_{g,t+1}\leq\Delta T\cdot P_{g}^{Ramp},\forall g,t \tag{6}\] \[V_{g,t}\geq U_{g,t}-U_{g,t-1},\forall g,t \tag{7}\] \[V_{g,t+1}\leq 1-U_{g,t},\forall g,t \tag{8}\] \[V_{g,t}\leq U_{g,t},\forall g,t \tag{9}\] \[-P_{k}^{Max}\leq P_{k,t}\leq P_{k}^{Max},\forall k,t \tag{10}\] \[P_{k,t}-b_{k}\left(\theta_{n(k)}^{t}-\theta_{m(k)}^{t}\right)=0,\forall k,t \tag{11}\] \[U_{s,t}^{Disc}+U_{s,t}^{Char}\leq 1,\forall s,t \tag{12}\] \[U_{s,t}^{Char}\cdot P_{s}^{Min}\leq P_{s,t}^{Char}\leq U_{s,t}^{Char}\cdot P_{s}^{Max},\forall s,t \tag{13}\] \[U_{s,t}^{Disc}\cdot P_{s}^{Min}\leq P_{s,t}^{Disc}\leq U_{s,t}^{Disc}\cdot P_{s}^{Max},\forall s,t \tag{14}\] Fig. 1: Pruning of a sample neural network model. 
\[E_{s,t}-E_{s,t-1}+\Delta T\big{(}P_{s,t-1}^{Disc}/\eta_{s}^{Disc}-P_{s,t-1}^{Char}\eta_{s}^{Char}\big{)}=0,\forall s,t \tag{15}\] \[E_{s,t=24}=E_{s}^{Initial},\forall s \tag{16}\] \[0\leq E_{s,t}\leq E_{s}^{Max} \tag{17}\] The power balance equation for bus \(n\) incorporates synchronous generators, renewable energy sources, battery energy storage systems, and load demand, as represented by equation (3). Constraints (4)-(6) define the power output limits and ramping limits for each generator. To establish the relationship between a generator's start-up status and its on/off status, equations (7)-(9) are employed. Equation (10) enforces the thermal limit of the transmission lines. Constraint (11) calculates the power flow within the network. For the BESS, the state of charge (SOC) level is determined by the ratio between the current stored energy and the maximum available energy capacity; constraint (12) ensures that charging and discharging cannot occur simultaneously. Constraints (13)-(14) maintain the charging/discharging power of the BESS within specified limits. Equation (15) calculates the stored energy of the BESS for each time interval. Equation (16) mandates that the final SOC level of the BESS matches the initial value. Equation (17) establishes the upper limit for the stored energy of the BESS. ### _Microgrid Energy Scheduling Model_ The traditional microgrid day-ahead scheduling problem shares some constraints of the bulk power system model, excluding the power flow constraints. The objective function for microgrids aims to minimize the total cost, incorporating the cost of traditional generators and the cost of tie-line power exchange, as depicted in (18). _Objective function_: \[f^{cost}=\sum_{t\in T}\Big{[}\sum_{g\in G}\big{(}P_{g,t}c_{g}+U_{g,t}c_{g}^{NL}+V_{g,t}c_{g}^{SU}\big{)}+P_{t}^{Buy}c_{t}^{Buy}-P_{t}^{Sell}c_{t}^{Sell}\Big{]} \tag{18}\] _Constraints:_ The power balance equation for the microgrid is presented in (19). To ensure the appropriate status of power exchange between the microgrid and the main grid, (20) is utilized, specifying the status of being a buyer, seller, or idle. Constraints (21) and (22) limit the thermal limits of the tie-line. Lastly, equation (23) sets up the emergency reserve of the system. The traditional microgrid day-ahead scheduling constraints encompass (4)-(9) and (12)-(23). Unlike the power flow constraints present in the bulk power system model, the microgrid model incorporates tie-line exchange equations within the day-ahead scheduling framework. 
\[P_{t}^{buy}+\sum_{g\in S_{G}}P_{g,t}+\sum_{wt\in S_{WT}}P_{wt,t}+\sum_{pv\in S_{PV}}P_{pv,t}+\sum_{s\in S_{S}}P_{s,t}^{Disc}=P_{t}^{sell}+\sum_{l\in S_{L}}P_{l,t}+\sum_{s\in S_{S}}P_{s,t}^{char} \tag{19}\]

Within the SNNBD-integrated scheduling models, the ReLU activations of the embedded degradation network are reformulated into mixed-integer linear constraints using the big-M method, as shown in (29)-(33):

\[a_{h}^{i}=relu\big{(}x_{h}^{i}\big{)}=max(0,x_{h}^{i}) \tag{29}\]
\[a_{h}^{i}\leq x_{h}^{i}+BigM*(1-\delta_{h}^{i}) \tag{30}\]
\[a_{h}^{i}\geq x_{h}^{i} \tag{31}\]
\[a_{h}^{i}\leq BigM*\delta_{h}^{i} \tag{32}\]
\[a_{h}^{i}\geq 0 \tag{33}\]

The SNNBD-integrated day-ahead energy scheduling models for different systems are shown in Table I.

Table I The proposed day-ahead scheduling models

\begin{tabular}{|c|c|} \hline Systems & Equations \\ \hline Bulk Power Grid & (2)-(17), (24)-(33) \\ \hline Microgrid & (4)-(9), (12)-(33) \\ \hline \end{tabular}

### _Benchmark Model_

To evaluate the performance of the SNNBD model in day-ahead scheduling problems, a benchmark model will be employed. The benchmark model utilizes the NNBD model, which has been introduced in previous studies. The purpose of this benchmark model is to provide a basis for comparison and assess the effectiveness of the SNNBD model. It is important to note that the day-ahead scheduling model remains consistent across both the NNBD and SNNBD models. Both models are applied within the same day-ahead scheduling framework, sharing the same variables and constraints. The distinction between the two models lies in the methodology used to quantify battery degradation. The NNBD model, used in the benchmark model, employs a conventional dense deep neural network to predict battery degradation based on the input variables. In contrast, the SNNBD model, under evaluation, utilizes a sparse neural network architecture. By comparing the performance of the SNNBD model against the benchmark NNBD model, it becomes possible to evaluate the effectiveness of the sparse architecture introduced in the SNNBD model. This comparison aids in determining the advancements made by the SNNBD model in compacting the neural network structure, which in turn can contribute to reducing the computational complexity of the neural-network-integrated day-ahead scheduling problems.

## V Case Studies

### _Training Strategies: Warm Start vs. Cold Start_

The analysis of training outcomes, as illustrated in Table II, clearly shows that Warm Start achieves higher training accuracy than Cold Start, although Cold Start requires fewer epochs to complete the training process.
It is important to mention that the training epochs for Warm Start represent the combined training epochs required by the NNBD model and the SNNBD model, while for Cold Start, it refers to the training epochs of the sparse neural network alone. Training the sparse neural network from random initial weights (Cold Start) proves to be notably challenging when it comes to achieving a level of accuracy equivalent to that of Warm Start. In contrast, Warm Start is designed to take advantage of the pre-trained NNBD model, which serves as a stable starting point. The SNNBD model is then applied to further refine and sparsify the structure of the already trained model. This suggests that the pre-trained NNBD model provides a beneficial foundation for the SNNBD model. The initial training with the NNBD model establishes a solid baseline, and the subsequent application of the SNNBD model enables fine-tuning with sparsity. By leveraging the existing knowledge encoded in the NNBD model, Warm Start demonstrates superior training accuracy compared to Cold Start.

Table II Results between Warm Start and Cold Start

\begin{tabular}{|c|c|c|} \hline Training Options & Accuracy & Epochs \\ \hline Warm Start & 94\% & 550 \\ \hline Cold Start & 77\% & 300 \\ \hline \end{tabular}

### _SNNBD Model Training_

All results presented here are based on Warm Start training, as it outperforms Cold Start. The training results of the proposed SNNBD model are depicted in Fig. 2 and Table III. In Fig. 2, the 0% sparsity represents the original NNBD model without any sparsity applied. The 5%, 10%, and 15% markers, shown in blue, red, and green, respectively, denote distinct error tolerance thresholds. A clear pattern emerges in the interplay between sparsity and prediction accuracy: as the sparsity percentage increases, the accuracy of the battery degradation predictions gradually decreases across all tolerance thresholds. This trend continues until the 70% sparsity mark is reached. When comparing the 0% sparsity model (NNBD model) and the 50% sparsity model, the accuracy stands at 94.5% and 93.7%, respectively, considering a 15% error tolerance. However, the 50% sparsity model significantly reduces the computational complexity compared to the original NNBD model since half of the neurons are pruned to zero, thereby eliminating their connections. This reduction in computational complexity is substantial, as all connections associated with zero-valued neurons are discarded.

### _Microgrid Test Case_

To evaluate the performance of the integrated SNNBD day-ahead scheduling model, a typical grid-connected microgrid with renewable energy sources was employed as a testbed, as demonstrated in Section IV. The microgrid configuration consists of several components, including a traditional diesel generator, wind turbines, residential houses equipped with solar panels, and a lithium-ion BESS with a charging/discharging efficiency of 90%. The parameters for these main components are provided in Table IV. To simulate realistic conditions, the load data for the microgrid is based on the electricity consumption of 1000 residential houses. The ambient temperature and available solar power for a 24-hour period are sourced from the Pecan Street Dataport [39], ensuring accurate representation of real-world environmental conditions. The wholesale electricity price data is obtained from ERCOT [40], allowing the model to consider market dynamics in the day-ahead scheduling decisions.
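To make the coupling between the trained network and the scheduling problem concrete before turning to the computational setup, the following is a minimal Pyomo sketch of the big-M ReLU encoding in (30)-(33) for a single hidden unit. The model object, variable names, and the value of the big-M constant are illustrative assumptions, not the exact implementation used in this work.

```python
import pyomo.environ as pyo

M = 1.0e3  # big-M constant; assumed large enough to bound |x| (illustrative)

m = pyo.ConcreteModel()
m.x = pyo.Var(bounds=(-M, M))         # pre-activation of one hidden unit
m.a = pyo.Var(bounds=(0, M))          # post-activation a = relu(x); (33) via the lower bound
m.delta = pyo.Var(domain=pyo.Binary)  # indicator: 1 if the unit is active (x > 0)

m.c30 = pyo.Constraint(expr=m.a <= m.x + M * (1 - m.delta))  # constraint (30)
m.c31 = pyo.Constraint(expr=m.a >= m.x)                      # constraint (31)
m.c32 = pyo.Constraint(expr=m.a <= M * m.delta)              # constraint (32)
```

One such constraint triple and one binary variable are generated per hidden neuron, which is precisely why pruning neurons to zero shrinks the resulting MILP and shortens the solving times discussed below.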
The optimization problem, formulated as part of the day-ahead scheduling model, was solved on a computer with the following specifications: an AMD Ryzen 7 3800X processor, 32 GB RAM, and an Nvidia GeForce RTX 2070 Super GPU with 8 GB of memory. The Pyomo [41] package, a powerful optimization modeling framework, was utilized to formulate the day-ahead optimization problem, and the high-performance mathematical programming solver Gurobi [42] was employed to efficiently find optimal solutions. By utilizing this realistic microgrid test platform and the computational resources mentioned, the SNNBD-integrated day-ahead scheduling model can accurately capture the dynamics of the renewable energy sources, optimize the scheduling decisions, and assess the performance of the proposed approach.

Table V presents the validation results for different sparsity levels of the SNNBD models in the microgrid day-ahead scheduling problem. The table provides insights into the performance of these models across various metrics. "Pseudo Total" represents the total cost obtained with the SNNBD model, i.e., the objective of the day-ahead scheduling problem, comprising the operating cost and the degradation cost. "BD Cost" represents the equivalent battery degradation cost estimated using the SNNBD model. "Operation" shows the microgrid operating cost, including the cost associated with generators and power trading. "OG BD Cost" indicates the battery degradation cost obtained from the original NNBD model, which does not incorporate sparsity. "Updated Total" represents the sum of the operation cost and the "OG BD Cost". "0% sparsity" serves as the benchmark model, used to evaluate the performance of the other SNNBD models with different sparsity levels. From the information in Table V, it appears that the SNNBD model does not significantly reduce the solving time in the microgrid model. Furthermore, there is no substantial difference observed in the total cost and updated total cost among the various SNNBD models compared to the benchmark model. These findings suggest that the inclusion of sparsity in the SNNBD model does not significantly impact the overall cost in the microgrid day-ahead scheduling problem.

Fig. 3 illustrates the output curves of the BESS under different battery degradation models. The figure shows that the BESS charge/discharge power profiles largely overlap across most time intervals. The only notable difference is observed in the 10% and 20% sparsity models, where the BESS charges at 20 kW during the 7-8 pm period. Overall, these results demonstrate that the SNNBD model is capable of finding solutions for the day-ahead scheduling problem. Based on these findings, it can be concluded that the SNNBD model is reliable and able to identify solutions of the same quality as the non-sparse NNBD model in the microgrid day-ahead scheduling problem. However, it should be noted that the SNNBD model does not yield efficiency improvements here, even with higher sparsity levels. One possible reason for this observation could be the small scale of the microgrid case and the presence of only one BESS, which does not impose a heavy computational burden.

### _Bulk Power System Test Case_

To evaluate the day-ahead scheduling of the bulk power grid model, a typical IEEE 24-bus system (Fig. 4) [43] is employed as a test bed. This system consists of 33 generators and serves as a representative model for large-scale power grids.
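Since every BESS in the test systems is governed by the stored-energy dynamics (15)-(17), a small helper for reconstructing an SOC trajectory from a computed charging/discharging schedule, of the kind used when inspecting the SOC curves later in this section, might look as follows. This is a minimal sketch; the function name, time step, and efficiency defaults are illustrative assumptions.

```python
import numpy as np

def soc_trajectory(p_char, p_disc, e0, e_max, dt=1.0, eta_c=0.9, eta_d=0.9):
    """Reconstruct the stored energy of a BESS from a computed schedule,
    following the energy balance (15); SOC is the ratio E / E_max."""
    e = [e0]
    for pc, pd in zip(p_char, p_disc):
        # (15) rearranged for E_t: charging adds eta_c*p, discharging removes p/eta_d
        e.append(e[-1] + dt * (pc * eta_c - pd / eta_d))
    e = np.clip(np.array(e), 0.0, e_max)  # stored-energy bound (17)
    return e / e_max
```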
In addition to the existing infrastructure, the test bed incorporates several BESSs and wind farms to evaluate their impact on the day-ahead scheduling. Fig. 4 illustrates the layout of the IEEE 24-bus system, showcasing the interconnected buses and the corresponding transmission lines. The objective of this evaluation is to optimize the scheduling decisions considering the presence of multiple BESSs and wind farms within the larger power grid system. Similar to the microgrid case discussed earlier, the day-ahead scheduling problem for the bulk power grid is solved using the same software packages. The integration of the BESSs and wind farms within the IEEE 24-bus system enables the evaluation of their impact on optimizing power generation, transmission, and scheduling decisions at a larger scale.

Fig. 3: BESS output in a microgrid system.

Table VI provides the parameters of the BESSs installed at different buses within the IEEE 24-bus system. These parameters characterize the specifications of each BESS, including their energy capacities and power output capabilities. Notably, BESS number four possesses the largest energy capacity and the highest output power among the five BESSs considered in the system. The minimum power for charging or discharging is set to zero. Additionally, the IEEE 24-bus system incorporates five wind farms, each comprising a varying number of wind turbines. The capacity of each wind turbine is fixed at 200 kW. To obtain suitable wind profiles for this study, the wind profile data sourced from the Pecan Street Dataport [39] are appropriately scaled. The inclusion of these parameters and data in the evaluation allows for a comprehensive analysis of the day-ahead scheduling problem within the IEEE 24-bus system.

The outcomes for the IEEE 24-bus system with different sparsity levels of the SNNBD model are presented in Table VII. Note that all tabulated results are based on a relative MipGap of 0.001, which measures the optimality gap. The table clearly demonstrates that the solving time decreases exponentially as the sparsity level of the SNNBD model increases. The results based on the 60% and 70% sparsity levels have not been included, as their BESS output curves deviate significantly from the solutions based on lower-sparsity models. The results of the 0% and 10% sparsity models are not listed since these cases cannot be solved within the given time frame, whereas the 50% sparsity model requires only 455 seconds for solution. Similarly, for the 20% sparsity model, the day-ahead scheduling problem cannot be solved to optimality within 20 hours, so a non-optimal benchmark result is reported. We also found that the 50% sparsity model leads to the minimum total cost. However, the total cost does not change significantly despite the variation in solving time. This indicates that while the solving time is reduced to an acceptable level with high-sparsity SNNBD models, the overall cost remains relatively stable. By analyzing these results, it becomes evident that increasing the sparsity level in the SNNBD model significantly reduces solving time without significantly impacting the overall cost. However, it is crucial to validate the BESS power output pattern when assessing the performance of the SNNBD model. Examining the BESS power output pattern ensures that the model captures the desired behavior and produces outputs consistent with expectations. Figs.
5 and 6 display the SOC curves of BESS #4 and #5 under different sparsity levels of the proposed SNNBD model. These two BESS units are particularly active among the five units considered in the testbed. For benchmarking purposes, the SOC curve is also plotted for the case in which battery degradation is not considered in the day-ahead scheduling problem. The SOC curve provides insights into the utilization of the BESS, with greater fluctuation indicating more active use and flatter curves indicating less active use. When degradation is not considered, the BESS units are utilized to their maximum capacity since there is no equivalent degradation cost factored into the optimization problem. We found that both BESS #4 and #5 are scheduled to discharge to 0% SOC twice when degradation is not considered. In Figure 5, the output curves of BESS #4 with SNNBD models shrink significantly compared to the case where degradation is not considered. However, the output patterns of BESS #4 with different sparsity levels of the SNNBD model exhibit a similar pattern and overlap for most time periods, demonstrating the effectiveness of the proposed SNNBD model. The output patterns of BESS #5 in Figure 6 appear different from those of BESS #4. However, similar to BESS #4, BESS #5 discharges significantly less when degradation is considered. Table III provides insights into the tradeoff between sparsity and accuracy. A higher sparsity level leads to lower accuracy, while a lower sparsity level results in longer solving times for the day-ahead scheduling. Thus, a balance must be struck between sparsity and accuracy. Overall, the 50% sparsity model performs the best since its SOC curve closely resembles those of the 20%, 30%, and 40% sparsity models while having the lowest total cost.

Fig. 4: Illustration of the modified IEEE 24-bus system [43].

Fig. 5: SOC curves of BESS #4 in the 24-bus bulk power system.

\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline BESS & Bus & Energy Capacity & Power Rating & Initial \\ _No._ & _No._ & (MWh) & (MW) & SOC \\ \hline 1 & 21 & 50 & 20 & 40\% \\ \hline 2 & 22 & 10 & 4 & 40\% \\ \hline 3 & 7 & 10 & 4 & 40\% \\ \hline 4 & 14 & 200 & 100 & 40\% \\ \hline 5 & 9 & 30 & 10 & 50\% \\ \hline \end{tabular} \end{table} TABLE VI: Settings of BESSs

### _Market Analysis_

Fig. 7 presents sample results demonstrating the influence of BESS integration on the locational marginal price (LMP) in the bulk power system. Our exploration encompassed three models: "no BESS", "BESS considered with degradation", and "BESS considered without degradation". A comparison was made with the "no BESS" model to assess the system's ability to reduce line congestion when a BESS is integrated. The LMP results in Fig. 7 specifically focus on bus 14, the location of the largest BESS unit, BESS #4. From the figure, it is evident that during most time periods, such as 1 am to 5 am and 12 pm to 6 pm, the LMP values are consistent across the different cases, indicating that there is no line congestion at bus 14 during those periods. However, from 3 pm to 6 pm, a surge in LMP is evident, which suggests that the line is loaded more heavily than in the previous hours but is not yet congested. During the normal daily peak load periods of 7 am to 9 am and 7 pm to 8 pm, the LMP values differ among the proposed models. In comparison to the "no BESS" model, the models with integrated BESS units can significantly reduce the LMP, indicating that the BESS can alleviate line congestion.
Note that when battery degradation is not considered, the BESS exhibits a higher capability to mitigate line congestion, leading to the lowest LMP during those congested hours. This analysis of LMP with the integration of a BESS system provides valuable insights for both grid operators and BESS investors, as BESS installations play a crucial role in addressing line congestion within the grid.

### _Sensitivity Analysis of Relative Optimization Gaps_

A sensitivity test was conducted to examine the impact of different relative optimality gaps. Fig. 8 displays the SOC curves of BESS #4 based on the optimal solutions obtained using different relative MipGap values. The results presented in Fig. 8 are based on the 50% sparsity SNNBD model. The solving times for MipGap values of 0.01, 0.005, and 0.001 are 339 seconds, 380 seconds, and 450 seconds, respectively. The solving time increases as the MipGap value decreases because a more accurate optimal solution is sought. However, upon analyzing the SOC curves depicted in Fig. 8, it becomes evident that they mostly overlap, indicating minimal differences between the solutions obtained under different MipGap values. Consequently, for the 50% sparsity model, a higher MipGap value is preferred as it reduces the computation time while maintaining a comparable solution quality.

## VI Conclusions

This paper introduces a novel sparse neural network-based battery degradation model that can accurately quantify battery degradation in BESSs and largely address the computational challenges associated with traditional dense neural networks when they are incorporated into optimization models. By leveraging the sparsification technique, the proposed SNNBD achieves accurate degradation prediction while significantly reducing the computational complexity. It has been shown that the accuracy of the SNNBD model does not decrease significantly until the sparsity is increased to 80%; it obtains 91% accuracy at 60% sparsity, compared to 95% when no sparsity is implemented. The results also show that our proposed SNNBD model can significantly reduce the computational burden, making neural-network-integrated day-ahead energy scheduling directly solvable, even for complicated multi-BESS systems. Furthermore, the results have been shown to be accurate and feasible with high-sparsity SNNBD models in both the microgrid and the bulk power system. Choosing different sparsity levels for the proposed SNNBD model provides flexibility for the grid operator, as it involves a tradeoff between accuracy and solving time. Overall, the SNNBD model opens up new possibilities for efficiently addressing battery degradation in day-ahead energy scheduling for multi-BESS systems.
2306.17648
Enhancing training of physics-informed neural networks using domain-decomposition based preconditioning strategies
We propose to enhance the training of physics-informed neural networks (PINNs). To this aim, we introduce nonlinear additive and multiplicative preconditioning strategies for the widely used L-BFGS optimizer. The nonlinear preconditioners are constructed by utilizing the Schwarz domain-decomposition framework, where the parameters of the network are decomposed in a layer-wise manner. Through a series of numerical experiments, we demonstrate that both additive and multiplicative preconditioners significantly improve the convergence of the standard L-BFGS optimizer, while providing more accurate solutions of the underlying partial differential equations. Moreover, the additive preconditioner is inherently parallel, thus giving rise to a novel approach to model parallelism.
Alena Kopaničáková, Hardik Kothari, George Em Karniadakis, Rolf Krause
2023-06-30T13:35:09Z
http://arxiv.org/abs/2306.17648v2
# Enhancing training of physics-informed neural networks using domain-decomposition based preconditioning strategies

###### Abstract

We propose to enhance the training of physics-informed neural networks (PINNs). To this aim, we introduce nonlinear additive and multiplicative preconditioning strategies for the widely used L-BFGS optimizer. The nonlinear preconditioners are constructed by utilizing the Schwarz domain-decomposition framework, where the parameters of the network are decomposed in a layer-wise manner. Through a series of numerical experiments, we demonstrate that both additive and multiplicative preconditioners significantly improve the convergence of the standard L-BFGS optimizer, while providing more accurate solutions of the underlying partial differential equations. Moreover, the additive preconditioner is inherently parallel, thus giving rise to a novel approach to model parallelism.

scientific machine learning, nonlinear preconditioning, Schwarz methods, domain-decomposition

pacs: 90C30, 90C26, 90C06, 65M55, 68T07

## 1 Introduction

In recent years, deep learning approaches for solving partial differential equations (PDEs) have gained popularity due to their simplicity, mesh-free nature, and efficiency in solving inverse and high-dimensional problems [34, 14]. The notable deep learning approaches in the context of scientific machine learning include physics-informed neural networks (PINNs) [64], the deep Galerkin method [69], and the deep Ritz method [15]. The idea behind these methods is to utilize deep neural networks (DNNs) to approximate solutions of the PDEs by minimizing a loss function, which incorporates the physical laws and domain knowledge. More specifically, in the context of PINNs, which we consider in this work, the loss function incorporates a PDE residual, boundary/initial conditions, and sometimes also observational data. To approximate a solution of a PDE using PINNs, the optimal parameters of the DNN have to be found during training. This is typically carried out using the Adam [35] and/or L-BFGS [46] optimizers. However, it has been shown that the performance of these optimizers deteriorates for highly ill-conditioned and stiff problems [73]. Several algorithmic enhancements have therefore been proposed to alleviate such difficulties, for instance, adaptive weighting strategies [74], an empirical learning-rate annealing scheme [73], multiscale architectures [12, 23], ensemble learning [16] based on gradient boosting [6], adaptive activation functions [32, 31], etc. Furthermore, novel optimizers have also been developed, for instance, Gauss-Newton methods [65, 2], Levenberg-Marquardt optimizers [58, 76, 18], variable projection methods [60, 21, 61], block coordinate methods [7], etc.

In this work, our goal is to enhance the convergence rate of the optimizers used for the training of PINNs by developing novel nonlinear preconditioning strategies. In the field of scientific computing, such nonlinear preconditioning strategies are often used to accelerate the convergence of nonlinear iterative methods [3]. Specifically, left preconditioners [5, 47] enhance the conditioning of nonlinear problems by rebalancing nonlinearities or by transforming the basis of the solution space. In contrast, right preconditioners [51, 36] provide an improved initial guess for a given iterate. Both left and right preconditioners can be constructed using various approaches.
Prominent approaches typically rely on the decomposition of the solution space into subspaces, such as field-split [39, 47], domain-decomposition (DD) [5, 42, 43], or multilevel decomposition [77, 41, 17, 22, 40]. Among these approaches, the DD approach is of particular interest due to its parallelization capabilities. As a consequence, preconditioners constructed using the DD approach not only improve the convergence rate of a given iterative method but also allow for a reduction of the overall computation time, as the workload can be distributed between larger amounts of computational resources. In the field of machine learning, the parallelization of the training algorithms has attracted a lot of attention recently, see [8, 57, 1] for a comprehensive overview. The developed approaches can be broadly classified into two main groups: data parallel and model parallel. The data parallel approach has gained significant popularity due to its architecture-agnostic nature. In contrast, model parallelism is more intricate as it involves decomposing the network across different compute nodes. As a consequence, this approach has to be typically tailored to a specific network architecture [1]. For instance, the TorchBraid framework [26] enables layer-parallel training of ODE-based networks by utilizing a multigrid-in-time approach. In [25, 24], a model parallel approach for convolutional networks is proposed, where the network's width is partitioned into subnetworks, which can be trained in parallel. The parameters obtained from training these subnetworks are then used to initialize the global network, which is further trained sequentially. Similarly, in [37], a DD-based network architecture is proposed for parallel training of image recognition tasks. In the context of PINNs, DD-based approaches have also been utilized to enhance training efficiency and scalability to larger computational domains and multi-scale problems. These approaches leverage the spatiotemporal decomposition of the computational domain. For instance, conservative PINNs (cPINNs) [33] utilize a non-overlapping Schwarz-based DD approach for conservation laws. The extended PINNs (XPINNs) expand the cPINN methodology to generic PDEs and arbitrary space-time domains. The idea behind XPINNs [30, 33, 68, 29] is to associate each subdomain with a sub-PINN, which can be trained independently of each other, i.e., in parallel. After the training of the subdomains is concluded, the sub-PINNs are stitched together by enforcing continuity on the interfaces and by averaging solutions along neighboring subdomain interfaces. The XPINN framework was later expanded in several ways. For instance, the adaptive PINNs (APPINs) [28], GatedPINNs [70], or extensions incorporating extreme learning machines [13] have been proposed. An alternative approach that utilizes the variational principle and overlapping Schwarz-based DD approach has also been explored, see [45]. This method was further extended by introducing coarse space acceleration to improve convergence with respect to an increasing number of subdomains [55]. Building upon the variational principles and the deep Ritz method, the D3M [44] also employs the overlapping Schwarz method. Another approach, called finite basis PINNs (FBPINN) [59, 11], represents the solution of a PDE as a sum of basis functions with compact support defined over small, overlapping subdomains. These basis functions are learned in parallel using neural networks. 
Finally, we mention the PFNN-2 method [67], which enhances the training of PINNs by utilizing overlapping DD together with the penalty-free network approach (PFNN) [66]. In this work, we aim to enhance the training of the PINNs by developing novel nonlinear additive and multiplicative preconditioning strategies. More specifically, we utilize the right preconditioning approach to accelerate the convergence of the widely used L-BFGS method. The devised preconditioners are constructed by decomposing the network parameters in a layer-wise manner. Methodologically, our approach differs from the aforementioned DD approaches for PINNs that utilize the decomposition of the computational domain. As a consequence, the proposed preconditioners are agnostic to the physics-based priors and therefore have the potential to be applied to other machine-learning tasks as well. Using the proposed DD approach, we demonstrate that both additive and multiplicative preconditioners can significantly improve the convergence rate of the standard L-BFGS optimizer. Moreover, utilizing the additive preconditioner naturally enables model parallelism, which allows for additional acceleration of the training if a larger amount of computational resources is available.

This paper is organized as follows. Section 2 provides a brief overview of PINNs. In section 3, we propose nonlinear additive and multiplicative preconditioners using a layer-wise decomposition of the network. Section 4 discusses implementation details, while section 5 introduces a set of benchmark problems used for numerical experiments. Section 6 demonstrates the numerical performance of the proposed preconditioning strategies. Finally, section 7 summarizes the presented work.

## 2 Physics-informed neural networks

We consider differential equations of a generic form and pose the problem as follows. Find \(u:\Omega\to\mathbb{R}\), such that

\[\mathcal{P}(u(\mathbf{x})) =f(\mathbf{x}), \mathbf{x}\in\Omega, \tag{1}\]
\[\mathcal{B}^{k}(u(\mathbf{x})) =g^{k}(\mathbf{x}), \mathbf{x}\in\Gamma^{k}\subseteq\partial\Omega, \text{for }k=1,2,\ldots,n_{\Gamma}.\]

The symbol \(\mathcal{P}\) represents a nonlinear differential operator, while \(\{\mathcal{B}^{k}\}_{k=1}^{n_{\Gamma}}\) denotes a set of boundary condition operators. In the case of time-dependent problems, we treat time \(t\) as an additional component of the vector \(\mathbf{x}\in\mathbb{R}^{d}\), where \(d\) corresponds to the dimension of the problem. Therefore, the computational domain \(\Omega\) encompasses both spatial and temporal dimensions, and the initial conditions can be considered as a specific type of boundary conditions within the spatiotemporal domain.

### DNN approximation of solution

Following [64], our goal is to approximate a solution of (1) using PINNs, i.e., \(u(\mathbf{x})\approx u_{\texttt{NN}}(\mathbf{\theta},\mathbf{x})\), where the nonlinear mapping \(u_{\texttt{NN}}:\mathbb{R}^{n}\times\mathbb{R}^{d}\to\mathbb{R}\) is parametrized by the trainable parameters \(\mathbf{\theta}\in\mathbb{R}^{n}\). Typically, the exact form of \(u_{\texttt{NN}}\) is induced by the network architecture.
Here, we consider a network that is constructed as a composition of an input layer, \(L-1\) hidden layers, and an output layer, thus as

\[\begin{array}{llll}\text{input layer:}&\mathbf{y}_{0}&=\mathbf{W}_{0}\mathbf{x},\\ \text{hidden layers:}&\mathbf{y}_{l}&=\mathcal{N}(\mathbf{y}_{l-1}),\\ \text{output layer:}&\mathbf{y}_{L}&=\mathbf{W}_{L}\mathbf{y}_{L-1}+\mathbf{b}_{L}.\end{array} \tag{2}\]

The input layer maps the input features \(\mathbf{x}\in\Omega\subset\mathbb{R}^{d}\) into the dimension of the hidden layers using the weights \(\mathbf{W}_{0}\in\mathbb{R}^{nh\times d}\). The \(l^{th}\) hidden layer nonlinearly transforms the output of the \((l-1)^{th}\) layer \(\mathbf{y}_{l-1}\) into \(\mathbf{y}_{l}\) by means of the nonlinear operator \(\mathcal{N}:\mathbb{R}^{nh}\to\mathbb{R}^{nh}\). The operator \(\mathcal{N}\) can take on multiple forms. We utilize a single-layer perceptron with a skip connection, given as

\[\mathcal{N}(\mathbf{y}_{l-1}):=\mathbf{y}_{l-1}+\sigma_{l}(\mathbf{W}_{l}\mathbf{y}_{l-1}+\mathbf{b}_{l}). \tag{3}\]

The parameters associated with layer \(l\) can be collectively given as \(\mathbf{\theta}_{l}:=(\text{flat}(\mathbf{W}_{l}),\text{flat}(\mathbf{b}_{l}))\), where \(\mathbf{W}_{l}\in\mathbb{R}^{nh\times nh}\) and \(\mathbf{b}_{l}\in\mathbb{R}^{nh}\) denote weights and biases, respectively. The function \(\text{flat}(\cdot)\) represents a flattening operator and \(\sigma_{l}:\mathbb{R}^{nh}\to\mathbb{R}^{nh}\) denotes an activation function. Finally, the output of the last hidden layer \(\mathbf{y}_{L-1}\) is linearly transformed using \(\mathbf{W}_{L}\) and \(\mathbf{b}_{L}\), giving rise to the network's output \(\mathbf{y}_{L}\).

### Training data and loss functional

In order to train the network such that it approximates a solution of (1), we construct a dataset \(\mathcal{D}_{\text{int}}=\{\mathbf{x}_{j}\}_{j=1}^{n_{\text{int}}}\) of \(n_{\text{int}}\) collocation points, which lie in the interior of the computational domain \(\Omega\). These points are used to enforce the physics captured by the PDE. In addition, we consider datasets \(\{\mathcal{D}_{\text{bc}}^{k}\}_{k=1}^{n_{\Gamma}}\), where each \(\mathcal{D}_{\text{bc}}^{k}=\{(\mathbf{x}_{j}^{k},\mathbf{g}_{j}^{k})\}_{j=1}^{n_{\text{bc}}^{k}}\) contains \(n_{\text{bc}}^{k}\) points, which we use to enforce the boundary conditions on \(\Gamma^{k}\). Using \(\mathcal{D}_{\text{int}}\) and \(\{\mathcal{D}_{\text{bc}}^{k}\}_{k=1}^{n_{\Gamma}}\), the optimal parameters of the network are found by solving the following minimization problem:

\[\mathbf{\theta}^{*}:=\underset{\mathbf{\theta}\in\mathbb{R}^{n}}{\arg\min}\ \underbrace{\tilde{\mathcal{L}}_{\text{int}}(\mathbf{\theta})}_{\text{interior loss}}+\underbrace{\tilde{\mathcal{L}}_{\text{bc}}(\mathbf{\theta})}_{\text{boundary loss}}.
\tag{4}\]

Here, the interior and the boundary losses are given as

\[\tilde{\mathcal{L}}_{\text{int}}(\mathbf{\theta}) :=\frac{w_{\text{int}}}{|\mathcal{D}_{\text{int}}|}\sum_{\mathbf{x}_{j}\in\mathcal{D}_{\text{int}}}\|\mathcal{P}(\tilde{u}_{\text{NN}}(\mathbf{\theta},\mathbf{x}_{j}))-f(\mathbf{x}_{j})\|^{2}, \tag{5}\]
\[\tilde{\mathcal{L}}_{\text{bc}}(\mathbf{\theta}) :=\sum_{k=1}^{n_{\Gamma}}\Bigg{(}\frac{w_{\text{bc}}}{|\mathcal{D}_{\text{bc}}^{k}|}\sum_{(\mathbf{x}_{j}^{k},\mathbf{g}_{j}^{k})\in\mathcal{D}_{\text{bc}}^{k}}\|\tilde{u}_{\text{NN}}(\mathbf{\theta},\mathbf{x}_{j}^{k})-g^{k}(\mathbf{x}_{j}^{k})\|^{2}\Bigg{)}.\]

Thus, the interior loss \(\tilde{\mathcal{L}}_{\text{int}}\) is expressed as the mean PDE residual obtained by Monte Carlo integration, i.e., by averaging over the residuals evaluated at collocation points sampled in the interior of the domain. Note that an evaluation of the PDE residual typically requires knowledge of the partial derivatives of the network output \(\tilde{u}_{\text{NN}}(\mathbf{\theta},\mathbf{x}_{j})\) with respect to the input \(\mathbf{x}_{j}\). These derivatives can be efficiently obtained using advanced automatic differentiation techniques. The second term, the boundary loss \(\tilde{\mathcal{L}}_{\text{bc}}\), ensures that the boundary conditions are satisfied at a set of points sampled along the boundaries \(\{\Gamma^{k}\}_{k=1}^{n_{\Gamma}}\). The weights \(w_{\text{int}}\) and \(w_{\text{bc}}\) in (5) are used to balance the interior and boundary loss terms. Their optimal value can be determined either during the hyper-parameter tuning or directly during the training by employing adaptive strategies, such as the one proposed in [54]. In this work, we overcome the difficulty of finding suitable values of \(w_{\text{int}}\) and \(w_{\text{bc}}\) by imposing the boundary conditions in a penalty-free manner, thus eliminating the boundary loss term [50]. To this aim, the output of the network \(\tilde{u}_{\text{NN}}(\mathbf{\theta},\mathbf{x})\) is modified using the length factor function \(\ell^{k}:\Gamma^{k}\subset\mathbb{R}^{d}\to\mathbb{R}\) as follows

\[u_{\text{NN}}(\mathbf{\theta},\mathbf{x}):=\sum_{k=1}^{n_{\Gamma}}g^{k}(\mathbf{x})+\frac{\tilde{\ell}(\mathbf{x})}{\max_{\mathbf{x}\in\Omega}\tilde{\ell}(\mathbf{x})}\tilde{u}_{\text{NN}}(\mathbf{\theta},\mathbf{x}),\ \text{where}\ \tilde{\ell}(\mathbf{x})=\prod_{k=1}^{n_{\Gamma}}(1-(1-\ell^{k}(\mathbf{x}))). \tag{6}\]

The function \(\ell^{k}\) is constructed such that

\[\ell^{k}(\mathbf{x}) =0, \mathbf{x}\in\Gamma^{k} \tag{7}\]
\[\ell^{k}(\mathbf{x}) =1, \mathbf{x}\in\Gamma^{j},\quad j\in\{1,2,\ldots,n_{\Gamma}\},\quad\text{for }k\neq j\]
\[\ell^{k}(\mathbf{x}) \in(0,1) \text{otherwise.}\]

The details regarding the construction of \(\ell^{k}\) for complex geometries can be found in [71]. Using (6), the minimization problem (4) can now be reformulated as

\[\mathbf{\theta}^{*}:=\operatorname*{arg\,min}_{\mathbf{\theta}\in\mathbb{R}^{n}}\ \mathcal{L}(\mathbf{\theta}):=\frac{1}{|\mathcal{D}_{\texttt{int}}|}\sum_{\mathbf{x}_{j}\in\mathcal{D}_{\texttt{int}}}\|\mathcal{P}(u_{\texttt{NN}}(\mathbf{\theta},\mathbf{x}_{j}))-f(\mathbf{x}_{j})\|^{2}. \tag{8}\]

After training, the optimal parameters \(\mathbf{\theta}^{*}\) can be used to infer an approximate solution \(u_{\texttt{NN}}\) of (1). The quality of this solution can be assessed by means of the error \(\mathcal{E}(u_{\texttt{NN}},u^{*}):=\|u_{\texttt{NN}}-u^{*}\|\), where \(u^{*}\) denotes an exact solution.
In general, the error \(\mathcal{E}\) consists of three components: the discretization error, the network approximation error, and the optimization error [56]. More specifically, the discretization error is determined by the number and location of the collocation points. The approximation error of the network is associated with the representation capacity of the DNN, i.e., it directly depends on the network architecture (number of layers, network width). The optimization error quantifies the quality of the solution provided by the optimizer. Our goal, in this work, is to reduce the optimization error by developing enhanced training algorithms.

## 3 Nonlinearly preconditioned training using layer-wise decomposition of network

The minimization problem (8) is traditionally solved using the Adam and/or L-BFGS optimizers. As has been pointed out in [73], the convergence of these optimizers deteriorates with increasing problem stiffness, which is caused by the multiscale and multirate nature of the considered problems. We aim to overcome this difficulty by proposing a novel nonlinear preconditioning strategy. Let us recall that a minimizer of (8) has to satisfy the following first-order criticality condition

\[\nabla\mathcal{L}(\mathbf{\theta})=0. \tag{9}\]

As solving (9) is computationally demanding, we instead construct and solve an alternative, nonlinearly preconditioned system of equations, given as

\[\mathcal{F}(\mathbf{\theta})=0. \tag{10}\]

The preconditioned system of equations (10) shall be constructed such that it is easier to solve than (9), but such that it has the same solution as the original nonlinear system. Thus, the preconditioned problem should have more balanced nonlinearities, and the numerical computations involved in solving (10) should be computationally cheaper.

### Network decomposition

In this work, we propose to construct the nonlinearly preconditioned system of equations (10) by decomposing the parameter space of the network. In particular, subnetworks are constructed by layer-wise decomposition of the network into \(N_{sd}\) disjoint groups, often called subdomains. Thus, each layer is assigned a unique subnetwork identifier, and the parameters of the network are also split into \(N_{sd}\) disjoint groups, i.e.,

\[\mathbf{\theta}=[\mathbf{\theta}_{1},\ldots,\mathbf{\theta}_{s},\ldots,\mathbf{\theta}_{N_{sd}}]^{\top}, \tag{11}\]

where \(\mathbf{\theta}_{s}\in\mathbb{R}^{n_{s}}\) denotes the parameters associated with the \(s^{th}\) subnetwork. Please refer to Figure 1 for an illustration of the layer-wise decomposition. In passing, we note that different types of network decomposition could potentially be utilized, e.g., intra-layer decomposition with or without overlap.

Transfer of the data between the subnetworks and the global network is performed using two transfer operators. The restriction operator \(\mathbf{R}_{s}:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n_{s}}\) extracts the parameters of the network associated with the \(s^{th}\) subnetwork, i.e.,

\[\mathbf{\theta}_{s}=\mathbf{R}_{s}\mathbf{\theta},\qquad\text{for }s=1,2,\ldots,N_{sd}. \tag{12}\]

In contrast, the extension operator \(\mathbf{E}_{s}:\mathbb{R}^{n_{s}}\rightarrow\mathbb{R}^{n}\) is used to extend the quantities from the subnetworks to the global network, i.e.,

\[\mathbf{\theta}=\sum_{s=1}^{N_{sd}}\mathbf{E}_{s}\mathbf{\theta}_{s}.
\tag{13}\]

Using the proposed network decomposition, we now define the local minimization problem associated with a given subnetwork \(s\) as

\[\mathbf{\theta}_{s}^{*}=\operatorname*{arg\,min}_{\mathbf{\theta}_{s}}\mathcal{L}(\mathbf{\theta}_{1},\ldots,\mathbf{\theta}_{s},\ldots,\mathbf{\theta}_{N_{sd}}). \tag{14}\]

Solving (14) corresponds to minimizing the loss functional \(\mathcal{L}\) with respect to the parameters \(\mathbf{\theta}_{s}\), while all other parameters of the network are kept fixed. This minimization process can be carried out by any optimizer of choice, such as Adam [35], L-BFGS [46], or Newton [10]. For the purpose of this work, we employ the L-BFGS method, see also section 4. Note that this process loosely resembles transfer-learning approaches [75], where only a particular subset of the network parameters is being (re-)trained.

### Right-preconditioned training optimizers

In this work, we construct a nonlinearly preconditioned system of equations as follows

\[\mathcal{F}(\mathbf{\theta}):=\nabla\mathcal{L}(G(\mathbf{\theta})), \tag{15}\]

where the operator \(G:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\) denotes a right preconditioner. Using the network-decomposition framework introduced in subsection 3.1, we define the operator \(G\) as an iterative method of the following form:

\[G(\mathbf{\theta}^{(k)})=\mathbf{\theta}^{(k)}+\alpha^{(k)}\sum_{s=1}^{N_{sd}}\mathbf{E}_{s}(\mathbf{\theta}_{s}^{*}-\mathbf{R}_{s}\mathbf{\theta}^{(k)}). \tag{16}\]

Figure 1: An example of the layer-wise decomposition of the network.

Here, the symbol \(\mathbf{\theta}_{s}^{*}\) represents a solution of the local minimization problem (14), associated with the \(s^{th}\) subnetwork, while \(\alpha^{(k)}\) denotes a step-size. The subnetwork solutions \(\{\mathbf{\theta}_{s}^{*}\}_{s=1}^{N_{sd}}\) can be obtained independently from each other, i.e., in parallel, which gives rise to an additive preconditioner \(G\). Alternatively, one might obtain \(\{\mathbf{\theta}_{s}^{*}\}_{s=1}^{N_{sd}}\) in a multiplicative fashion. In this case, the local minimization problems are solved sequentially and the already acquired local solutions \(\{\mathbf{\theta}_{z}^{*}\}_{z=1}^{s-1}\) are taken into account while performing the local training for the \(s^{th}\) subnetwork.

As we can see from (10), \(\mathcal{F}\) is a composite operator. Thus, for a given iterate \(\mathbf{\theta}^{(k)}\), the preconditioner \(G\) is used to obtain an improved iterate, denoted by \(\mathbf{\theta}^{(k+\nicefrac{{1}}{{2}})}=G(\mathbf{\theta}^{(k)})\). The improved iterate \(\mathbf{\theta}^{(k+\nicefrac{{1}}{{2}})}\) is then used as an initial guess for the next global optimization step, giving rise to the following update rule:

\[\mathbf{\theta}^{(k+1)}=\mathbf{\theta}^{(k+\nicefrac{{1}}{{2}})}+\alpha^{(k+\nicefrac{{1}}{{2}})}\mathbf{p}^{(k+\nicefrac{{1}}{{2}})}, \tag{17}\]

where \(\mathbf{p}^{(k+\nicefrac{{1}}{{2}})}\) is obtained by a global optimizer and \(\alpha^{(k+\nicefrac{{1}}{{2}})}\) denotes an appropriately selected global step-size.

#### 3.2.1 Right-preconditioned L-BFGS

There are many possible choices of a global optimizer, e.g., Adam [35] or AdaGrad [52]. Here, we utilize the BFGS method, as it is widely employed for the training of PINNs. The BFGS method belongs to the family of quasi-Newton (QN) methods, which have gained popularity due to the fact that they do not require explicit knowledge of the Hessian.
Instead, a Hessian approximation \(\mathbf{B}^{(k+1)}\) is constructed such that the following secant equation is satisfied:

\[\mathbf{B}^{(k+1)}\mathbf{s}^{(k)}=\mathbf{y}^{(k)}, \tag{18}\]

where \(\mathbf{s}^{(k)}=\mathbf{\theta}^{(k+1)}-\mathbf{\theta}^{(k+\nicefrac{{1}}{{2}})}\) and \(\mathbf{y}^{(k)}=\nabla\mathcal{L}(\mathbf{\theta}^{(k+1)})-\nabla\mathcal{L}(\mathbf{\theta}^{(k+\nicefrac{{1}}{{2}})})\). Note that in the context of right-preconditioned nonlinear systems, the secant pairs are obtained by taking into account the iterate \(\mathbf{\theta}^{(k+\nicefrac{{1}}{{2}})}\), obtained as an outcome of the nonlinear preconditioning step, see [42] for details. Since the secant equation (18) is indeterminate, additional constraints must be applied to uniquely determine the approximation \(\mathbf{B}^{(k+1)}\). A particular choice of constraints leads to different variants of the QN methods. For instance, the BFGS method is obtained by imposing the symmetry and positive definiteness of \(\mathbf{B}^{(k+1)}\). In order to reduce the memory footprint of the proposed training method, we employ a limited-memory variant of the BFGS method, termed L-BFGS. Using the compact matrix representation [46], the approximation \(\mathbf{B}^{(k+1)}\) is given as

\[\mathbf{B}^{(k+1)}=\mathbf{B}^{(0)}-\begin{bmatrix}\mathbf{B}^{(0)}\mathbf{S}^{(k)}&\mathbf{Y}^{(k)}\end{bmatrix}\begin{bmatrix}(\mathbf{S}^{(k)})^{\top}\mathbf{B}^{(0)}\mathbf{S}^{(k)}&\mathbf{L}^{(k)}\\ (\mathbf{L}^{(k)})^{\top}&-\mathbf{D}^{(k)}\end{bmatrix}^{-1}\begin{bmatrix}(\mathbf{S}^{(k)})^{\top}\mathbf{B}^{(0)}\\ (\mathbf{Y}^{(k)})^{\top}\end{bmatrix}, \tag{19}\]

where the matrices \(\mathbf{S}^{(k)}:=[\mathbf{s}^{(k-m)},\dots,\mathbf{s}^{(k-1)}]\) and \(\mathbf{Y}^{(k)}:=[\mathbf{y}^{(k-m)},\dots,\mathbf{y}^{(k-1)}]\) are constructed using the \(m\) most recent secant pairs \(\{(\mathbf{s}^{(i)},\mathbf{y}^{(i)})\}_{i=k-m}^{k-1}\). The matrices \(\mathbf{L}^{(k)}\) and \(\mathbf{D}^{(k)}\) denote the strictly lower triangular and the diagonal part of the matrix \((\mathbf{S}^{(k)})^{\top}\mathbf{Y}^{(k)}\), while \(\mathbf{B}^{(0)}\) is an initial approximation of the Hessian. Usually \(\mathbf{B}^{(0)}=\gamma\mathbf{I}\), where \(\gamma=\frac{\langle\mathbf{y}^{(k)},\mathbf{y}^{(k)}\rangle}{\langle\mathbf{y}^{(k)},\mathbf{s}^{(k)}\rangle}\), see [63] for alternatives. Using (19), we are now able to state the global L-BFGS step as follows

\[\mathbf{\theta}^{(k+1)}=\mathbf{\theta}^{(k+\nicefrac{{1}}{{2}})}+\alpha^{(k+\nicefrac{{1}}{{2}})}\underbrace{\big{(}-(\mathbf{B}^{(k+1)})^{-1}\nabla\mathcal{L}(\mathbf{\theta}^{(k+\nicefrac{{1}}{{2}})})\big{)}}_{=:\mathbf{p}^{(k+\nicefrac{{1}}{{2}})}}. \tag{20}\]

Using the compact matrix representation, the inverse Hessian \((\mathbf{B}^{(k+1)})^{-1}\) can be obtained explicitly by applying the Sherman-Morrison-Woodbury formula. Alternatively, the two-loop recursion formula [19] can be utilized. The convergence of the right-preconditioned L-BFGS method can be further enhanced by incorporating momentum, as is often done for training DNNs. Following [53, 63], we evaluate the momentum \(\boldsymbol{v}^{(k+\nicefrac{{1}}{{2}})}\in\mathbb{R}^{n}\) in a recursive manner as

\[\boldsymbol{v}^{(k+\nicefrac{{1}}{{2}})}=(1-\mu)\boldsymbol{v}^{(k-\nicefrac{{1}}{{2}})}+\mu\boldsymbol{p}^{(k+\nicefrac{{1}}{{2}})}, \tag{21}\]

where \(\mu\in[0,1]\).
Utilizing the momentum, the global step (20) can now be reformulated as

\[\boldsymbol{\theta}^{(k+1)}=\boldsymbol{\theta}^{(k+\nicefrac{{1}}{{2}})}+\alpha^{(k+\nicefrac{{1}}{{2}})}\boldsymbol{v}^{(k+\nicefrac{{1}}{{2}})}. \tag{22}\]

Algorithm 1 summarizes the Schwarz preconditioned quasi-Newton (SPQN) methods, namely the additive and multiplicative variants, termed ASPQN and MSPQN, respectively.

```
0:\(\mathcal{L}:\mathbb{R}^{n}\to\mathbb{R},\ \boldsymbol{\theta}^{(0)}\in\mathbb{R}^{n},\ \epsilon\in\mathbb{R},\ \mu\in[0,1],\ k_{\max}\in\mathbb{N}^{+},\ \boldsymbol{v}^{(-\nicefrac{{1}}{{2}})}=\boldsymbol{0}\)
1:\(\boldsymbol{v}^{(0)}=0\)
2:for\(k=0,\dots,k_{\max}\)do
3:for\(s=1,\dots,N_{sd}\)do
4:if additive then
5:\(\boldsymbol{\theta}_{s}^{*}=\arg\min_{\boldsymbol{\theta}_{s}}\mathcal{L}(\boldsymbol{\theta}_{1}^{(k)},\dots,\boldsymbol{\theta}_{s},\dots,\boldsymbol{\theta}_{N_{sd}}^{(k)})\)\(\triangleright\) parallel
6:else
7:\(\boldsymbol{\theta}_{s}^{*}=\arg\min_{\boldsymbol{\theta}_{s}}\mathcal{L}(\boldsymbol{\theta}_{1}^{*},\dots,\boldsymbol{\theta}_{s-1}^{*},\boldsymbol{\theta}_{s},\boldsymbol{\theta}_{s+1}^{(k)},\dots,\boldsymbol{\theta}_{N_{sd}}^{(k)})\)\(\triangleright\) sequential
8:endif
9:endfor
10:
11:\(\boldsymbol{\theta}^{(k+\nicefrac{{1}}{{2}})}=\boldsymbol{\theta}^{(k)}+\alpha^{(k)}\sum_{s=1}^{N_{sd}}\boldsymbol{E}_{s}(\boldsymbol{\theta}_{s}^{*}-\boldsymbol{R}_{s}\boldsymbol{\theta}^{(k)})\)\(\triangleright\) Synchronization step
12:
13:\(\boldsymbol{p}^{(k+\nicefrac{{1}}{{2}})}=-(\mathbf{B}^{(k+1)})^{-1}\nabla\mathcal{L}(\boldsymbol{\theta}^{(k+\nicefrac{{1}}{{2}})})\)\(\triangleright\) Preconditioned quasi-Newton step
14:\(\boldsymbol{v}^{(k+\nicefrac{{1}}{{2}})}=(1-\mu)\boldsymbol{v}^{(k-\nicefrac{{1}}{{2}})}+\mu\boldsymbol{p}^{(k+\nicefrac{{1}}{{2}})}\)\(\triangleright\) Momentum evaluation
15:\(\boldsymbol{\theta}^{(k+1)}=\boldsymbol{\theta}^{(k+\nicefrac{{1}}{{2}})}+\alpha^{(k+\nicefrac{{1}}{{2}})}\boldsymbol{v}^{(k+\nicefrac{{1}}{{2}})}\)\(\triangleright\) Global parameter update
16:
17: Update \(\mathbf{S}^{(k+1)}\) with \(\boldsymbol{s}^{(k)}=\boldsymbol{\theta}^{(k+1)}-\boldsymbol{\theta}^{(k+\nicefrac{{1}}{{2}})}\)\(\triangleright\) Update of secant pairs
18: Update \(\mathbf{Y}^{(k+1)}\) with \(\boldsymbol{y}^{(k)}=\nabla\mathcal{L}(\boldsymbol{\theta}^{(k+1)})-\nabla\mathcal{L}(\boldsymbol{\theta}^{(k+\nicefrac{{1}}{{2}})})\)
19:endfor
20:return\(\boldsymbol{\theta}^{(k+1)}\)
```
**Algorithm 1** Schwarz Preconditioned Quasi-Newton (SPQN)

## 4 Implementation details, computational cost, and memory requirements

We implement the proposed ASPQN and MSPQN optimizers as part of the Dist-TraiNN library [38], which utilizes PyTorch [62] for the implementation of PINNs. As mentioned earlier, the MSPQN algorithm is inherently sequential, and therefore its implementation targets computing architectures with a single GPU device.

Footnote 1: The code will be made publicly available upon acceptance of the manuscript.

In contrast, the ASPQN algorithm is inherently parallel, hence its implementation is designed for distributed computing environments with multiple GPU devices. Here, we utilize the torch.distributed package with the NCCL backend and implement the ASPQN method such that each GPU is associated with a single subnetwork, i.e., the number of GPUs determines the number of subnetworks. As an evaluation of the loss associated with a subnetwork requires the input to pass through the entire network, we replicate the global network on all GPUs.
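A minimal PyTorch sketch of the local solve (14) that each GPU subsequently performs is given below. The function and argument names are illustrative, and PyTorch's built-in L-BFGS is assumed as the local optimizer, consistent with the settings described next; this is a sketch under those assumptions, not the actual Dist-TraiNN interface.

```python
import torch

def local_solve(model, loss_fn, subnet_params, k_s=50):
    """Approximately solve (14): train only `subnet_params` (one subnetwork)
    for k_s L-BFGS iterations while all other parameters stay fixed."""
    for p in model.parameters():
        p.requires_grad_(False)
    for p in subnet_params:
        p.requires_grad_(True)

    opt = torch.optim.LBFGS(subnet_params, max_iter=k_s, history_size=3,
                            line_search_fn="strong_wolfe")

    def closure():
        opt.zero_grad()
        loss = loss_fn(model)  # full forward pass through the entire network
        loss.backward()
        return loss

    opt.step(closure)

    for p in model.parameters():  # restore trainability for the global step
        p.requires_grad_(True)
```

In ASPQN, one such call runs concurrently on each GPU; in MSPQN, the calls run back-to-back over \(s=1,\dots,N_{sd}\), each seeing the previously updated parameters.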
Each subnetwork then undergoes local optimization by training the subnetwork's parameters, i.e., by minimizing (14). This is in contrast to other model parallel approaches, where each GPU gets access to only a specific part of the network [8, 1]. As a consequence, the standard model parallel approaches have a smaller memory footprint compared to our approach. After the subnetworks' training is concluded, the synchronization of local updates is performed (line 11, Algorithm 1). Subsequently, a global L-BFGS step is executed concurrently on all nodes. Figure 2 provides a sketch of the ASPQN parallelization strategy.

_Remark 4.1_.: The implementation of the SPQN methods can be seamlessly integrated with data parallel approaches.

We solve the local minimization problem (14) using the L-BFGS method. This minimization is carried out only approximately, using a fixed number of iterations, denoted by \(k_{s}\). Both the global and subnetwork L-BFGS Hessian approximations take advantage of three secant pairs, i.e., \(m=3\). The Hessian approximation for each subnetwork is restarted every time the local training commences. Moreover, we employ the cubic backtracking line-search method with strong Wolfe conditions [9, Algorithm A6.3.1, pages 325-327] to obtain \(\alpha^{(k+\nicefrac{{1}}{{2}})}\) and \(\alpha^{(k)}\). Alternatively, one could consider using fixed step sizes, which may result in a more efficient algorithm that would require fewer evaluations of the loss function. However, determining suitable values of \(\alpha^{(k+\nicefrac{{1}}{{2}})}\) and \(\alpha^{(k)}\) would necessitate a thorough hyper-parameter search, which would eventually lead to a significant rise in overall computational cost.

All presented numerical experiments were performed at the Swiss National Supercomputing Centre (CSCS) using XC50 compute nodes of the Piz Daint supercomputer. Each XC50 compute node consists of an Intel Xeon E5-2690 V3 processor and an NVIDIA Tesla P100 GPU. The memory of a node is 64 GB, while the memory of the GPU is 16 GB.

Figure 2: A sketch of local-to-global updates utilized by the ASPQN.

### Computational cost and memory requirements

In this section, we discuss the computational cost and memory requirements of the proposed SPQN methods, as well as the standard Adam and L-BFGS optimizers. In particular, the Adam optimizer requires a single evaluation of the loss function and the gradient per iteration. The update cost involves not only updating the parameters but also computing the biased and corrected first-order and second-order moment estimates, thus approximately \(5n\) flops in total. The memory requirement for the Adam optimizer is related to storing the iterate, gradient, and first-order and second-order moment estimates, leading to a total memory of size \(4n\).

The L-BFGS method requires an evaluation of one gradient per iteration. However, the loss function might be evaluated multiple times, depending on the number of required line-search iterations, denoted by \(ls_{\text{its}}\). The update cost depends on the number of stored secant pairs (\(m\)) that are used for approximating the Hessian. Using the compact matrix representation, it can be approximately estimated as \(4mn\) flops, see [4, Sections 3 and 4] for details. Moreover, we have to take into account the cost associated with an evaluation of the momentum term, leading to an overall cost of approximately \(n+4mn\) flops.
In the context of memory requirements, the L-BFGS method necessitates the storage of the momentum term \(\mathbf{v}^{(k)}\) as well as the matrices \(\mathbf{S}^{(k)}\) and \(\mathbf{Y}^{(k)}\), which accounts for a memory of size \(n+2mn\).

In the case of the SPQN methods, the computational cost consists of global and local parts. The global update cost and memory requirements consist of those of the standard L-BFGS method. Moreover, one has to take into account the cost related to the evaluation of the preconditioned iterate \(\mathbf{\theta}^{(k+\nicefrac{{1}}{{2}})}\) (line 11, Algorithm 1). The local update cost is associated with the cost required by the local optimizer, denoted by \(\mathrm{UC}_{s}\). Thus, for the \(s^{th}\) subnetwork, the update cost is scaled proportionally to the number of local parameters \(n_{s}\) and the number of local iterations \(k_{s}\). Assuming that all subnetworks consist of the same number of local trainable parameters (\(n_{s}\)), the local update cost of the ASPQN method is therefore \((n_{s}/n)\,k_{s}\,\mathrm{UC}_{s}\). For the MSPQN method, the local update cost has to be summed up over all subnetworks, due to the sequential nature of the algorithm, thus \(\sum_{s=1}^{N_{sd}}(n_{s}/n)\,k_{s}\,\mathrm{UC}_{s}=k_{s}\,\mathrm{UC}_{s}\). Regarding the memory cost of both SPQN methods, we have to take into account the storage of the global and local Hessian approximations, the momentum, and the preconditioned iterate \(\mathbf{\theta}^{(k+\nicefrac{{1}}{{2}})}\).

In contrast to the update cost (UC), the estimate for the number of loss and gradient evaluations required by the SPQN methods does not scale proportionally to \(n_{s}\). This is due to the fact that evaluation of the loss requires a forward pass through the entire network. In contrast, the evaluation of the local gradient, denoted as \(\nabla_{\mathbf{\theta}_{s}}\mathcal{L}\), does not necessitate traversing the entire network, as several code optimization strategies can be utilized to avoid unnecessary computations. Moreover, the network's structure can be leveraged to construct local gradient approximations at a lower computational cost. For instance, in the case of ODE-based network architectures, parallel-in-time strategies [26] could be explored. We do not investigate such code optimization techniques in the present work, as their effectiveness depends heavily on the particular implementation and the choice of network architecture. However, we plan to pursue this avenue of research in the future. Taking all aforementioned points into account, we note that the estimate \(\#g_{e}\) considered in Table 1 is fairly pessimistic. The summary of the computational cost and memory requirements for all aforementioned optimizers can be found in Table 1.

\begin{table} \begin{tabular}{|l|l|l|l|l|} \hline Method & \(\#\mathcal{L}_{\text{e}}\) & \(\#g_{\text{e}}\) & UC & MC \\ \hline \hline Adam & \(1\) & \(1\) & \(5n\) & \(4n\) \\ \hline L-BFGS & \(1+ls_{\text{its}}\) & \(1\) & \(n+4mn\) & \(n+2mn\) \\ \hline ASPQN & \(1+ls_{\text{its}}+k_{s}(\#\mathcal{L}_{\text{e}_{s}})\) & \(2+k_{s}(\#g_{\text{e}_{s}})\) & \(2n+4mn+(n_{s}/n)k_{s}\text{UC}_{s}\) & \(2n+2mn+(n_{s}/n)\text{MC}_{s}\) \\ \hline MSPQN & \(1+ls_{\text{its}}+\sum_{s=1}^{N_{sd}}k_{s}(\#\mathcal{L}_{\text{e}_{s}})\) & \(2+\sum_{s=1}^{N_{sd}}k_{s}(\#g_{\text{e}_{s}})\) & \(2n+4mn+k_{s}\text{UC}_{s}\) & \(2n+2mn+(n_{s}/n)\text{MC}_{s}\) \\ \hline \end{tabular} \end{table} Table 1: Estimates for the number of loss function (\(\#\mathcal{L}_{\text{e}}\)) and gradient evaluations (\(\#g_{\text{e}}\)), update cost (UC), and memory requirements (MC) per iteration and GPU device for various optimizers.

## 5 Numerical experiments

In this section, we describe the numerical examples used to test the proposed SPQN methods. In particular, we consider the following benchmark problems.

* **Burgers' equation:** Burgers' equation has the following form: \[\frac{\partial u}{\partial t}+u\nabla u-\nu\nabla^{2}u =0, \forall\ (t,x)\in(0,1]\times(-1,1),\] \[u =-\sin(\pi x), \forall\ (t,x)\in\{0\}\times[-1,1],\] \[u =0, \forall\ (t,x)\in(0,1]\times\{1\},\] \[u =0, \forall\ (t,x)\in(0,1]\times\{-1\},\] (5.1) where \(u=u(t,x)\) denotes the flow velocity and \(\nu\) stands for the kinematic viscosity. Here, we choose \(\nu=0.01/\pi\).
## 5 Numerical experiments

In this section, we describe numerical examples used to test the proposed SPQN methods. In particular, we consider the following benchmark problems.

* **Burgers' equation:** Burgers' equation has the following form: \[\frac{\partial u}{\partial t}+u\nabla u-\nu\nabla^{2}u =0, \forall\ (t,x)\in(0,1]\times(-1,1),\] (5.1) \[u =-\sin(\pi x), \forall\ (t,x)\in\{0\}\times[-1,1],\] \[u =0, \forall\ (t,x)\in(0,1]\times\{1\},\] \[u =0, \forall\ (t,x)\in(0,1]\times\{-1,\}\] where \(u=u(t,x)\) denotes the flow velocity and \(\nu\) stands for the kinematic viscosity. Here, we choose \(\nu=0.01/\pi\). (A code sketch of the corresponding PDE residual is given after this list.)
* **Diffusion-advection:** The steady-state diffusion-advection problem is given in two spatial dimensions as (5.2) \[-\nabla\cdot\mu\nabla u+\mathbf{b}\cdot\nabla u =f, \forall\ (x_{1},x_{2})\in(0,1)\times(0,1),\] \[u =0, \text{on }\partial\Omega,\] where \(\mathbf{b}=(1,1)^{\top}\). The data is considered to be constant, i.e., \(f=1\). Moreover, the symbol \(\mu\) denotes viscosity. In this work, we employ \(\mu=10^{-2}\).
* **Klein-Gordon:** The nonlinear second-order hyperbolic equation, called Klein-Gordon, is defined as \[\frac{\partial^{2}u}{\partial t^{2}}+\alpha\nabla^{2}u+\beta u+\gamma u^{2} =f(t,x), \forall\ (t,x)\in(0,12]\times(-1,1),\] (5.3) \[u =x, \forall\ (t,x)\in\{0\}\times[-1,1],\] \[\frac{\partial u}{\partial t} =0, \forall\ (t,x)\in\{0\}\times[-1,1],\] \[u =-\cos(t), \forall\ (t,x)\in(0,12]\times\{-1\},\] \[u =\cos(t), \forall\ (t,x)\in(0,12]\times\{1\},\] where \(u=u(t,x)\) denotes the wave displacement at time \(t\) and location \(x\). The symbols \(\alpha,\beta,\gamma\) denote predetermined constants. For the purpose of this work, we employ \(\alpha=-1,\beta=0,\gamma=1\) and \(f(t,x):=-x\cos(t)+x^{2}\cos^{2}(t)\). Given this particular choice of the parameters, the analytical solution is given as \(u(t,x)=x\cos(t)\), see [27].
* **Allen-Cahn:** We consider the Allen-Cahn equation of the following form: \[\frac{\partial u}{\partial t}-D\nabla^{2}u-5(u-u^{3}) =0, \forall\ (t,x) \in(0,1]\times(-1,1),\] (5.4) \[u =x^{2}\cos(\pi x), \forall\ (t,x) \in\{0\}\times[-1,1],\] \[u =-1, \forall\ (t,x) \in(0,1]\times\{-1\},\] \[u =-1, \forall\ (t,x) \in(0,1]\times\{1\},\] where the diffusion coefficient \(D\) is chosen as \(D=0.001\).

Table 2 summarizes the network architectures used for all four benchmark problems. As an activation function, we employ an adaptive variant of _tanh_ [32]. All networks are initialized using the Xavier initialization strategy [20]. To construct the training set, we use a uniform non-adaptive sampling strategy based on quasi-random Hammersley low-discrepancy sequences [56]. The number of collocation points is chosen to be \(10,000\) for all numerical examples.
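To illustrate how the corresponding PINN losses are assembled, the snippet below evaluates the interior residual of Burgers' equation (5.1) with automatic differentiation. It is a minimal sketch under the assumption that a network `u_net` mapping \((t,x)\mapsto u\) is defined elsewhere; the initial- and boundary-condition terms of the loss, and the Hammersley sampling of the collocation points, are omitted.

```python
import torch

nu = 0.01 / torch.pi  # kinematic viscosity used in (5.1)

def burgers_residual(u_net, t, x):
    # PDE residual u_t + u * u_x - nu * u_xx at collocation points (t, x).
    t = t.requires_grad_(True)
    x = x.requires_grad_(True)
    u = u_net(t, x)
    ones = torch.ones_like(u)
    u_t = torch.autograd.grad(u, t, ones, create_graph=True)[0]
    u_x = torch.autograd.grad(u, x, ones, create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x, x, torch.ones_like(u_x),
                               create_graph=True)[0]
    return u_t + u * u_x - nu * u_xx

# The interior part of the loss is then, e.g.,
# burgers_residual(u_net, t, x).pow(2).mean().
```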
## 6 Numerical results

In this section, we investigate the numerical performance of the proposed SPQN methods. During our investigation, we monitor the loss functional \(\mathcal{L}\) as well as the quality of PINN's solution (\(u_{\mathtt{NN}}\)), assessed by means of the relative \(L^{2}\)-error, given as

\[\mathcal{E}_{\mathrm{rel}}(u_{\mathtt{NN}},u^{*})=\frac{\|u_{\mathtt{NN}}-u^{*}\|_{L^{2}(\Omega)}}{\|u_{\mathtt{NN}}\|_{L^{2}(\Omega)}}. \tag{6.1}\]

Here, \(u^{*}\) denotes an exact solution, obtained by utilizing an analytical expression (Klein-Gordon) or by solving the underlying PDE using a high-fidelity finite element solver (Burgers', Diffusion-advection, and Allen-Cahn).
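In code, the error metric (6.1) amounts to a one-line function; note that, as in (6.1), the normalization uses \(\|u_{\mathtt{NN}}\|\) rather than \(\|u^{*}\|\).

```python
import torch

def relative_l2_error(u_nn, u_star):
    # Relative L2 error (6.1) between the PINN solution and the reference,
    # both sampled on the same set of evaluation points.
    return torch.linalg.norm(u_nn - u_star) / torch.linalg.norm(u_nn)
```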
### Sensitivity with respect to a varying number of subnetworks and local iterations

We start our numerical investigation by examining the performance of the SPQN methods with respect to a varying number of local iterations (\(k_{s}\)) and subnetworks (\(N_{sd}\)). During these experiments, we consider \(k_{s}\in\{10,50,100\}\) and three different types of network decomposition. In particular, we employ a decomposition into two subnetworks (the minimal decomposition), a decomposition obtained by grouping two neighboring layers into one subnetwork, and a decomposition of one layer per subnetwork (the maximal layer-wise decomposition).

\begin{table}
\begin{tabular}{|l|l|l|l|l|l|l|l|}
\hline
 & \multicolumn{7}{c|}{Parameter type} \\ \cline{2-8}
Benchmark problem & Depth & Width & Adam & \multicolumn{2}{c|}{ASPQN} & \multicolumn{2}{c|}{MSPQN} \\
 & L & nh & lr & \(N_{sd}\) & \(k_{s}\) & \(N_{sd}\) & \(k_{s}\) \\ \hline\hline
Burgers’ & 8 & 20 & \(5\times 10^{-4}\) & 8 & 50 & 8 & 50 \\ \hline
Diffusion-advection & 10 & 50 & \(1\times 10^{-4}\) & 10 & 10 & 10 & 10 \\ \hline
Klein-Gordon & 6 & 50 & \(1\times 10^{-3}\) & 6 & 50 & 6 & 50 \\ \hline
Allen-Cahn & 6 & 64 & \(2.5\times 10^{-4}\) & 6 & 50 & 6 & 50 \\ \hline
\end{tabular}
\end{table}
Table 2: Summary of network architecture and optimizer’s parameters used for numerical experiments reported in subsection 6.2.

Figure 3 illustrates the obtained results for different configurations of the ASPQN method. More precisely, we monitor the loss \(\mathcal{L}(\mathbf{\theta})\) and the error \(\mathcal{E}_{\mathrm{rel}}\) as a function of the number of global iterations. As expected, we observe that the loss decreases monotonically due to the line-search globalization strategy. We also observe that the configuration of ASPQN with the maximal decomposition consistently outperforms other configurations in terms of the error \(\mathcal{E}_{\mathrm{rel}}\). The observed behavior is independent of the number of local iterations (\(k_{s}\)) and the benchmark problem. This can be attributed to the fact that more extensive decoupling of the parameters allows us to obtain more localized learning rates and Hessian approximations, which in turn enables capturing the underlying nonlinearities more accurately.

Figure 3: The mean loss \(\mathcal{L}(\boldsymbol{\theta})\) (dotted lines) and relative error \(\mathcal{E}_{\mathrm{rel}}\) (solid lines) for ASPQN. The results are obtained for varying numbers of local iterations, i.e., \(k_{s}\in\{10,50,100\}\) (from left to right). The average is obtained over 10 independent runs. Note that axes are scaled differently for different configurations of the ASPQN method.

Figure 4 demonstrates the performance of the ASPQN method by means of the upper bound on the number of effective gradient evaluations (\(\#g_{e}\)) as well as the estimated update cost UC, both discussed in section 4. We again observe that the most efficient variants of the ASPQN method employ the maximal layer-wise decomposition. Moreover, the lowest \(\mathcal{E}_{\mathrm{rel}}\) is typically achieved using 50 or 100 local iterations.

Figure 4: The mean computational cost of the ASPQN method. The results are obtained for varying numbers of local iterations, i.e., \(k_{s}\in\{10,50,100\}\) (from left to right). The average is obtained over 10 independent runs.

In the case of the MSPQN method, the obtained results are reported in Figure 5 and Figure 6. As we can see from Figure 5, using a larger number of subnetworks and higher values of \(k_{s}\) typically gives rise to a faster convergence, i.e., fewer iterations are required to obtain a comparably accurate solution. By examining the results reported in Figure 6, we see that the best-performing variants in terms of UC are the ones with \(k_{s}=50\) or \(k_{s}=100\), and with the maximal layer-wise decomposition. This behavior is expected since the local contributions to UC decrease with an increasing number of subnetworks. However, in the context of the \(\#g_{e}\) metric, the obtained results are fairly different. As we can notice, variants with fewer subnetworks are often more effective, as the \(\#g_{e}\) estimate does not scale proportionally with the size of the subnetwork, i.e., the number of trainable subnetworks' parameters.

Figure 5: The mean loss \(\mathcal{L}(\mathbf{\theta})\) (dotted lines) and relative error \(\mathcal{E}_{\mathrm{rel}}\) (solid lines) for MSPQN. The results are obtained for varying numbers of local iterations, i.e., \(k_{s}\in\{10,50,100\}\) (from left to right). The average is obtained over 10 independent runs. Note that axes are scaled differently for different configurations of the MSPQN method.

Figure 6: The mean computational cost of the MSPQN method. The results are obtained for varying numbers of local iterations, i.e., \(k_{s}\in\{10,50,100\}\) (from left to right). The average is obtained over 10 independent runs.

### SPQN methods versus the state-of-the-art optimizers

In this section, we compare the convergence and performance of the proposed SPQN methods with the state-of-the-art Adam and L-BFGS optimizers. More precisely, we compare all considered optimizers by means of the \(\#g_{e}\) and UC estimates. Moreover, we also report the execution time, although our implementation of the SPQN algorithms is not yet optimized, unlike the PyTorch implementations of Adam and L-BFGS2.

Footnote 2: We have adjusted PyTorch implementation of the L-BFGS method to take into account the momentum.

Throughout these experiments, we configure the SPQN methods according to the specifications summarized in Table 2. The learning rate for the Adam optimizer is chosen by carrying out a thorough hyper-parameter search. In particular, we choose a learning rate (reported in Table 2) which allows us to obtain the most accurate PDE solution, thus a solution with the lowest \(\mathcal{E}_{\mathrm{rel}}\). For the L-BFGS optimizer, we employ the same memory size (\(m=3\)), line-search strategy, and momentum setup as for the L-BFGS methods employed within the SPQN framework.

As we can see from the obtained results, reported in Figure 7, the SPQN methods give rise to more accurate solutions, i.e., solutions with smaller \(\mathcal{E}_{\mathrm{rel}}\). The differences in the quality of an obtained solution are on average of about one order of magnitude. The Diffusion-advection problem is particularly interesting, as the L-BFGS method stagnates for this example. In contrast, the variants of the SPQN optimizers are able to reach an \(\mathcal{E}_{\mathrm{rel}}\) close to \(10^{-2}\). Numerical evidence suggests that the L-BFGS stagnation is caused by the line-search method providing very small step sizes.
In contrast, when using the SPQN methods, a very small step size was observed only for one particular subnetwork at the beginning of the training process. This suggests that using the DD approach enables the usage of localized learning rates and rebalances the global nonlinearities.

In terms of the computational cost, we can also observe from Figure 7 that the SPQN methods are significantly more efficient in terms of \(\#g_{e}\) and UC. The difference is more prevalent in the case of the UC metric, as this estimate scales with the number of subnetworks. We can also see that the UC metric translates closely to the computational time (the last column of Figure 7). Thus, in order to achieve a solution of comparable quality, the SPQN methods are significantly faster than the Adam and L-BFGS optimizers. To highlight the difference, we also report the minimum mean \(\mathcal{E}_{\mathrm{rel}}\) reached by the L-BFGS optimizer and the average time required to reach that solution by all considered optimizers3,4. As we can see from Table 3, the computational time required by ASPQN and MSPQN is significantly lower than the computational time required by the L-BFGS method. In particular, the MSPQN optimizer enjoys an average speedup of a factor of 10 across all benchmark problems. For the ASPQN optimizer, the speedup is even larger, by an average factor of approximately 28. This speedup is however obtained with an increased amount of computational resources, as the optimizer runs in parallel, i.e., using multiple GPU devices. Thus, for a fixed amount of computational resources, MSPQN can be considered to be a more efficient optimizer. However, the ASPQN method allows for a larger overall speedup by leveraging model parallelism.

Footnote 3: Comparison with the Adam optimizer is skipped, as it never reaches \(\mathcal{E}_{\mathrm{rel}}\) as low as the L-BFGS optimizer.

Figure 7: Performance comparison of different optimizers for Burgers’, Klein-Gordon, Diffusion-advection, and Allen-Cahn problems (from top to bottom). The mean relative error \(\mathcal{E}_{\mathrm{rel}}\) obtained over 10 independent runs.

## 7 Conclusion

In this work, we proposed novel additive and multiplicative preconditioning strategies for the L-BFGS optimizer with a particular focus on PINNs applications. The preconditioners were constructed by leveraging the layer-wise decomposition of the network, thus tailoring them to the characteristics of DNNs. The multiplicative preconditioner was designed to enhance the convergence of the L-BFGS optimizer in the context of single GPU computing environments. In contrast, the additive preconditioner provided a novel approach to model parallelism with a focus on multi-GPU computing architectures. To evaluate the effectiveness of the proposed training methods, we conducted numerical experiments using benchmark problems from the field of PINNs. A comparison with the Adam and L-BFGS optimizers was performed and demonstrated that the SPQN methods can provide a substantial reduction in terms of training time. In particular, using the MSPQN optimizer, we obtained an average speedup of a factor of 10, while for the ASPQN optimizer, the average speedup factor was approximately 28.
Moreover, we have also demonstrated that by using the proposed SPQN methods, one can obtain a significantly more accurate solution for the underlying PDEs. We foresee the extension of the presented work in several ways. For instance, it would be interesting to investigate different network decompositions, as well as to incorporate overlap and coarse-level acceleration into the proposed algorithmic framework. Furthermore, developing strategies for load balancing and for optimizing the subnetworks' loss/gradient evaluations could further enhance the performance of the proposed algorithms. Moreover, the applicability of the proposed SPQN methods could be extended beyond PINNs to a variety of different deep-learning tasks, e.g., operator learning using DeepONets [48, 49] or natural language processing using transformers [72]. Lastly, in the context of PINNs, the current proposal could be combined with methods that consider the decomposition of the computational domain (data space).
2310.20299
Verification of Neural Networks Local Differential Classification Privacy
Neural networks are susceptible to privacy attacks. To date, no verifier can reason about the privacy of individuals participating in the training set. We propose a new privacy property, called local differential classification privacy (LDCP), extending local robustness to a differential privacy setting suitable for black-box classifiers. Given a neighborhood of inputs, a classifier is LDCP if it classifies all inputs the same regardless of whether it is trained with the full dataset or whether any single entry is omitted. A naive algorithm is highly impractical because it involves training a very large number of networks and verifying local robustness of the given neighborhood separately for every network. We propose Sphynx, an algorithm that computes an abstraction of all networks, with a high probability, from a small set of networks, and verifies LDCP directly on the abstract network. The challenge is twofold: network parameters do not adhere to a known probability distribution, making it difficult to predict an abstraction, and predicting too large an abstraction harms the verification. Our key idea is to transform the parameters into a distribution given by KDE, allowing to keep the over-approximation error small. To verify LDCP, we extend a MILP verifier to analyze an abstract network. Experimental results show that by training only 7% of the networks, Sphynx predicts an abstract network obtaining 93% verification accuracy and reducing the analysis time by $1.7\cdot10^4$x.
Roie Reshef, Anan Kabaha, Olga Seleznova, Dana Drachsler-Cohen
2023-10-31T09:11:12Z
http://arxiv.org/abs/2310.20299v1
# Verification of Neural Networks' Local Differential Classification Privacy

###### Abstract

Neural networks are susceptible to privacy attacks. To date, no verifier can reason about the privacy of individuals participating in the training set. We propose a new privacy property, called _local differential classification privacy (LDCP)_, extending local robustness to a differential privacy setting suitable for black-box classifiers. Given a neighborhood of inputs, a classifier is LDCP if it classifies all inputs the same regardless of whether it is trained with the full dataset or whether any single entry is omitted. A naive algorithm is highly impractical because it involves training a very large number of networks and verifying local robustness of the given neighborhood separately for every network. We propose Sphynx, an algorithm that computes an abstraction of all networks, with a high probability, from a small set of networks, and verifies LDCP directly on the abstract network. The challenge is twofold: network parameters do not adhere to a known probability distribution, making it difficult to predict an abstraction, and predicting too large an abstraction harms the verification. Our key idea is to transform the parameters into a distribution given by KDE, allowing to keep the over-approximation error small. To verify LDCP, we extend a MILP verifier to analyze an abstract network. Experimental results show that by training only 7% of the networks, Sphynx predicts an abstract network obtaining 93% verification accuracy and reducing the analysis time by \(1.7\cdot 10^{4}\)x.

## 1 Introduction

Neural networks are successful in various tasks but are also vulnerable to attacks. One kind of attack that is gaining a lot of attention is the privacy attack. Privacy attacks aim at revealing sensitive information about the network or its training set. For example, membership inference attacks recover entries in the training set [56, 43, 37, 75, 11, 35, 40], model inversion attacks reveal sensitive attributes of these entries [22, 23], model extraction attacks recover the model's parameters [64], and property inference attacks infer global properties of the model [24]. Privacy attacks have been shown successful even against platforms providing limited access to a model, including black-box access and a limited number of queries. Such restricted access is common for platforms providing machine-learning-as-a-service (e.g., Google's Vertex AI1, Amazon's ML on AWS2, and BigML3).

Footnote 1: [https://cloud.google.com/vertex-ai](https://cloud.google.com/vertex-ai)

Footnote 2: [https://aws.amazon.com/machine-learning/](https://aws.amazon.com/machine-learning/)

Footnote 3: [https://bigml.com/](https://bigml.com/)

A common approach to mitigate privacy attacks is _differential privacy_ (DP) [18]. DP has been adopted by numerous network training algorithms [1, 10, 44, 8, 28]. Given a privacy level, a DP training algorithm generates the same network with a similar probability, regardless of whether a particular individual's entry is included in the dataset. However, DP is not an adequately suitable privacy criterion for black-box classifiers, for two reasons. First, it poses too strong a requirement: it requires that the training algorithm returns the same network (i.e., assigns the same score to every class), whereas black-box classifiers are considered the same if they predict the same class (i.e., assign the _maximal_ score to the same class).
Second, DP can only be satisfied by randomized algorithms, adding noise to the computations. Consequently, the accuracy of the resulting network decreases. The amount of noise is often higher than necessary because the mathematical analysis of differentially private algorithms is highly challenging and thus practically not tight (e.g., it often relies on compositional theorems [18]). Thus, network designers often avoid adding noise to their networks. This raises the question: _What can a network designer provide as a privacy guarantee for individuals participating in the training set of a black-box classifier?_ We propose a new privacy property, called _local differential classification privacy (LDCP)_. Our property is designed for black-box classifiers, whose training algorithm is not necessarily DP. Conceptually, it extends the local robustness property, designed for adversarial example attacks [62, 27], to a "deterministic differential privacy" setting. Local robustness requires that the network classifies all inputs in a given neighborhood the same. We extend this property by requiring that the network classifies all inputs in a given neighborhood the same _regardless of whether a particular individual's entry is included in the dataset_. Proving that a network is LDCP is challenging, because it requires to check the local robustness of a very large number of networks: \(|\mathcal{D}|+1\) networks, where \(\mathcal{D}\) is the dataset, which is often large (e.g., \(>10\)k entries). To date, verification of local robustness [31, 25, 63, 58, 65, 71, 42, 54], analyzing a single network, takes non-negligible time. A naive, accurate but highly unscalable algorithm checks LDCP by training every possible network (i.e., one for every possibility to omit a single entry from the dataset), checking that all inputs in the neighborhood are classified the same network-by-network, and verifying that all networks classify these inputs the same. However, this naive algorithm does not scale since training and analyzing thousands of networks is highly time-consuming. We propose Sphynx (**S**afety **P**rivacy analyzer via **H**yper-**N**etworks) for determining whether a network is LDCP at a given neighborhood (Figure 1). Sphynx takes as inputs a network, its training set, its training algorithm, and a neighborhood. Instead of training \(|\mathcal{D}|\) more networks, Sphynx computes a _hyper-network_ abstracting these networks with a high probability (under several conditions). A hyper-network abstracts a set of networks by associating to each network parameter an interval that contains the respective values in all networks. If Sphynx would train all networks, computing the hyper-network would be straightforward. However, Sphynx's goal is to reduce the high time overhead and thus it does not train all networks. Instead, it predicts a hyper-network from a small number of networks. Sphynx then checks LDCP at the given neighborhood directly on the hyper-network. These two ideas enable Sphynx to obtain a practical analysis time. The main challenges in predicting a hyper-network are: (1) network parameters do not adhere to a known probability distribution and (2) the inherent trade-off between a sound abstraction, where each parameter's interval covers all values of this parameter in every network (seen and unseen) and the ability to succeed in verifying LDCP given the hyper-network. 
Naturally, to obtain a sound abstraction, it is best to consider large intervals for each network parameter, e.g., by adding noise to the intervals (like in adaptive data analysis [17, 29, 5, 15, 16, 21, 52, 68]). However, the larger the intervals, the harder it is to verify LDCP, because the hyper-network abstracts many more (irrelevant) networks. To cope, Sphynx transforms the network parameters into a distribution given by kernel density estimation (KDE), allowing to predict the intervals without adding noise. To predict a hyper-network, Sphynx executes an iterative algorithm. In every iteration, it samples a few entries from the dataset. For each entry, it trains a network given all the dataset except this entry. Then, given all trained networks, Sphynx predicts a hyper-network, i.e., it computes an interval for every network parameter. An interval is computed by transforming every parameter's observed values into a distribution given by KDE, using normalization and the Yeo-Johnson transformation [77]. Then, Sphynx estimates whether the hyper-network abstracts every network with a high probability, and if so, it terminates. Given a hyper-network, Sphynx checks LDCP directly on the hyper-network. To this end, we extend a local robustness verifier [63], relying on mixed-integer linear programming (MILP), to analyze a hyper-network. Our extension replaces the equality constraints, capturing the network's affine computations, with inequality constraints, since our network parameters are associated with intervals and not real numbers. To mitigate the over-approximation error, we propose two approaches. The first approach relies on preprocessing of the network's inputs and the second one relies on having a lower bound on the inputs.

Figure 1: Sphynx checks leakage of individuals’ entries at a given neighborhood.

We evaluate Sphynx on data-sensitive datasets: Adult Census [13], Bank Marketing [41] and Default of Credit Card Clients [76]. We verify LDCP on three kinds of neighborhoods for checking safety to label-only membership attacks [11, 35, 40], adversarial example attacks in a DP setting (like [34, 46]), and sensitive attributes (like [22, 23]). We show that by training only 7% of the networks, Sphynx predicts a hyper-network abstracting an (unseen) network with a probability of at least 0.9. Our hyper-networks obtain 93% verification accuracy. Compared to the naive algorithm, Sphynx provides a significant speedup: it reduces the training time by 13.6x and the verification time by \(1.7\cdot 10^{4}\)x.

## 2 Preliminaries

In this section, we provide the necessary background.

_Neural network classifiers._ We focus on binary classifiers, which are popular for data-sensitive tasks. As examples, we describe the data-sensitive datasets used in our evaluation and their classifiers' tasks (Section 6). Adult Census [13] consists of user records of the socioeconomic status of people in the US. The goal of the classifier is to predict whether a person's yearly income is higher or lower than 50K USD. Bank Marketing [41] consists of user records of direct marketing campaigns of a Portuguese banking institution. The goal of the classifier is to predict whether the client will subscribe to the product or not. Default of Credit Card Clients [76] consists of user records of demographic factors, credit data, history of payments, and bill statements of credit card clients in Taiwan. The goal of the classifier is to predict whether the default payment will be paid in the next month.
We note that our definitions and algorithms easily extend to non-binary classifiers. A binary classifier \(N\) maps an input (a user record) \(x\in\mathcal{X}\subseteq[0,1]^{d}\) to a real number \(N(x)\in\mathbb{R}\). If \(N(x)\geq 0\), we say the classification of \(x\) is 1 and write \(\text{class}(N(x))=1\); otherwise, it is \(-1\), i.e., \(\text{class}(N(x))=-1\). We focus on classifiers implemented by a fully-connected neural network. This network consists of an input layer followed by \(L\) layers. The input layer \(x_{0}\) takes as input \(x\in\mathcal{X}\) and passes it as is to the next layer (i.e., \(x_{0,k}=x_{k}\)). The next layers are functions, denoted \(f_{1},f_{2},\ldots,f_{L}\), each taking as input the output of the preceding layer. The network's function is the composition of the layers: \(N(x)=f_{L}(f_{L-1}(\cdots(f_{1}(x))))\). A layer \(m\) consists of neurons, denoted \(x_{m,1},\ldots,x_{m,k_{m}}\). Each neuron takes as input the outputs of all neurons in the preceding layer and outputs a real number. The output of layer \(m\) is the vector \((x_{m,1},\ldots,x_{m,k_{m}})^{T}\) consisting of all its neurons' outputs. A neuron \(x_{m,k}\) has a weight for each input \(w_{m,k,k^{\prime}}\in\mathbb{R}\) and a bias \(b_{m,k}\in\mathbb{R}\). Its function is the composition of an affine computation, \(\hat{x}_{m,k}=b_{m,k}+\sum_{k^{\prime}=1}^{k_{m-1}}w_{m,k,k^{\prime}}\cdot x_{m-1,k^{\prime}}\), followed by an activation function computation, \(x_{m,k}=\sigma(\hat{x}_{m,k})\). Activation functions are typically non-linear functions. In this work, we focus on the ReLU activation function, \(\text{ReLU}(\hat{x})=\max(0,\hat{x})\). We note that, while we focus on fully-connected networks, our approach can extend to other architectures, e.g., convolutional networks or residual networks. The weights and biases of a neural network are determined by a training process. A training algorithm \(\mathcal{T}\) takes as inputs a network (typically, with random weights and biases) and a labelled training set \(\mathcal{D}=\{(x_{1},y_{1}),\ldots,(x_{n},y_{n})\}\subseteq\mathcal{X}\times\{-1,+1\}\). It returns a network with updated weights and biases. These parameters are computed with the goal of minimizing a given loss function, e.g., binary cross-entropy, capturing the level of inaccuracy of the network. The computation typically relies on iterative numerical optimization, e.g., stochastic gradient descent (SGD).
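The following PyTorch sketch mirrors this formalization: a fully-connected ReLU network with a single real-valued output, together with the induced classification rule. The hidden sizes are illustrative (they match the \(2\times 50\) architecture evaluated in Section 6).

```python
import torch

class BinaryClassifier(torch.nn.Module):
    """A fully-connected ReLU network N : X ⊆ [0,1]^d -> R."""
    def __init__(self, d, hidden=(50, 50)):
        super().__init__()
        layers, k_prev = [], d
        for k in hidden:
            layers += [torch.nn.Linear(k_prev, k), torch.nn.ReLU()]
            k_prev = k
        layers.append(torch.nn.Linear(k_prev, 1))  # single real-valued output
        self.net = torch.nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x).squeeze(-1)

def classify(scores):
    # class(N(x)) = 1 if N(x) >= 0, and -1 otherwise.
    return torch.where(scores >= 0,
                       torch.ones_like(scores), -torch.ones_like(scores))
```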
_Differential privacy (DP)._ DP focuses on algorithms defined over arrays (in our context, a dataset). At a high level, an algorithm is DP if for any two inputs differing in a single entry, it returns the same output with a similar probability. Formally, DP is a probabilistic privacy property requiring that the probability of returning different outputs is upper bounded by an expression defined by two parameters, denoted \(\epsilon\) and \(\delta\) [18]. Note that this requirement is too strong for classifiers providing only black-box access, which return only the input's classification. For such classifiers, it is sufficient to require that the classification is the same, and there is no need for the network's output (the score of every class) to be the same. To obtain the DP guarantee, DP algorithms add noise, drawn from some probability distribution (e.g., Laplace or Gaussian), to the input or their computations. That is, DP algorithms trade off their output's accuracy with privacy guarantees: the smaller the DP parameters (i.e., \(\epsilon\) and \(\delta\)), the more private the algorithm is, but its outputs are less accurate. The accuracy loss is especially severe in DP algorithms that involve loops in which every iteration adds noise. The loss is high because (1) many noise terms are added and (2) the mathematical analysis is not tight (it typically relies on compositional theorems [18]), leading to adding a higher amount of noise than necessary to meet the target privacy guarantee. Nevertheless, DP has been adopted by numerous network training algorithms [1, 10, 44, 8, 28]. For example, one algorithm adds noise to every gradient computed during training [1]. Consequently, the network's accuracy decreases significantly, discouraging network designers from employing DP. To cope, we propose a (non-probabilistic) privacy property that (1) only requires the network's classification to be the same and (2) can be checked even if the network has not been trained by a DP training algorithm.

_Local robustness._ Local robustness has been introduced in response to adversarial example attacks [62, 27]. In the context of network classifiers, an adversarial example attack is given an input and a space of perturbations, and it returns a perturbed input that causes the network to misclassify. Ideally, to prove that a network is robust to adversarial attacks, one should prove that _for any valid input_, the network classifies the same under any valid perturbation. In practice, the safety property that has been widely studied is _local robustness_. A network is locally robust at _a given input_ if perturbing the input by a perturbation in a given space does not cause the network to change the classification. Formally, the space of allowed perturbations is captured by a _neighborhood_ around the input.

Definition 1 (Local Robustness): Given a network \(N\), an input \(x\in\mathcal{X}\), a neighborhood \(I(x)\subseteq\mathcal{X}\) containing \(x\), and a label \(y\in\{-1,+1\}\), the network \(N\) is _locally robust at \(I(x)\) with respect to \(y\)_ if \(\forall x^{\prime}\in I(x)\). \(\text{class}(N(x^{\prime}))=y\).

A well-studied definition of a neighborhood is the \(\epsilon\)_-ball_ with respect to the \(L_{\infty}\) norm [31, 25, 63, 58, 65, 71, 42, 54]. Formally, given an input \(x\) and a bound on the perturbation amount \(\epsilon\in\mathbb{R}^{+}\), the \(\epsilon\)-ball is: \(I_{\epsilon}^{\infty}(x)=\{x^{\prime}\mid\|x^{\prime}-x\|_{\infty}\leq\epsilon\}\). A different kind of neighborhood captures _fairness_ with respect to a given sensitive feature \(i\in[d]\) (e.g., gender) [6, 38, 69, 53]: \(I_{i}^{S}(x)=\Big\{x^{\prime}\mid\bigwedge_{j\in[d]\setminus\{i\}}x^{\prime}_{j}=x_{j}\Big\}\).
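Both kinds of neighborhoods are boxes and can be represented by per-feature lower and upper bounds, as the following sketch shows. Clipping the \(\epsilon\)-ball to \([0,1]^{d}\), and letting the sensitive feature range over \([0,1]\), are assumptions consistent with the input domain \(\mathcal{X}\subseteq[0,1]^{d}\).

```python
import numpy as np

def linf_ball(x, eps):
    # Box bounds of the L-infinity eps-ball around x, clipped to [0,1]^d.
    return np.clip(x - eps, 0.0, 1.0), np.clip(x + eps, 0.0, 1.0)

def sensitive_neighborhood(x, i):
    # Box bounds of I_i^S(x): every feature fixed to x, except the
    # sensitive feature i, which may take any value in [0, 1].
    lower, upper = x.copy(), x.copy()
    lower[i], upper[i] = 0.0, 1.0
    return lower, upper
```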
## 3 Local Differential Classification Privacy

In this section, we define the problem of verifying that a network is locally differentially classification private (LDCP) at a given neighborhood.

_Local differential classification privacy (LDCP)._ Our property is defined given a classifier \(N\), trained on a dataset \(\mathcal{D}\), and a neighborhood \(I\), defining a space of attacks (perturbations). It considers \(N\) private with respect to an individual's entry \((x^{\prime},y^{\prime})\) if \(N\) classifies all inputs in \(I\) the same, whether \(N\) has been trained with \((x^{\prime},y^{\prime})\) or not. If there is a discrepancy in the classification, the attacker may exploit it to infer information about \(x^{\prime}\). Namely, if the network designer cared about a single individual's entry \((x^{\prime},y^{\prime})\), our privacy property would be defined over two networks: the network trained with \((x^{\prime},y^{\prime})\) and the network trained without \((x^{\prime},y^{\prime})\). Naturally, the network designer wishes to show that the classifier is private for every \((x^{\prime},y^{\prime})\) participating in \(\mathcal{D}\). Namely, our property is defined over \(|\mathcal{D}|+1\) networks. The main challenge in verifying this privacy property is that the training set size is typically very large (\(>\)10k entries). Formally, our property is an extension of local robustness to a differential privacy setting suitable for classifiers providing black-box access. It requires that the inputs in \(I\) are classified the same by _every_ network trained on \(\mathcal{D}\) or trained on \(\mathcal{D}\) except for any single entry. Unlike DP, our definition is applicable to any network, even those trained without probabilistic noise. We next formally define our property.

Definition 2 (Local Differential Classification Privacy): Given a network \(N\) trained on a dataset \(\mathcal{D}\subseteq\mathcal{X}\times\{-1,+1\}\) using a training algorithm \(\mathcal{T}\), a neighborhood \(I(x)\subseteq\mathcal{X}\) and a label \(y\in\{-1,+1\}\), the network \(N\) is _locally differentially classification private (LDCP)_ if (1) \(N\) is locally robust at \(I(x)\) with respect to \(y\), and (2) for every \((x^{\prime},y^{\prime})\in\mathcal{D}\), the network of the same architecture as \(N\) trained on \(\mathcal{D}\setminus\{(x^{\prime},y^{\prime})\}\) using \(\mathcal{T}\) is locally robust at \(I(x)\) with respect to \(y\).

Our privacy property enables proving safety against privacy attacks by defining a suitable set of neighborhoods. For example, several membership inference attacks [11, 35, 40] are label-only attacks. Namely, they assume the attacker has a set of inputs and can query the network to obtain their classification. To prove safety against these attacks, given a set of inputs \(X\subseteq\mathcal{X}\), one has to check LDCP for every neighborhood defined by \(I_{0}^{\infty}(x)\), where \(x\in X\). To prove safety against attacks aiming to reveal sensitive features (like [22, 23]), given a set of inputs \(X\subseteq\mathcal{X}\), one has to check that the network classifies the same regardless of the value of the sensitive feature. This can be checked by checking LDCP for every neighborhood defined by \(I_{i}^{S}(x)\), where \(x\in X\) and \(i\) is the sensitive feature.
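Definition 2 directly induces the naive (exact but impractical) checking procedure discussed in the introduction; the sketch below spells it out. Here, `train` and `verify_robust` are placeholders for the training algorithm \(\mathcal{T}\) and a local robustness verifier, respectively.

```python
def naive_ldcp_check(train, verify_robust, arch, D, I, y):
    # Train the full-dataset network and the |D| leave-one-out networks,
    # then verify local robustness of every one of them at I w.r.t. y.
    networks = [train(arch, D)]
    networks += [train(arch, [e for e in D if e is not entry]) for entry in D]
    return all(verify_robust(net, I, y) for net in networks)
```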
_Problem definition._ We address the problem of determining whether a network is locally differentially classification private (LDCP) at a given neighborhood, while minimizing the analysis time. A definite answer involves training and verifying \(|\mathcal{D}|+1\) networks (the network trained given all entries in the training set and, for each entry in the training set, the network trained given all entries except this entry). However, this results in a very long training time and verification time (even local robustness verification takes a non-negligible time). Instead, our problem is to provide an answer which is correct with a high probability. This problem is highly challenging. On the one hand, the fewer trained networks, the lower the training and verification time. On the other hand, determining an answer, with a high probability, from a small number of networks is challenging, since network parameters do not adhere to a known probabilistic distribution. Thus, our problem is not naturally amenable to a probabilistic analysis.

_Prior work._ Prior work has considered different aspects of our problem. As mentioned, several works adapt training algorithms to guarantee that the resulting network satisfies differential privacy [1, 10, 44, 8, 28]. However, these training algorithms tend to return networks of lower accuracy. A different line of research proposes algorithms for _machine unlearning_ [36, 26, 9], in which an algorithm retrains a network "to forget" some entries of its training set. However, these approaches do not guarantee that the forgetting network is equivalent to the network obtained by training without these entries from the beginning. It is also more suitable for settings in which there are multiple entries to forget, unlike our differential privacy setting, which focuses on omitting a single entry at a time. Several works propose verifiers to determine local robustness [45, 30, 19, 48, 73, 55, 31]. However, these analyze a single network and not a group of similar networks. A recent work proposes proof transfer between similar networks [67] in order to reduce the verification time of similar networks. However, this work requires to explicitly have all networks, which is highly time-consuming in our setting. Other works propose verifiers that check robustness to data poisoning or data bias [61, 39, 12]. These works consider an attacker that can manipulate or omit up to several entries from the dataset, similarly to LDCP allowing to omit up to a single entry. However, these verifiers either target patch attacks of image classifiers [61], which allows them to prove robustness without considering every possible network, or target decision trees [39, 12], relying on predicates, which are unavailable for neural networks. Thus, neither is applicable to our setting.

## 4 Our Approach

In this section, we present our approach for determining whether a network is locally differentially classification private (LDCP). Our key idea is to _predict_ an _abstraction_ of all concrete networks from a small number of networks. We call the abstraction a _hyper-network_. Thanks to this abstraction, Sphynx does not need to train all concrete networks, and neither does it verify multiple concrete networks. Instead, Sphynx first predicts an abstract hyper-network, given a network \(N\), its training set \(\mathcal{D}\) and its training algorithm \(\mathcal{T}\), and then verifies LDCP directly on the hyper-network. This is summarized in Figure 2. We next define these terms.

_Hyper-networks._ A _hyper-network_ abstracts a set of networks \(\mathcal{N}=\{N_{1},\ldots,N_{K}\}\) with the same architecture and in particular the same set of parameters, i.e., weights \(\mathcal{W}=\{w_{1,1,1},\ldots,w_{L,d_{L},d_{L-1}}\}\) and biases \(\mathcal{B}=\{b_{1,1},\ldots,b_{L,d_{L}}\}\). The difference between the networks is the parameters' values. In our context, \(\mathcal{N}\) consists of the network trained with the full dataset and the networks trained without any single entry: \(\mathcal{N}=\{\mathcal{T}(N,\mathcal{D})\}\cup\{\mathcal{T}(N,\mathcal{D}\setminus\{(x^{\prime},y^{\prime})\})\mid(x^{\prime},y^{\prime})\in\mathcal{D}\}\).
A _hyper-network_ is a network \(\mathcal{N}^{\#}\) with the same architecture and set of parameters, but the domain of the parameters is not \(\mathbb{R}\) but rather an abstract domain \(\mathcal{A}\). As standard, we assume the abstract domain corresponds to a lattice and is equipped with a concretization function \(\gamma\). We focus on a non-relational abstraction, where each parameter is abstracted independently. The motivation is twofold. First, non-relational domains are computationally lighter than relational domains. Second, the relation between the parameters is highly complex because it depends on a long series of optimization steps (e.g., SGD steps). Thus, while it is possible to bound these computations using simpler functions (e.g., polynomial functions), the over-approximation error would be too large to be useful in practice. Formally, given a set of networks \(\mathcal{N}\) with the same architecture and set of parameters \(\mathcal{W}\cup\mathcal{B}\) and given an abstract domain \(\mathcal{A}\) and a concretization function \(\gamma:\mathcal{A}\rightarrow\mathcal{P}(\mathbb{R})\), a _hyper-network_ is a network \(\mathcal{N}^{\#}\) with the same architecture and set of parameters, where the parameters range over \(\mathcal{A}\) and satisfy the following:

\[\forall N^{\prime}\in\mathcal{N}:\left[\forall w_{m,k,k^{\prime}}\in\mathcal{W}:w_{m,k,k^{\prime}}^{N^{\prime}}\in\gamma\left(w_{m,k,k^{\prime}}^{\#}\right)\wedge\forall b_{m,k}\in\mathcal{B}:b_{m,k}^{N^{\prime}}\in\gamma\left(b_{m,k}^{\#}\right)\right]\]

where \(w_{m,k,k^{\prime}}^{N^{\prime}}\) and \(b_{m,k}^{N^{\prime}}\) are the values of the parameters in the network \(N^{\prime}\) and \(w_{m,k,k^{\prime}}^{\#}\) and \(b_{m,k}^{\#}\) are the values of the parameters in the hyper-network \(\mathcal{N}^{\#}\).

Figure 2: Given a network classifier \(N\), its training set \(\mathcal{D}\) and its training algorithm \(\mathcal{T}\), Sphynx predicts an abstract hyper-network \(\mathcal{N}^{\#}\). It then checks whether \(\mathcal{N}^{\#}\) is locally robust at \(I(x)\) with respect to \(y\) to determine whether \(N\) is LDCP.

_Interval abstraction._ In this work, we focus on the interval domain. Namely, the abstract elements are intervals \([l,u]\), with the standard meaning: an interval \([l,u]\) abstracts all real numbers between \(l\) and \(u\). We thus call our hyper-networks _interval hyper-networks_. Figure 3 shows an example of an interval hyper-network. An interval corresponding to a weight is shown next to the respective edge, and an interval corresponding to a bias is shown next to the respective neuron. For example, the neuron \(x_{1,2}\) has two weights and a bias whose values are: \(w_{1,2,1}^{\#}=[1,3]\), \(w_{1,2,2}^{\#}=[1,2]\), and \(b_{1,2}^{\#}=[1,1]\). Computing an interval hyper-network is straightforward if all concrete networks are known. However, computing all concrete networks defeats the purpose of having a hyper-network. Instead, Sphynx predicts an interval hyper-network with a high probability (Section 5.1).
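For completeness, the straightforward construction mentioned above, from a set of already-trained concrete networks, is shown below: every parameter is mapped to the interval spanned by its observed values. Sphynx replaces these min/max bounds with the predicted confidence intervals of Section 5.1.

```python
import torch

def interval_hyper_network(networks):
    # Map every parameter to [min, max] over its values in all networks.
    # Returns {parameter name: (lower bound tensor, upper bound tensor)}.
    param_maps = [dict(net.named_parameters()) for net in networks]
    hyper = {}
    for name in param_maps[0]:
        values = torch.stack([p[name].detach() for p in param_maps])
        hyper[name] = (values.amin(dim=0), values.amax(dim=0))
    return hyper
```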
_Checking LDCP given a hyper-network._ Given an interval hyper-network \(\mathcal{N}^{\#}\) for a network \(N\), a neighborhood \(I(x)\) and a label \(y\), Sphynx checks whether \(N\) is LDCP by checking whether \(\mathcal{N}^{\#}\) is locally robust at \(I(x)\) with respect to \(y\). If \(\mathcal{N}^{\#}\) is robust, then \(N\) is LDCP at \(I(x)\), with a high probability. Otherwise, Sphynx determines that \(N\) is not LDCP at \(I(x)\). Note that this is a conservative answer, since either \(N\) is not LDCP or the abstraction or the verification loses too much precision. Sphynx verifies local robustness of \(\mathcal{N}^{\#}\) by extending a MILP verifier [63] checking local robustness of a neural network (Section 5.2).

## 5 Sphynx: Safety Privacy Analyzer via Hyper-Networks

In this section, we present Sphynx, our system for verifying local differential classification privacy (LDCP). As described, it consists of two components: the first component predicts a hyper-network, while the second one verifies LDCP.

### Prediction of an Interval Hyper-Network

In this section, we introduce Sphynx's algorithm for predicting an interval hyper-network, called PredHyperNet. PredHyperNet takes as inputs a network \(N\), its training set \(\mathcal{D}\), its training algorithm \(\mathcal{T}\), and a probability error bound \(\alpha\), where \(\alpha\in(0,1)\) is a small number. It returns an interval hyper-network \(\mathcal{N}^{\#}\) which, with probability \(1-\alpha\) (under certain conditions), abstracts an (unseen) concrete network returned by \(\mathcal{T}\) given \(N\) and \(\mathcal{D}\setminus\{(x,y)\}\), where \((x,y)\in\mathcal{D}\).

Figure 3: Three networks, \(N_{1},N_{2},N_{3}\), and their interval hyper-network \(\mathcal{N}^{\#}\).

The main idea is to predict an abstraction for every network parameter from a small number of concrete networks. To minimize the number of networks, PredHyperNet executes iterations. An iteration trains \(k\) networks and predicts an interval hyper-network using all trained networks. If the intervals' distributions have not converged to the expected distributions, another iteration begins. We next provide details.

_PredHyperNet's algorithm._ PredHyperNet (Algorithm 1) begins by initializing the set of trained networks nets to \(\{N\}\) and the set of entries entr, whose corresponding networks are in nets, to \(\emptyset\). It initializes the previous hyper-network \(\mathcal{N}^{\#}_{prev}\) to \(\bot\) and the current hyper-network \(\mathcal{N}^{\#}_{curr}\) to the interval abstraction of \(N\), i.e., the interval of a network parameter \(w\in\mathcal{W}\cup\mathcal{B}\) (a weight or a bias) is \([w^{N},w^{N}]\). Then, while the stopping condition (Line 5), checking convergence using the Jaccard distance as described later, is false, it runs an iteration. An iteration trains \(k\) new networks (Line 7-Line 10). A training iteration samples an entry \((x,y)\), adds it to entr, and runs the training algorithm \(\mathcal{T}\) on (the architecture of) \(N\) and \(\mathcal{D}\setminus\{(x,y)\}\). The trained network is added to nets. PredHyperNet then computes an interval hyper-network from nets (Line 11-Line 13). The computation is executed via PredInt (Line 13) independently on each network parameter \(w\). PredInt's inputs are all observed values of \(w\) and a probability error bound \(\alpha^{\prime}\), which is \(\alpha\) divided by the number of network parameters.

_Interval prediction: overview._ PredInt predicts an interval for a parameter \(w\) from a (small) set of values \(\mathcal{V}_{w}=\{w_{1},\ldots,w_{K}\}\), obtained from the concrete networks, and given an error bound \(\alpha^{\prime}\). If it used interval abstraction, \(w\) would be mapped to the interval defined by the minimum and maximum in \(\mathcal{V}_{w}\).
However, this approach cannot guarantee to abstract the unseen concrete networks, since we aim to rely on a number of concrete networks significantly smaller than \(|\mathcal{D}|\). Instead, PredInt defines an interval by predicting the minimum and maximum of \(w\), over all its values in the (seen and unseen) networks. There has been extensive work on estimating statistics, with a high probability, of an unknown probability distribution (e.g., the expectation [17, 29, 5, 15, 16, 21, 52, 68]). However, PredInt's goal is to estimate the _minimum_ and _maximum_ of unknown samples from an unknown probability distribution. The challenge is that, unlike other statistics, the minimum and maximum are highly sensitive to outliers. To cope, PredInt transforms the unknown probability distribution of \(w\) into a known one and then predicts the minimum and maximum. Before the transformation, PredInt normalizes the values in \(\mathcal{V}_{w}\). Overall, PredInt's operation is (Figure 4): (1) it normalizes the values in \(\mathcal{V}_{w}\), (2) it transforms the values into a known probability distribution, (3) it predicts the minimum and maximum by computing a confidence interval with a probability of \(1-\alpha^{\prime}\), and (4) it inverses the transformation and normalization to fit the original scale. We note that executing normalization and transformation and then their inversion does not result in any information loss, because they are bijective functions. We next provide details.

_Transformation and normalization._ PredInt transforms the values in \(\mathcal{V}_{w}\) to make them seem as if drawn from a known probability distribution. It employs the Yeo-Johnson transformation [77], transforming an unknown distribution into a Gaussian distribution. This transformation has the flexibility that the input random variables can have any real value. It has a parameter \(\lambda\), whose value is determined using maximum likelihood estimation (MLE) (i.e., \(\lambda\) maximizes the likelihood that the transformed data is Gaussian). It is defined as follows:

\[T_{\lambda}(z)=\begin{cases}\frac{(1+z)^{\lambda}-1}{\lambda},&\lambda\neq 0,\ z\geq 0\\ \log(1+z),&\lambda=0,\ z\geq 0\\ -\frac{(1-z)^{2-\lambda}-1}{2-\lambda},&\lambda\neq 2,\ z<0\\ -\log(1-z),&\lambda=2,\ z<0\end{cases}\]

Figure 4: The flow of PredInt for predicting an interval for a network parameter.

PredInt employs several adaptations to this transformation to fit our setting better. First, it requires \(\lambda\in[0,2]\). This is because if \(\lambda<0\), the transformed outputs have an upper bound \(\frac{1}{-\lambda}\), and if \(\lambda>2\), they have a lower bound \(-\frac{1}{\lambda-2}\). Since our goal is to predict the minimum and maximum values, we do not want the transformed outputs to artificially limit them. Second, the Yeo-Johnson transformation transforms into a Gaussian distribution. Instead, we transform to a distribution given by kernel density estimation (KDE) with a Laplace kernel, which is better suited for our setting (Section 6). We describe KDE in Appendix 0.A. Third, the Yeo-Johnson transformation operates best when given values centered around zero. However, training algorithms produce parameters of different scales and centered around different values. Thus, before the transformation, PredInt normalizes the values in \(\mathcal{V}_{w}\) to be centered around zero and have a similar scale as follows: \(z_{i}\leftarrow\frac{w_{i}-\mu}{\Delta}\), \(\forall i\in[K]\). Consequently, PredInt is invariant to translation and scaling and is thus more robust to the computations of the training algorithm \(\mathcal{T}\). There are several possibilities to define the normalization's parameters, \(\mu\) and \(\Delta\), each based on a different norm, e.g., the \(L_{1}\), \(L_{2}\) or \(L_{\infty}\) norm. In our setting, a normalization based on the \(L_{1}\) norm works best, since we use a Laplace kernel. Its center point is \(\mu=\text{median}(w_{1},\ldots,w_{K})\) and its scale is the centered absolute first moment: \(\Delta=\frac{1}{K}\sum_{i=1}^{K}|w_{i}-\mu|\).
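The transformation and its inverse (step (4) of PredInt) have simple closed forms, sketched below. Libraries such as SciPy provide `scipy.stats.yeojohnson` with an MLE fit of \(\lambda\), but its likelihood assumes a Gaussian target, whereas PredInt fits \(\lambda\) against the KDE-based distribution; the MLE step is therefore omitted here.

```python
import numpy as np

def yeo_johnson(z, lam):
    # The Yeo-Johnson transformation T_lambda as defined above.
    z = np.asarray(z, dtype=float)
    pos = z >= 0
    out = np.empty_like(z)
    out[pos] = np.log1p(z[pos]) if lam == 0 else ((1 + z[pos]) ** lam - 1) / lam
    out[~pos] = (-np.log1p(-z[~pos]) if lam == 2
                 else -(((1 - z[~pos]) ** (2 - lam) - 1) / (2 - lam)))
    return out

def yeo_johnson_inverse(y, lam):
    # Inverse of T_lambda; T maps z >= 0 to y >= 0 and z < 0 to y < 0,
    # so the sign of y selects the branch.
    y = np.asarray(y, dtype=float)
    pos = y >= 0
    out = np.empty_like(y)
    out[pos] = (np.expm1(y[pos]) if lam == 0
                else (1 + lam * y[pos]) ** (1 / lam) - 1)
    out[~pos] = (-np.expm1(-y[~pos]) if lam == 2
                 else 1 - (1 - (2 - lam) * y[~pos]) ** (1 / (2 - lam)))
    return out
```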
_Interval prediction._ After the transformation, PredInt has a cumulative distribution function (CDF) for the transformed values: \(F_{y}(v)=\mathbb{P}\{y\leq v\}\). Given the CDF, we compute a confidence interval, defining the minimum and maximum. A confidence interval, parameterized by \(\alpha^{\prime}\in(0,1)\), is an interval satisfying that the probability of an unknown sample being inside it is at least \(1-\alpha^{\prime}\). It is defined by: \([l_{y},u_{y}]=\left[F_{y}^{-1}\left(\frac{\alpha^{\prime}}{2}\right),F_{y}^{-1}\left(1-\frac{\alpha^{\prime}}{2}\right)\right]\). Since we wish to compute an interval hyper-network \(\mathcal{N}^{\#}\) abstracting an unseen network with probability \(1-\alpha\), and since there are \(|\mathcal{W}\cup\mathcal{B}|\) parameters, we choose \(\alpha^{\prime}=\frac{\alpha}{|\mathcal{W}\cup\mathcal{B}|}\) (Line 13). By the union bound, we obtain confidence intervals guaranteeing that the probability that an unseen concrete network is covered by the hyper-network is at least \(1-\alpha\).
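A sketch of this step is given below. The CDF of a KDE with a Laplace kernel is a mean of shifted Laplace CDFs, and the two quantiles are obtained by numerically inverting it; the bandwidth `h` is an input here, standing in for the choice described in Appendix 0.A, and the bracketing offset for the root finder is an ad hoc assumption.

```python
import numpy as np
from scipy.optimize import brentq

def laplace_kde_cdf(v, y, h):
    # F(v) = mean_i LaplaceCDF((v - y_i) / h) for transformed values y.
    u = (v - y) / h
    return np.mean(np.where(u < 0, 0.5 * np.exp(u), 1 - 0.5 * np.exp(-u)))

def confidence_interval(y, alpha, h):
    # [F^{-1}(alpha/2), F^{-1}(1 - alpha/2)], computed by root finding;
    # far outside the samples the CDF is numerically 0 or 1, which
    # brackets both quantiles.
    a, b = y.min() - 50 * h, y.max() + 50 * h
    lower = brentq(lambda v: laplace_kde_cdf(v, y, h) - alpha / 2, a, b)
    upper = brentq(lambda v: laplace_kde_cdf(v, y, h) - (1 - alpha / 2), a, b)
    return lower, upper
```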
_Stopping condition._ The goal of PredHyperNet's stopping condition is to identify when the distributions of the intervals computed for \(\mathcal{N}^{\#}_{curr}\) have converged to their expected distributions. This is the case when the intervals have not changed significantly in \(\mathcal{N}^{\#}_{curr}\) compared to \(\mathcal{N}^{\#}_{prev}\). Formally, for each network parameter \(w\), it compares the current interval to the previous interval by computing their Jaccard distance. Given the previous and current intervals of \(w\), \(I_{prev}\) and \(I_{curr}\), the Jaccard index is: \(J(I_{curr},I_{prev})=\frac{|I_{curr}\cap I_{prev}|}{|I_{curr}\cup I_{prev}|}\). For any two intervals, the Jaccard distance \(1-J(I_{curr},I_{prev})\in[0,1]\), such that the smaller the distance, the more similar the intervals are. If the Jaccard distance is below a ratio \(R\) (a small number), we consider the interval of \(w\) as converged to the expected CDF. If at least \(M\cdot 100\%\) of the hyper-network's intervals have converged, we consider that the hyper-network has converged, and thus PredHyperNet terminates.
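The convergence test thus reduces to a few lines; intervals are measured by their lengths, and `R` and `M` are the thresholds introduced above.

```python
def jaccard_index(curr, prev):
    # |I_curr ∩ I_prev| / |I_curr ∪ I_prev| for two real intervals
    # (for disjoint intervals the index is 0 in any case).
    (l1, u1), (l2, u2) = curr, prev
    inter = max(0.0, min(u1, u2) - max(l1, l2))
    union = max(u1, u2) - min(l1, l2)
    return inter / union if union > 0 else 1.0

def hyper_network_converged(curr_intervals, prev_intervals, R=0.1, M=0.9):
    # Converged if at least M*100% of the parameters' intervals moved by
    # a Jaccard distance below R.
    flags = [1 - jaccard_index(c, p) < R
             for c, p in zip(curr_intervals, prev_intervals)]
    return sum(flags) >= M * len(flags)
```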
Given lower and upper bounds for the weights and biases \(l_{W}\), \(u_{W}\), \(l_{b}\), and \(u_{b}\), we can bound the output by: \[l_{W}\cdot x+l_{b}-(u_{W}-l_{W})\cdot\max(0,-l_{x})\leq\hat{x}\leq u_{W}\cdot x+u_{b}+(u_{W}-l_{W})\cdot\max(0,-l_{x})\] We provide a proof in Appendix 0.B. Note that each bound is penalized by \((u_{W}-l_{W})\cdot\max(0,-l_{x})\geq 0\), which is an over-approximation error term for the case where the lower bound \(l_{x}\) is negative.

### Analysis of Sphynx

In this section, we discuss the correctness of Sphynx and its running time. Proofs are provided in Appendix 0.B.

#### Correctness
Our first lemma states the conditions under which PredInt computes an abstraction for the values of a single network parameter with a high probability.

Lemma 1: _Given a parameter \(w\) and an error bound \(\alpha^{\prime}\), if the observed values \(w_{1},\ldots,w_{K}\) are IID and suffice to predict the correct distribution, and if there exists \(\lambda\in[0,2]\) such that the distribution of the transformed normalized values \(y_{1},\ldots,y_{K}\) is similar to a distribution given by KDE with a Laplace kernel (using the bandwidth defined in Appendix 0.A), then PredInt computes a confidence interval containing an unseen value \(w_{i}\), for \(i\in\{K+1,\ldots,|\mathcal{D}|+1\}\), with a probability of \(1-\alpha^{\prime}\)._

Note that our lemma does not make any assumption about the distribution of the observed values \(w_{1},\ldots,w_{K}\). Next, we state our theorem pertaining to the correctness of PredHyperNet. The theorem states that when the stopping condition correctly identifies that the observed values have converged to the correct distribution, the hyper-network abstracts an unseen network with the expected probability.

Theorem 5.1: _Given a network \(N\), its training set \(\mathcal{D}\), its training algorithm \(\mathcal{T}\), and an error bound \(\alpha\), if \(R\) is close to \(0\) and \(M\) is close to \(1\), then PredHyperNet returns a hyper-network abstracting an unseen network with probability \(1-\alpha\)._

Our next theorem states that our verifier is sound and states when it is complete. Completeness means that if the hyper-network is locally robust at the given neighborhood, Sphynx is able to prove it.

Theorem 5.2: _Our extension to the MILP verifier provides a sound analysis. It is also complete if all inputs to the affine computations are non-negative._

By Theorem 5.2 and Definition 2, if Sphynx determines that the hyper-network is locally robust, then the network is LDCP. Note that it may happen that the network is LDCP, but the hyper-network is not locally robust due to the abstraction's precision loss.

#### Running time
The running time of Sphynx consists of the running time of PredHyperNet and the running time of the verifier. The running time of PredHyperNet consists of the networks' training time and the time to compute the hyper-networks (the running time of the stopping condition is negligible). The training time is the product of the number of networks and the execution time of the training algorithm \(\mathcal{T}\). The time complexity of PredInt is \(O(K^{2})\), where \(K=|\mathtt{nets}|\), and thus the computation of a hyper-network is: \(O\left(K^{2}\cdot|\mathcal{W}\cup\mathcal{B}|\right)\). Since PredHyperNet runs \(\frac{K}{k}\) iterations, overall, the running time is \(O\left(|\mathcal{T}|\cdot K+\frac{K^{3}}{k}\cdot|\mathcal{W}\cup\mathcal{B}|\right)\).
In practice, the second term is negligible compared to the first term (\(|\mathcal{T}|\cdot K\)). Thus, the fewer networks are trained, the faster PredHyperNet is. The running time of the verifier is similar to the running time of the MILP verifier [63] verifying local robustness of a single network, which is exponential in the number of ReLU neurons whose computation is non-linear (their input's interval contains negative and positive numbers). Namely, Sphynx reduces the verification time by a factor of \(|\mathcal{D}|+1\) compared to the naive algorithm that verifies robustness network-by-network.

## 6 Evaluation

We implemented Sphynx in Python. Experiments ran on an Ubuntu 20.04 OS on a dual AMD EPYC 7713 server with 2TB RAM and 8 NVIDIA A100 GPUs. The hyper-parameters of PredHyperNet are: \(\alpha=0.1\), the number of trained networks in every iteration is \(k=400\), and the thresholds of the stopping condition are \(M=0.9\) and \(R=0.1\). We evaluate Sphynx on the three data-sensitive datasets described in Section 2: Adult Census [13] (Adult), Bank Marketing [41] (Bank), and Default of Credit Card Clients [76] (Credit). We preprocessed the input values to range over \([0,1]\) as follows. Continuous attributes were normalized to range over \([0,1]\), and categorical attributes were transformed into two features ranging over \([0,1]\): \(\cos\left(\frac{\pi}{2}\cdot\frac{i}{m-1}\right)\) and \(\sin\left(\frac{\pi}{2}\cdot\frac{i}{m-1}\right)\), where \(m\) is the number of categories and \(i\in\{0,\ldots,m-1\}\) is the category's index. Binary attributes were transformed into a single feature: \(0\) for the first category and \(1\) for the second one. While one-hot encoding is highly popular for categorical attributes, it has also been linked to reduced model accuracy when the number of categories is high [74, 51]. However, we note that the encoding is orthogonal to our algorithm. We consider three fully-connected networks: \(2\times 50\), \(2\times 100\), and \(4\times 50\), where the first number is the number of intermediate layers and the second one is the number of neurons in each intermediate layer. Our network sizes are comparable to or larger than the sizes of the networks analyzed by verifiers targeting these datasets [69, 38, 4]. All networks reached around 80% accuracy. The networks were trained over 10 epochs, using SGD with a batch size of 1024 and a learning rate of 0.1. We used \(L_{1}\) regularization with a coefficient of \(10^{-5}\). All networks were trained with the same random seed, so Sphynx can identify the maximal privacy leakage (allowing different seeds may reduce the privacy leakage, since it can be viewed as adding noise to the training process). Recall that Sphynx's challenge is intensified compared to local robustness verifiers: their goal is to prove robustness of a single network, whereas Sphynx's goal is to prove that privacy is preserved over a very large number of concrete networks: 32562 networks for Adult, 31649 networks for Bank, and 21001 networks for Credit. Namely, the number of parameters that Sphynx reasons about is the number of parameters of all \(|\mathcal{D}|+1\) concrete networks. Every experiment is repeated 100 times for every network, where each repetition randomly chooses dataset entries and trains the respective networks.

#### Performance of Sphynx
We begin by evaluating Sphynx's ability to verify LDCP.
We consider three kinds of neighborhoods, each defined given an input \(x\), where the goal is to prove that all inputs in the neighborhood are classified as a label \(y\):

1. _Membership_, \(I(x)=\{x\}\): safety to label-only membership attacks [11, 35, 40].
2. _DP-Robustness_, \(I_{\epsilon}^{\infty}(x)=\{x^{\prime}\mid\|x^{\prime}-x\|_{\infty}\leq\epsilon\}\), where \(\epsilon=0.05\): safety to adversarial example attacks in a DP setting (similarly to [34, 46]).
3. _Sensitivity_, \(I_{i}^{S}(x)=\left\{x^{\prime}\mid\bigwedge_{j\in[d]\setminus\{i\}}x_{j}^{\prime}=x_{j}\right\}\), where the sensitive feature is \(i=\)_sex_ for Adult and Credit and \(i=\)_age_ for Bank: safety to attacks revealing sensitive attributes (like [22, 23]). We note that sensitivity is also known as _individual fairness_ [30].

For each dataset, we pick 100 inputs for each of these neighborhoods. We compare Sphynx to the naive but most accurate algorithm that trains all concrete networks and verifies the neighborhoods' robustness network-by-network. Its verifier is the MILP verifier [63] on which Sphynx's verifier builds. We let both algorithms run on all networks and neighborhoods. Table 1 reports the confusion matrix of Sphynx compared to the ground truth (computed by the naive algorithm):

* True Positive (TP): the number of neighborhoods that are LDCP and for which Sphynx returns that they are LDCP.
* True Negative (TN): the number of neighborhoods that are not LDCP and for which Sphynx returns that they are not LDCP.
* False Positive (FP): the number of neighborhoods that are not LDCP but for which Sphynx returns that they are LDCP. A false positive may happen because of the probabilistic abstraction, which may miss concrete networks that are not locally robust at the given neighborhood.
* False Negative (FN): the number of neighborhoods that are LDCP but for which Sphynx returns that they are not LDCP. A false negative may happen because the hyper-network may abstract spurious networks that are not locally robust at the given neighborhood (an over-approximation error).

Results show that Sphynx's average accuracy is 93.3% (TP+TN). The FP rate is 1.1% on average and at most 9%. The FN rate (i.e., the over-approximation error) is 5.5%. Results further show how private the different networks are. All networks are very safe to label-only membership attacks. Although Sphynx has several false negative results, it still allows the user to infer that the networks are very safe to such attacks. For DP-Robustness, results show that some networks are fairly robust (e.g., Bank \(2\times 50\) and \(2\times 100\)), while others are not (e.g., Bank \(4\times 50\)). For Sensitivity, results show that Sphynx enables the user to infer which networks are sensitive to the sex/age attribute (e.g., Credit \(4\times 50\)) and which networks are not (e.g., Credit \(2\times 100\)). An important characteristic of Sphynx's accuracy is that the false positive and false negative rates do not lead to inferring a wrong conclusion. For example, if the network is DP-robust, Sphynx proves LDCP (TP+FP) for significantly more DP-robustness neighborhoods than the number of DP-robustness neighborhoods for which it does not prove LDCP (TN+FN). Similarly, if the network is not DP-robust, Sphynx determines for significantly more DP-robustness neighborhoods that they are not LDCP (TN+FN) than the number of DP-robustness neighborhoods that it proves to be LDCP (TP+FP). Table 2 compares the execution time of Sphynx to the naive algorithm.
It shows the number of trained networks (the size of the dataset plus one for the naive algorithm, and \(K=|\mathtt{nets}|\) for Sphynx), the overall training time on a single GPU in hours, and the verification time of a single neighborhood (in hours for the naive algorithm, and in _seconds_ for Sphynx). Results show the two strengths of Sphynx: (1) it reduces the training time by 13.6x, because it requires training only 7% of the networks, and (2) it reduces the verification time by \(1.7\cdot 10^{4}\)x. Namely, Sphynx reduces the execution time by four orders of magnitude compared to the naive algorithm. The cost is a minor decrease in Sphynx's accuracy (\(<\)7%). That is, Sphynx trades off precision for scalability, like many local robustness verifiers do [49, 2, 70, 71, 7, 25, 58, 55, 57, 42].

\begin{table} \begin{tabular}{l l c c c c c c c c c c c c} \hline \hline Dataset & Network & \multicolumn{4}{c}{Membership} & \multicolumn{4}{c}{DP-Robustness} & \multicolumn{4}{c}{Sensitivity} \\ & & TP & TN & FP & FN & TP & TN & FP & FN & TP & TN & FP & FN \\ \hline Adult & \(2\times 50\) & 93 & 0 & 0 & 7 & 75 & 21 & 0 & 4 & 85 & 10 & 0 & 5 \\ & \(2\times 100\) & 82 & 1 & 0 & 17 & 54 & 39 & 0 & 7 & 75 & 10 & 0 & 15 \\ & \(4\times 50\) & 93 & 0 & 0 & 7 & 11 & 86 & 3 & 0 & 1 & 97 & 1 & 1 \\ \hline Bank & \(2\times 50\) & 100 & 0 & 0 & 0 & 100 & 0 & 0 & 0 & 100 & 0 & 0 & 0 \\ & \(2\times 100\) & 99 & 0 & 0 & 1 & 98 & 1 & 0 & 1 & 99 & 0 & 0 & 1 \\ & \(4\times 50\) & 81 & 0 & 0 & 19 & 22 & 62 & 9 & 7 & 2 & 71 & 8 & 19 \\ \hline Credit & \(2\times 50\) & 93 & 0 & 0 & 7 & 91 & 0 & 0 & 9 & 92 & 0 & 0 & 8 \\ & \(2\times 100\) & 100 & 0 & 0 & 0 & 91 & 2 & 2 & 5 & 100 & 0 & 0 & 0 \\ & \(4\times 50\) & 91 & 0 & 0 & 9 & 2 & 95 & 3 & 0 & 0 & 96 & 4 & 0 \\ \hline \hline \end{tabular} \end{table} Table 1: Sphynx’s confusion matrix.

#### Ablation study
We next study the importance of Sphynx's steps in predicting an interval hyper-network. Recall that PredInt predicts an interval for a given network parameter by running a normalization and the Yeo-Johnson transformation and then estimating the density of the transformed values with KDE. We compare PredInt to the interval abstraction, mapping a set of values into the interval defined by their minimum and maximum, and to three variants of PredInt: (1) _-Transform_: does not use the normalization or the transformation and directly estimates the density with KDE, (2) _-Normalize_: does not use the normalization but uses the transformation, (3) _-KDE_: transforms into a Gaussian distribution (as is common), and thus employs a normalization based on the \(L_{2}\) norm. We run all approaches on the \(2\times 100\) network, trained for Adult. Table 3 reports the following:

* Weight abstraction rate: the average percentage of weights whose interval provides a (sound) abstraction. Note that this metric is not related to Lemma 1, which guarantees that the value of a network parameter of a _single_ network is inside its predicted interval with probability \(1-\alpha^{\prime}\). This metric is more challenging: it measures how many values of a given network parameter, over \(|\mathcal{D}|+1\) networks, are inside the corresponding predicted interval.
* Miscoverage: measures how much the predicted intervals need to expand to be an abstraction. It is the average over all intervals' miscoverage.
The interval miscoverage is the ratio of the size of the difference between the optimal interval and the predicted interval to the size of the predicted interval.
* Overcoverage: measures how much wider than necessary the predicted intervals are. It is the geometric average over all intervals' overcoverage. The interval overcoverage is the ratio of the size of the join of the predicted interval and the optimal interval to the size of the optimal interval.

\begin{table} \begin{tabular}{l l l l l l l l} \hline \hline Dataset & Network & \multicolumn{2}{l}{Trained networks} & \multicolumn{2}{l}{GPU training time} & \multicolumn{2}{l}{Verification time} \\ & & naive & Sphynx & naive & Sphynx & naive & Sphynx \\ & & \(|\mathcal{D}|+1\) & \(K\) & hours & hours & hours & seconds \\ \hline Adult & \(2\times 50\) & 32562 & 2436 & 44.64 & 3.33 & 2.73 & 0.35 \\ & \(2\times 100\) & 32562 & 2024 & 46.56 & 2.89 & 5.24 & 0.72 \\ & \(4\times 50\) & 32562 & 1464 & 52.37 & 2.35 & 0.33 & 0.87 \\ \hline Bank & \(2\times 50\) & 31649 & 2364 & 41.04 & 3.06 & 2.73 & 0.35 \\ & \(2\times 100\) & 31649 & 2536 & 41.76 & 3.34 & 5.60 & 0.69 \\ & \(4\times 50\) & 31649 & 1996 & 49.41 & 3.11 & 1.3 & 1.19 \\ \hline Credit & \(2\times 50\) & 21001 & 2724 & 18.72 & 2.42 & 2.1 & 0.35 \\ & \(2\times 100\) & 21001 & 2234 & 19.21 & 2.04 & 3.6 & 0.64 \\ & \(4\times 50\) & 21001 & 1816 & 22.67 & 1.96 & 0.08 & 0.75 \\ \hline \hline \end{tabular} \end{table} Table 2: Training and verification time of Sphynx and the naive algorithm.

\begin{table} \begin{tabular}{l c c c c c} \hline \hline & Int. Abs. & -Transform & -Normalize & -KDE & PredInt \\ \hline Weight abstraction rate & 19.60 & 55.25 & 22.89 & 48.10 & 70.61 \\ \hline Miscoverage & 7.20 & 5.42 & 1.5\(\times 10^{-6}\) & 2.13 & 2.49 \\ \hline Overcoverage & 1 & 1.12 & 299539 & 1.62 & 2.80 \\ \hline \hline \end{tabular} \end{table} Table 3: PredInt vs. the interval abstraction and several variants.

Results show that the weight abstraction rate of the interval abstraction is very low, and that PredInt's rate is 3.6x higher. Results further show that PredInt obtains a very low miscoverage, 2.9x lower than that of the interval abstraction. As expected, these results come at a cost: a higher overcoverage. An exception is the interval abstraction which, by definition, has no overcoverage. Results further show that the combination of normalization, transformation, and KDE improves the weight abstraction rate of PredInt. Next, we study how well PredHyperNet's hyper-networks abstract an (unseen) concrete network with a probability \(\geq 0.9\). We compare to a variant that replaces PredInt by the interval abstraction, computed using the \(K\) concrete networks reported in Table 2. Table 4 reports the network abstraction rate (the average percentage of concrete networks abstracted by the hyper-network) and the overcoverage. Results show that PredHyperNet obtains a very high network abstraction rate, always above \(1-\alpha=0.9\). In contrast, the variant obtains a lower network abstraction rate with a very large variance. As before, the cost is the over-approximation error.

## 7 Related Work

#### Network abstraction
Our key idea is to abstract a set of concrete networks (seen and unseen) into an interval hyper-network. Several works rely on network abstraction to expedite verification of a single network.
The goal of the abstraction is to generate a smaller network that can be analyzed faster and that preserves soundness with respect to the original network.

\begin{table} \begin{tabular}{l c c c c c} \hline \hline Dataset & Network & \multicolumn{2}{c}{Network abstraction rate} & \multicolumn{2}{c}{Overcoverage} \\ & & Int. Abs. & Sphynx & Int. Abs. & Sphynx \\ \hline Adult & \(2\times 50\) & 77.04 & 94.55 & 1.00 & 4.52 \\ & \(2\times 100\) & 55.20 & 92.82 & 1.00 & 2.80 \\ & \(4\times 50\) & 58.24 & 92.22 & 1.00 & 2.83 \\ \hline Bank & \(2\times 50\) & 80.06 & 93.89 & 1.00 & 2.54 \\ & \(2\times 100\) & 73.21 & 90.13 & 1.00 & 3.21 \\ & \(4\times 50\) & 74.80 & 90.47 & 1.00 & 1.93 \\ \hline Credit & \(2\times 50\) & 84.07 & 95.98 & 1.00 & 15.07 \\ & \(2\times 100\) & 67.16 & 90.59 & 1.00 & 3.68 \\ & \(4\times 50\) & 73.14 & 93.19 & 1.00 & 3.53 \\ \hline \hline \end{tabular} \end{table} Table 4: PredHyperNet vs. the interval abstraction variant (using \(K\) networks).

One work proposes an abstraction-refinement approach that abstracts a network by splitting neurons into four types and then merging neurons of the same type [20]. Another work merges neurons similarly but chooses the neurons to merge by clustering [3]. Other works abstract a neural network by abstracting its network parameters with intervals [47] or other abstract domains [60]. In contrast, Sphynx abstracts a large set of networks, seen and unseen, and proves robustness for all of them.

#### Robustness verification
Sphynx extends a MILP verifier [63] to verify local robustness given a hyper-network. There are many local robustness verifiers. Existing verifiers leverage various techniques, e.g., over-approximation [49, 2, 70], linear relaxation [71, 7, 25, 58, 55, 57, 42], simplex [31, 32, 20], mixed-integer linear programming [63, 33, 59], and duality [14, 50]. A different line of work verifies robustness to small perturbations of the network parameters [72, 66]. These works assume the parameters' perturbations are confined to a small \(L_{\infty}\) \(\epsilon\)-ball and compute lower and upper bounds on the network parameters by linearly bounding their computations. In contrast, our network parameters are not confined to an \(\epsilon\)-ball, and our analysis is complete if the inputs are (or are processed to be) non-negative.

#### Adaptive data analysis
PredHyperNet relies on an iterative algorithm to predict the minimum and maximum of every network parameter. Adaptive data analysis deals with estimating statistics based on data that is obtained iteratively. Existing works focus on statistical queries computing the expectation of functions [17, 29, 15, 16, 21, 52, 68] or low-sensitivity queries [5].

## 8 Conclusion

We propose a privacy property for neural networks, called local differential classification privacy (LDCP), extending local robustness to the setting of differential privacy for black-box classifiers. We then present Sphynx, a verifier for determining whether a network is LDCP at a given neighborhood. Instead of training all networks and verifying local robustness network-by-network, Sphynx predicts an interval hyper-network, providing an abstraction with a high probability, from a small number of networks. To predict the intervals, Sphynx transforms the observed parameter values into a distribution given by KDE, using the Yeo-Johnson transformation. Sphynx then verifies LDCP at a neighborhood directly on the hyper-network, by extending a local robustness MILP verifier.
To mitigate the over-approximation error, we rely on preprocessing the network's inputs or on a lower bound for them. We evaluate Sphynx on data-sensitive datasets and show that by training only 7% of the networks, Sphynx predicts a hyper-network abstracting any concrete network with a probability of at least 0.9, obtaining 93% verification accuracy and reducing the verification time by \(1.7\cdot 10^{4}\)x.

#### 8.0.1 Acknowledgements
We thank the anonymous reviewers for their feedback. This research was supported by the Israel Science Foundation (grant No. 2605/20).
2309.03251
Temporal Inductive Path Neural Network for Temporal Knowledge Graph Reasoning
Temporal Knowledge Graph (TKG) is an extension of traditional Knowledge Graph (KG) that incorporates the dimension of time. Reasoning on TKGs is a crucial task that aims to predict future facts based on historical occurrences. The key challenge lies in uncovering structural dependencies within historical subgraphs and temporal patterns. Most existing approaches model TKGs relying on entity modeling, as nodes in the graph play a crucial role in knowledge representation. However, the real-world scenario often involves an extensive number of entities, with new entities emerging over time. This makes it challenging for entity-dependent methods to cope with extensive volumes of entities, and effectively handling newly emerging entities also becomes a significant challenge. Therefore, we propose Temporal Inductive Path Neural Network (TiPNN), which models historical information in an entity-independent perspective. Specifically, TiPNN adopts a unified graph, namely history temporal graph, to comprehensively capture and encapsulate information from history. Subsequently, we utilize the defined query-aware temporal paths on a history temporal graph to model historical path information related to queries for reasoning. Extensive experiments illustrate that the proposed model not only attains significant performance enhancements but also handles inductive settings, while additionally facilitating the provision of reasoning evidence through history temporal graphs.
Hao Dong, Pengyang Wang, Meng Xiao, Zhiyuan Ning, Pengfei Wang, Yuanchun Zhou
2023-09-06T17:37:40Z
http://arxiv.org/abs/2309.03251v3
# Temporal Inductive Path Neural Network for Temporal Knowledge Graph Reasoning

###### Abstract.
Temporal Knowledge Graph (TKG) is an extension of traditional Knowledge Graph (KG) that incorporates the dimension of time. Reasoning on TKGs is a crucial task that aims to predict future facts based on historical occurrences. The key challenge lies in uncovering structural dependencies within historical subgraphs and temporal patterns. Most existing approaches model TKGs relying on entity modeling, as nodes in the graph play a crucial role in knowledge representation. However, the real-world scenario often involves an extensive number of entities, with new entities emerging over time. This makes it challenging for entity-dependent methods to cope with extensive volumes of entities, and effectively handling newly emerging entities also becomes a significant challenge. Therefore, we propose the **T**emporal **I**nductive **P**ath **N**eural **N**etwork (TiPNN), which models historical information from an entity-independent perspective. Specifically, TiPNN adopts a unified graph, namely the history temporal graph, to comprehensively capture and encapsulate information from history. Subsequently, we utilize the defined query-aware temporal paths on the history temporal graph to model historical path information related to queries for reasoning. Extensive experiments illustrate that the proposed model not only attains significant performance enhancements but also handles inductive settings, while additionally facilitating the provision of reasoning evidence through history temporal graphs.

**Computing methodologies \(\rightarrow\) Temporal reasoning**

## 1. Introduction

Knowledge Graphs (KGs) are powerful representations of structured information that capture a vast array of real-world facts. They consist of nodes, which represent entities such as people, places, and events, and edges that define the relationships between these entities. These relationships are typically represented as triples in the form of \((s,r,o)\), comprising a subject, a relation, and an object, such as _(Tesla, Founded by, Elon Musk)_. A Temporal Knowledge Graph (TKG) is an extension of a traditional KG that incorporates the dimension of time. In a TKG, facts are annotated with timestamps or time intervals to indicate when they are or were valid. Different from a KG, each fact in a TKG is represented as a quadruple with an added timestamp feature, denoted as \((s,r,o,t)\), including a subject, a relation, an object, and a timestamp. For example, _(Albert Einstein, Born in, Germany, 03/14/1879)_. The timestamp indicates the specific date or time when this event occurred, providing temporal information to the graph. This unique representation empowers TKGs to capture the dynamics of multi-relational graphs over time, exhibiting temporal variations. As a result, TKGs have found widespread applications in diverse domains, such as social network analysis [51][13] and event prediction [19][2][50] (e.g., disease outbreaks, natural disasters, and political shifts). These applications leverage the temporal information embedded in TKGs to make informed decisions and predictions based on historical knowledge. Among various tasks, TKG reasoning is an emerging research topic that focuses on inferring missing facts based on known ones. Basically, reasoning over TKGs involves two primary settings: interpolation and extrapolation [20].
Given the timestamp scope \([0,T]\) of a TKG, where \(T\) represents the maximum timestamp, in the interpolation setting the reasoning process is focused on the time range from \(0\) to \(T\), i.e., completing missing facts within the range of past timestamps based on the given known data [14]. On the other hand, in the extrapolation setting, the reasoning timestamps extend beyond \(T\), focusing on predicting facts in the future based on the observed historical knowledge encoded in the TKG [27][8][61]. This study particularly emphasizes the extrapolation setting, which infers future missing facts. Existing literature on TKG reasoning mainly focuses on exploring temporal features and structural information to capture complicated variations and logical connections between entities and relations. For example, RE-NET [20] and RE-GCN [27] are representative works that leverage RNN-based and GNN-based approaches to learn both temporal dependencies and structural features from historical sequences in TKGs, serving as the foundation for predicting future facts. xERTE [15] utilizes the entity connection patterns from different temporal subgraphs in TKGs to construct an inference graph with time-aware features, enabling graph reasoning. TITer [41] adopts a different approach by fusing historical subgraph sequences into a unified graph through the addition of extra edges. It employs a reinforcement learning framework and designs a reward function to guide the model learning. These methods rely on entity modeling, as nodes in the graph play a crucial role in knowledge representation, and entity modeling is indispensable, since entities serve as important hubs and key components in the graph. However, real-world scenarios often involve an extensive number of entities, with new entities emerging over time. This makes it challenging for entity-dependent methods to cope with extensive volumes of entities, and effectively handling newly emerging entities also becomes a significant challenge. To address this, in our previous work [8] we introduced an innovative _entity-independent_ modeling approach for temporal knowledge graphs with an evolutionary perspective, referred to as DaeMon. The entity-independent modeling approach brings several benefits: (i) graph representation modeling is independent of entities, maintaining stable efficiency even for large-scale datasets with numerous entities; in other words, the memory occupation is independent of the number of entity categories; (ii) the entity-independent nature makes the model insensitive to entity-specific features; hence, when new entities emerge at future timestamps, the model can still handle these unseen nodes, showcasing robust generalization and reasoning capabilities. In essence, DaeMon focuses on learning query-aware latent path representations, incorporating a passing strategy between adjacent timestamps to control the path information flow toward later ones without explicitly modeling individual entities. Specifically, DaeMon introduces the concept of temporal paths to represent the logical paths between a query's head entity and candidate tail entities within the sequence of historical subgraphs. It utilizes path memory units to store aggregated representations of temporal paths along the timeline. The model further employs a path aggregation unit to capture local path information within each subgraph. Finally, DaeMon adopts a memory passing strategy to facilitate cross-temporal information propagation.
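To make the entity-independent principle concrete, the toy sketch below (our illustration with hypothetical names, not DaeMon's or TiPNN's actual code) encodes a path purely from the embeddings of its relations, so the identities of the entities along the path never enter the computation:

```python
import torch

dim, num_relations = 16, 10
relation_emb = torch.nn.Embedding(num_relations, dim)  # only relations get embeddings

def encode_path(relation_ids):
    """Encode a path by merging the embeddings of its relations element-wise;
    entity identities along the path never enter the computation."""
    h = torch.ones(dim)
    for rid in relation_ids:
        h = h * relation_emb(torch.tensor(rid))
    return h  # an unseen entity on this path changes nothing
```

Because the encoding depends only on relation types, the same parameters apply unchanged when new entities appear at future timestamps.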
Here, we provide a generalized overview of inference based on temporal path modeling, as illustrated in Figure 1. For a given inference query, the method starts from this query and collects the paths between the head node and candidate tail nodes in the historical context. Then, it utilizes a query-aware temporal method to represent the aggregation of temporal paths, with temporal information, for reasoning. For example, in Figure 1, the red triangular nodes and red circular nodes represent the head entity and candidate tail entity of the query at timestamp \(T+1\), respectively. The method iteratively learns the path information between the red triangular and red circular nodes within the historical context. The aggregation of path information corresponding to the query is then utilized to infer the possibility of potential future interactions. Besides, during the learning of path information, the method does not focus on which intermediary nodes (i.e., the blue nodes in Figure 1) are present in the path, but rather employs the relation representations along the path to express it, showcasing an entity-independent characteristic.

Figure 1. A generalized overview of inference based on temporal paths modeling.

While DaeMon benefits from this entity-independent modeling approach, it still exhibits certain limitations. First, during the learning of historical information, DaeMon independently performs graph learning on local subgraphs and then connects them through the memory passing strategy to obtain the final query-aware path features for future fact reasoning. This process requires graph learning on each subgraph separately, leading to difficulties in modeling complex temporal characteristics and a significant increase in time complexity as the historical length grows. Second, the memory passing strategy is adopted to model temporal features in historical sequences, controlling the information flow from one timestamp to the next to achieve temporal evolution modeling. However, this temporal modeling approach is unidirectional, which not only gives rise to the long-term dependency problem but also neglects relative temporal features between timestamps. Third, DaeMon defines temporal paths as virtually connected paths across the sequence of subgraphs. However, in practice, it does not directly learn the representation of these virtual paths. Instead, it independently learns several edges on each timestamp's subgraph and then obtains the path feature by fusing the edge information through the memory passing strategy. This independent learning approach, without real physical paths, creates challenges in providing relevant reasoning evidence during the inference stage, which makes it difficult to present interpretable explanations to users, hindering their understanding of the reasoning process. To overcome the above challenges, we propose a model, the Temporal Inductive Path Neural Network (TiPNN). Specifically, we introduce the concept of _history temporal graphs_ as a replacement for the original historical subgraph sequences to mitigate the complexity of learning historical information independently on multiple subgraphs. It allows us to model historical information more efficiently by combining it into a single unified graph, thereby enabling the discovery of complex temporal characteristics and reducing the computational burden during model inference.
Furthermore, we redefine the notion of _temporal paths_ as logical semantic paths between the query entity and candidate entities in the history temporal graph, and design _query-aware temporal path processing_ to model the path features on the single unified graph in an entity-independent manner for the final reasoning. This enhancement enables the model to comprehensively learn both the temporal and structural features present in the history temporal graph. The adoption of the entity-independent modeling approach allows the model to effectively handle the inductive setting on the history temporal graph. By jointly modeling the information within and between different timestamps in a single unified graph, the model captures a more intuitive and holistic view of the temporal reasoning logic, thus facilitating user understanding of the reasoning process; it can even provide interpretable explanations for the reasoning results, meanwhile offering new insights for this task. In essence, this work revolves around capturing query-aware temporal path features on the history temporal graphs to enable the inference of missing future facts. Our main contributions are as follows:

* To holistically capture historical connectivity patterns, we introduce a unified graph, namely the history temporal graph, to retain complete features from the historical context. The history temporal graph integrates the relationships of entities, the timestamps of historical facts, and the temporal characteristics among historical facts.
* We define the concept of temporal paths. Subsequently, we propose the Temporal Inductive Path Neural Network (TiPNN) for the task of temporal extrapolated reasoning over TKGs. Our approach models the temporal path features corresponding to reasoning queries by considering temporal information within and between different timestamps, along with the compositional logical rules of historical subgraph sequences.
* Extensive experiments demonstrate that our model outperforms the existing state-of-the-art results. We analyze the impact and effectiveness of individual components in TiPNN. Additionally, we construct datasets and conduct validation experiments for inductive reasoning tasks on temporal knowledge graphs. Furthermore, we present reasoning evidence demonstrations and analytical experiments based on history temporal graph inference.

## 2. Related Work

We divide the related work into two categories: (i) knowledge graph reasoning methods, and (ii) temporal knowledge graph reasoning methods.

### Knowledge Graph Reasoning

In recent years, knowledge graph representation learning has witnessed significant developments, aiming to embed entities and relations into continuous vector spaces while capturing their semantics (Beng et al., 2015)(Wang et al., 2016)(Wang et al., 2017). These methods can be broadly categorized into three categories: translation-based methods, semantic matching methods, and neural network-based methods. Translation-based methods include TransE (Beng et al., 2015), which considers relations as translations from subject entities to object entities in the vector space. Building upon TransE, several improved methods have been proposed, including TransH (Wang et al., 2016), TransR (Wang et al., 2017), TransD (Wang et al., 2018) and TransG (Wang et al., 2018), which introduce various strategies to enhance the modeling of entity-relation interactions.
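As a quick illustration of the translation principle just described (our sketch only, not from any cited implementation), a TransE-style score treats a relation embedding as a translation vector and ranks a triple by how closely the translated subject lands on the object:

```python
import torch

def transe_score(s_emb, r_emb, o_emb, p=1):
    """Plausibility of (s, r, o): a smaller ||s + r - o||_p means more plausible."""
    return torch.norm(s_emb + r_emb - o_emb, p=p)
```

During training, such scores are typically pushed lower for observed triples than for corrupted ones via a margin-based loss.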
For semantic matching methods, RESCAL (Wang et al., 2016) proposes a tensor-based relational learning approach capable of collective learning. DistMult (Wang et al., 2018) simplifies RESCAL using diagonal matrices for efficiency, while HolE (Hoi et al., 2017) and ComplEx (Wang et al., 2018) extend the representation capacity by incorporating higher-order interactions and complex-valued embeddings (Wang et al., 2018), respectively. Neural network-based methods have also gained attention, with approaches like GCN (Wang et al., 2016), R-GCN (Wang et al., 2017), WGCN (Wang et al., 2018), VR-GCN (Wang et al., 2018), and CompGCN (Wang et al., 2018), which integrate content and structural features within the graph, allowing for joint embedding of nodes and relations in a relational graph (Sutton et al., 2015)(Sutton et al., 2016). These methods effectively capture complex patterns and structural dependencies in KGs, pushing the boundary of KG representation learning. Despite the successes of existing methods in reasoning with static KGs, they fall short when it comes to predicting temporal facts due to the lack of temporal modeling.

### Temporal Knowledge Graph Reasoning

An increasing number of studies have started focusing on the representation learning of temporal knowledge graphs, aiming to consider the temporal ordering of facts and capture the temporal knowledge of events. Typically, TKG reasoning methods can be categorized into two main classes based on the range of query timestamps: interpolation reasoning and extrapolation reasoning. For interpolation reasoning, the objective is to infer missing facts from the past within the observed data (Dong et al., 2015)(Chen et al., 2016)(Chen et al., 2017). TTransE (Sutton et al., 2016) extends TransE (Chen et al., 2016) by incorporating relations and timestamps as translation parameters between entities. TA-TransE (Sutton et al., 2016) integrates the timestamps corresponding to fact occurrences into the relation representations, while HyTE (Dong et al., 2015) associates timestamps with corresponding hyperplanes. Additionally, based on ComplEx (Sutton et al., 2016), TNT-ComplEx (Sutton et al., 2016) adopts a unique approach in which the TKG is treated as a 4th-order tensor and the representations are learned through canonical decomposition. However, these methods are not effectively applicable to predicting future facts (Kang et al., 2016)(Kang et al., 2016)(Sutton et al., 2016). In contrast, the extrapolation setting, which this work focuses on, aims to predict facts at future timestamps based on historical TKG sequences. Know-Evolve (Konon et al., 2016) uses a temporal point process to represent facts in the continuous time domain. However, it falls short of capturing long-term dependencies. Similarly, DyREP (Konon et al., 2017) posits representation learning as a latent mediation process and captures the changes of the observed processes. CyGNet (Konon et al., 2018) utilizes a copy-generation mechanism that collects frequently repeated events with the same subject entities and relations as the query for inference. RE-NET (Kon et al., 2018) employs a GRU and a GCN to capture the temporal and structural dependencies in a sequential manner. xERTE (Konon et al., 2018) offers an explainable model by searching the subgraph with attentive propagation for its predictions. RE-GCN (Sutton et al., 2016) considers both structural dependencies and static properties of entities at the same time, modeling in an evolving manner.
TANGO (Konon et al., 2018) designs neural ordinary differential equations to represent and model TKGs for continuous-time reasoning. In addition, reinforcement learning methods, such as TITer (Kang et al., 2016), are used to uncover the facts most related to the query. Most existing methods are primarily transductive in nature, focusing only on modeling graph features based on entities while overlooking the intrinsic temporal logical rules within TKG reasoning. As a result, there is still a need for more efficient methods that can effectively manage the robustness and scalability requirements of real-world situations.

## 3. Preliminaries

In this section, we provide the essential background knowledge and formally define the context. We also introduce the basics of the temporal knowledge graph reasoning task. The essential mathematical symbols and their corresponding descriptions for TiPNN are shown in Table 1. It is important to clarify beforehand that we denote vector representations in the upcoming context using **bold** items.

### Background of Temporal Knowledge Graph

A temporal knowledge graph \(\mathcal{G}\) is essentially a multi-relational, directed graph that incorporates timestamped edges connecting various entities. It can be formalized as a succession of static subgraphs arranged in chronological order, i.e., \(\mathcal{G}=\{G_{1},G_{2},...,G_{t},...\}\). A subgraph in \(\mathcal{G}\) can be denoted as \(G_{t}=(\mathcal{V},\mathcal{R},\mathcal{E}_{t})\) corresponding to the snapshot at timestamp \(t\), where \(\mathcal{V}\) is the set of entities, \(\mathcal{R}\) is the set of relation types, and \(\mathcal{E}_{t}\) is the set of fact edges at timestamp \(t\). Each element in \(\mathcal{E}_{t}\) is a timestamped quadruple \((subject,relation,object,timestamp)\), which can be denoted as \((s,r,o,t)\) or \((s_{t},r_{t},o_{t})\), describing a relation type \(r\in\mathcal{R}\) that occurs between subject entity \(s\in\mathcal{V}\) and object entity \(o\in\mathcal{V}\) at timestamp \(t\in\mathcal{T}\), where \(\mathcal{T}\) denotes the finite set of timestamps.

### Formulation of Reasoning Task

The temporal knowledge graph reasoning task involves predicting the missing entity by answering a query like \((s,r,?,t_{q})\) using historical known facts \(\{(s,r,o,t_{i})|t_{i}<t_{q}\}\). Note that the facts in the query time period \(t_{q}\) are unknown in this task setting. For the sake of generalization, we assume that the fact prediction at a future time depends on a sequence of historical subgraphs from the closest \(m\) timestamps. That is, in predicting the missing fact at timestamp \(t+1\), we consider the historical subgraph sequence \(\{G_{t-m+1},...,G_{t}\}\) for the inference. In addition, given an object entity query \((s,r,?,t_{q})\) at a future timestamp \(t_{q}\), we consider all entities in the entity set \(\mathcal{V}\) as candidates for the object entity reasoning. The final prediction is obtained after scoring and ranking all the candidates with a scoring function. When predicting the subject entity, i.e., \((?,r,o,t_{q})\), we can transform the problem into the object entity prediction form \((o,r^{-1},?,t_{q})\). Therefore, we also insert the corresponding reverse edge \((o,r^{-1},s,t)\) when processing the history graph. Without loss of generality, the later sections are presented in terms of object entity prediction.

## 4. Methodology

In this section, we introduce the proposed model, the **T**emporal **I**nductive **P**ath **N**eural **N**etwork (TiPNN).
We start with a model overview and then discuss each part of the model, as well as its training and reasoning process, in detail.

\begin{table} \begin{tabular}{l l} \hline \hline Notations & Descriptions \\ \hline \(\mathcal{G}\) & a temporal knowledge graph \\ \(G_{t}\) & a subgraph corresponding to the snapshot at timestamp \(t\) \\ \(\mathcal{V},\mathcal{R},\mathcal{T}\) & the finite sets of entities, relation types and timestamps in the temporal knowledge graph \\ \(\mathcal{E}_{t}\) & the set of fact edges at timestamp \(t\) \\ \((s,r,o,t)\) & a quadruple (fact edge) at timestamp \(t\) \\ \((s,r,?,t+1)\) & a query with the missing object that is to be predicted at timestamp \(t+1\) \\ \hline \(\hat{G}\) & a history temporal graph \\ \(r_{\tau}\) & a temporal relation in the history temporal graph \\ \((s,r_{\tau},o)\) & a temporal edge with time attribute \(\tau\) attached in the history temporal graph \\ \(m\) & the length of the history used for reasoning \\ \(\omega\) & the number of temporal path aggregation layers \\ \(\mathbf{H},\mathbf{h}\) & the representations of temporal paths (and a path) \\ \(\mathbf{R}\) & a learnable representation of relation types in \(\mathcal{R}\) \\ \(\Psi_{r}\) & the query relation \(r\)-aware basic static embedding of a temporal edge \\ \(\Upsilon\) & the temporal embedding of a temporal edge \\ \(\mathbf{w}_{r}\) & the query-aware temporal representation of a temporal edge \\ \(d\) & the dimension of embeddings \\ \hline \hline \end{tabular} \end{table} Table 1. Notations and Descriptions.

### Model Overview

The main idea of TiPNN is to model the compositional logical rules on multi-relational temporal knowledge graphs. By integrating historical subgraph information and capturing the correlated patterns of historical facts, we can achieve predictions of missing facts at future timestamps. Given a query \((s,r,?,t+1)\) at future timestamp \(t+1\), our focus is on utilizing historical knowledge derived from \(\{G_{t-m+1},...,G_{t}\}\) to model the connected semantic information between the query subject \(s\) and all candidate objects \(o\in\mathcal{V}\) in history for answering the query. Each subgraph of a temporal knowledge graph describes the factual information that occurred at its particular timestamp. These subgraphs are structurally independent of each other, as there are no connections between the node identifiers within them. Moreover, the timestamps are independently and discretely distributed, which makes it challenging to model the historical subgraphs in a connected manner. To achieve this, we construct a logically connected graph, named the _History Temporal Graph_, to replace the originally independent historical subgraphs, allowing a more direct approach to modeling the factual features of previous timestamps. By utilizing connected relation features and relevant paths associated with the query, the model can capture semantic connections between nodes and thus learn potential temporal linking logical rules. Specifically, for a given query subject entity, most candidate object entities are (directly or indirectly) connected to it in a logically connected history temporal graph. Therefore, the _Temporal Path_ is proposed to comprehensively capture the connected edges between the query subject entity and other entities within the history temporal graph in a query-aware manner.
By explicitly learning the path information between entities on the history temporal graph, the query-aware representation of the history temporal path between the query subject and object candidates can be obtained. Based on the learned path features, the inference of future missing facts can be made with a final score function. The high-level idea of this approach is to induce the connection patterns and semantic correlations of the historical context. We introduce the construction of the history temporal graph in Section 4.2 and discuss the formulation of temporal paths in Section 4.3. Then we present the details of query-aware temporal path processing in Section 4.4. Additionally, we describe the scoring and loss function in Section 4.5, and finally analyze the complexity in Section 4.6.

### History Temporal Graph Construction

To comprehensively capture the connectivity patterns between entities and the complex temporal characteristics in the historical subgraphs, we construct a _history temporal graph_. One straightforward approach is to ignore the temporal information within the historical subgraphs and directly use all triplets from the subgraph sequence to form a complete historical graph. However, it is essential to retain the complete information of the historical subgraph sequence. The timestamps in the historical subgraph sequence are crucial for inferring missing facts in the future. The representations of entities and relationships can carry different semantic information across different timestamps in the historical subgraphs. Additionally, there is a temporal ordering between historical subgraphs of different time periods, which further contributes to the inference process. Therefore, we incorporate the timestamps of historical subgraphs into the history temporal graph to preserve, as much as possible, the integrity of the original historical subgraph sequence, which allows us to retain the essential temporal context needed for inferring missing facts effectively. Specifically, given a query \((s,r,?,t+1)\) at future time \(t+1\), we use its corresponding historical subgraph sequence \(\{G_{t-m+1},...,G_{t}\}\) from the previous \(m\) timestamps to generate the history temporal graph \(\hat{G}_{t-m+1:t}\) according to the form specified in Equation 1. Figure 2 illustrates the construction approach of the history temporal graph. \[\hat{G}_{t-m+1:t}\leftarrow\left\{(s,r_{\tau},o)\,\Big{|}\,(s,r,o)\in G_{\tau},\ \tau\in[t-m+1,t]\right\} \tag{1}\] Note that \(r_{\tau}\) denotes a _temporal relation_ in the history temporal graph \(\hat{G}_{t-m+1:t}\), representing a relation type \(r\) with time attribute \(\tau\) attached. That is, for a temporal edge \((s,r_{\tau},o)\in\hat{G}_{t-m+1:t}\), it means that \((s,r,o)\) occurred in \(G_{\tau}\). For simplicity, we abbreviate the history temporal graph \(\hat{G}_{t-m+1:t}\) as \(\hat{G}_{<t+1}\) for a query at timestamp \(t+1\). Attaching the timestamp feature to the relation can be understood as expanding the set of relation types into a combined form of _relation-timestamp_, which allows us to capture the temporal aspect of relations among the entities and enables the modeling of temporal paths in the history temporal graph.

### Temporal Path Formulation

The prediction of future missing facts relies on the development trends of historical facts. To capture the linkages between the query subject entity and object candidate entities on the history temporal graph, the concept of the temporal path is put forward.
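Before formalizing that concept, the following minimal sketch (our illustration; the function and data layout are hypothetical) shows the history temporal graph construction of Equation 1, tagging each historical edge with its timestamp:

```python
def build_history_temporal_graph(subgraphs, t, m):
    """subgraphs: dict mapping timestamp tau -> iterable of (s, r, o) triples.
    Returns the history temporal graph over the window [t-m+1, t] as a set of
    temporal edges (s, (r, tau), o), following Equation (1)."""
    htg = set()
    for tau in range(t - m + 1, t + 1):
        for (s, r, o) in subgraphs.get(tau, ()):
            htg.add((s, (r, tau), o))  # the relation-timestamp pair acts as r_tau
    return htg
```

The pair `(r, tau)` plays the role of the temporal relation \(r_{\tau}\), so the expanded relation vocabulary carries the timing of every historical fact.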
Definition 1 (Temporal Path). _A temporal path is a logical path that aggregates the semantics of all connected paths in the history temporal graph between a head entity and a tail entity through arithmetic logical operations._

The aggregated temporal path can be used to simultaneously represent the temporal information and the comprehensive path information from the head entity to the tail entity. This enables us to obtain a comprehensive query-aware paired representation for the future fact reasoning task by aggregating the information from the various paths leading from the query subject node to the object candidate nodes in the history temporal graph. Note that it is essential to consider the relation type information from the query while learning paired representations. In other words, for a query \((s,r,?,t+1)\), we should also embed the relation type \(r\) into the paired representation (i.e., the path representation). This ensures that the relation type is incorporated into the learning process, enhancing the overall modeling of the query and enabling more accurate and context-aware inference of missing facts; the details will be discussed in Section 4.4.

Figure 2. Example of constructing the history temporal graph \(\hat{G}_{t-2:t}\) with \(m=3\). For illustrative purposes, we use an undirected graph to demonstrate the construction method and omit the relationship types of edges in the historical subgraph sequence and the constructed history temporal graph. The time labels attached to the edges in the history temporal graph represent the timestamps of the corresponding edges in the historical subgraph sequence.

We denote the representation of temporal paths in the history temporal graph \(\hat{G}_{<t+1}\) as \(\mathbf{H}_{(s,r)\rightarrow\mathcal{V}}^{<t+1}\in\mathbb{R}^{|\mathcal{V}|\times d}\) corresponding to the query \((s,r,?)\) at future timestamp \(t+1\), where \(|\mathcal{V}|\) is the cardinality of the object candidate set and \(d\) is the dimension of the temporal path embedding. That is, \(\mathbf{H}_{(s,r)\rightarrow\mathcal{V}}^{<t+1}\) describes the representations of temporal paths from the query subject entity \(s\) to all object candidate entities in \(\mathcal{V}\), considering the specific query \((s,r,?)\). Specifically, each item in \(\mathbf{H}_{(s,r)\rightarrow\mathcal{V}}^{<t+1}\), which is denoted as \(\mathbf{h}_{(s,r)\rightarrow o}^{<t+1}\in\mathbb{R}^{d}\), represents a specific temporal path feature that learns the representation of the temporal path from subject \(s\) to a particular candidate object \(o\), where \(o\in\mathcal{V}\). Given a query \((s,r,?,t+1)\), we take an object candidate \(o\in\mathcal{V}\) as an example. The temporal path between subject entity \(s\) and \(o\) that we consider is the aggregated form of all paths that start with \(s\) and end with \(o\) in the topology of the history temporal graph \(\hat{G}_{<t+1}\).
Formally, we describe \(\mathbf{h}_{(s,r)\rightarrow o}^{<t+1}\) as follows: \[\mathbf{h}_{(s,r)\rightarrow o}^{<t+1}=\bigoplus\mathcal{P}_{s\rightarrow o}^{<t+1}=\mathbf{p}_{0}\oplus\mathbf{p}_{1}\oplus\cdots\oplus\mathbf{p}_{|\mathcal{P}_{s\rightarrow o}^{<t+1}|}\Big{|}_{\mathbf{p}_{k}\in\mathcal{P}_{s\rightarrow o}^{<t+1}}, \tag{2}\] where \(\oplus\) denotes the path aggregation operator that aggregates path features between query subject \(s\) and object candidate \(o\), which will be introduced in the following section; \(\mathcal{P}_{s\rightarrow o}^{<t+1}\) denotes the set of paths from \(s\) to \(o\) in the history temporal graph \(\hat{G}_{<t+1}\), and \(|\mathcal{P}_{s\rightarrow o}^{<t+1}|\) denotes the cardinality of the path set. A path feature \(\mathbf{p}_{k}\in\mathcal{P}_{s\rightarrow o}^{<t+1}\) is defined as follows when the path consists of the edges \((e_{1}\rightarrow e_{2}\rightarrow\cdots\rightarrow e_{|\mathbf{p}_{k}|})\): \[\mathbf{p}_{k}=\mathbf{w}_{r}(e_{1})\otimes\mathbf{w}_{r}(e_{2})\otimes\cdots\otimes\mathbf{w}_{r}(e_{|\mathbf{p}_{k}|}), \tag{3}\] where \(e_{1},\ldots,e_{|\mathbf{p}_{k}|}\) denote the temporal edges in path \(\mathbf{p}_{k}\), and \(|\mathbf{p}_{k}|\) denotes the number of edges in path \(\mathbf{p}_{k}\); \(\mathbf{w}_{r}(e_{*})\) is the query-aware temporal representation of edge \(e_{*}\), and \(\otimes\) denotes the operator that merges temporal edge information within the path.

### Query-aware Temporal Path Processing

In this section, we discuss how to model temporal path representations for queries at future timestamps based on the history temporal graph, as illustrated in Figure 3. This aids in predicting missing facts by encoding connection patterns and temporal information into the temporal relation edges between entities, enabling practical inference of future facts.

Figure 3. An illustrative diagram of the proposed TiPNN model for query-aware temporal path processing.

#### 4.4.1. Temporal Path Aggregation Layer

We propose a temporal path aggregation layer for comprehensively aggregating the connected temporal relation edges between the query subject entity \(s\) and all candidate object entities in \(\mathcal{V}\) within the history temporal graph. Note that a temporal path representation is specific to a particular query. Different queries can lead to the same path, but with different representations (e.g., when two queries share the same subject entity but have different query relations). This design ensures that each path representation is context-dependent and takes into account the query's unique characteristics, such as the query subject entity and query relation type, which can influence the meaning and relevance of the path in different contexts.
Specifically, given a query \((s,r,?)\) at future timestamp \(t+1\), based on the structural and temporal features of historical facts, the representation of the temporal path \(\mathbf{h}_{(s,r)\rightarrow o}^{<t+1}\) in the history temporal graph \(\hat{G}_{<t+1}\) (i.e., the aggregated feature of the path set \(\mathcal{P}_{s\rightarrow o}^{<t+1}\)) is learned by the query-aware \(\omega\)-layer aggregation neural network. That is, \(\mathbf{H}_{(s,r)\rightarrow\mathcal{V}}^{<t+1}\) will finally be updated to represent the temporal paths from query subject \(s\) to all candidate objects \(o\in\mathcal{V}\) within a limited number of hops, after the \(\omega\)-th aggregation iteration. For the sake of simplicity, in the following text we use the query \((s,r,?,t+1)\) as an example, where \(s\) is the query subject entity, \(r\) is the query relation type, and \(t+1\) is the timestamp of the future fact to be predicted. We omit the superscript (and subscript) \(<t+1\) and the indicator of the query subject and relation pair \((s,r)\) in the notation. \(\mathbf{H}_{\mathcal{V}}^{l}\) is used to denote the status of the temporal path representation at the \(l\)-th iteration, and \(\mathbf{h}_{o}^{l}\in\mathbf{H}_{\mathcal{V}}^{l}\) denotes the status of candidate \(o\in\mathcal{V}\) at the \(l\)-th iteration, where \(l\in[0,\omega]\). It should be stressed that \(\mathbf{H}_{\mathcal{V}}^{l}\) (or \(\mathbf{h}_{o}^{l}\)) still describes the representation of temporal path(s), rather than node representations. A temporal path is formed by logically aggregating multiple real paths existing in the history temporal graph. Each path consists of multiple temporal edges. Although a temporal edge exists in the form of _relation-timestamp_, it is still based on a specific relation \(r\in\mathcal{R}\). Therefore, we initialize a trainable representation for the relation set, which serves as a foundational feature when processing temporal edge features during temporal path aggregation with query awareness. Here, we introduce the learnable relation representation and the initialization of the temporal path representation.

* **Learnable Relation Representation** At the very beginning of the processing, we initialize the representations of all relation types in the set \(\mathcal{R}\) with a learnable parameter \(\mathbf{R}\in\mathbb{R}^{|\mathcal{R}|\times d}\), where \(|\mathcal{R}|\) is the cardinality of the relation type set. It is essential to note that the learnable relation type representation \(\mathbf{R}\) is shared throughout the entire model and is not independent between time step predictions, which allows the model to leverage the learned relation representations consistently across different time points and queries, supporting better generalization and inference capabilities.
* **Temporal Path Initialization** When \(l=0\), \(\mathbf{H}_{\mathcal{V}}^{0}\) denotes the initial status that prepares for the subsequent temporal path iterations. Different from common GNN-based approaches and the previous model, we initialize the temporal path feature of \(o\in\mathcal{V}\) as the query relation representation \(\mathbf{r}\in\mathbf{R}\) only when \(o\) is the same as the query subject entity \(s\), and as a zero embedding otherwise.
Formally, for the query \((s,r,?)\), any candidate \(o\in\mathcal{V}\), and the query relation representation \(\mathbf{r}\), the temporal path representation \(\mathbf{h}_{o}^{0}\in\mathbf{H}_{\mathcal{V}}^{0}\) is initialized following \[\mathbf{h}_{o}^{0}\leftarrow\begin{cases}\mathbf{r}&\text{if }o=s,\\ \vec{0}&\text{if }o\neq s.\end{cases} \tag{4}\] This initialization strategy ensures that the initial state is contextually sensitive to the given query, starting at the query subject with the query relation feature and providing a foundation for subsequent iterations to capture and model the temporal paths. With the temporal path representations initialized, we now introduce the iterative aggregation process of the temporal path. Taking \(\mathbf{h}_{o}^{l}\) as an example, \[\mathbf{h}_{o}^{l}=\textsc{Agg}\left(\left\{\textsc{Tmsg}\big(\mathbf{h}_{z}^{l-1},\mathbf{w}_{r}(z,p_{\tau},o)\big)\Big|(z,p_{\tau},o)\in\hat{G}_{<t+1}\right\}\right), \tag{5}\] where \(\textsc{Agg}(\cdot)\) and \(\textsc{Tmsg}(\cdot)\) are the aggregation and temporal edge merging functions, corresponding to the operators in Equations 2 and 3, respectively, which will be introduced in the following section; \(\mathbf{w}_{r}\in\mathbb{R}^{d}\) denotes the query-aware temporal representation of a temporal edge type; and \((z,p_{\tau},o)\in\hat{G}_{<t+1}\) is a temporal edge in the history temporal graph \(\hat{G}_{<t+1}\), indicating that temporal relation \(p_{\tau}\) occurs between an entity \(z\) and candidate entity \(o\) at the historical time \(\tau\).

#### 4.4.2. Aggregation Operating

Equation 5 describes the iterative aggregation process of \(\mathbf{h}_{o}\) in the \(l\)-th layer. Here we provide explanations of the temporal edge merging function \(\textsc{Tmsg}(\cdot)\) and the path aggregation function \(\textsc{Agg}(\cdot)\), respectively. Intuitively, the \(\textsc{Tmsg}(\cdot)\) function is responsible for iteratively searching for paths with a candidate object entity \(o\) as the endpoint, continuously propagating information along the temporal edges. The \(\textsc{Agg}(\cdot)\) function, in turn, aggregates the information obtained from each iteration along the temporal edges. Due to the initial state of \(\mathbf{H}_{\mathcal{V}}^{0}\), where only \(\mathbf{h}_{s}^{0}\) is given initial information, after \(\omega\) iterations of aggregation \(\mathbf{h}_{o}^{\omega}\) captures the comprehensive semantic connection and temporal dependency information of multiple paths from subject node \(s\) to object candidate \(o\) in the history temporal graph \(\hat{G}_{<t+1}\).

* **Temporal Edge Merging Function \(\textsc{Tmsg}(\cdot)\)** It takes the current temporal path representation \(\mathbf{h}_{z}^{l-1}\), the query relation representation \(\mathbf{r}\), and the temporal edge information as input, and calculates the updated potential path information for the candidate object \(o\). Similar to message passing in knowledge graphs, we adopt a vectorized multiplication method [57] to achieve feature propagation in the history temporal graph, following \[\textsc{Tmsg}\big(\mathbf{h}_{z}^{l-1},\mathbf{w}_{r}(z,p_{\tau},o)\big)=\mathbf{h}_{z}^{l-1}\otimes\mathbf{w}_{r}(z,p_{\tau},o), \tag{6}\] where the operator \(\otimes\) is defined as element-wise multiplication between \(\mathbf{h}_{z}^{l-1}\) and \(\mathbf{w}_{r}(z,p_{\tau},o)\).
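To make the initialization and iteration above concrete, the following is a minimal NumPy sketch of Equations 4-6, assuming dense arrays, a plain sum in place of the PNA aggregator adopted later in the paper, and random stand-ins for the temporal edge representations \(\mathbf{w}_{r}(z,p_{\tau},o)\); all names and sizes are illustrative.

```python
import numpy as np

def temporal_path_layer(h_prev, edges, edge_feats):
    """One aggregation iteration (Eq. 5): every temporal edge (z, p_tau, o)
    merges the incoming path feature with its edge feature (Eq. 6,
    element-wise product), and the results are aggregated per candidate o.
    A simple sum stands in for the PNA aggregator here."""
    h_next = np.zeros_like(h_prev)
    for (z, o), w in zip(edges, edge_feats):
        h_next[o] += h_prev[z] * w  # Tmsg (scaling), then Agg (sum)
    return h_next

num_entities, d, omega = 4, 8, 3
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]         # (z, o) endpoints of temporal edges
edge_feats = [np.random.rand(d) for _ in edges]  # stand-ins for w_r(z, p_tau, o)

# Initialization (Eq. 4): the query subject s=0 starts from the query
# relation embedding r; every other candidate starts from zero.
r = np.random.rand(d)
h = np.zeros((num_entities, d))
h[0] = r

for _ in range(omega):  # omega aggregation layers
    h = temporal_path_layer(h, edges, edge_feats)
# Row o of h now plays the role of the temporal path feature h_o^omega.
```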
The vectorized multiplication can be understood as scaling \(\mathbf{h}_{z}^{l-1}\) by \(\mathbf{w}_{r}(z,p_{\tau},o)\) in our temporal edge merging [57]. As mentioned earlier, \(\mathbf{w}_{r}(z,p_{\tau},o)\) represents the query-aware temporal representation of a temporal edge \((z,p_{\tau},o)\), which needs to remain relevant to the query so as to effectively capture relation features and temporal characteristics. Therefore, we propose the temporal relation encoding approach for the temporal edge representation. _Temporal Relation Encoding_. The temporal edge representation should align with the query's characteristics, facilitating more contextually relevant and informative feature propagation. Specifically, we derive the temporal edge features from the query relation representation together with the semantic and temporal characteristics of the temporal edges in the history temporal graph, generating a comprehensive embedding with both static and temporal characteristics. Formally, \(\mathbf{w}_{r}(z,p_{\tau},o)\) can be derived as \[\mathbf{w}_{r}(z,p_{\tau},o)=g\Big(\mathbf{\Psi}_{r}(p)\,\Big\|\,\Upsilon(\Delta\tau)\Big), \tag{7}\] where \(\mathbf{\Psi}_{r}(p)\in\mathbb{R}^{d}\) denotes the query relation \(r\)-aware basic static representation of edge type \(p\), \(\Upsilon(\Delta\tau)\in\mathbb{R}^{d}\) denotes the temporal embedding of the temporal edge type \(p_{\tau}\), \(\|\) denotes embedding concatenation, and \(g(\cdot)\) is a feed-forward neural network. For the basic static representation of \(p\), we obtain it through a linear transformation. Based on the query relation representation \(\mathbf{r}\), we map a representation as the static semantic information of the temporal edge type \(p\), following \[\mathbf{\Psi}_{r}(p)=\mathbf{W}_{p}\mathbf{r}+\mathbf{b}_{p}, \tag{8}\] where \(\mathbf{W}_{p}\) and \(\mathbf{b}_{p}\) are learnable parameters serving as the weights and biases of the linear transformation, respectively. This makes the basic static information of the temporal edges derive from the given query relation embedding, ensuring awareness of the query. For the temporal embedding of the temporal edge, we use a generic time encoding [55] to model the temporal information in temporal edges, following \[\Delta\tau=|\tau-t_{q}|, \tag{9}\] \[\Upsilon(\Delta\tau)=\sqrt{\frac{1}{d}}\,\Big[\cos(\mathbf{w}_{1}\Delta\tau+\boldsymbol{\phi}_{1}),\cos(\mathbf{w}_{2}\Delta\tau+\boldsymbol{\phi}_{2}),\cdots,\cos(\mathbf{w}_{d}\Delta\tau+\boldsymbol{\phi}_{d})\Big], \tag{10}\] where \(t_{q}\) denotes the query timestamp and \(\Delta\tau\) denotes the time interval between the query timestamp \(t_{q}\) and the temporal edge timestamp \(\tau\), which measures how far apart \(\tau\) is from \(t_{q}\), namely the _relative time distance_. Moreover, \(\mathbf{w}_{*}\) and \(\boldsymbol{\phi}_{*}\) are learnable parameters, and \(d\) is the dimension of the vector representation, which is the same as the dimension of the static representation. The \(\textsc{Tmsg}(\cdot)\) function effectively bridges the temporal dependencies and semantic connections between the query subject and the candidate object through propagation along the temporal edges in paths.
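As a concrete illustration of the generic time encoding in Equations 9-10, here is a minimal NumPy sketch; the frequencies \(\mathbf{w}_{*}\) and phases \(\boldsymbol{\phi}_{*}\) would be trained in the full model and are randomly initialized here only for illustration.

```python
import numpy as np

def time_encoding(tau, t_q, w, phi):
    """Generic time encoding (Eqs. 9-10): map the relative time distance
    |tau - t_q| to a d-dimensional vector of cosines."""
    d = w.shape[0]
    delta_tau = abs(tau - t_q)  # Eq. 9: relative time distance
    return np.sqrt(1.0 / d) * np.cos(w * delta_tau + phi)

d = 8
w = np.random.randn(d)    # learnable frequencies w_1 .. w_d
phi = np.random.randn(d)  # learnable phases phi_1 .. phi_d
emb = time_encoding(tau=3, t_q=10, w=w, phi=phi)  # edge at t=3, query at t=10
```

Because only the distance \(\Delta\tau\) enters the encoding, edges that are equally far from the query timestamp receive the same temporal embedding, regardless of the absolute time at which they occurred.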
Besides, in the temporal relation encoder, considering the query relation feature allows the model to tailor the temporal edge representation to the specific relation type relevant to the query; the temporal characteristics of edges help to model the chronological order and time-based dependencies between different facts; and simultaneously modeling the basic static and temporal features of the temporal edge yields more comprehensive semantics.

* **Path Aggregation Function \(\textsc{Agg}(\cdot)\)** It considers the accumulated information from the previous iteration and the newly propagated information along the temporal edges to update the temporal path representation for a candidate object. We adopt principal neighborhood aggregation (PNA) proposed in [5], which leverages multiple aggregators (namely mean, maximum, minimum, and standard deviation) to learn joint features, since previous work has verified its effectiveness [63][31]. We also consider traditional aggregation functions as a comparison, such as sum, mean, and max, which will be introduced in detail in Section 5.3.

Finally, after the \(\omega\)-th aggregation iteration, \(\mathbf{H}_{\mathcal{V}}^{\omega}\) holds the representations of the temporal paths from the query subject \(s\) to all candidate objects in \(\mathcal{V}\), with comprehensive semantic connection and temporal dependency information from multiple paths.

### Learning and Inference

TiPNN models the query-aware temporal path feature through the aggregation process within the history temporal graph. By capturing a comprehensive embedding with both static and temporal characteristics of temporal edges, and by aggregating the multiple paths that consist of temporal edges, TiPNN learns a comprehensive representation of the temporal path. Different from most previous models, we utilize the temporal edge features on paths from the query subject to candidate objects, without relying on any node embedding during modeling; thus TiPNN can handle the inductive setting, which will be presented in detail in Section 5.6.

#### 4.5.1. Score Function

Here we show how to apply the final learned temporal path representations to temporal knowledge graph reasoning. Given query subject \(s\) and query relation \(r\) at timestamp \(t+1\), after obtaining the temporal path representations \(\mathbf{H}^{\omega}_{(s,r)\rightarrow\mathcal{V}}\), we predict the conditional likelihood of a future object candidate \(o\in\mathcal{V}\) using \(\mathbf{h}^{\omega}_{(s,r)\rightarrow o}\in\mathbf{H}^{\omega}_{(s,r)\rightarrow\mathcal{V}}\), following: \[p(o|s,r)=\sigma\Big(\mathcal{F}\Big(\mathbf{h}^{\omega}_{(s,r)\rightarrow o}\,\Big\|\,\mathbf{r}\Big)\Big), \tag{11}\] where \(\mathcal{F}(\cdot)\) is a feed-forward neural network, \(\sigma(\cdot)\) is the sigmoid function, and \(\|\) denotes embedding concatenation. Note that we append the query relation embedding \(\mathbf{r}\) to the temporal path feature \(\mathbf{h}^{\omega}_{(s,r)\rightarrow o}\); this helps to alleviate the insensitivity to unreachable distances for nodes within a limited number of hops, and also enhances the learning capacity of the relation embeddings \(\mathbf{R}\).
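The following sketch illustrates the scoring step of Equation 11, assuming \(\mathcal{F}\) is a small two-layer feed-forward network; the paper only specifies a feed-forward network, so the exact depth and the random placeholder weights here are assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def score(h_path, r, W1, b1, W2, b2):
    """Eq. 11: concatenate the final temporal path feature with the query
    relation embedding and score the pair with a feed-forward network F."""
    z = np.concatenate([h_path, r])        # h_(s,r)->o^omega || r
    hidden = np.maximum(W1 @ z + b1, 0.0)  # ReLU hidden layer
    return sigmoid(W2 @ hidden + b2)       # p(o | s, r) in (0, 1)

d = 8
h_path, r = np.random.rand(d), np.random.rand(d)
W1, b1 = np.random.randn(16, 2 * d), np.zeros(16)
W2, b2 = np.random.randn(16), 0.0
p_o = score(h_path, r, W1, b1, W2, b2)  # conditional likelihood of candidate o
```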
As we have added the inverse quadruple \((o,r^{-1},s,t)\) corresponding to \((s,r,o,t)\) into the dataset in advance, without loss of generality we can also predict subject \(s\in\mathcal{V}\) given query relation \(r^{-1}\) and query object \(o\) with the same model as: \[p(s|o,r^{-1})=\sigma\Big(\mathcal{F}\Big(\mathbf{h}^{\omega}_{(o,r^{-1})\rightarrow s}\,\Big\|\,\mathbf{r}^{-1}\Big)\Big). \tag{12}\]

#### 4.5.2. Parameter Learning

Reasoning on a given query can be seen as a binary classification problem. The objective is to minimize the negative log-likelihood of positive and negative triplets, as shown in Equation 13. In the process of generating negative samples, we follow the Partial Completeness Assumption (Garon et al., 2016). Accordingly, for each positive triplet in the future triplet set, we create a corresponding negative triplet by randomly replacing one of the entities with a different entity. This ensures that the negative samples are derived from the future triplet set at the same timestamp, allowing the model to effectively learn to discriminate between correct and incorrect predictions at the same future timestamp. \[\mathcal{L}_{TKG}=-\log p(s,r,o)-\sum_{j=1}^{n}\frac{1}{n}\log\big(1-p(\overline{s_{j}},r,\overline{o_{j}})\big), \tag{13}\] where \(n\) is the hyperparameter for the number of negative samples per positive sample; \((s,r,o)\) and \((\overline{s_{j}},r,\overline{o_{j}})\) are the positive sample and the \(j\)-th negative sample, respectively. Besides, to promote orthogonality in the learnable relation parameter \(\mathbf{R}\) initialized at the beginning, a regularization term is introduced into the objective function, inspired by the work in (Song et al., 2016). The regularization term is given in Equation 14, where \(\mathbf{I}\) is the identity matrix, \(\alpha\) is a hyperparameter, and \(\|\cdot\|\) denotes the L2-norm. \[\mathcal{L}_{REG}=\Big\|\mathbf{R}^{T}\mathbf{R}-\alpha\mathbf{I}\Big\| \tag{14}\] Therefore, the final loss of TiPNN is the sum of the two losses: \[\mathcal{L}=\mathcal{L}_{TKG}+\mathcal{L}_{REG}. \tag{15}\]

### Complexity Analysis

The score of each temporal path from the query subject to a candidate object can be calculated in parallel. Therefore, to assess the complexity of the proposed TiPNN, we analyze the computational complexity of each query. The major operation in TiPNN is query-aware temporal path processing, and each aggregation iteration contains two steps: the temporal edge merging function \(\texttt{Tmsg}(\cdot)\) and the path aggregation function \(\texttt{Agg}(\cdot)\) (as shown in Equation 5). We use \(|\mathcal{E}|\) to denote the maximum number of concurrent facts in the historical subgraph sequence, and \(|\mathcal{V}|\) to denote the cardinality of the entity set. \(\texttt{Tmsg}(\cdot)\) has a time complexity of \(O(m|\mathcal{E}|)\) since it performs Equation 6 for all edges in the history temporal graph, where \(m\) denotes the length of the history used for reasoning. \(\texttt{Agg}(\cdot)\) has a time complexity of \(O(|\mathcal{V}|)\) since the adopted aggregation method is performed for every node. As \(\texttt{Agg}(\cdot)\) is executed after \(\texttt{Tmsg}(\cdot)\) in each aggregation iteration, the overall time complexity is \(O(\omega(m|\mathcal{E}|+|\mathcal{V}|))\) when the number of aggregation layers is \(\omega\).

## 5. Experiments

In this section, we carry out a series of experiments to assess the effectiveness and performance of our proposed TiPNN.
Through these experiments, we aim to address and answer the following research questions:

* **RQ1:** How does the proposed model perform compared with state-of-the-art knowledge graph reasoning and temporal knowledge graph reasoning methods?
* **RQ2:** How does each component of TiPNN (i.e., temporal relation encoding, the temporal edge merging function, and the path aggregation function) affect the performance?
* **RQ3:** How do the core parameters in history temporal graph construction (i.e., the sampling history length) and query-aware temporal path processing (i.e., the number of temporal path aggregation layers) affect the reasoning performance?
* **RQ4:** How does the inference efficiency of TiPNN compare with the state-of-the-art method?
* **RQ5:** How does the proposed model handle the inductive setting in temporal knowledge graph reasoning?
* **RQ6:** How does the proposed model provide reasoning evidence based on history temporal paths for the reasoning task?

### Experimental Settings

#### 5.1.1. Datasets

We conduct extensive experiments on four representative temporal knowledge graph datasets, namely ICEWS18 (Zheng et al., 2019), GDELT (Zheng et al., 2019), WIKI (Zheng et al., 2019), and YAGO (Zheng et al., 2019). These datasets are widely used in the field of temporal knowledge reasoning due to their diverse temporal information. The ICEWS18 dataset is collected from the Integrated Crisis Early Warning System (Beng et al., 2019), which records various events and activities related to global conflicts and crises, offering valuable insights into temporal patterns of such events. The GDELT dataset (Gong et al., 2019) is derived from the Global Database of Events, Language, and Tone, encompassing a broad range of events across the world and enabling a comprehensive analysis of global events over time. WIKI (Zheng et al., 2019) and YAGO (Zheng et al., 2019) are two prominent knowledge bases containing factual information with explicit temporal details. WIKI covers a wide array of real-world knowledge with timestamps, while YAGO offers a structured knowledge base with rich semantic relationships and temporal annotations. We focus on the subsets of WIKI and YAGO that have yearly granularity in our experiments. The details of the datasets are provided in Table 2. To ensure fair evaluation, we adopt the dataset splitting strategy employed in (Kang et al., 2018). The dataset is partitioned into training, validation, and test sets based on timestamps, such that (timestamps of the training set) \(<\) (timestamps of the validation set) \(<\) (timestamps of the test set).

#### 5.1.2. Evaluation Metrics

To evaluate the performance of the proposed model for TKG reasoning, we employ the commonly used task of link prediction at future timestamps. We report the Mean Reciprocal Rank (MRR) and Hits@{1, 3, 10} metrics. MRR evaluates the average reciprocal rank of the correctly predicted fact for each query. Hits@k measures the proportion of queries for which the correct answer appears in the top-k positions of the ranking list. In contrast to the traditional filtered setting used in previous works (Beng et al., 2017)(Kang et al., 2018), where all valid quadruples appearing in the training, validation, or test sets are removed from the ranking list of corrupted facts, we consider that this setting is not suitable for TKG reasoning tasks.
Instead, we opt for a more reasonable evaluation, namely the time-aware filtered setting, where only the facts occurring at the same time as the query are filtered from the ranking list of corrupted facts, in line with recent works (Kang et al., 2018)(Kang et al., 2018).

#### 5.1.3. Compared Methods

We conduct a comprehensive comparison of our proposed model with three categories of baselines: (1) _KG Reasoning Models_: traditional KG reasoning models that do not consider timestamps, including DistMult (Kang et al., 2018), ComplEx (Kang et al., 2018), ConvE (Chen et al., 2018), and RotatE (Kang et al., 2019). (2) _Interpolated TKG Reasoning Models_: four interpolated TKG reasoning methods, namely TTransE (Kang et al., 2018), TA-DistMult (Kang et al., 2018), DE-SimplE (Kang et al., 2018), and TNTComplEx (Kang et al., 2018). (3) _Extrapolated TKG Reasoning Models_: state-of-the-art extrapolated TKG reasoning methods that infer future missing facts, including TANGO-Tucker (Kang et al., 2018), TANGO-DistMult (Kang et al., 2018), CyGNet (Kang et al., 2018), RE-NET (Kang et al., 2018), RE-GCN (Kang et al., 2018), TITer (Kang et al., 2018), xERTE (Kang et al., 2018), CEN (Kang et al., 2018), GHT (Kang et al., 2018), and DaeMon (DaeMon, 2018).

#### 5.1.4. Implementation Details

For history temporal graph construction, we perform a grid search on the history length \(m\) and report the overall results with lengths 25, 15, 10, and 8 for the datasets ICEWS18, GDELT, WIKI, and YAGO, respectively, in Table 3; the detailed length study is presented in Figure 7. The embedding dimension \(d\) is set to 64 for the temporal path representation. For query-aware temporal path processing, we set the number of temporal path aggregation layers \(\omega\) to 6 for the ICEWS18 and GDELT datasets, and to 4 for the WIKI and YAGO datasets. We apply layer normalization and shortcut connections in the aggregation layers. The activation function \(relu\) is adopted for the aggregation of the temporal path. Similar to (Kang et al., 2018), we use different temporal edge representations in different aggregation layers; that is, the learnable parameters in \(\mathbf{\Psi}\) and \(\Upsilon\) are independent in each aggregation iteration (Kang et al., 2018)(Kang et al., 2018). To facilitate parameter learning, we set the number of negative samples to 64 and the hyperparameter \(\alpha\) in the regularization term to 1. We use Adam (Kingma et al., 2014) for parameter learning, with a learning rate of \(1e-4\) for YAGO and \(5e-4\) for the others. The maximum number of training epochs is set to 20. We also design an experiment in an inductive setting, which predicts facts with entities unseen in the training set, to demonstrate the inductive ability of our proposed model.

\begin{table} \begin{tabular}{c c c c c c c} \hline \hline Datasets & \(|\mathcal{V}|\) & \(|\mathcal{R}|\) & \(\mathcal{E}_{train}\) & \(\mathcal{E}_{valid}\) & \(\mathcal{E}_{test}\) & \(|\mathcal{T}|\) \\ \hline ICEWS18 & 23,033 & 256 & 373,018 & 45,995 & 49,545 & 304 \\ GDELT & 7,691 & 240 & 1,734,399 & 238,765 & 305,241 & 2,976 \\ WIKI & 12,554 & 24 & 539,286 & 67,538 & 63,110 & 232 \\ YAGO & 10,623 & 10 & 161,540 & 19,523 & 20,026 & 189 \\ \hline \hline \end{tabular} \end{table} Table 2. Statistics of Datasets (\(\mathcal{E}_{train}\), \(\mathcal{E}_{valid}\), \(\mathcal{E}_{test}\) are the numbers of facts in the training, validation, and test sets.).
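As a concrete companion to the evaluation protocol of Section 5.1.2, the sketch below computes a time-aware filtered rank for one query and the MRR/Hits@k statistics over a set of ranks; `known_at_t` stands for the set of objects that form true facts with the same subject and relation at the query timestamp, and all names and values are illustrative.

```python
import numpy as np

def time_aware_filtered_rank(scores, gold, known_at_t):
    """Rank of the gold object after masking every other object that forms
    a true fact with the same (s, r) at the *same* timestamp."""
    s = scores.astype(float).copy()
    for o in known_at_t:
        if o != gold:
            s[o] = -np.inf               # filter same-time true facts only
    return 1 + int(np.sum(s > s[gold]))  # strictly better scores outrank gold

def mrr_and_hits(ranks, ks=(1, 3, 10)):
    ranks = np.asarray(ranks, dtype=float)
    mrr = float(np.mean(1.0 / ranks))
    hits = {k: float(np.mean(ranks <= k)) for k in ks}
    return mrr, hits

# Toy query with 5 candidates: the gold object is 2; object 4 is another
# true answer at the query timestamp and is therefore filtered out.
scores = np.array([0.1, 0.3, 0.8, 0.2, 0.9])
rank = time_aware_filtered_rank(scores, gold=2, known_at_t=[2, 4])
mrr, hits = mrr_and_hits([rank])  # -> MRR 1.0, Hits@1 1.0 for this query
```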
All experiments were conducted with an EPYC 7742 CPU and 8 TESLA A100 GPUs.

### Experimental Results (RQ1)

The experimental results on the TKG reasoning task are shown in Table 3 in terms of time-aware filtered MRR and Hits@{1,3,10}. The results demonstrate the effectiveness of our proposed model with convincing performance and validate the superiority of TiPNN in handling temporal knowledge graph reasoning tasks. It consistently outperforms all baseline methods and achieves state-of-the-art (SOTA) performance on the four TKG datasets.

\begin{table} \begin{tabular}{c|c c c c|c c c c|c c c c|c c c c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{4}{c|}{ICEWS18} & \multicolumn{4}{c|}{GDELT} & \multicolumn{4}{c|}{WIKI} & \multicolumn{4}{c}{YAGO} \\ \cline{2-17} & MRR & H@1 & H@3 & H@10 & MRR & H@1 & H@3 & H@10 & MRR & H@1 & H@3 & H@10 & MRR & H@1 & H@3 & H@10 \\ \hline DistMult & 11.51 & 7.03 & 12.87 & 20.86 & 8.68 & 5.58 & 9.96 & 17.13 & 10.89 & 8.92 & 10.97 & 16.82 & 44.32 & 25.56 & 48.37 & 58.88 \\ ComplEx & 22.94 & 15.19 & 27.05 & 42.11 & 16.96 & 11.25 & 19.52 & 32.35 & 24.47 & 19.69 & 27.28 & 34.83 & 44.38 & 25.78 & 48.20 & 59.01 \\ ConvE & 24.51 & 16.23 & 29.25 & 44.51 & 16.55 & 11.02 & 18.88 & 31.60 & 14.52 & 11.44 & 16.36 & 22.36 & 42.16 & 23.27 & 46.15 & 60.76 \\ RotatE & 12.78 & 4.01 & 14.89 & 31.91 & 13.45 & 6.95 & 14.09 & 25.99 & 46.10 & 41.89 & 49.65 & 51.89 & 41.28 & 22.19 & 45.33 & 58.39 \\ \hline TTransE & 8.31 & 1.92 & 8.56 & 21.89 & 5.50 & 0.47 & 4.94 & 15.25 & 29.27 & 21.67 & 34.43 & 42.39 & 31.19 & 18.12 & 40.91 & 51.21 \\ TA-DistMult & 16.75 & 8.61 & 18.41 & 33.59 & 12.00 & 5.76 & 12.94 & 23.54 & 44.53 & 39.92 & 48.73 & 51.71 & 54.92 & 48.15 & 59.61 & 66.71 \\ DE-SimplE & 19.30 & 11.53 & 21.86 & 34.80 & 19.70 & 12.22 & 21.39 & 33.70 & 45.43 & 42.60 & 47.71 & 49.55 & 54.91 & 51.64 & 57.30 & 60.17 \\ TNTComplEx & 21.23 & 13.28 & 24.02 & 36.91 & 19.53 & 12.41 & 20.75 & 33.42 & 45.03 & 40.04 & 49.31 & 52.03 & 57.98 & 52.92 & 61.33 & 66.69 \\ \hline TANGO-Tucker & 28.68 & 19.35 & 32.17 & 47.04 & 19.42 & 12.34 & 20.70 & 33.16 & 50.43 & 48.52 & 51.47 & 53.58 & 57.83 & 53.05 & 60.78 & 65.85 \\ TANGO-DistMult & 26.65 & 17.92 & 30.08 & 44.09 & 19.20 & 12.17 & 20.40 & 32.78 & 51.15 & 49.66 & 52.16 & 53.35 & 62.70 & 59.18 & 60.31 & 67.90 \\ CyGNet & 24.93 & 15.90 & 28.28 & 42.61 & 18.48 & 11.52 & 19.57 & 31.98 & 33.89 & 29.06 & 36.10 & 41.86 & 52.07 & 45.36 & 56.12 & 63.77 \\ RE-NET & 28.81 & 19.05 & 32.44 & 47.51 & 19.62 & 12.42 & 21.00 & 34.01 & 49.66 & 46.88 & 51.19 & 53.48 & 58.02 & 53.06 & 61.08 & 66.29 \\ RE-GCN & 30.58 & 21.01 & 34.34 & 48.75 & 19.64 & 12.42 & 20.90 & 33.69 & 77.55 & 73.75 & 80.38 & 83.68 & 84.12 & 80.76 & 86.30 & 89.98 \\ TITer & 29.98 & 22.05 & 33.46 & 44.83 & 15.46 & 10.98 & 15.61 & 24.31 & 75.50 & 72.96 & 77.49 & 79.02 & 87.47 & 84.89 & 89.96 & 90.27 \\ xERTE & 29.31 & 21.03 & 33.40 & 45.60 & 18.09 & 12.30 & 20.06 & 30.34 & 71.14 & 68.05 & 76.11 & 79.01 & 84.19 & 80.09 & 88.02 & 89.78 \\ CEN & 30.84 & 21.23 & 34.58 & 49.67 & 20.18 & 12.84 & 21.51 & 34.10 & 78.35 & 74.69 & 81.47 & 84.45 & 83.49 & 79.77 & 85.85 & 89.92 \\ GHT & 29.16 & 18.99 & 33.16 & 48.37 & 20.13 & 12.87 & 21.30 & 34.19 & 48.50 & 45.08 & 50.87 & 53.69 & 57.22 & 51.64 & 60.68 & 67.17 \\ DaeMon & 31.85 & 22.67 & 35.92 & 49.80 & 20.73 & 13.65 & 22.53 & 34.23 & 82.38 & 78.26 & 86.03 & 88.01 & 91.59 & 90.03 & 93.00 & 93.34 \\ \hline **TiPNN** & **32.17** & **22.74** & **36.24** & **50.72** & **21.17** & **14.03** & **22.98** & **34.76** & **83.04** & **79.04** & **86.45** & **88.54** & **92.06** & **90.79** & **93.15** & **93.58** \\ \hline \hline \end{tabular} \end{table} Table 3. Overall Performance Comparison of Different Methods. Evaluation metrics are time-aware filtered MRR and Hits@{1,3,10}. All results are multiplied by 100. The best results are highlighted in **bold**, and the second-best results are underlined (higher values indicate better performance).

In particular, TiPNN exhibits better performance than all static models (listed in the first block of Table 3) and the temporal models in the interpolation setting (presented in the second block of Table 3). This is attributed to TiPNN's incorporation of the temporal features of facts and its ability to learn temporal inductive paths, which enables it to effectively infer future missing facts. By leveraging comprehensive historical semantic information, TiPNN demonstrates remarkable proficiency in handling temporal knowledge graph reasoning tasks. Compared with the temporal models under the extrapolation setting (presented in the third block of Table 3), the proposed model also achieves better results. TiPNN achieves superior results by effectively modeling temporal edges. In contrast to previous models, it starts from the query and obtains query-specific temporal path representations, allowing for more accurate predictions about the future. Thanks to the message propagation mechanism between temporal edges, it leverages both temporal and structural information to learn customized representations tailored to each query. This enables TiPNN to make more precise predictions for future missing facts. The impressive performance of DaeMon demonstrated its ability to model relation features effectively. However, we further enhance the integration of relation and temporal features by weakening the barriers between different time-stamped subgraphs using the constructed history temporal graph. This enables us to capture the characteristics of historical facts over a larger span of information and reduces the parameter loss caused by subgraph evolution patterns.

### Ablation Study (RQ2)

Note that TiPNN involves three main operational modules when processing query-aware temporal paths: the temporal relation encoder, the temporal edge merging operation, and the path aggregation operation. The temporal relation encoder is responsible for learning the features of temporal edges. The temporal edge merging operation is used to generate path features for each path from the query subject entity to the candidate object entity. The path aggregation module is responsible for consolidating all path features into pair-wise representations of temporal paths. To show the impact of these components, we conducted an ablation analysis on the temporal encoder \(\Upsilon\) of the temporal relation encoding and discussed the influence of temporal encoding independence on the results. We also compared three variants of the temporal edge merging operation in \(\textsc{Tmsg}(\cdot)\), as well as replacements for the aggregator in the path aggregation module \(\textsc{Agg}(\cdot)\). These comprehensive evaluations allowed us to assess the effectiveness of each component and gain insights into their respective contributions to the overall performance of TiPNN.

#### 5.3.1. Effectiveness of Temporal Relation Encoding

The temporal encoder \(\Upsilon\) plays a crucial role in modeling the temporal order of historical facts and the time factor of temporal edges in the temporal paths.
Therefore, in this study, we aim to evaluate the contribution of the temporal encoder to predicting future facts. To do so, we conducted an ablation analysis by removing the temporal encoder component from TiPNN and comparing the resulting performance. The results are illustrated in Figure 4, where we present a performance comparison between TiPNN with and without the temporal encoder. The experimental results demonstrate that when the temporal encoder is removed, the performance of TiPNN decreases across all datasets. This outcome aligns with our expectations, since relying solely on the query-aware static representation from the temporal relation modeling cannot capture the temporal information of historical facts. As a consequence, the absence of the temporal encoder hinders the accurate learning of temporal features among past facts.

Figure 4. Ablation Results on Temporal Encoding.

Furthermore, we also investigated the impact of the independence of the parameters in the temporal encoder on all datasets. Specifically, we considered the parameters \(\mathbf{w}_{*}\) and \(\boldsymbol{\phi}_{*}\) in \(\Upsilon\) of Equation 10, which are responsible for modeling the temporal features of the temporal edge \((z,p_{\tau},o)\), to be learned independently for each static type \(p\). In other words, we assumed that the same relative time distance may have different temporal representations for different relation types. Thus, for each distinct edge type \(p\), we utilized a relation-aware temporal encoder to represent the temporal features for the relative time distance \(\Delta\tau\). We conducted experiments on all datasets, as shown in Table 4. We use 'Shared Param.' to indicate that the parameters in \(\Upsilon\) are shared and not learned independently for each edge type \(p\), and 'Specific Param.' to indicate that the parameters are not shared and are learned independently based on the edge types.

Table 4. Impact of the Independence Setting of Parameters in Temporal Encoder.

Based on Table 4(a), we observe that using shared parameters yields better performance on the ICEWS18 and GDELT datasets. Conversely, from Table 4(b), we find that independently learning the parameters of the temporal encoder is more beneficial on the WIKI and YAGO datasets. We attribute this to the fact that WIKI and YAGO have fewer edge types, so independently learning specific parameters for each edge type allows for easier convergence, making this approach more advantageous than learning shared parameters.

#### 5.3.2. Variants of Temporal Edge Merging Method

The temporal edge merging operation \(\texttt{Tmsg}(\cdot)\) utilizes the message passing mechanism to iteratively extend the path length while merging the query-relevant path information. As mentioned in Equation 6, we employ the scaling operator from DistMult (Srivastava et al., 2017) for computing the current temporal path representation with the features of temporal edges through element-wise multiplication. Additionally, we also conducted experiments using the translation operator from TransE (Beng et al., 2019) (i.e., element-wise summation) and the rotation operator from RotatE (Rutah et al., 2019) to provide further evidence supporting the effectiveness of the temporal edge merging operation. We conducted experimental validation and comparison on two types of knowledge graphs: the event-based graph ICEWS18 and the knowledge-based graph YAGO. Figure 5 displays the results obtained with the different merging operators.
The findings demonstrate that TiPNN benefits from these established embedding methods, performing on par with DistMult and RotatE and outperforming TransE, which is particularly evident on MRR and H@1 for the knowledge-based graph YAGO.

Figure 5. Ablation Results on Temporal Edge Merging Method.

#### 5.3.3. Variants of Path Aggregation Method

The path aggregation operation \(\textsc{Agg}(\cdot)\) is performed after merging the temporal edges at each step, and the aggregated feature serves as the temporal path representation from the query subject entity to the object entity candidates for future fact reasoning. As mentioned in Section 4.4.2, we employ principal neighborhood aggregation (PNA) [5] as the aggregator for the propagated messages. The PNA aggregator considers the mean, maximum, minimum, and standard deviation as aggregation features to obtain comprehensive neighbor information. To validate its effectiveness, we compare it with several individual aggregation operations: SUM, MEAN, and MAX. Similarly, we conduct experiments on both the event-based graph ICEWS18 and the knowledge-based graph YAGO. Figure 6 illustrates the results with the different aggregation methods. The findings demonstrate that TiPNN performs best with the PNA aggregator and worst with the MEAN aggregator, consistently on both YAGO and ICEWS18, except for the H@3 and H@10 metrics on YAGO, where the differences are less pronounced.

Figure 6. Ablation Results on Path Aggregation Method.

### Parameter Study (RQ3)

To provide more insights into query-aware temporal path processing, we test the performance of TiPNN with different sampling history lengths \(m\) and different numbers of temporal path aggregation layers \(\omega\).

#### 5.4.1. Sampling History Length

In order to comprehensively capture the interconnection patterns between entities in the historical subgraphs, we construct a history temporal graph to learn query-aware temporal paths between entities, enabling the prediction of future missing facts. Unlike previous methods based on subgraph sequences (Kang et al., 2017)(Kang et al., 2018), we fuse multiple historical subgraphs for reasoning. However, an excessively long history may include redundant edge information from different timestamps, resulting in unnecessary space consumption during learning. Moreover, facts that are far from the prediction target timestamp contribute little to the prediction, even with the addition of relative distance-aware temporal encoding in the temporal relation encoder. Conversely, if the history length is too short, the model cannot capture cross-time features between entities, which degrades the modeling of temporal patterns in the history temporal graph. Hence, we study the sampling history length to find a balance. The experimental results of the history length study are shown in Figure 7. Since TiPNN initializes the learning of temporal paths from the query, it requires the history temporal graph's topological structure to include as many entities as possible, so that the learned temporal paths from the query subject entity to each candidate object entity capture more temporal connection patterns. We present the density and interval of each dataset in Table 5, where \(|\mathcal{V}|\) denotes the number of entities in the dataset.
For WIKI and YAGO, which have longer time intervals (i.e., 1 year), each subgraph contains more factual information, and the average number of entities in each subgraph is naturally higher. As a result, these datasets demonstrate relatively stable performance across various history length settings, as shown in Figure 7(c&d). On the other hand, the ICEWS18 and GDELT datasets exhibit different behavior, as depicted in Figure 7(a&b). Their time intervals are relatively short compared to WIKI and YAGO. Consequently, they require longer history lengths to compensate for data sparsity and to incorporate more comprehensive historical information into the history temporal graph. To assess the impact of different history lengths on the model's performance, we calculate the ratio of \(|\mathcal{V}|\) to \(|\mathcal{V}_{avg}|\) and present it in the last row of Table 5. We observe that the model's performance tends to stabilize around this ratio, providing a useful reference for parameter tuning. Finally, we choose the balanced parameters for ICEWS18, GDELT, WIKI, and YAGO as 25, 15, 10, and 8, respectively.

Figure 7. The Performance of Different History Length Settings.

\begin{table} \begin{tabular}{c|c c c c} \hline \hline Datasets & ICEWS18 & GDELT & WIKI & YAGO \\ \hline Interval & 24 hours & 15 mins & 1 year & 1 year \\ \(|\mathcal{V}|\) & 23,033 & 7,691 & 12,554 & 10,623 \\ \(|\mathcal{V}_{avg}|\) & 986.44 & 393.18 & 2,817.47 & 1,190.72 \\ \hline \(|\mathcal{V}|/|\mathcal{V}_{avg}|\) & 23.35 & 19.56 & 4.46 & 8.92 \\ \hline \hline \end{tabular} \end{table} Table 5. Density and Interval of Datasets (\(|\mathcal{V}_{avg}|\) is the average number of entities involved in the subgraph of each timestamp.).

#### 5.4.2. Number of Temporal Path Aggregation Layers

The temporal path aggregation layer is the fundamental computational unit in TiPNN. It operates on the constructed history temporal graph using the temporal edge merging and path aggregation operations to learn query-aware temporal path representations for future fact inference. The number of aggregation layers directly determines the number of hops of temporal message passing on the history temporal graph, which corresponds to the logical maximum distance of the temporal paths. Therefore, setting the number of layers faces a challenge similar to determining the history length in Section 5.4.1. When the number of layers is too high, TiPNN captures excessively long path information in the temporal paths, leading to increased space consumption. However, these additional paths do not significantly improve TiPNN's performance, because distant nodes connected to the target node via many hops are less relevant for inferring future facts; considering them in the inference process has limited impact on the accuracy of predictions. On the other hand, if the number of layers is too low, TiPNN may not effectively capture long-distance logical connections in the history temporal graph, which may result in insufficient learning of the comprehensive topological connection information and the precise combination rules of temporal paths.

Figure 8. The Performance of Different Numbers of Aggregation Layers.

To find an optimal setting for the number of temporal path aggregation layers, we conducted experiments on ICEWS18 and YAGO, as shown in Figure 8, which illustrates the results for varying numbers of layers.
The results indicate that ICEWS18 performs best with 6 layers, and that the number of layers has a minor impact on the results: as the number of layers increases, there is no significant improvement in the model's accuracy. On the other hand, YAGO performs best with 4 layers, and beyond 4 layers the model's performance declines noticeably. We attribute this difference to the fact that ICEWS18, as an event-based graph, involves longer-distance logical connections for inferring future facts, whereas YAGO, being a knowledge-based graph, has sparser relation types, resulting in more concise reasoning paths during inference. We finally set the optimal number of layers to 6 for ICEWS18 and GDELT, and to 4 for WIKI and YAGO.

### Comparison on Prediction Time (RQ4)

To further analyze the efficiency of TiPNN, we compared the inference time of TiPNN with that of DaeMon (DaeMon, 2018) on the TKG reasoning task. For a fair comparison, we used the test sets of the four datasets and aligned the model parameters under the same setting and environment. The runtime comparison between the two models is illustrated in Figure 9, where we present the ratio of their inference times in terms of multiples of the unit time. The comparative experiments demonstrate that TiPNN achieves significant reductions in runtime compared to DaeMon, with time savings of approximately 80%, 69%, 67%, and 73% on the ICEWS18, GDELT, WIKI, and YAGO datasets, respectively. The efficiency of TiPNN is mainly attributed to its construction of history temporal graphs, which enables the modeling of logical features between temporal facts and the capture of semantic and temporal patterns from historical moments. In contrast, DaeMon relies on graph evolution methods to handle the potential path representations, while TiPNN employs a reduced number of message-passing layers to learn the real, existing path features in the history temporal graph. Additionally, TiPNN leverages temporal edges to simultaneously capture relational and temporal features, eliminating the need for separate sequence modeling units to learn temporal representations. As a result, TiPNN exhibits lower complexity and higher learning efficiency than DaeMon.

Figure 9. Runtime Comparison (The runtime is proportionally represented as multiples of the unit time.).

### Inductive Setting (RQ5)

To validate the model's inductive reasoning ability, we also design experiments for TiPNN in an inductive setting. The inductive setting is a common scenario in knowledge graphs, where during training the model can only access a subset of entities, and during testing it needs to reason about unseen entities. This setting simulates real-world situations where predictions are required for new facts based on existing knowledge. In the context of temporal knowledge graphs, the inductive setting requires the model to perform reasoning across time, meaning that the representations learned from the historical subgraphs should generalize to future timestamps for predicting missing facts. This requires the model to generalize to unseen entities, leveraging the temporal connectivity patterns learned from the historical subgraphs. In the inductive setting experiment, we train the model using only a portion of the entities and then test its inference performance on the unseen entities.
This type of validation allows for a comprehensive evaluation of the model's generalization and inductive reasoning capabilities, validating the effectiveness of TiPNN in predicting future missing facts. In the context of TKGs, there are few existing datasets suitable for inductive validation. Therefore, we follow the rules commonly used in KGs for inductive settings and construct an inductive dataset specifically tailored for TKGs (Krishnan et al., 2017). Finally, we conduct experiments on our proposed model using the inductive dataset to evaluate its inference performance under the inductive setting.

#### 5.6.1. Inductive Datasets

To facilitate the inductive setting, we create a fully-inductive benchmark dataset by sampling disjoint entities from the YAGO dataset (Krishnan et al., 2017). Specifically, the inductive dataset consists of a pair of TKGs, YAGO\({}^{1}\) and YAGO\({}^{2}\), satisfying the following conditions: (i) they have non-overlapping sets of entities, and (ii) they share the same set of relations. To provide a comprehensive evaluation, we sampled three versions of the paired inductive dataset based on varying proportions of entity set cardinality. Table 6 provides the statistics of these inductive datasets, where \(|\mathcal{V}|\) denotes the number of entities and \(|\mathcal{R}|\) denotes the number of relations in each dataset. The labels v1 (5:5), v2 (6:4), and v3 (7:3) in the table correspond to the three datasets created with different entity set partition ratios.

#### 5.6.2. Results of Inductive Experiment

In the inductive setting, we conduct inference experiments and validations by cross-utilizing the training and test sets of the pair of TKGs YAGO\({}^{1}\) and YAGO\({}^{2}\). Specifically, we train on the training set of YAGO\({}^{1}\) and test on the test set of YAGO\({}^{2}\), and vice versa, to achieve cross-validation on each version of the dataset. Additionally, we include the experimental results in the transductive setting as a baseline for comparison with the inductive setting. We use 'Bias' to display the difference in results between the two settings. As shown in Table 7, the experimental results demonstrate that the bias is within a very small range. The success of TiPNN in handling the inductive setting is not surprising: TiPNN models the structural semantics and temporal features of historical facts without relying on any learnable entity-related parameters. Instead, it leverages temporal edge features to model the temporal path features relevant to the query. This inherent capability allows TiPNN to naturally perform inference on datasets that include unseen entities, making it well-suited for the inductive setting.

\begin{table} \begin{tabular}{c c c c c c c c} \hline \hline \multicolumn{2}{c}{Datasets} & \(|\mathcal{V}|\) & \(|\mathcal{R}|\) & \(\mathcal{E}_{train}\) & \(\mathcal{E}_{valid}\) & \(\mathcal{E}_{test}\) & Interval \\ \hline \multirow{2}{*}{v1 (5:5)} & YAGO\({}^{1}\) & 3,980 & 10 & 39,588 & 4,681 & 6,000 & 1 year \\ & YAGO\({}^{2}\) & 3,963 & 10 & 37,847 & 4,544 & 5,530 & 1 year \\ \hline \multirow{2}{*}{v2 (6:4)} & YAGO\({}^{1}\) & 4,590 & 10 & 46,070 & 5,685 & 5,904 & 1 year \\ & YAGO\({}^{2}\) & 3,293 & 10 & 33,091 & 4,836 & 4,244 & 1 year \\ \hline \multirow{2}{*}{v3 (7:3)} & YAGO\({}^{1}\) & 5,705 & 10 & 64,036 & 7,866 & 9,663 & 1 year \\ & YAGO\({}^{2}\) & 2,407 & 10 & 20,884 & 2,545 & 3,180 & 1 year \\ \hline \hline \end{tabular} \end{table} Table 6.
Statistics of Inductive Datasets (\(\mathcal{E}_{train}\), \(\mathcal{E}_{valid}\), \(\mathcal{E}_{test}\) are the numbers of facts in the training, validation, and test sets.).

### Reasoning Evidence (RQ6)

Since we have integrated historical information into the constructed history temporal graph and perform inference on future facts by modeling temporal paths, we can use the paths in the history temporal graph to provide evidence for the inference process and to visualize the reasoning basis. An intuitive idea is that the model should surface the important reasoning paths in the history temporal graph that contribute significantly to the corresponding response. Although the temporal path representation we obtain is a comprehensive representation logically aggregated from multiple paths in the history temporal graph, we can still estimate the importance of each individual path following the local interpretation method (Beng et al., 2017)(Wang et al., 2017). As described in Section 4.5.1, we convert the learned temporal path representation into corresponding scores, and thus we can estimate the importance of paths through backtracking. Drawing inspiration from path interpretation (Wang et al., 2017), we define the importance of a path as the weight assigned to it during the iteration process, which we obtain by computing the partial derivative of the temporal path score with respect to the path. Specifically, for a reasoning response \((s,r,o,t+1)\) to a missing object query \((s,r,?,t+1)\), we consider the top-\(k\) paths as the basis for inference, as defined in Equation 16. \[\text{P}_{1},\text{P}_{2},\cdots,\text{P}_{k}=\underset{\text{P}\in\mathcal{P}_{s\rightarrow o}^{<t+1}}{\text{Top-}k}\ \frac{\partial\,p(s,r,o)}{\partial\,\text{P}} \tag{16}\] In practice, since directly computing the importance of an entire path is challenging, we calculate the importance of each individual temporal edge in the history temporal graph, which can be obtained using automatic differentiation. We then aggregate the importance of the temporal edges within each path with a summation operation to determine the path's importance. By applying beam search to traverse and score the paths, we finally obtain the top-\(k\) most important paths as our reasoning evidence. We selected several queries from the test set of the event-based graph ICEWS18 for an analysis and discussion of the reasoning evidence. Table 8 presents the responses corresponding to these queries and their top-2 related reasoning evidence. Here, we provide our analysis and interpretation of the reasoning evidence for the first three queries. (I) For the first query (_Shinzo Abe, Make a visit,?, 10/23/2018_), TiPNN's response is China. The most significant reasoning clue is that Shinzo Abe expressed an intent to meet or negotiate with China on 10/22/2018. Additionally, on 10/21/2018 there was also an expression of intent to meet or negotiate, followed by the action _Make a visit_ on 10/22/2018. These two clues are logically consistent and form a reasoning pathway: _Express intent to meet or negotiate \(\rightarrow\) Make a visit_. (II) For the query (_European Union, Engage in diplomatic cooperation,?, 10/23/2018_), the response is United Kingdom.
\begin{table} \begin{tabular}{c|c c c c|c c c c|c c c c} \hline \hline Dataset & \multicolumn{4}{c|}{v1} & \multicolumn{4}{c|}{v2} & \multicolumn{4}{c}{v3} \\ & MRR & H@1 & H@3 & H@10 & MRR & H@1 & H@3 & H@10 & MRR & H@1 & H@3 & H@10 \\ \hline YAGO\({}^{1}\) & 90.85 & 89.31 & 92.10 & 92.58 & 91.19 & 89.79 & 92.35 & 92.73 & 91.67 & 90.46 & 92.72 & 93.09 \\ YAGO\({}^{2}\)\(\rightarrow\)YAGO\({}^{1}\) & 90.75 & 89.09 & 92.11 & 92.64 & 91.07 & 89.60 & 92.27 & 92.73 & 91.19 & 89.58 & 92.62 & 92.94 \\ Bias \((\pm)\) & \multicolumn{4}{c|}{\(\leq\) 0.22} & \multicolumn{4}{c|}{\(\leq\) 0.19} & \multicolumn{4}{c}{\(\leq\) 0.88} \\ \hline YAGO\({}^{2}\) & 91.33 & 89.83 & 92.61 & 93.02 & 92.88 & 91.61 & 93.93 & 94.27 & 90.66 & 89.22 & 91.70 & 92.43 \\ YAGO\({}^{1}\)\(\rightarrow\)YAGO\({}^{2}\) & 91.15 & 89.47 & 92.65 & 93.07 & 92.88 & 91.63 & 93.90 & 94.36 & 90.96 & 89.66 & 91.95 & 92.63 \\ Bias \((\pm)\) & \multicolumn{4}{c|}{\(\leq\) 0.36} & \multicolumn{4}{c|}{\(\leq\) 0.09} & \multicolumn{4}{c}{\(\leq\) 0.44} \\ \hline \hline \end{tabular} \end{table} Table 7. Inductive Performance of TiPNN (\(\rightarrow\) points from the training set to the test set, corresponding to the inductive setting; the rows without an arrow are the corresponding transductive setting, used as the baseline. 'Bias' denotes the difference in the comparison.).

One of the reasoning clues is that on 10/14/2018, the European Union engaged in diplomatic cooperation with the United Kingdom, and on 10/22/2018 (the day before the query time), the European Union expressed an intent to engage in diplomatic cooperation with the United Kingdom. This is easily understandable, as it follows a logical reasoning pathway: _Express intent to engage in diplomatic cooperation \(\rightarrow\) Engage in diplomatic cooperation_. (III) For the query _(Russia, Host a visit,?, 10/24/2018)_, the response is Head of Government (Italy). One reliable reasoning clue is that on 10/23/2018, Russia was visited by the Head of Government (Italy). This is also not surprising, as it follows a logical reasoning pathway: _Make a visit\({}^{-1}\)\(\rightarrow\) Host a visit_, which is in line with common-sense rules.

\begin{table} \begin{tabular}{r l} \hline \hline **Query:** & **(Shinzo Abe, Make a visit,?, 10/23/2018)** \\ **Response:** & China \\ \hline 0.784 & \textless{}Shinzo Abe, Express intent to meet or negotiate, China, 10/22/2018\textgreater{} \\ 0.469 & \textless{}Shinzo Abe, Express intent to meet or negotiate, Li Keqiang, 10/21/2018\textgreater{} \\ \hline \hline \end{tabular} \end{table} Table 8. Responses and top-2 reasoning evidence of sampled queries from the ICEWS18 test set, with the importance score of each evidence path.
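As a rough sketch of the gradient-based importance estimate behind Equation 16, the toy example below treats each temporal edge as carrying a differentiable weight, backpropagates the query score to obtain per-edge importances, and sums them within each path; the paper traverses candidate paths with beam search, which is omitted here, and the score function and all tensors are toy placeholders.

```python
import torch

# Toy differentiable score: each temporal edge carries a weight, and the
# model score is some differentiable function of those weights.
edge_w = torch.ones(5, requires_grad=True)   # one weight per temporal edge
score = (edge_w[0] * edge_w[1] + edge_w[2] * edge_w[3] * edge_w[4]).sigmoid()
score.backward()                             # d score / d edge weights
edge_importance = edge_w.grad                # per-edge importance via autograd

# Path importance: sum of the importances of the edges the path traverses.
paths = [[0, 1], [2, 3, 4]]                  # candidate paths as lists of edge ids
path_scores = [edge_importance[p].sum().item() for p in paths]
top2 = sorted(zip(path_scores, paths), reverse=True)[:2]  # Top-k evidence paths
```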
Through these examples of reasoning evidence, we can observe that TiPNN is capable of learning temporal reasoning logic from the constructed history temporal graphs. These reasoning clues provide users with more comprehensible inference results and intuitive evaluation criteria. One can utilize these clues to understand the reasoning process for future temporal facts, thereby increasing confidence in the inference results and enabling potential refinements or improvements when needed.

## 6. Conclusion

In this work, we have introduced TiPNN, an innovative query-aware temporal path reasoning model for TKG reasoning tasks, addressing the challenge of predicting future missing temporal facts under the temporal extrapolated setting. First, we presented a unified graph, namely the history temporal graph, which represents the comprehensive features of the historical context. This constructed graph allows for a more comprehensive utilization of the interactions among entities at different historical instances and of the temporal features between historical facts during graph representation learning. Second, we introduced the novel concept of temporal paths, designed to capture query-relevant logical semantic paths on the history temporal graph, which provide rich structural and temporal context for reasoning tasks. Third, we designed a query-aware temporal path processing framework, integrated with the introduced temporal edge merging and path aggregation functions. It enables the modeling of temporal path features over the history temporal graph for future temporal fact reasoning. Overall, the proposed model avoids the need for separate graph learning on each temporal subgraph, making use of the unified graph to represent the historical features and thus enhancing the efficiency of the reasoning process. By starting from the query and capturing and learning over query-aware temporal paths, TiPNN accounts for both the structural information and the temporal dependencies between entities of separate subgraphs in the historical context, and achieves the prediction of future missing facts while offering interpretable reasoning evidence that facilitates users' analysis of results and model fine-tuning. Besides, in learning historical patterns, the modeling process adopts an entity-independent manner, which means TiPNN does not rely on specific entity representations, enabling it to naturally handle TKG reasoning tasks under the inductive setting. We have conducted extensive experiments on TKG reasoning benchmark datasets to evaluate the performance of TiPNN. The results demonstrate that the proposed model exhibits superior effectiveness and achieves new state-of-the-art results.
2309.03279
Let Quantum Neural Networks Choose Their Own Frequencies
Parameterized quantum circuits as machine learning models are typically well described by their representation as a partial Fourier series of the input features, with frequencies uniquely determined by the feature map's generator Hamiltonians. Ordinarily, these data-encoding generators are chosen in advance, fixing the space of functions that can be represented. In this work we consider a generalization of quantum models to include a set of trainable parameters in the generator, leading to a trainable frequency (TF) quantum model. We numerically demonstrate how TF models can learn generators with desirable properties for solving the task at hand, including non-regularly spaced frequencies in their spectra and flexible spectral richness. Finally, we showcase the real-world effectiveness of our approach, demonstrating an improved accuracy in solving the Navier-Stokes equations using a TF model with only a single parameter added to each encoding operation. Since TF models encompass conventional fixed frequency models, they may offer a sensible default choice for variational quantum machine learning.
Ben Jaderberg, Antonio A. Gentile, Youssef Achari Berrada, Elvira Shishenina, Vincent E. Elfving
2023-09-06T18:00:07Z
http://arxiv.org/abs/2309.03279v2
# Let Quantum Neural Networks Choose Their Own Frequencies ###### Abstract Parameterized quantum circuits as machine learning models are typically well described by their representation as a partial Fourier series of the input features, with frequencies uniquely determined by the feature map's generator Hamiltonians. Ordinarily, these data-encoding generators are chosen in advance, fixing the space of functions that can be represented. In this work we consider a generalization of quantum models to include a set of trainable parameters in the generator, leading to a trainable frequency (TF) quantum model. We numerically demonstrate how TF models can learn generators with desirable properties for solving the task at hand, including non-regularly spaced frequencies in their spectra and flexible spectral richness. Finally, we showcase the real-world effectiveness of our approach, demonstrating an improved accuracy in solving the Navier-Stokes equations using a TF model with only a single parameter added to each encoding operation. Since TF models encompass conventional fixed frequency models, they may offer a sensible default choice for variational quantum machine learning. ## I Introduction The field of quantum machine learning (QML) remains a promising application for quantum computers. In the fault-tolerant era, the prospect of quantum advantage is spearheaded by the exponential speedups in solving linear systems of equations [1], learning distributions [2; 3] and topological data analysis [4]. Yet the arrival of large fault-tolerant quantum computers is not anticipated in the next decade, reducing the practical impact of such algorithms today. One approach to solving relevant problems in machine learning with today's quantum computers is through the use of parameterized quantum circuits (PQCs) [5; 6], which have been applied to a variety of use cases [7; 8; 9; 10; 11; 12; 13]. A PQC consists of quantum feature maps (FMs) \(\hat{U}_{F}(\vec{x})\), which encode an input \(\vec{x}\) into the Hilbert space, and variational ansatze \(\hat{U}_{A}(\vec{\theta}_{A})\) which contain trainable parameters. Previously, it was shown that the measured output of many variational QML models can be mathematically represented as a partial Fourier series in the network inputs [14], leading to a range of new insights [15; 16; 17]. Most strikingly, it follows that the set of frequencies \(\Omega\) appearing in the Fourier series is uniquely determined by the eigenvalues of the generator Hamiltonian of the quantum FM, while the series coefficients are tuned by the variational parameters \(\vec{\theta}_{A}\). Conventionally, a specific generator is chosen beforehand, such that the model frequencies are fixed throughout training. In theory this is not a problem, since by choosing a generator that produces regularly spaced frequencies, the basis functions of the Fourier series form an orthogonal basis set. This ensures that asymptotically large fixed frequency (FF) quantum models are universal function approximators [14]. Yet in reality, finite-sized quantum computers will only permit models with a finite number of frequencies. Thus, in practice, great importance should be placed on the _choice_ of basis functions, for which the orthogonal convention may not be the best. This raises an additional question: what is the optimal choice of basis functions? Indeed, for many problems it is not obvious what this would be without prior knowledge of the solution. 
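To make the Fourier picture concrete, the sketch below evaluates a model of the form \(f(x)=\sum_{j}c_{j}e^{i\omega_{j}x}\); the frequencies and coefficients are made-up placeholders for what the feature map and the ansatz, respectively, would determine in a real PQC.

```python
import numpy as np

def quantum_model_output(x, freqs, coeffs):
    """Evaluate the partial Fourier series realised by a PQC:
    f(x) = sum_j c_j exp(i w_j x). A real-valued expectation value has
    conjugate-paired modes, so only the real part is kept here."""
    return np.real(sum(c * np.exp(1j * w * x) for w, c in zip(freqs, coeffs)))

x = np.linspace(-np.pi, np.pi, 200)
freqs = [0.0, 1.0, 2.0, 3.0]            # fixed, regularly spaced spectrum
coeffs = [0.1, 0.3 + 0.2j, 0.05, 0.4j]  # what the ansatz parameters would tune
y = quantum_model_output(x, freqs, coeffs)
```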
Here we address this issue by exploring a natural extension of quantum models in which an additional set of trainable parameters \(\vec{\theta}_{F}\) are included in the FM generator. Figure 1: An overview of the concepts discussed in this paper. Top: we introduce a parameterized quantum circuit in which the generator of the data-encoding block is a function of trainable parameters \(\vec{\theta}_{F}\), alongside the standard trainable variational ansatz. Left: the output of such a model is a Fourier-like sum over different individual modes. Right: in conventional quantum models, tuning the ansatz parameters \(\vec{\theta}_{A}\) allows the coefficients of each mode to be changed. By using a trainable frequency feature map (TFFM), tuning \(\vec{\theta}_{F}\) leads to a quantum model in which the frequencies of each mode can also be trained. This simple idea has a significant impact on the effectiveness of quantum models, allowing the generator eigenspectrum to change over the course of training in a direction that minimises the objective loss. This in turn creates a quantum model with trainable frequencies as visualised in Figure 1. In a quantum circuit learning [6] setting, we numerically demonstrate cases where trainable frequency (TF) models can learn a set of basis functions that better solves the task at hand, such as when the solution has a spectral decomposition with non-regularly spaced frequencies. Furthermore, we show that an improvement is realisable on more advanced learning tasks. We train quantum models with the differentiable quantum circuits (DQC) algorithm [18] to learn the solution to the 2D time-dependent Navier-Stokes differential equations. For the problem of predicting the wake flow of fluid passing a circular cylinder, a TF quantum model achieves lower loss and better predictive accuracy than the equivalent FF model. Overall, our results raise the prospect that TF quantum models could improve performance on other near-term QML problems. ## II Previous works The idea of including trainable parameters in the feature map generator is present in some previous works. In a study of FM input redundancy, Gil Vidal et al. [19] hypothesise that a variational input encoding strategy may improve the expressiveness of quantum models, followed by limited experiments [20]. Other works also suggest that encoding unitaries with trainable weights can reduce circuit depths of quantum models, as discussed for single-qubit classifiers [21] and quantum convolutional neural networks [22]. Nevertheless, our work is different due to novel contributions demonstrating (a) an analysis of the effect of trainable generators on the Fourier modes of quantum models, (b) evidence of specific spectral features in data for which TF models offer an advantage over FF models and (c) direct comparison between FF and TF models for a practically relevant learning problem. In the language of quantum kernels [23; 8; 24], several recent works use the term "trainable feature map" to describe the application of unitaries with trainable parameters on data already encoded into the Hilbert space [25; 26]. Such a distinction arises because quantum kernel models often contain no trainable parameters at all. However, the trainable parameters of these feature maps do not act directly on the generator Hamiltonian. As discussed in section III, this is intrinsically different to our scheme as it does not lead to a model with trainable frequencies. 
To not confuse the two schemes, here we adopt the wording "trainable _frequency_ feature map" and "trainable _frequency_ models". ## III Method Practically, quantum computing entails the application of sequential operations (e.g., laser pulses) to a physical system of qubits. Yet to understand how these systems can be theoretically manipulated, it is often useful to work at the higher-level framework of linear algebra, from which insights can be translated back to real hardware. This allows studying strategies encompassing digital, analog, and digital-analog paradigms [27]. In a more abstract formulation, a broad class of FMs can be described mathematically as the tensor product of an arbitrary number of sub-feature-maps, each represented by the time evolution of a generator Hamiltonian applied on an arbitrary subset of the qubits \[\hat{U}_{F}(\vec{x})=\bigotimes_{m}e^{-\frac{i}{2}\hat{G}_{m}(\gamma_{m})\phi(\vec{x})}, \tag{1}\] where for the sub-feature-map \(m\), \(\hat{G}_{m}(\gamma_{m})\) is the generator Hamiltonian that depends on non-trainable parameters \(\gamma_{m}\). Furthermore, \(\phi(\vec{x}):\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\) is an encoding function that depends on the input features \(\vec{x}\). Practically speaking, \(\gamma_{m}\) is typically related to the index of the tensor product space the sub-feature-map is applied to and can be used to set the number of unique frequencies the model has access to. Furthermore, in some cases \(\hat{U}_{F}(\vec{x})\) can be applied several times across the quantum circuit, interleaved with variational ansatz \(\hat{U}_{A}(\vec{\theta}_{A})\) layers, for example in data re-uploading [21; 28] or serial feature maps [14]. The measured output of a quantum model with a feature map defined in Eq. (1) can be expressed as a Fourier-type sum in the input dimensions \[f(\vec{x},\vec{\theta}_{A})=\sum_{\vec{\omega}_{j}\in\Omega}\vec{c}_{j}(\vec{\theta}_{A})e^{i\vec{\omega}_{j}\cdot\phi(\vec{x})}, \tag{2}\] where \(\vec{c}_{j}\) are the coefficients of the multi-dimensional Fourier mode with frequencies \(\vec{\omega}_{j}\). Crucially, the frequency spectrum \(\Omega\) of a model is uniquely determined by the eigenvalues of \(\hat{G}_{m}\)[14]. More specifically, let us define the final state produced by the PQC as \(|\psi_{f}\rangle\). If the model is a quantum kernel, where the output derives from the distance to a reference quantum state \(|\psi\rangle\) (e.g., \(f(\vec{x},\vec{\theta}_{A})=|\langle\psi|\psi_{f}\rangle|^{2}\)), then \(\Omega\) is explicitly the set of eigenvalues of the composite generator \(\hat{G}\) such that \(\hat{U}_{F}(\vec{x})=e^{-\frac{i}{2}\hat{G}\phi(\vec{x})}\). When the generators \(\hat{G}_{m}\) commute, the composite generator is simply \(\hat{G}=\sum_{m}\hat{G}_{m}\). If the model is a quantum neural network (QNN), where the output is derived from the expectation value of a cost operator \(\hat{C}\) (e.g., \(f(\vec{x},\vec{\theta}_{A})=\langle\psi_{f}|\hat{C}|\psi_{f}\rangle\)), then \(\Omega\) contains the gaps in the eigenspectrum of \(\hat{G}\). The key insight here is that in such quantum models, a specific feature map generator is chosen in advance. This fixes the frequency spectrum over the course of training, setting predetermined basis functions \(e^{i\vec{\omega}_{j}\cdot\phi(\vec{x})}\) from which the model can construct a solution. For these FF models, only the coefficients \(\vec{c}_{j}\) can be tuned by the variational ansatz during training. 
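The statement that \(\Omega\) is given by the eigenvalue gaps of the composite generator can be checked numerically. Below is a small sketch, assuming commuting single-qubit Pauli-\(\hat{Y}\) generators of the form used in Section IV; the helper names are illustrative. It also previews the simple (\(\gamma_{m}=1\)), tower (\(\gamma_{m}=m\)) and exponential (\(\gamma_{m}=2^{m-1}\)) choices discussed later, and what happens when the fixed \(\gamma_{m}\) are replaced by trained values.

```python
import numpy as np

Y = np.array([[0, -1j], [1j, 0]])  # Pauli-Y

def composite_generator(gammas):
    """G = sum_m gamma_m * Y^(m) / 2 on N qubits (commuting single-qubit terms)."""
    N = len(gammas)
    G = np.zeros((2**N, 2**N), dtype=complex)
    for m, g in enumerate(gammas):
        ops = [np.eye(2)] * N
        ops[m] = g * Y / 2
        term = ops[0]
        for op in ops[1:]:
            term = np.kron(term, op)
        G += term
    return G

def model_frequencies(G):
    """For a QNN output <C>, Omega is the set of gaps of the generator spectrum."""
    evals = np.linalg.eigvalsh(G)
    return sorted({round(abs(a - b), 9) for a in evals for b in evals})

print(model_frequencies(composite_generator([1, 1, 1])))  # -> [0, 1, 2, 3]
N = 4
for name, gammas in [("simple", [1] * N),
                     ("tower", list(range(1, N + 1))),
                     ("exponential", [2**m for m in range(N)])]:
    n_gaps = len(model_frequencies(composite_generator(gammas))) - 1
    print(name, n_gaps, "nonzero frequencies")  # O(N), O(N^2), O(2^N)
# making the weights trainable reshapes the gaps into non-uniform spacings:
print(model_frequencies(composite_generator([0.89, 1.05, 1.04])))
```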
In this work, we replace the FM generator with one that includes trainable parameters \(\hat{G}_{m}(\gamma_{m},\vec{\theta}_{F})\). In doing so, the generator eigenspectrum, and thus the model frequencies, can also be tuned over the course of training. For this reason we refer to such feature maps as trainable frequency feature maps (TFFMs), which in turn create TF quantum models. The output of a TF quantum model will be a Fourier-type sum where the frequencies of each mode depend explicitly on the parameterization of the feature map. For example, for a QNN the output of the model can be written as \[f(\vec{x},\vec{\theta}_{A},\vec{\theta}_{F})=\sum_{\vec{\omega}_{j}\in\Omega}\vec{c}_{j}(\vec{\theta}_{A},\hat{C})e^{i\vec{\omega}_{j}(\vec{\theta}_{F})\cdot\phi(\vec{x})}. \tag{3}\] Moreover, as \(\vec{\theta}_{F}\) is optimized with respect to minimising a loss function \(\mathcal{L}\), the introduction of trainable frequencies \(\vec{\omega}_{j}(\vec{\theta}_{F})\) ideally allows the selection of spectral modes that better fit the specific learning task. For gradient-based optimizers, the derivative of parameters in the generator \(\frac{\partial\mathcal{L}}{\partial\theta_{F}}\) can be calculated as laid out in Appendix A, which for the generators used in our experiments simplifies to the parameter-shift rule (PSR) [17; 29]. We note that the idea here is fundamentally different to simply viewing a combination of feature maps and variational ansatze (e.g., serial feature maps) as a higher level abstraction containing one unitary \(\hat{U}_{F}(\vec{x},\vec{\theta})=\hat{U}_{F}(\vec{x})\hat{U}_{A}(\vec{\theta})\). The key concept in TF models is that a parameterization is introduced that acts directly on the generator Hamiltonian in the exponent of Eq. (1). This is what allows a trainable eigenvalue distribution, leading to quantum models with trainable frequencies. Furthermore, in this work the FF models we consider are those in which the model frequencies are regularly spaced (i.e., integers or integer-valued multiples of a base frequency), such that the basis functions form an orthogonal set. This has become the conventional choice in the literature [30; 31; 32; 33; 34; 12; 35], owing to the theoretical grounding that such models are universal function approximators in the asymptotic limit [14]. However, it should be made clear that a FF model could mimic any TF model if the non-trainable unitaries in the FM were constructed with values corresponding to the final trained values of \(\vec{\theta}_{F}\). The crux, however, is that having knowledge of such values without going through the training process is highly unlikely, and might only occur where considerable a-priori knowledge of the solution is available. ## IV Proof of principle results The potential advantage of TF quantum models stems from their ability to be trained such that their frequency spectra contain non-uniform gaps, producing non-orthogonal basis functions. We demonstrate this effect by first considering a fixed frequency feature map (FFFM) where for simplicity we restrict \(\hat{G}_{m}\) to single-qubit operators. Overall we choose a generator Hamiltonian \(\hat{G}=\sum_{m=1}^{N}\gamma_{m}\hat{Y}^{m}/2\), where \(N\) is the number of qubits, \(\hat{Y}^{m}\) is the Pauli matrix applied to the tensor product space of qubit \(m\), \(\gamma_{m}=1\), and \(\phi(\vec{x})=x\) is a 1-D Fourier encoding function. 
This is the commonly used angle encoding FM [36], which we use to train a QNN to fit data produced by a cosine series \[y(x)=\frac{1}{|\Omega_{d}|}\sum_{\omega_{d}\in\Omega_{d}}\cos(\omega_{d}x). \tag{4}\] Here the data function contains a set of frequencies \(\Omega_{d}\), from which \(n_{d}\) data points are generated equally spaced in the domain \(\mathcal{D}=[-4\pi,4\pi]\). The value of \(n_{d}\) is determined by the Nyquist sampling rate such that \(n_{d}=\lceil 2|\mathcal{D}|\max(\Omega_{d})\rceil\). Figure 2 demonstrates where FF models succeed and fail. In these experiments, a small QNN with \(N=3\) qubits and \(L=4\) variational ansatz layers is used. More details of the quantum models and training hyperparameters for all experiments can be found in Appendix B. The FF QNN defined above is first trained on data with frequencies \(\Omega_{d}=\{1,2,3\}\). After training, the prediction of the model is recorded as shown in Figure 2a. Here we see the underlying function can be perfectly learned. To understand why, we note that the set of degenerate eigenvalues of \(\hat{G}\) is \(\lambda=\{-\frac{3}{2},-\frac{1}{2},\frac{1}{2},\frac{3}{2}\}\) and thus the unique gaps are \(\Delta=\{1,2,3\}\). In this case, the natural frequencies of the model \(\Delta\) are equal to the frequencies of the data \(\Omega_{d}\), making learning trivial. Furthermore, we find that similar excellent fits are possible for data containing frequencies that differ from the natural model frequencies by a constant factor (e.g., \(\Omega_{d}=\{1.5,3,4.5\}\)), provided the quantum model is given the trivial classical resource of parameters that can globally scale the input \(x\). By contrast, Figure 2b illustrates how a FF QNN fails to fit data with frequencies \(\Omega_{d}=\{1,1.2,3\}\). This occurs because no global scaling of the data can enable the fixed generator eigenspectrum to contain gaps with unequal spacing. In such a setting, no additional training would lead to accurate fitting of the data. Furthermore, no practical number of extra ansatz layers would enable the FF model to fit the data (see section VI for further discussion), for which we verify up to \(L=128\). Conversely, a QNN containing a TFFM with simple parameterization \(\hat{G}_{\theta}=\sum_{m=1}^{N}\theta_{m}\hat{Y}^{m}/2\) does significantly better in fitting the data. This is precisely possible because training of the generator parameters converges on the values \(\vec{\theta}_{F}=\{0.89,1.05,1.04\}\), leading to an eigenvalue spectrum which contains gaps \(\Delta_{\theta}=\{\ldots,0.95,1.05,1.20,3.00,\ldots\}\) as shown in Figure 2c. In this experiment and all others using TF models, the TFFM parameters are initialised as the unit vector \(\vec{\theta}_{F}=\mathds{1}\). A further advantage of TF models is their flexible spectral richness. For FF models using the previously defined \(\hat{G}\), one can pick values \(\gamma_{m}=1\), \(\gamma_{m}=m\) or \(\gamma_{m}=2^{(m-1)}\) to produce generators where the number of unique spectral gaps \(|\Omega|\) scales as \(\mathcal{O}(N)\), \(\mathcal{O}(N^{2})\) and \(\mathcal{O}(2^{N})\) respectively. Typically, a practitioner may need to try all of these so-called simple, tower [18] and exponential [13] FMs, yet a TFFM can be trained to effectively represent any of these. To test this, we again sample from Eq. (4) to construct 7 data sets that contain between 1 and 7 frequencies equally spaced in the range \(\omega_{d}\in[1,3]\). 
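Before looking at the results, here is a minimal sketch of how such training data can be generated, following Eq. (4) together with the Nyquist-style rule for \(n_{d}\) stated above; the function name and defaults are illustrative.

```python
import numpy as np

def cosine_series_dataset(omega_d, domain=(-4 * np.pi, 4 * np.pi)):
    """Sample y(x) = (1/|Omega_d|) sum_w cos(w x) on a grid of
    n_d = ceil(2 * |D| * max(Omega_d)) equally spaced points."""
    lo, hi = domain
    n_d = int(np.ceil(2 * (hi - lo) * max(omega_d)))
    x = np.linspace(lo, hi, n_d)
    y = np.mean([np.cos(w * x) for w in omega_d], axis=0)
    return x, y

x, y = cosine_series_dataset([1.0, 1.2, 3.0])  # the non-regularly spaced case
```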
Figure 3 shows the mean squared error (MSE) achieved by each QNN across these data sets. In the best case, the TF and FF models perform equally well, since each fixed generator produces a specific number of frequencies it is well suited for. However, we find that a TF model outperforms FF models in the average and worst-case scenarios. Despite the data having orthogonal basis functions, the FF models have either too few frequencies (e.g., simple FM for data with \(|\Omega_{d}|>3\)) or too many (e.g., exponential FM for data with \(|\Omega_{d}|<7\)) to perform well across all data sets. Thus, we find that even when the optimal basis functions are orthogonal, TF models can be useful when there is no knowledge of the ideal number of spectral modes of the solution. ## V Application to fluid dynamics In this section we demonstrate the impact of TF models on solving problems of practical interest. Specifically, we focus on the DQC algorithm [18], in which a quantum model is trained to find a solution to a partial differential equation (PDE). The PDE to be solved is the incompressible 2D time-dependent Navier-Stokes equations (NSE), defined as \[\frac{\partial u}{\partial t}+u\frac{\partial u}{\partial x}+v\frac{\partial u}{\partial y}-\frac{1}{Re}\left(\frac{\partial^{2}u}{\partial x^{2}}+\frac{\partial^{2}u}{\partial y^{2}}\right)+\frac{\partial p}{\partial x}=0, \tag{5}\] \[\frac{\partial v}{\partial t}+u\frac{\partial v}{\partial x}+v\frac{\partial v}{\partial y}-\frac{1}{Re}\left(\frac{\partial^{2}v}{\partial x^{2}}+\frac{\partial^{2}v}{\partial y^{2}}\right)+\frac{\partial p}{\partial y}=0, \tag{6}\] where \(u\) and \(v\) are the velocity components in the \(x\) and \(y\) directions respectively, \(p\) is the pressure and \(Re\) is the Reynolds number, which represents the ratio of inertial forces to viscous forces in the fluid. The goal of this experiment is to train a quantum model to solve the downstream wake flow of fluid moving past a circular cylinder. This is one of the canonical systems of study in physics-informed neural networks (PINNs), the classical analogue of DQC, due to the vortex shedding patterns and other complex dynamics exhibited even in the laminar regime [37; 38; 39]. A high-resolution data set for \(Re=100\) is obtained from [37] for the region \(x=1\) to \(x=8\) downstream from a cylinder at \(x=0\), solved using the NekTar high-order polynomial finite element method (FEM) [40; 41]. Figure 3: Prediction MSE of simple, tower and exponential FFFMs compared to a TFFM when training on multiple data sets with equally spaced frequencies \(\omega_{d}\in[1,3]\). For each box, the triangle and orange line denote the mean and median respectively. Figure 2: Fitting cosine series of different frequencies using fixed frequency (FF) and trainable frequency (TF) QNNs. (a) Prediction after training on data with \(\Omega_{d}=\{1,2,3\}\). (b) Prediction after training on data with \(\Omega_{d}=\{1,1.2,3\}\). (c) Spectra of trained models in (b) as obtained using a discrete Fourier transform. The blue dashed lines indicate frequencies of the data. The quantum model is trained through minimising the loss \(\mathcal{L}=\mathcal{L}_{\text{PDE}}+\mathcal{L}_{\text{data}}\). To obtain \(\mathcal{L}_{\text{PDE}}\), the LHS of each of equations (5) and (6) is evaluated for each collocation point and the MSEs with respect to \(0\) are computed, the sum of which gives \(\mathcal{L}_{\text{PDE}}\). 
Meanwhile, \(\mathcal{L}_{\text{data}}\) is computed as the MSE to a small subset of training data. In this case, only \(1\%\) of the reference solution is given as a form of data-driven boundary conditions, whilst the remainder of the flow must be predicted through learning a solution that directly solves the NSE. Given the increased problem complexity compared to section IV, here we employ a more advanced quantum architecture. The overall model consists of two QNNs, the outputs of which represent the stream function \(\psi\) and pressure \(p\) respectively. The introduction of \(\psi\), from which the velocities can be obtained via the relations \[u=\frac{\partial\psi}{\partial y},\qquad v=-\frac{\partial\psi}{\partial x},\] ensures that the mass continuity equation is automatically satisfied, which would otherwise require a third term in the loss function. The architecture of each QNN, consisting of \(N=6\) qubits, is shown in Figure 4. First, a single ansatz layer \(\hat{U}_{A}(\vec{\theta}_{A})\) is applied, followed by a TFFM which encodes each dimension \((x,y,t)\) in parallel blocks. Each dimension of the TF encoding once again uses the simple parameterization \(\hat{G}_{\theta}=\sum_{m=1}^{N}\theta_{m}\hat{Y}^{m}\!/2\), whilst the FF model uses the same generator without trainable parameters \(\hat{G}=\sum_{m=1}^{N}\hat{Y}^{m}\!/2\). After the FM, a sequence of \(L=8\) ansatz layers is then applied. Subsequently a data-reuploading feature map is applied, which is a copy of the TFFM block, including shared parameters. Finally, the QNN architecture ends with a single ansatz layer. Overall, each QNN has a circuit depth of \(52\) and \(180\) trainable ansatz parameters. Figure 5 gives a visualization of the results of this experiment, where the pressure field at a specific time is compared. Here the quantum models are trained for 5,000 iterations; more details can be found in Appendix B. The left panel shows the reference solution for the pressure \(p\) at time \(t=3.5\). Here, the \(10\times 5\) grid of cells with red borders corresponds to the points that are given to the quantum models at each time step to construct \(\mathcal{L}_{\text{data}}\), overall \(1\%\) of the total \(100\times 50\) grid. The middle panel illustrates the prediction of the TF QNN. Whilst the model does not achieve perfect agreement with the reference solution, it captures important qualitative features including the formation of a large negative-pressure bubble on the left and an additional two separated bubbles on the right. By contrast, the FF QNN solution only correctly predicts the global background, unable to resolve the distinct regions of pressure that form as fluid passes to the right. This demonstrates just how impactful TFFMs can be on the expressiveness of quantum models, even with limited width and depth. Further still, we show in Appendix C how deeper TF models can match the FEM solution whilst FF models cannot. To quantify this benefit, the mean absolute error relative to the median (MAERM), \(\frac{1}{M}\sum_{i=1}^{M}\left|\frac{y_{i}-\hat{y}_{i}}{\tilde{y}}\right|\) (with \(y_{i}\) the reference value, \(\hat{y}_{i}\) the prediction, and \(\tilde{y}\) the median of the reference field), is calculated for each time step and observable, where the sum spans the \(M\) spatial grid points. The results, presented in Table 1, numerically demonstrate the improved performance of the TF model across all observables and time steps. Particularly notable is the large improvement in accuracy for the vertical velocity \(v\). 
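The MAERM metric can be written in a few lines; the sketch below assumes \(\tilde{y}\) is the median of the reference field, which matches the metric's name but is our reading rather than a definition spelled out in the original text.

```python
import numpy as np

def maerm(y_ref, y_pred):
    """Mean absolute error relative to the median, computed over the M spatial
    grid points of a single time step and observable."""
    y_ref = np.asarray(y_ref).ravel()
    y_pred = np.asarray(y_pred).ravel()
    return np.mean(np.abs(y_ref - y_pred)) / np.abs(np.median(y_ref))
```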
Finally, it is worth considering how the inclusion of additional trainable parameters into a quantum model affects the cost of training. For a gradient-based optimizer such as Adam, each training iteration requires the calculation of \(\partial L/\partial\theta_{i}\) for all trainable parameters \(\theta_{i}\) in the circuit. Figure 4: Circuit diagram of the trainable frequency (TF) QNN architecture used in experiments solving the Navier-Stokes equations. The trainable frequency feature map (TFFM, dashed box) contains the generator parameters \(\theta_{F}\) that allow training of the underlying model frequencies. The TFFM is followed by \(L=8\) ansatz layers, a data-reuploading FM and then a final ansatz layer before the qubits are measured. Here, we study both digital and digital-analog versions of such layered abstraction. If one were to calculate these gradients on real quantum hardware, the PSR would need to be used. For the quantum architecture used in this section, all parameterized unitaries decompose into single-qubit Pauli rotations, such that only two circuit evaluations are required to compute the gradient of each parameter in \(\vec{\theta}_{A}\) and \(\vec{\theta}_{F}\). Thus, it is easy to calculate that a factor of \(C_{f}=\frac{|\vec{\theta}_{F}|+|\vec{\theta}_{A}|}{|\vec{\theta}_{A}|}\) more circuit evaluations are required to train the TF model. For the models used in this section, the cost factor is \(C_{f}=\frac{12+180}{180}=1.07\). This represents only a 7% increase in the number of quantum circuits evaluated, a modest cost for the improvement in performance observed. ## VI Discussion In this work we explore an extension of variational QML models to include trainable parameters in the data-encoding Hamiltonian, compatible with a wide range of models including those based on digital and digital-analog paradigms. As introduced in section III, when viewed through the lens of a Fourier representation, the effect of such parameters is fundamentally different to those in the variational ansatz, as they enable the frequencies of the quantum model to be trained. Furthermore, in section IV we showed how this leads to quantum models with specific spectral properties inaccessible to the conventional approach of only tuning the coefficients of fixed orthogonal basis functions. Finally, in section V we demonstrated the benefit of TF models for practical learning problems, leading to a learnt solution of the Navier-Stokes equations closer to the ground truth than FF models. We note that, in theory, a FF model could also achieve parity with TF models if it had independent control of the coefficients of each basis function. Given data with a minimum frequency gap \(\Delta_{d,\min}\) and spanning a range \(r_{d}=|\Omega_{d,\max}-\Omega_{d,\min}|\), a quantum model with \(\frac{r_{d}}{\Delta_{d,\min}}\) fixed frequencies could span all modes of the data. In this case, such a model could even represent data with non-regularly spaced frequencies (e.g., \(\Omega_{d}=\{1,1.2,3\}\) by setting the coefficients \(\vec{c}_{j}=0\) for \(\vec{\omega}_{j}\in\{1.1,1.3,1.4,\ldots,2.9\}\)). However, such a model would be exponentially costly to train, since independent control of the coefficients of the model frequencies would generally require \(\mathcal{O}(2^{N})\) ansatz parameters. It is for this reason that we present TF models as having a practical advantage, within the context of scalable approaches to quantum machine learning. 
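The training-cost bookkeeping above can be made concrete: with every parameterized unitary decomposing into single-qubit Pauli rotations, each parameter's gradient needs exactly two circuit evaluations via the parameter-shift rule. A sketch follows, where `evaluate_circuit` is a stand-in for whatever routine returns the model's expectation value; the toy check uses \(\cos\theta\), the expectation one would get from a single rotation measured in the computational basis.

```python
import numpy as np

def psr_gradient(evaluate_circuit, thetas, i, shift=np.pi / 2):
    """Parameter-shift rule for a parameter entering via a single Pauli rotation:
    df/dtheta_i = [f(theta_i + pi/2) - f(theta_i - pi/2)] / 2,
    i.e. two circuit evaluations per trainable parameter."""
    plus, minus = np.array(thetas, float), np.array(thetas, float)
    plus[i] += shift
    minus[i] -= shift
    return (evaluate_circuit(plus) - evaluate_circuit(minus)) / 2.0

toy = lambda th: np.cos(th[0])                     # e.g. <Z> after RY(theta) on |0>
print(psr_gradient(toy, [0.3], 0), -np.sin(0.3))   # PSR matches the exact derivative

n_F, n_A = 12, 180                                 # generator vs ansatz parameters
print(f"C_f = {(n_F + n_A) / n_A:.2f}")            # -> 1.07, the ~7% overhead above
```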
Figure 5: Pressure field at \(t=3.5\) of the wake flow of fluid passing a circular cylinder at \(x=0\). Left: reference solution obtained by finite-element method (FEM) in [37]. Cells with red borders indicate the training data accessible to the quantum models, see main text. Middle: prediction of trainable frequency (TF) QNN using generators \(\hat{G}_{\theta}\). Right: prediction of fixed frequency (FF) QNN using generators \(\hat{G}\). The quantum circuits are based on a sliced Digital-Analog approach (sDAQC [27]) suitable for platforms such as neutral atom quantum computers. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline & \multicolumn{2}{c|}{u} & \multicolumn{2}{c|}{v} & \multicolumn{2}{c|}{p} \\ \hline & FF & **TF** & FF & **TF** & FF & **TF** \\ \hline Min & \(16.8\pm 0.3\) & \(\mathbf{12.8\pm 1.9}\) & \(90.3\pm 2.6\) & \(\mathbf{27.4\pm 3.3}\) & \(62.9\pm 1.9\) & \(\mathbf{51.0\pm 6.0}\) \\ \hline Max & \(18.3\pm 0.4\) & \(\mathbf{14.2\pm 3.0}\) & \(122.0\pm 2.4\) & \(\mathbf{34.3\pm 4.6}\) & \(78.3\pm 1.8\) & \(\mathbf{62.8\pm 10.3}\) \\ \hline Mean & \(17.5\pm 0.3\) & \(\mathbf{13.2\pm 2.0}\) & \(103.2\pm 1.4\) & \(\mathbf{30.4\pm 2.7}\) & \(73.2\pm 1.9\) & \(\mathbf{57.0\pm 3.7}\) \\ \hline \end{tabular} \end{table} Table 1: MAERM error predicting \(u\), \(v\) and \(p\) averaged over 10 runs. The minimum, maximum and mean are with respect to the 11 time points between \(t=0\) and \(t=5.5\). Looking forward, an interesting open question remains around the performance of other parameterizations of the generator. A notable instance of this would be \(\hat{G}=\sum_{m=1}^{N}\text{NN}(\theta_{F})\hat{Y}^{m}/2\), where the parameterization is set by a classical neural network. Interestingly, this has already been implemented in a different context in so-called hybrid quantum-classical networks [42; 43; 44; 45], including studies of DQC [46]. The use of hybrid networks is typically motivated by relieving the computational burden on today's small-scale quantum models. Our work offers the insight that such architectures are actually using a classical neural network to set the frequencies of the quantum model. Compellingly, for many different parameterizations of TFFMs, a generator with regularly spaced eigenvalues is accessible within the parameter space. This is particularly true for the parameterizations studied in this work, which can be trivially realised as an orthogonal model when \(\vec{\theta}_{F}=\mathds{1}\). This implies that, at worst, many classes of TF models can fall back to the behaviour of standard FF models. We find this, along with our results, a strong reason to explore in the future whether TF models could be an effective choice as a new default for variational quantum models. ## Appendix A Computing derivatives of trainable parameters in the generator Training a variational quantum circuit often involves performing gradient-based optimization against the trainable parameters of the ansatz. In this section we make clear how, in the case of TF models, the trainable parameters in the generator can also be optimised in the same way. For gradient-based optimization, one needs to compute \(\frac{\partial\mathcal{L}}{\partial\theta}\), for a suitably defined loss function \(\mathcal{L}\) which captures the adherence of the solution \(f(x)\) to the conditions set by the training problem. 
Let us first define the state produced by a TF quantum model as \[|f_{\theta_{F},\tilde{\theta}_{A}}(x)\rangle=\hat{U}_{A}(\vec{\theta}_{A}) \hat{U}_{F}(x,\vec{\theta}_{F})|0\rangle. \tag{10}\] The output of the quantum models used in the main text, given as the expectation value of a cost operator \(\hat{C}\) \[f(x,\vec{\theta}_{F},\vec{\theta}_{A})=\langle f_{\theta_{F},\tilde{\theta}_ {A}}(x)|\hat{C}|f_{\theta_{F},\tilde{\theta}_{A}}(x)\rangle, \tag{11}\] can then act as a surrogate for the target function \(f\). For a supervised learning (SVL) loss contribution we can define \[\mathcal{L}_{SVL}=\frac{1}{M}\sum_{i}^{M}L(f(x_{i},\vec{\theta}_{F},\vec{ \theta}_{A}),y_{i}), \tag{12}\] where \(L\) is a suitable distance function. In a DQC setting, with each (partial, differential) equation embedded in a functional \(F[\partial_{X}f(x),f(x),x]\) to be estimated on a set of \(M\) collocation points \(\{x_{i}\}\), one can define a physics-informed loss function as \[\mathcal{L}_{DQC}=\frac{1}{M}\sum_{i}^{M}L(F[\partial_{X}f(x_{i}),f(x_{i}),x_ {i}],0). \tag{13}\] When optimising the loss against a certain variational parameter \(\theta\), if \(\theta\in\vec{\theta}_{A}\) is an ansatz parameter then \(\frac{\partial\mathcal{L}}{\partial\theta}\) can be computed as standard with the PSR and generalised parameter-shift rules (GPSR) [17; 18]. If instead \(\theta\in\vec{\theta}_{F}\), we can account for the inclusion of the feature \(x\) using the linearity of differentiation \[\frac{\partial\mathcal{L}_{SVL}}{\partial\theta} =\frac{1}{M}\sum_{i}^{M}\frac{dL}{df}\frac{\partial f(x_{i})}{ \partial\theta}, \tag{14}\] \[\frac{\partial\mathcal{L}_{DQC}}{\partial\theta} =\frac{1}{M}\sum_{i}^{M}\frac{\partial L}{\partial\theta}\left(F \left[\partial_{X}f(x_{i}),f(x_{i}),x_{i}\right],0\right). \tag{15}\] Note that in Eq. 15 we leave the right-hand side (RHS) as its implicit form, as it depends not only upon the actual choice of the distance \(L\), as in Eq. 14, but also upon the terms involved in the functional \(F\). In order to give further guidance on the explicit treatment of this latter case, we can split the discussion according to the various terms involved in the functional \(F\), under the likely assumption that the problem variable(s) \(X\) are independent from the variational parameter \(\theta\): * Terms that depend solely on \(x_{i}\) have null \(\frac{\partial\cdot}{\partial\theta}\) and can be neglected. * Terms containing the function \(f\) itself can be addressed via the chain rule already elicited in Eq. 14. * Finally, terms depending on \(\partial_{X}f(x_{i})\) can be similarly decomposed as \[\frac{d^{2}L}{dfdX}\frac{\partial^{2}f(x_{i})}{\partial\theta\partial X}= \frac{dL}{d(\partial_{X}f)}\frac{\partial}{\partial X}\frac{\partial f(x_{i} )}{\partial\theta}\] (16) where the latter descends from the independence highlighted above, and the first term on the RHS can be simply attained from the (known) analytical form of \(L\) and \(F\). Thus, following the chain rule, in all cases we obtain a dependency upon the term \(\partial f(x_{i})/\partial\theta\). In terms of computing \(\partial f(x_{i})/\partial\theta\), we again omit the case where \(\theta\in\vec{\theta}_{A}\) as it is known from the literature [17; 6]. For the specific case of \(\theta\in\vec{\theta}_{F}\) instead, let us first combine the unitary ansatz and the cost operator as \(\hat{U}_{A}^{\dagger}(\vec{\theta}_{A})\hat{C}\hat{U}_{A}(\vec{\theta}_{A}) \equiv\hat{C}_{A}(\vec{\theta}_{A})\) for brevity. Rewriting Eq. 
11, using the generic FM provided in Eq. 1 with a trainable generator \(\hat{G}_{m}(\vec{\gamma},\vec{\theta}_{F})\) and isolating the only term dependent on the \(\theta_{\hat{m}}\) of interest, we get \[\hat{U}_{F}(\vec{x},\vec{\gamma},\vec{\theta}_{F})=e^{-i\hat{G}_{1}(\vec{\gamma},\theta_{1})\phi_{1}(\vec{x})}\otimes\cdots\otimes e^{-i\hat{G}_{\hat{m}}(\vec{\gamma},\theta_{\hat{m}})\phi_{\hat{m}}(\vec{x})}\otimes\cdots\otimes e^{-i\hat{G}_{M}(\vec{\gamma},\theta_{M})\phi_{M}(\vec{x})} \tag{17}\] \[\equiv\hat{U}_{FL}\otimes e^{-i\hat{G}_{\hat{m}}(\vec{\gamma},\theta_{\hat{m}})\phi_{\hat{m}}(\vec{x})}\otimes\hat{U}_{FR}. \tag{18}\] Further simplifying the notation using \(\hat{U}_{FR}|0\rangle\equiv|f_{FR}\rangle\) and \(\hat{U}_{FL}^{\dagger}\hat{C}_{A}\hat{U}_{FL}\equiv\hat{C}_{AF}\) produces \[f(x,\vec{\theta}_{F},\vec{\theta}_{A})=\langle f_{FR}|e^{i\hat{G}_{\hat{m}}(\vec{\gamma},\theta_{\hat{m}})\phi_{\hat{m}}(\vec{x})}\hat{C}_{AF}e^{-i\hat{G}_{\hat{m}}(\vec{\gamma},\theta_{\hat{m}})\phi_{\hat{m}}(\vec{x})}|f_{FR}\rangle. \tag{19}\] With the model expressed in terms of the dependency on the single FM parameter \(\theta_{\hat{m}}\), we can now address computing \(\partial f/\partial\theta_{\hat{m}}\): \[\frac{\partial f}{\partial\theta_{\hat{m}}}=\langle f_{FR}|e^{i\hat{G}_{\hat{m}}\phi_{\hat{m}}}\bigg[\phi_{\hat{m}}\frac{\partial\hat{G}_{\hat{m}}}{\partial\theta_{\hat{m}}},\hat{C}_{AF}\bigg]e^{-i\hat{G}_{\hat{m}}\phi_{\hat{m}}}|f_{FR}\rangle, \tag{20}\] where for simplicity we have omitted the parameter dependencies of \(\phi_{\hat{m}}(\vec{x})\) and the \(\hat{G}_{\hat{m}}(\vec{\gamma},\theta_{\hat{m}})\) generator, and we have introduced the commutator notation \([\cdot,\cdot]\). Observing Eq. 20, we are thus left with obtaining the partial derivative \(\partial\hat{G}_{\hat{m}}/\partial\theta_{\hat{m}}\), and to this end we distinguish three cases. (i) If \(\hat{G}_{\hat{m}}(\vec{\gamma},\theta_{\hat{m}})=\gamma\theta_{\hat{m}}\hat{\sigma}^{\alpha}\), i.e. a single Pauli operator for a chosen \(\alpha\) axis, then we can obtain the target derivative using the standard PSR. (ii) When a similar dependency on the (non) trainable parameters holds, but we generalise beyond involutory and idempotent primitives, i.e. \(\hat{G}_{\hat{m}}(\vec{\gamma},\theta_{\hat{m}})=\gamma\theta_{\hat{m}}\hat{G}_{\hat{m}}\), we can instead rely on a single application of the GPSR to obtain \(\partial\hat{G}_{\hat{m}}/\partial\theta_{\hat{m}}\). (iii) Finally, in the most generic case considered in this work, the spectral gaps of \(\hat{G}_{\hat{m}}\) might depend non-trivially upon \(\theta_{\hat{m}}\). In this last case, one should recompute such gaps for each new trained \(\theta_{\hat{m}}\), in order to apply GPSR. Note, however, that one could always decompose \(\hat{G}_{\hat{m}}(\vec{\gamma},\theta_{\hat{m}})=\sum_{i}^{I}\hat{P}_{i}(\vec{\gamma},\theta_{\hat{m}})\), i.e. a sum of \(I\) Pauli strings \(\hat{P}\). With this decomposition approach, ignoring any structure in \(\hat{G}_{\hat{m}}\), at most \(2I\) circuit evaluations would suffice to retrieve \(\partial f/\partial\theta_{\hat{m}}\)[47]. ## Appendix B Quantum models and training hyperparameters For all quantum models used in this work, the variational ansatz is a variant of the hardware-efficient ansatz (HEA) [48], with entangling unitaries connecting the qubits in a ring topology, where we study both digital and sDAQC [27] versions of the HEA. 
Furthermore, after all FMs and ansatz layers are applied, a final state \(\ket{\psi}\) is produced. The output (e.g., the predicted value of \(y(x)\) or \(\psi(x,y,t)\) or \(p(x,y,t)\)) of all models is obtained via the expectation value \(\bra{\psi}\hat{C}\ket{\psi}\), where the cost operator \(\hat{C}=\sum_{m=1}^{N}\hat{Z}^{m}\) is an equally-weighted total magnetization across the \(N\) qubits. This combination of constant depth ansatz and 1-local observables is known to avoid cost function induced barren plateaus [49]. All models in this work were trained using the Adam optimizer. In Figure 2a the model was trained with hyperparameters: qubits \(N=4\), ansatz layers \(L=4\), CX entangling gates, training iterations \(N_{i}=2\),000, batch size \(b_{s}=1\), learning rate \(\text{lr}=10^{-3}\). In Figure 2b, both models were trained with: \(N=4\), \(L=4\), CX entangling unitaries, \(N_{i}=6\),000, \(b_{s}=2\), \(\text{lr}=10^{-3}\). In Figure 3 all models were trained with: \(N=4\), \(L=8\), CX entangling unitaries, \(N_{i}=4\),000, \(b_{s}=2\), \(\text{lr}=10^{-3}\). In Figure 5 and Table 1, all models were trained per QNN with: \(N=6\), \(L=10\) (divided as shown in Figure 4), sDAQC version of the HEA where entangling operations are fixed-duration Hamiltonian evolution of the form \(\exp(i\hat{n}_{k}\hat{n}_{l}\pi)\) between neighbouring pairs of qubits \((k,l)\), \(N_{i}=5\),000, \(b_{s}=600\), \(\text{lr}=10^{-2}\). In Appendix C, Figure 6 and Table 2, all models were trained per QNN with: \(N=4\), \(L=64\), CX entangling unitaries, \(N_{i}=5\),000, \(b_{s}=600\), \(\text{lr}=10^{-2}\). ## Appendix C Navier-Stokes results in the over-parameterized regime In this section we repeat the experiments in section V using an over-parameterized model, such that the number of trainable parameters is larger than the dimension of the Hilbert space [50]. Furthermore, here we use a serial TFFM in which each dimension \((x,y,t)\) is encoded serially in separate blocks, separated by an ansatz layer which acts to change the encoding basis to avoid loss of information. This is theoretically preferential to the parallel encoding strategy since it produces a quantum model with more unique frequencies. The TFFM uses the simple parameterization \(\hat{G}_{\theta}=\sum_{m=1}^{N}\theta_{m}\hat{Y}^{m}/2\) for each dimension, whilst the FF model uses the same generator without trainable parameters \(\hat{G}=\sum_{m=1}^{N}\hat{Y}^{m}/2\). After the FM, the model has \(L=64\) ansatz layers bisected by a data-reuploading FM. In total, each QNN has 804 trainable ansatz parameters. Figure 6 presents a visualization of the experiment's results, evaluating the pressure field at a specific time. In this deeper regime, the TF QNN achieves excellent agreement with the reference solution, successfully capturing more features such as the presence of two interconnected negative-pressure bubbles on the left. In contrast, despite its increased depth, the FF QNN solution is only approximately accurate and fails to correctly identify the separation between the pressure bubbles in the middle and right. The numerical performance of the models is given in Table 2.
2302.14311
Towards Memory- and Time-Efficient Backpropagation for Training Spiking Neural Networks
Spiking Neural Networks (SNNs) are promising energy-efficient models for neuromorphic computing. For training the non-differentiable SNN models, the backpropagation through time (BPTT) with surrogate gradients (SG) method has achieved high performance. However, this method suffers from considerable memory cost and training time during training. In this paper, we propose the Spatial Learning Through Time (SLTT) method that can achieve high performance while greatly improving training efficiency compared with BPTT. First, we show that the backpropagation of SNNs through the temporal domain contributes just a little to the final calculated gradients. Thus, we propose to ignore the unimportant routes in the computational graph during backpropagation. The proposed method reduces the number of scalar multiplications and achieves a small memory occupation that is independent of the total time steps. Furthermore, we propose a variant of SLTT, called SLTT-K, that allows backpropagation only at K time steps, then the required number of scalar multiplications is further reduced and is independent of the total time steps. Experiments on both static and neuromorphic datasets demonstrate superior training efficiency and performance of our SLTT. In particular, our method achieves state-of-the-art accuracy on ImageNet, while the memory cost and training time are reduced by more than 70% and 50%, respectively, compared with BPTT.
Qingyan Meng, Mingqing Xiao, Shen Yan, Yisen Wang, Zhouchen Lin, Zhi-Quan Luo
2023-02-28T05:01:01Z
http://arxiv.org/abs/2302.14311v3
# Towards Memory- and Time-Efficient Backpropagation for Training Spiking Neural Networks ###### Abstract Spiking Neural Networks (SNNs) are promising energy-efficient models for neuromorphic computing. For training the non-differentiable SNN models, the backpropagation through time (BPTT) with surrogate gradients (SG) method has achieved high performance. However, this method suffers from considerable memory cost and training time during training. In this paper, we propose the Spatial Learning Through Time (SLTT) method that can achieve high performance while greatly improving training efficiency compared with BPTT. First, we show that the backpropagation of SNNs through the temporal domain contributes just a little to the final calculated gradients. Thus, we propose to ignore the unimportant routes in the computational graph during backpropagation. The proposed method reduces the number of scalar multiplications and achieves a small memory occupation that is independent of the total time steps. Furthermore, we propose a variant of SLTT, called SLTT-K, that allows backpropagation only at \(K\) time steps, then the required number of scalar multiplications is further reduced and is independent of the total time steps. Experiments on both static and neuromorphic datasets demonstrate superior training efficiency and performance of our SLTT. In particular, our method achieves state-of-the-art accuracy on ImageNet, while the memory cost and training time are reduced by more than 70% and 50%, respectively, compared with BPTT. ## 1 Introduction Regarded as the third generation of neural network models [39], Spiking Neural Networks (SNNs) have recently attracted wide attention. SNNs imitate the neurodynamics of power-efficient biological networks, where neurons communicate through spike trains (_i.e_., time series of spikes). A spiking neuron integrates input spike trains into its membrane potential. After the membrane potential exceeds a threshold, the neuron fires a spike and resets its potential [23]. The spiking neuron is active only when it experiences spikes, thus enabling event-based computation. This characteristic makes SNNs energy-efficient when implemented on neuromorphic chips [42, 47, 12]. By comparison, the power consumption of deep Artificial Neural Networks (ANNs) is substantial. The computation of SNNs with discrete simulation shares a similar functional form with recurrent neural networks (RNNs) [44]. The unique component of SNNs is the non-differentiable threshold-triggered spike generation function. The non-differentiability, as a result, hinders the effective adoption of gradient-based optimization methods that can train RNNs successfully. Therefore, SNN training is still a challenging task. Among the existing SNN training methods, backpropagation through time (BPTT) with surrogate gradient (SG) [54, 10] has recently achieved high performance on complicated datasets in a small number of time steps (_i.e_., short spike trains). The BPTT with SG method defines well-behaved surrogate gradients to approximate the derivative of the spike generation function. Thus the SNNs can be trained through the gradient-based BPTT framework [59], just like RNNs. With such a framework, gradients are backpropagated through both the layer-by-layer spatial domain and the temporal domain. Figure 1: The training time and memory cost comparison between the proposed SLTT-1 method and the BPTT with SG method on ImageNet. 
SLTT-1 achieves similar accuracy as BPTT, while offering better training efficiency than BPTT both theoretically and experimentally. Please refer to Secs. 4 and 5 for details. Accordingly, BPTT with SG suffers from considerable memory cost and training time that are proportional to the network size and the number of time steps. The training cost is particularly significant for large-scale datasets, such as ImageNet. In this paper, we develop the Spatial Learning Through Time (SLTT) method that can achieve high performance while significantly reducing the training time and memory cost compared with the BPTT with SG method. We first decompose the gradients calculated by BPTT into spatial and temporal components. With the decomposition, the temporal dependency in error backpropagation is explicitly presented. We then analyze the contribution of temporal information to the final calculated gradients, and propose the SLTT method to delete the unimportant routes in the computational graph for backpropagation. In this way, the number of scalar multiplications is reduced; thus, the training time is reduced. SLTT further enables online training by calculating gradients instantaneously at each time step, without the requirement of storing information of other time steps. Then the memory occupation is independent of the number of total time steps, avoiding the significant training memory costs of BPTT. Due to the instantaneous gradient calculation, we also propose the SLTT-K method that conducts backpropagation only at \(K\) time steps. SLTT-K can further reduce the time complexity without performance loss. With the proposed techniques, we can obtain high-performance SNNs with superior training efficiency. The wall-clock training time and memory costs of SLTT-1 and BPTT on ImageNet under the same experimental settings are shown in Fig. 1. Formally, our contributions include: 1. Based on our analysis of error backpropagation in SNNs, we propose the Spatial Learning Through Time (SLTT) method to achieve better time and memory efficiency than the commonly used BPTT with SG method. Compared with the BPTT with SG method, the number of scalar multiplications is reduced, and the training memory is constant with the number of time steps, rather than growing linearly with it. 2. Benefiting from our online training framework, we propose the SLTT-K method that further reduces the time complexity of SLTT. The required number of scalar multiplication operations is reduced from \(\Omega(T)\)\({}^{1}\) to \(\Omega(K)\), where \(T\) is the number of total time steps, and \(K<T\) is the parameter indicating the number of time steps to conduct backpropagation. Footnote 1: \(f(x)=\Omega(g(x))\) means that there exist \(c>0\) and \(n>0\), such that \(0\leq cg(x)\leq f(x)\) for all \(x\geq n\). 3. Our models achieve competitive SNN performance with superior training efficiency on CIFAR-10, CIFAR-100, ImageNet, DVS-Gesture, and DVS-CIFAR10 under different network settings or large-scale network structures. On ImageNet, our method achieves state-of-the-art accuracy while the memory cost and training time are reduced by more than 70% and 50%, respectively, compared with BPTT. ## 2 Related Work **The BPTT Framework for Training SNNs.** A natural methodology for training SNNs is to adopt the gradient-descent-based BPTT framework, while assigning surrogate gradients (SG) to the non-differentiable spike generation functions to enable meaningful gradient calculation [30, 44, 55, 63, 64, 73]. 
Under the BPTT with SG framework, many effective techniques have been proposed to improve the performance, such as threshold-dependent batch normalization [74], carefully designed surrogate functions [36] or loss functions [16, 25], SNN-specific network structures [21], and trainable parameters of neuron models [22]. Many works conduct multi-stage training, typically including an ANN pre-training process, to reduce the latency (_i.e_., the number of time steps) for energy efficiency, while maintaining competitive performance [8, 9, 50, 51]. The BPTT with SG method has achieved high performance with low latency on both static [21, 24] and neuromorphic [16, 37] datasets. However, those approaches need to back-propagate error signals through both temporal and spatial domains, thus suffering from high computational costs during training [14]. In this work, we reduce the memory and time complexity of the BPTT with SG framework with gradient approximation and instantaneous gradient calculation, while maintaining the same level of performance. **Other SNN Training Methods.** The ANN-to-SNN conversion method [15, 18, 26, 27, 52, 54, 68] has recently yielded top performance, especially on ImageNet [6, 41]. This method builds a connection between the firing rates of SNNs and some corresponding ANN outputs. With this connection, the parameters of an SNN are directly determined from the associated ANN. Despite the good performance, the required latency is much higher compared with the BPTT with SG method. This fact hurts the energy efficiency of SNN inference [11]. Furthermore, the conversion method is not suitable for neuromorphic data. Some gradient-based direct training methods find the equivalence between spike representations (_e.g_., firing rates or first spike times) of SNNs and some differentiable mappings or fixed-point equations [40, 43, 58, 67, 66, 62, 66, 75, 69]. Then the spike-representation-based methods train SNNs by gradients calculated from the corresponding mappings or fixed-point equations. Such methods have recently achieved competitive performance, but still suffer from relatively high latency, like the conversion-based methods. To achieve low latency, our work is mainly based on the BPTT with SG method and then focuses on the training cost issue of BPTT with SG. **Efficient Training for SNNs.** Several RNN training methods pursue online learning and constant memory occupation agnostic to the time horizon, such as real-time recurrent learning [60] and forward propagation through time [31]. Inspired by them, some SNN training methods [70, 71, 72, 2, 3] apply similar ideas to achieve memory-efficient, online learning. However, such SNN methods cannot scale to large-scale tasks due to some limitations, such as the use of feedback alignment [45], simple network structures, and memory costs that, although constant over time, remain large. [32] ignores temporal dependencies of information propagation to enable local training with no memory overhead for computing gradients. They approximate the gradient calculation in a similar way to ours, but do not verify the reasonableness of the approximation, and cannot achieve accuracy comparable to ours, even for simple tasks. [48] presents the sparse SNN backpropagation algorithm in which gradients only backpropagate through "active neurons", which account for a small fraction of the total, at each time step. However, [48] does not consider large-scale tasks, and the memory grows linearly with the number of time steps. 
Recently, some methods [65, 69] have achieved satisfactory performance on large-scale datasets with time-step-independent memory occupation. Still, they either rely on pre-trained ANNs and cannot conduct direct training [69], or do not consider reducing time complexity and require more memory than our work due to tracking presynaptic activities [65]. Our work can achieve state-of-the-art (SOTA) performance while maintaining superior time and memory efficiency compared with other methods. ## 3 Preliminaries ### The Leaky Integrate and Fire Model A spiking neuron replicates the behavior of a biological neuron which integrates input spikes into its membrane potential \(u(t)\) and transmits spikes when the potential \(u\) reaches a threshold. Such spike transmission is governed by a spiking neuron model. In this paper, we consider a widely adopted neuron model, the leaky integrate and fire (LIF) model [7], to characterize the dynamics of \(u(t)\): \[\tau\frac{\mathrm{d}u(t)}{\mathrm{d}t}=-(u(t)-u_{rest})+R\cdot I(t),\;\mathrm{when}\;u(t)<V_{th}, \tag{1}\] where \(\tau\) is the time constant, \(R\) is the resistance, \(u_{rest}\) is the resting potential, \(V_{th}\) is the spike threshold, and \(I\) is the input current which depends on received spikes. The current model is given as \(I(t)=\sum_{i}w^{\prime}_{i}s_{i}(t)+b^{\prime}\), where \(w^{\prime}_{i}\) is the weight from neuron-\(i\) to the target neuron, \(b^{\prime}\) is a bias term, and \(s_{i}(t)\) is the received spike train from neuron-\(i\). \(s_{i}(t)\) is formed as \(s_{i}(t)=\sum_{f}\delta(t-t_{i,f})\), in which \(\delta(\cdot)\) is the Dirac delta function and \(t_{i,f}\) is the \(f\)-th fire time of neuron-\(i\). Once \(u\geq V_{th}\) at time \(t_{f}\), the neuron outputs a spike, and the potential is reset to \(u_{rest}\). The output spike train is described as \(s_{out}(t)=\sum_{f}\delta(t-t_{f})\). In practice, the discrete computational form of the LIF model is adopted. With \(u_{rest}=0\), the discrete LIF model can be described as \[\begin{cases}u[t]=(1-\dfrac{1}{\tau})v[t-1]+\sum_{i}w_{i}s_{i}[t]+b,\\ s_{out}[t]=H(u[t]-V_{th}),\\ v[t]=u[t]-V_{th}s_{out}[t],\end{cases} \tag{2}\] where \(t\in\{1,2,\cdots,T\}\) is the time step index, \(H(\cdot)\) is the Heaviside step function, \(s_{out}[t],s_{i}[t]\in\{0,1\}\), \(v[t]\) is the intermediate value representing the membrane potential before being reset and \(v[0]=0\), and \(w_{i}\) and \(b\) are reparameterized versions of \(w^{\prime}_{i}\) and \(b^{\prime}\), respectively, where \(\tau\) and \(R\) are absorbed. The discrete step size is \(1\), so \(\tau>1\) is required. ### Backpropagation Through Time with Surrogate Gradient Consider the multi-layer feedforward SNNs with the LIF neurons based on Eq. (2): \[\mathbf{u}^{l}[t]=(1-\dfrac{1}{\tau})(\mathbf{u}^{l}[t-1]-V_{th}s^{l}[t-1])+\mathbf{W}^{l}\mathbf{s}^{l-1}[t], \tag{3}\] where \(l=1,2,\cdots,L\) is the layer index, \(t=1,2,\cdots,T\), \(0<1-\frac{1}{\tau}<1\), \(\mathbf{s}^{0}\) are the input data to the network, \(\mathbf{s}^{l}\) are the output spike trains of the \(l^{\text{th}}\) layer, and \(\mathbf{W}^{l}\) are the weights to be trained. We ignore the bias term for simplicity. The final output of the network is \(\mathbf{o}[t]=\mathbf{W}^{\text{o}}\mathbf{s}^{L}[t]\), where \(\mathbf{W}^{\text{o}}\) is the parameter of the classifier. The classification is based on the average of the output at each time step \(\frac{1}{T}\sum_{t=1}^{T}\mathbf{o}[t]\). 
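The forward pass defined by Eqs. (2) and (3), together with the averaged readout, can be simulated in a few lines. Below is a minimal numpy sketch with illustrative shapes and hyperparameters, not the authors' implementation.

```python
import numpy as np

def snn_forward(x, weights, W_out, tau=2.0, v_th=1.0):
    """Simulate the feedforward LIF network of Eq. (3):
    u^l[t] = (1 - 1/tau)(u^l[t-1] - V_th s^l[t-1]) + W^l s^{l-1}[t],
    with spikes s^l[t] = H(u^l[t] - V_th) and a time-averaged readout."""
    lam = 1.0 - 1.0 / tau
    T = x.shape[0]
    s = x                                       # s^0: input spikes, shape (T, d_in)
    for W in weights:                           # layers l = 1, ..., L
        u = np.zeros(W.shape[0])
        s_prev = np.zeros(W.shape[0])
        s_out = np.zeros((T, W.shape[0]))
        for t in range(T):
            u = lam * (u - v_th * s_prev) + W @ s[t]
            s_prev = (u >= v_th).astype(float)  # Heaviside spike generation
            s_out[t] = s_prev
        s = s_out
    o = s @ W_out.T                             # o[t] = W^o s^L[t]
    return o.mean(axis=0)                       # classify on (1/T) sum_t o[t]

rng = np.random.default_rng(0)
T, d_in = 4, 8
logits = snn_forward(rng.integers(0, 2, (T, d_in)).astype(float),
                     [rng.normal(size=(16, d_in))], rng.normal(size=(10, 16)))
```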
The loss function \(\mathcal{L}\) is defined on \(\{\mathbf{o}[1],\cdots,\mathbf{o}[T]\}\), often as [36, 50, 66, 74]

\[\mathcal{L}=\ell(\dfrac{1}{T}\sum_{t=1}^{T}\mathbf{o}[t],y), \tag{4}\]

where \(y\) is the label and \(\ell\) can be the cross-entropy function. BPTT with SG calculates gradients according to the computational graph of Eq. (3) shown in Fig. 2. The pseudocode is described in the Supplementary Materials. For each neuron \(i\) in the \(l\)-th layer, the derivative \(\frac{\partial\mathbf{s}^{l}_{i}[t]}{\partial\mathbf{u}^{l}_{i}[t]}\) is zero for all values of \(\mathbf{u}^{l}_{i}[t]\) except when \(\mathbf{u}^{l}_{i}[t]=V_{th}\), where the derivative is infinite. This non-differentiability problem is solved by approximating \(\frac{\partial\mathbf{s}^{l}_{i}[t]}{\partial\mathbf{u}^{l}_{i}[t]}\) with some well-behaved surrogate function, such as the rectangle function [63, 64]

\[\frac{\partial s}{\partial u}=\dfrac{1}{\gamma}\mathbb{1}\left(|u-V_{th}|<\dfrac{\gamma}{2}\right), \tag{5}\]

or the triangle function [16, 19]

\[\frac{\partial s}{\partial u}=\dfrac{1}{\gamma^{2}}\max\left(0,\gamma-|u-V_{th}|\right), \tag{6}\]

where \(\mathbb{1}(\cdot)\) is the indicator function, and the hyperparameter \(\gamma\) for both functions is often set as \(V_{th}\).
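In an autograd framework, a surrogate gradient amounts to a custom backward pass for the Heaviside spike function. The following PyTorch sketch implements the rectangle surrogate of Eq. (5); the class name and example values are our own illustrative assumptions.

```python
import torch

class RectangleSG(torch.autograd.Function):
    """Heaviside spike generation with the rectangle surrogate of Eq. (5)."""

    @staticmethod
    def forward(ctx, u, v_th, gamma):
        ctx.save_for_backward(u)
        ctx.v_th, ctx.gamma = v_th, gamma
        return (u >= v_th).float()

    @staticmethod
    def backward(ctx, grad_out):
        (u,) = ctx.saved_tensors
        # ds/du is approximated by (1/gamma) * 1(|u - V_th| < gamma/2)
        sg = (torch.abs(u - ctx.v_th) < ctx.gamma / 2).float() / ctx.gamma
        return grad_out * sg, None, None   # no gradients for v_th and gamma

u = torch.randn(4, requires_grad=True)
s = RectangleSG.apply(u, 1.0, 1.0)         # V_th = gamma = 1
s.sum().backward()
print(s, u.grad)                           # spikes and surrogate gradients
```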
## 4 The proposed Spatial Learning Through Time Method

### Observation from the BPTT with SG Method

In this subsection, we decompose the derivatives for the membrane potential, as calculated in the BPTT method, into spatial components and temporal components. Based on this decomposition, we observe that the spatial components dominate the calculated derivatives. This phenomenon inspires the proposed method, as introduced in Sec. 4.2. According to Eq. (3) and Fig. 2, the gradients for the weights in an SNN with \(T\) time steps are calculated by

\[\nabla_{\mathbf{W}^{l}}\mathcal{L}=\sum_{t=1}^{T}\frac{\partial\mathcal{L}}{\partial\mathbf{u}^{l}[t]}^{\top}\mathbf{s}^{l-1}[t]^{\top},\ l=L,L-1,\cdots,1. \tag{7}\]

We further define

\[\epsilon^{l}[t]\triangleq\frac{\partial\mathbf{u}^{l}[t+1]}{\partial\mathbf{u}^{l}[t]}+\frac{\partial\mathbf{u}^{l}[t+1]}{\partial\mathbf{s}^{l}[t]}\frac{\partial\mathbf{s}^{l}[t]}{\partial\mathbf{u}^{l}[t]} \tag{8}\]

as the sensitivity of \(\mathbf{u}^{l}[t+1]\) with respect to \(\mathbf{u}^{l}[t]\), represented by the red arrows shown in Fig. 2. Then, with the chain rule, \(\frac{\partial\mathcal{L}}{\partial\mathbf{u}^{l}[t]}\) in Eq. (7) can be calculated recursively. In particular, for the output layer, we arrive at

\[\frac{\partial\mathcal{L}}{\partial\mathbf{u}^{L}[t]}=\frac{\partial\mathcal{L}}{\partial\mathbf{s}^{L}[t]}\frac{\partial\mathbf{s}^{L}[t]}{\partial\mathbf{u}^{L}[t]}+\sum_{t^{\prime}=t+1}^{T}\frac{\partial\mathcal{L}}{\partial\mathbf{s}^{L}[t^{\prime}]}\frac{\partial\mathbf{s}^{L}[t^{\prime}]}{\partial\mathbf{u}^{L}[t^{\prime}]}\prod_{t^{\prime\prime}=1}^{t^{\prime}-t}\epsilon^{L}[t^{\prime}-t^{\prime\prime}], \tag{9}\]

and for the intermediate layers \(l=L-1,\cdots,1\), we have

\[\frac{\partial\mathcal{L}}{\partial\mathbf{u}^{l}[t]}=\frac{\partial\mathcal{L}}{\partial\mathbf{u}^{l+1}[t]}\frac{\partial\mathbf{u}^{l+1}[t]}{\partial\mathbf{s}^{l}[t]}\frac{\partial\mathbf{s}^{l}[t]}{\partial\mathbf{u}^{l}[t]}+\sum_{t^{\prime}=t+1}^{T}\frac{\partial\mathcal{L}}{\partial\mathbf{u}^{l+1}[t^{\prime}]}\frac{\partial\mathbf{u}^{l+1}[t^{\prime}]}{\partial\mathbf{s}^{l}[t^{\prime}]}\frac{\partial\mathbf{s}^{l}[t^{\prime}]}{\partial\mathbf{u}^{l}[t^{\prime}]}\prod_{t^{\prime\prime}=1}^{t^{\prime}-t}\epsilon^{l}[t^{\prime}-t^{\prime\prime}]. \tag{10}\]

The detailed derivation can be found in the Supplementary Materials. In both Eqs. (9) and (10), the terms before the addition symbols on the R.H.S. (the blue terms) can be treated as the spatial components, and the remaining parts (the green terms) represent the temporal components. We observe that the temporal components contribute little to \(\frac{\partial\mathcal{L}}{\partial\mathbf{u}^{l}[t]}\), since the diagonal matrix \(\prod_{t^{\prime\prime}=1}^{t^{\prime}-t}\epsilon^{l}[t^{\prime}-t^{\prime\prime}]\) is expected to have a small spectral norm for typical settings of the surrogate functions. To see this, we consider the rectangle surrogate (Eq. (5)) with \(\gamma=V_{th}\) as an example. Based on Eq. (3), the diagonal elements of \(\epsilon^{l}[t]\) are

\[\left(\epsilon^{l}[t]\right)_{jj}=\left\{\begin{array}{ll}0,&\frac{1}{2}V_{th}<\left(\mathbf{u}^{l}[t]\right)_{j}<\frac{3}{2}V_{th},\\ 1-\frac{1}{\tau},&\text{otherwise.}\end{array}\right. \tag{11}\]

Define \(\lambda\triangleq 1-\frac{1}{\tau}\); then \(\left(\epsilon^{l}[t]\right)_{jj}\) is zero in an easily-reached interval, and otherwise not large for commonly used small \(\lambda\) (_e.g._, \(\lambda=0.5\) [16, 65], \(\lambda=0.25\) [74], and \(\lambda=0.2\) [25]). The diagonal values of the matrix \(\prod_{t^{\prime\prime}=1}^{t^{\prime}-t}\epsilon^{l}[t^{\prime}-t^{\prime\prime}]\) are smaller than those of the single term \(\epsilon^{l}[t^{\prime}-t^{\prime\prime}]\) due to the product operations, especially when \(t^{\prime}-t\) is large. The temporal components matter even less if the spatial and temporal components have similar directions. Then the spatial components in Eqs. (9) and (10) dominate the gradients. For other widely-used surrogate functions and their corresponding hyperparameters, the phenomenon of dominant spatial components still exists, since those surrogate functions have similar shapes and behavior. To illustrate this, we conduct experiments on CIFAR-10, DVS-CIFAR10, and ImageNet using the triangle surrogate (Eq. (6)) with \(\gamma=V_{th}\). We use the BPTT with SG method to train the SNNs on the abovementioned three datasets, and call the calculated gradients the baseline gradients. During training, we also calculate the gradients for the weights when the temporal components are abandoned, and call such gradients the spatial gradients. We compare the disparity between the baseline and spatial gradients by calculating their cosine similarity. The results are demonstrated in Fig. 3. The similarity remains high across different datasets, numbers of time steps, and values of \(\tau\). In particular, for \(\tau=1.1\)\((\lambda=1-\frac{1}{\tau}\approx 0.09)\), the baseline and spatial gradients consistently have a remarkably similar direction on CIFAR-10 and DVS-CIFAR10. In conclusion, the spatial components play a dominant role in the gradient backpropagation process.
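The decay of the temporal components can also be checked numerically. The snippet below samples membrane potentials uniformly (an illustrative assumption) and evaluates the diagonal entries of \(\epsilon^{l}[t]\) from Eq. (11), as well as their products over increasing time lags:

```python
import numpy as np

rng = np.random.default_rng(0)
tau, v_th = 1.1, 1.0
lam = 1 - 1 / tau                 # lambda ~ 0.09 for tau = 1.1

# Diagonal of eps^l[t] for random membrane potentials, following Eq. (11):
u = rng.uniform(0.0, 2.0, size=(10, 1000))     # 10 time steps, 1000 neurons
eps = np.where((u > 0.5 * v_th) & (u < 1.5 * v_th), 0.0, lam)

# The entries of the product over time lags shrink quickly with the lag:
for lag in (1, 3, 5, 10):
    print(lag, np.abs(np.prod(eps[:lag], axis=0)).mean())
```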
### Spatial Learning Through Time

Based on the observation introduced in Sec. 4.1, we propose to ignore the temporal components in Eqs. (9) and (10) to achieve more efficient backpropagation. In detail, the gradients for the weights are calculated by

\[\nabla_{\mathbf{W}^{l}}\mathcal{L}=\sum_{t=1}^{T}\mathbf{e}_{\mathbf{W}}^{l}[t],\quad\mathbf{e}_{\mathbf{W}}^{l}[t]=\mathbf{e}_{\mathbf{u}}^{l}[t]^{\top}\mathbf{s}^{l-1}[t]^{\top}, \tag{12}\]

where

\[\mathbf{e}_{\mathbf{u}}^{l}[t]=\left\{\begin{array}{ll}\frac{\partial\mathcal{L}}{\partial\mathbf{s}^{L}[t]}\frac{\partial\mathbf{s}^{L}[t]}{\partial\mathbf{u}^{L}[t]},&l=L,\\ \mathbf{e}_{\mathbf{u}}^{l+1}[t]\frac{\partial\mathbf{u}^{l+1}[t]}{\partial\mathbf{s}^{l}[t]}\frac{\partial\mathbf{s}^{l}[t]}{\partial\mathbf{u}^{l}[t]},&l<L,\end{array}\right. \tag{13}\]

and \(\mathbf{e}_{\mathbf{u}}^{l}[t]\) is a row vector.

Figure 2: Computational graph of multi-layer SNNs. Dashed arrows represent the non-differentiable spike generation functions.

Compared with Eqs. (7), (9) and (10), the required number of scalar multiplications in Eqs. (12) and (13) is reduced from \(\Omega(T^{2})\) to \(\Omega(T)\). Note that the BPTT method does not naively compute the sum-product shown in Eqs. (9) and (10), but proceeds recursively to achieve \(\Omega(T)\) computational complexity, as shown in the Supplementary Materials. Although BPTT and the proposed update rule both need \(\Omega(T)\) scalar multiplications, the number of multiplication operations is reduced because some routes in the computational graph are ignored; please refer to the Supplementary Materials for the time complexity analysis. Therefore, the time complexity of the proposed update rule is much lower than that of BPTT with SG, although both are proportional to \(T\). According to Eqs. (12) and (13), the error signals \(\mathbf{e}_{\mathbf{W}}^{l}\) and \(\mathbf{e}_{\mathbf{u}}^{l}\) at each time step can be calculated independently, without information from other time steps. Thus, if \(\frac{\partial\mathcal{L}}{\partial\mathbf{s}^{L}[t]}\) can be calculated instantaneously at time step \(t\), then \(\mathbf{e}_{\mathbf{W}}^{l}[t]\) and \(\mathbf{e}_{\mathbf{u}}^{l}[t]\) can also be calculated instantaneously at time step \(t\), and there is no need to store intermediate states over the whole time horizon. To achieve the instantaneous calculation of \(\frac{\partial\mathcal{L}}{\partial\mathbf{s}^{L}[t]}\), we adopt the loss function [16, 25, 65]

\[\mathcal{L}=\frac{1}{T}\sum_{t=1}^{T}\ell(\mathbf{o}[t],y), \tag{14}\]

which is an upper bound of the loss introduced in Eq. (4).
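This bound follows from Jensen's inequality, since \(\ell\) is convex in the logits for the cross-entropy case; a quick numerical check on random outputs (purely illustrative) is:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
T, C = 6, 10
o = torch.randn(T, C)              # outputs o[t] for a single sample
y = torch.tensor(3)                # its label

loss_eq4 = F.cross_entropy(o.mean(0, keepdim=True), y.unsqueeze(0))
loss_eq14 = torch.stack([F.cross_entropy(o[t].unsqueeze(0), y.unsqueeze(0))
                         for t in range(T)]).mean()
print(loss_eq4 <= loss_eq14)       # True: Eq. (14) upper-bounds Eq. (4)
```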
```
Algorithm 1: One iteration of SNN training with the SLTT or SLTT-K methods.

Input: time steps T; network depth L; network parameters {W^l}_{l=1}^L;
       training data (s^0, y); learning rate eta;
       required backpropagation times K (for SLTT-K).
Initialize: Delta W^l = 0, l = 1, 2, ..., L.

 1: if using SLTT-K then
 2:     Sample K numbers in [1, 2, ..., T] w/o replacement to form required_bp_steps;
 3: else
 4:     required_bp_steps = [1, 2, ..., T];
 5: end if
 6: for t = 1, 2, ..., T do
 7:     Calculate s^L[t] by Eqs. (2) and (3);                       // Forward
 8:     Calculate the instantaneous loss l in Eq. (14);
 9:     if t in required_bp_steps then                              // Backward
10:         e_u^L[t] = (1/T) (dl / ds^L[t]) (ds^L[t] / du^L[t]);
11:         for l = L-1, ..., 1 do
12:             e_u^l[t] = e_u^{l+1}[t] (du^{l+1}[t] / ds^l[t]) (ds^l[t] / du^l[t]);
13:             Delta W^l += e_u^l[t]^T s^{l-1}[t]^T;
14:         end for
15:     end if
16: end for
17: W^l = W^l - eta * Delta W^l, l = 1, 2, ..., L;
Output: trained network parameters {W^l}_{l=1}^L.
```
**Algorithm 1** One iteration of SNN training with the SLTT or SLTT-K methods.

In Algorithm 1, all the intermediate terms at time step \(t\), such as \(\mathbf{e}_{\mathbf{u}}^{l}[t]\), \(\mathbf{s}^{l}[t]\), \(\frac{\partial\mathbf{u}^{l+1}[t]}{\partial\mathbf{s}^{l}[t]}\), and \(\frac{\partial\mathbf{s}^{l}[t]}{\partial\mathbf{u}^{l}[t]}\), are never used at other time steps, so the required memory overhead of SLTT is constant, agnostic to the total number of time steps \(T\). On the contrary, the BPTT with SG method has an \(\Omega(T)\) memory cost associated with storing all intermediate states for all time steps. In summary, the proposed method is both time-efficient and memory-efficient, and has the potential to enable online learning for neuromorphic substrates [72].

Figure 3: The cosine similarity between the gradients calculated by BPTT and the "spatial gradients". For the CIFAR-10, DVS-CIFAR10, and ImageNet datasets, the network architectures of ResNet-18, VGG-11, and ResNet-34 are adopted, respectively. Other settings and hyperparameters for the experiments are described in the Supplementary Materials. We calculate the cosine similarity for different layers and report the average in the figure. For ImageNet, we only train the network for 50 iterations since the training is time-consuming. Dashed curves represent a larger number of time steps.

### Further Reducing Time Complexity

Due to the online update rule of the proposed method, the gradients for the weights are calculated according to an ensemble of \(T\) independent computational graphs, and the time complexity of the gradient calculation is \(\Omega(T)\). The \(T\) computational graphs can behave similarly, so similar gradient directions can be obtained with only a portion of the computational graphs. Based on this, we propose to train on only a portion of the time steps to reduce the time complexity further. In detail, for each iteration in the training process, we randomly choose \(K\) time indexes from the time horizon, and only conduct backpropagation with SLTT at the chosen \(K\) time steps. We call this method the SLTT-K method; the pseudocode is given in Algorithm 1. Note that setting \(K=T\) recovers the original SLTT method. Compared with SLTT, the time complexity of SLTT-K is reduced to \(\Omega(K)\), while the memory complexity is the same. In our experiments, SLTT-K can achieve satisfactory performance even when \(K=1\) or \(2\), as shown in Sec. 5, indicating the superior efficiency of the SLTT-K method.
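The essence of Algorithm 1 can be reproduced in an autograd framework by detaching the membrane state between time steps, so that each step's backward pass is purely spatial and instantaneous. The sketch below reuses the `RectangleSG` function from the earlier surrogate-gradient example; `DetachedLIF`, `sltt_k_step`, and all hyperparameters are our own illustrative reconstruction, not the authors' released code.

```python
import torch
import torch.nn as nn

class DetachedLIF(nn.Module):
    """LIF layer whose membrane state is detached across time steps, so only
    the dominant spatial gradient components survive (the SLTT approximation)."""
    def __init__(self, tau=1.1, v_th=1.0):
        super().__init__()
        self.lam, self.v_th, self.v = 1.0 - 1.0 / tau, v_th, 0.0
    def forward(self, x):
        u = self.lam * self.v + x
        s = RectangleSG.apply(u, self.v_th, 1.0)
        self.v = (u - self.v_th * s).detach()   # cut temporal gradient paths
        return s
    def reset(self):
        self.v = 0.0

def sltt_k_step(net, lifs, x_seq, y, optimizer, K=None):
    """One training iteration: instantaneous loss and backward pass per step,
    backpropagating only at K randomly chosen time steps (SLTT-K)."""
    T = x_seq.shape[0]
    steps = set(range(T)) if K is None else set(torch.randperm(T)[:K].tolist())
    for lif in lifs:
        lif.reset()
    optimizer.zero_grad()
    for t in range(T):
        o = net(x_seq[t])                       # forward for one time step
        if t in steps:
            (nn.functional.cross_entropy(o, y) / T).backward()  # Eq. (14) term
    optimizer.step()

lif = DetachedLIF()
net = nn.Sequential(nn.Linear(10, 32), lif, nn.Linear(32, 2))
opt = torch.optim.SGD(net.parameters(), lr=0.1)
sltt_k_step(net, [lif], torch.rand(6, 4, 10), torch.tensor([0, 1, 1, 0]), opt, K=2)
```

Because the state is stored detached, no computational graph survives across time steps, which is what yields the constant, \(T\)-independent memory footprint.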
## 5 Experiments

In this section, we evaluate the proposed method on CIFAR-10 [33], CIFAR-100 [33], ImageNet [13], DVS-Gesture [1], and DVS-CIFAR10 [34] to demonstrate its superior performance regarding training costs and accuracy. For our SNN models, we set \(V_{th}=1\) and \(\tau=1.1\), and apply the triangle surrogate function (Eq. (6)). An effective technique, batch normalization (BN) along the temporal dimension [74], cannot be adopted in our method, since it requires calculation along all time steps and thus intrinsically prevents time-step-independent memory costs. Therefore, for some tasks, we borrow the idea from normalization-free ResNets (NF-ResNets) [5] and replace BN by weight standardization (WS) [49]. Please refer to the Supplementary Materials for experimental details.
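Weight standardization can be implemented as a drop-in replacement for a convolution layer. The following is a minimal PyTorch sketch of the WS idea from [49] as we understand it, not the code used in the paper:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WSConv2d(nn.Conv2d):
    """Conv2d with weight standardization: each filter is normalized to zero
    mean and unit variance before the convolution, replacing BN statistics."""
    def forward(self, x):
        w = self.weight
        mean = w.mean(dim=(1, 2, 3), keepdim=True)
        std = w.std(dim=(1, 2, 3), keepdim=True) + 1e-5
        return F.conv2d(x, (w - mean) / std, self.bias,
                        self.stride, self.padding, self.dilation, self.groups)

print(WSConv2d(3, 8, 3, padding=1)(torch.rand(1, 3, 32, 32)).shape)
```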
### Comparison with BPTT

The major advantage of SLTT over BPTT is its low memory and time complexity. To verify this advantage, we use both methods with the same experimental setup to train SNNs. For CIFAR-10, CIFAR-100, ImageNet, DVS-Gesture, and DVS-CIFAR10, the network architectures we adopt are ResNet-18, ResNet-18, NF-ResNet-34, VGG-11, and VGG-11, respectively, and the total numbers of time steps are 6, 6, 6, 20, and 10, respectively. For ImageNet, to accelerate training, we first train the SNN with only 1 time step for 100 epochs to get a pre-trained model, and then use SLTT or BPTT to fine-tune the model with 6 time steps for 30 epochs. Details of the training settings can be found in the Supplementary Materials. We run all the experiments on the same Tesla-V100 GPU, and ensure that the GPU card is running only one experiment at a time to perform a fair comparison. It is not easy to directly compare the running time of two training methods, since the running time is code-dependent and platform-dependent. In our experiments, we measure the wall-clock time of the total training process, including forward propagation and evaluation on the validation set after each epoch, to give a rough comparison. For ImageNet, the training time only includes the 30-epoch fine-tuning part. The results of maximum memory usage, total wall-clock training time, and accuracy for both SLTT and BPTT on different datasets are listed in Tab. 1. SLTT enjoys accuracy similar to BPTT while using less memory and time. For all the datasets, SLTT requires less than one-third of the GPU memory of BPTT. In fact, SLTT maintains a constant memory cost over different numbers of time steps \(T\), while the training memory of BPTT grows linearly in \(T\); the memory occupied by SLTT for \(T\) time steps is always similar to that of BPTT for \(1\) time step. Regarding training time, SLTT also enjoys faster training on both the algorithmic and practical levels. For DVS-Gesture, the training times of the two methods are almost the same, deviating from the algorithmic time complexity. This may be due to the very short training time of both methods and the good parallel computing performance of the GPU.

\begin{table} \begin{tabular}{l c c c c} \hline \hline Dataset & Method & Memory & Time & Acc \\ \hline \multirow{2}{*}{CIFAR-10} & BPTT & 3.00G & 6.35h & **94.60\%** \\ & SLTT & **1.09G** & **4.58h** & 94.59\% \\ \hline \multirow{2}{*}{CIFAR-100} & BPTT & 3.00G & 6.39h & 73.80\% \\ & SLTT & **1.12G** & **4.68h** & **74.67\%** \\ \hline \multirow{2}{*}{ImageNet} & BPTT & 28.41G & 73.8h & **66.47\%** \\ & SLTT & **8.47G** & **66.9h** & 66.19\% \\ \hline \multirow{2}{*}{DVS-Gesture} & BPTT & 5.82G & 2.68h & 97.22\% \\ & SLTT & **1.07G** & **2.64h** & **97.92\%** \\ \hline \multirow{2}{*}{DVS-CIFAR10} & BPTT & 3.70G & 4.47h & 73.60\% \\ & SLTT & **1.07G** & **3.43h** & **77.30\%** \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison of training memory cost, training time, and accuracy between SLTT and BPTT. The "Memory" column indicates the maximum memory usage on a GPU during training, and the "Time" column indicates the wall-clock training time.

### Performance of SLTT-K

As introduced in Sec. 4.3, the proposed SLTT method has a variant, SLTT-K, that conducts backpropagation only at \(K\) randomly selected time steps to reduce training time. We verify the effectiveness of SLTT-K on the neuromorphic datasets, DVS-Gesture and DVS-CIFAR10, and on the large-scale static dataset, ImageNet. For the ImageNet dataset, we first pre-train the 1-time-step networks, and then fine-tune them with 6 time steps, as described in Sec. 5.1. As shown in Tab. 2, the SLTT-K method yields accuracy competitive with SLTT (and hence BPTT) for different datasets and network architectures, even when \(K=\frac{1}{6}T\) or \(\frac{1}{5}T\). With such small values of \(K\), and further compared with BPTT, the SLTT-K method enjoys comparable or even better training results, a lower memory cost (much lower if \(T\) is large), and much faster training.

### Comparison with Other Efficient Training Methods

There are other online learning methods for SNNs [2, 3, 65, 69, 70] that achieve time-step-independent memory costs. Among them, OTTT [65] enables direct training on large-scale datasets with relatively low training costs. In this subsection, we compare SLTT and OTTT under the same experimental settings of network structures and total time steps (see the Supplementary Materials for details). The wall-clock training time and memory cost are calculated based on 3 epochs of training. The two methods are directly comparable since their implementations are both based on PyTorch [46] and SpikingJelly [20]. The results are shown in Tab. 3. SLTT outperforms OTTT on all the datasets regarding memory costs and training time, indicating the superior efficiency of SLTT. As for accuracy, SLTT also achieves better results than OTTT, as shown in Tab. 4.

### Comparison with the State-of-the-Art

The proposed SLTT method is not designed to achieve the best accuracy, but to enable more efficient training. Still, our method achieves competitive results compared with the SOTA methods, as shown in Tab. 4. Besides, our method obtains such good performance with only a few time steps, leading to low energy consumption when the trained networks are implemented on neuromorphic hardware. For the BPTT-based methods, there is hardly any implementation of large-scale network architectures on ImageNet due to the significant training costs. To our knowledge, only Fang _et al_. [21] leverage BPTT to train an SNN with more than 100 layers, and the training process requires nearly 90G of GPU memory for \(T=4\). Our SLTT-2 method succeeds in training the same-scale ResNet-101 network with only 34G of memory occupation and 4.10h of training time per epoch (Tabs. 2 and 4). Compared with BPTT, the training memory and time of SLTT-2 are reduced by more than 70% and 50%, respectively.
Furthermore, since the focus of the SOTA BPTT-type methods (_e.g_., surrogate functions, network architectures, and regularization) is orthogonal to ours, our training techniques can be plugged into their methods to achieve better training efficiency. Some ANN-to-SNN-based and spike-representation-based methods [6, 35, 69] also achieve satisfactory accuracy with relatively small training costs. However, they typically require a (much) larger number of time steps (Tab. 4), which hurts the energy efficiency of neuromorphic computing.

\begin{table} \begin{tabular}{c c c c c} \hline \hline Network & Method & Memory & Time & Acc \\ \hline \multicolumn{5}{c}{DVS-Gesture, \(T=20\)} \\ \hline \multirow{2}{*}{VGG-11} & SLTT & \multirow{2}{*}{\(\approx\)1.1G} & 2.64h & **97.92\%** \\ & SLTT-4 & & **1.69h** & 97.45\% \\ \hline \multicolumn{5}{c}{DVS-CIFAR10, \(T=10\)} \\ \hline \multirow{2}{*}{VGG-11} & SLTT & \multirow{2}{*}{\(\approx\)1.1G} & 3.43h & **77.16\%** \\ & SLTT-2 & & **2.49h** & 76.70\% \\ \hline \multicolumn{5}{c}{ImageNet, \(T=6\)} \\ \hline \multirow{3}{*}{NFRN-34} & SLTT & \multirow{3}{*}{\(\approx\)8.5G} & 66.90h & **66.19\%** \\ & SLTT-2 & & 41.88h & 66.09\% \\ & SLTT-1 & & **32.03h** & 66.17\% \\ \hline \multirow{3}{*}{NFRN-50} & SLTT & \multirow{3}{*}{\(\approx\)24.5G} & 126.05h & **67.02\%** \\ & SLTT-2 & & 80.63h & 66.98\% \\ & SLTT-1 & & **69.36h** & 66.94\% \\ \hline \multirow{2}{*}{NFRN-101} & SLTT & \multirow{2}{*}{\(\approx\)33.8G} & 248.23h & 69.14\% \\ & SLTT-2 & & 123.05h & **69.26\%** \\ \hline \hline \end{tabular} \end{table} Table 2: Comparison of training time and accuracy between SLTT and SLTT-K. "NFRN" means Normalizer-Free ResNet. We train the NF-ResNet-101 networks on a single Tesla-A100 GPU, while we use a single Tesla-V100 GPU for the other experiments. For DVS-Gesture and DVS-CIFAR10, the "Acc" column reports the average accuracy of 3 runs of experiments using different random seeds. We skip the standard deviation values since they are almost 0, except for SLTT on DVS-CIFAR10, where the value is 0.23%.

\begin{table} \begin{tabular}{c c c c} \hline \hline Dataset & Method & Memory & Time/Epoch \\ \hline \multirow{2}{*}{CIFAR-10} & OTTT & 1.71G & 184.68s \\ & SLTT & **1.00G** & **54.48s** \\ \hline \multirow{2}{*}{CIFAR-100} & OTTT & 1.71G & 177.72s \\ & SLTT & **1.00G** & **54.60s** \\ \hline \multirow{2}{*}{ImageNet} & OTTT & 19.38G & 7.52h \\ & SLTT & **8.47G** & **2.23h** \\ \hline \multirow{2}{*}{DVS-Gesture} & OTTT & 3.38G & 236.64s \\ & SLTT & **2.08G** & **67.20s** \\ \hline \multirow{2}{*}{DVS-CIFAR10} & OTTT & 4.32G & 114.84s \\ & SLTT & **1.90G** & **48.00s** \\ \hline \hline \end{tabular} \end{table} Table 3: Comparison of training memory cost and training time per epoch between SLTT and OTTT.

### Influence of \(T\) and \(\tau\)

For efficient training, the SLTT method approximates the gradient calculated by BPTT by ignoring the temporal components in Eqs. (9) and (10). So when \(T\) or \(\tau\) is large, the approximation may not be accurate enough. In this subsection, we conduct experiments with different \(\tau\) and \(T\) on the neuromorphic datasets, DVS-Gesture and DVS-CIFAR10. We verify that the proposed method still works well for large \(T\) and commonly used \(\tau\) [16, 25, 65, 74], as shown in Fig. 4. Regarding large time steps, SLTT obtains accuracy similar to BPTT even when \(T=50\), and SLTT can outperform BPTT when \(T<30\) on the two neuromorphic datasets. For different \(\tau\), our method consistently performs better than BPTT, although there is a performance drop for SLTT when \(\tau\) is large.
\begin{table} \begin{tabular}{l|l c c c c} \hline \hline & Method & Network & Time Steps & Efficient Training & Mean\(\pm\)Std (Best) \\ \hline \multirow{5}{*}{CIFAR-10} & LTL-Online [69]\({}^{1}\) & ResNet-20 & 16 & \(\checkmark\) & \(93.15\%\) \\ & OTTT [65] & VGG-11 (WS) & 6 & \(\checkmark\) & \(93.52\pm 0.06\%\) (93.58\%) \\ & Dspike [36] & ResNet-18 & 6 & \(\times\) & \(94.25\pm 0.07\%\) \\ & TET [16] & ResNet-19 & 6 & \(\times\) & \(\mathbf{94.50\pm 0.07\%}\) \\ \cline{2-6} & SLTT (ours) & ResNet-18 & 6 & \(\checkmark\) & \(94.44\pm 0.21\%\) (\(\mathbf{94.59\%}\)) \\ \hline \multirow{5}{*}{CIFAR-100} & OTTT [65] & VGG-11 (WS) & 6 & \(\checkmark\) & \(71.05\pm 0.04\%\) (71.11\%) \\ & ANN-to-SNN [6]\({}^{1}\) & VGG-16 & 8 & \(\checkmark\) & \(73.96\%\) \\ & RecDis [25] & ResNet-19 & 4 & \(\times\) & \(74.10\pm 0.13\%\) \\ & TET [16] & ResNet-19 & 6 & \(\times\) & \(\mathbf{74.72\pm 0.28\%}\) \\ \cline{2-6} & SLTT (ours) & ResNet-18 & 6 & \(\checkmark\) & \(74.38\pm 0.30\%\) (74.67\%) \\ \hline \multirow{6}{*}{ImageNet} & ANN-to-SNN [35]\({}^{1}\) & ResNet-34 & 32 & \(\checkmark\) & \(64.54\%\) \\ & TET [16] & ResNet-34 & 6 & \(\times\) & \(64.79\%\) \\ & OTTT [65] & NF-ResNet-34 & 6 & \(\checkmark\) & \(65.15\%\) \\ & SEW [21] & Sew ResNet-34,50,101 & 4 & \(\times\) & \(67.04\%,67.78\%,68.76\%\) \\ \cline{2-6} & SLTT (ours) & NF-ResNet-34,50 & 6 & \(\checkmark\) & \(66.19\%,67.02\%\) \\ & SLTT-2 (ours) & NF-ResNet-101 & 6 & \(\checkmark\) & \(\mathbf{69.26\%}\) \\ \hline \multirow{6}{*}{DVS-Gesture} & STBP-tdBN [74] & ResNet-17 & 40 & \(\times\) & \(96.87\%\) \\ & OTTT [65] & VGG-11 (WS) & 20 & \(\checkmark\) & \(96.88\%\) \\ & PLIF [22] & VGG-like & 20 & \(\times\) & \(97.57\%\) \\ & SEW [21] & Sew ResNet & 16 & \(\times\) & \(97.92\%\) \\ \cline{2-6} & SLTT (ours) & VGG-11 & 20 & \(\checkmark\) & \(97.92\pm 0.00\%\) (97.92\%) \\ & SLTT (ours) & VGG-11 (WS) & 20 & \(\checkmark\) & \(\mathbf{98.50\pm 0.21\%}\) (\(\mathbf{98.62\%}\)) \\ \hline \multirow{6}{*}{DVS-CIFAR10} & Dspike [36]\({}^{2}\) & ResNet-18 & 10 & \(\times\) & \(75.40\pm 0.05\%\) \\ & InfLoR [24]\({}^{2}\) & ResNet-19 & 10 & \(\times\) & \(75.50\pm 0.12\%\) \\ & OTTT [65]\({}^{2}\) & VGG-11 (WS) & 10 & \(\checkmark\) & \(76.27\pm 0.05\%\) (76.30\%) \\ & TET [16]\({}^{2}\) & VGG-11 & 10 & \(\times\) & \(\mathbf{83.17\pm 0.15\%}\) \\ \cline{2-6} & SLTT (ours) & VGG-11 & 10 & \(\checkmark\) & \(77.17\pm 0.23\%\) (77.30\%) \\ & SLTT (ours)\({}^{2}\) & VGG-11 & 10 & \(\checkmark\) & \(82.20\pm 0.95\%\) (\(83.10\%\)) \\ \hline \hline \end{tabular} \({}^{1}\) Pre-trained ANN models are required. \({}^{2}\) With data augmentation. \end{table} Table 4: Comparisons with other SNN training methods on CIFAR-10, CIFAR-100, ImageNet, DVS-Gesture, and DVS-CIFAR10. Results of our method on all the datasets, except ImageNet, are based on 3 runs of experiments. The "Efficient Training" column indicates whether the method requires less training time or memory occupation than the vanilla BPTT method for one epoch of training.

## 6 Conclusion

In this work, we propose the Spatial Learning Through Time (SLTT) method, which significantly reduces the time and memory complexity of training compared with the vanilla BPTT with SG method. We first show that the backpropagation of SNNs through the temporal domain contributes little to the final calculated gradients. By ignoring the unimportant temporal components in the gradient calculation and introducing an online calculation scheme, our method reduces the number of scalar multiplication operations and achieves time-step-independent memory occupation.
Additionally, thanks to the instantaneous gradient calculation in our method, we propose a variant of SLTT, called SLTT-K, that conducts backpropagation at only \(K\) time steps. SLTT-K can further reduce the time complexity of SLTT significantly. Extensive experiments on large-scale static and neuromorphic datasets demonstrate the superior training efficiency and high performance of the proposed method, and illustrate its effectiveness under different network settings and for large-scale network structures.
2306.00134
A Quantum Optical Recurrent Neural Network for Online Processing of Quantum Time Series
Over the last decade, researchers have studied the synergy between quantum computing (QC) and classical machine learning (ML) algorithms. However, measurements in QC often disturb or destroy quantum states, requiring multiple repetitions of data processing to estimate observable values. In particular, this prevents online (i.e., real-time, single-shot) processing of temporal data as measurements are commonly performed during intermediate stages. Recently, it was proposed to sidestep this issue by focusing on tasks with quantum output, thereby removing the need for detectors. Inspired by reservoir computers, a model was proposed where only a subset of the internal parameters are optimized while keeping the others fixed at random values. Here, we also process quantum time series, but we do so using a quantum optical recurrent neural network (QORNN) of which all internal interactions can be trained. As expected, this approach yields higher performance, as long as training the QORNN is feasible. We further show that our model can enhance the transmission rate of quantum channels that experience certain memory effects. Moreover, it can counteract similar memory effects if they are unwanted, a task that could previously only be solved when redundantly encoded input signals were available. Finally, we run a small-scale version of this last task on the photonic processor Borealis, demonstrating that our QORNN can be constructed using currently existing hardware.
Robbe De Prins, Guy Van der Sande, Peter Bienstman
2023-05-31T19:19:25Z
http://arxiv.org/abs/2306.00134v1
# A Quantum Optical Recurrent Neural Network for Online Processing of Quantum Time Series

###### Abstract

Over the last decade, researchers have studied the synergy between quantum computing (QC) and classical machine learning (ML) algorithms. However, measurements in QC often disturb or destroy quantum states, requiring multiple repetitions of data processing to estimate observable values. In particular, this prevents online (i.e., real-time, single-shot) processing of _temporal_ data, as measurements are commonly performed during intermediate stages. Recently, it was proposed to sidestep this issue by focusing on tasks with quantum output, thereby removing the need for detectors. Inspired by reservoir computers, a model was proposed where only a subset of the internal parameters are optimized while keeping the others fixed at random values [1]. Here, we also process quantum time series, but we do so using a quantum optical recurrent neural network (QORNN) of which _all_ internal interactions can be trained. As expected, this approach yields higher performance, as long as training the QORNN is feasible. We further show that our model can enhance the transmission rate of quantum channels that experience certain memory effects. Moreover, it can counteract similar memory effects if they are unwanted, a task that could previously only be solved when redundantly encoded input signals were available. Finally, we run a small-scale version of this last task on the photonic processor Borealis [2], demonstrating that our QORNN can be constructed using currently existing hardware.

## I Introduction

In the pursuit of improved data processing, there is an increasing emphasis on combining machine learning (ML) techniques with quantum computing (QC). Building on the established belief that quantum systems can outperform classical ways of computing [3], quantum machine learning (QML) [4] investigates how to design and deploy software and hardware that harnesses the complexity of such systems. In the inverse direction of this 'QC for ML' approach, research is also being carried out to improve QC algorithms and hardware using ML techniques. In classical machine learning, algorithms such as recurrent neural networks (RNNs) [5; 6], transformers [7; 8], long short-term memory (LSTM) networks [9], and reservoir computing (RC) [10] have led to state-of-the-art performance in natural language processing, computer vision, and audio processing. This makes them good sources of inspiration for new QML models. However, the common use of projective measurements in quantum computing leads to the requirement of processing the same input data multiple times to estimate the expectation values of detected observables. This is a real bottleneck for _temporal_ tasks, as such measurements are often carried out at intermediate processing stages, leading to back-actions on the state of the quantum system. On the one hand, this leads to laborious operating procedures and large overheads [11]. On the other hand, it prevents one from performing online time series processing (i.e. constantly generating output signals in real time, based on a continuous stream of input signals). Recently, an approach was introduced that proposes to sidestep this detection issue by performing online processing of quantum states, thereby removing the need for detectors [1]. The model was inspired by the concept of RC [10], where random dynamical systems, also called reservoirs, are made to process temporal input data.
RC research has demonstrated that training only a simple output layer to process the reservoir's output signals can achieve state-of-the-art performance in various computational tasks while significantly reducing training costs. Building on this idea, Ref. [1] tackled several computational tasks using a random network of harmonic oscillators, training only the interactions between that network and some input-carrying oscillators. Here, we introduce a quantum optical recurrent neural network (QORNN) in which all interactions are trainable within the Gaussian state formalism [12]. We first compare our model with the findings of Ref. [1] by conducting classical simulations of two computational tasks: the short-term quantum memory (STQM) task and the entangler task. We will provide detailed definitions of these tasks in the following sections. They respectively serve as benchmarks to assess the QORNN's linear memory capabilities and its ability to entangle different states in a time series. The results will demonstrate that relaxing the RC strategy to a fully trainable QORNN indeed increases the performance, as long as training is tractable. We further demonstrate the capabilities of our model by applying it to a quantum communication task. Specifically, we show that the QORNN can enhance the capacity of a quantum memory channel. In such a channel, subsequent signal states are correlated through interactions with the channel's environment. Our network achieves this enhancement by generating an entangled quantum information carrier. Indeed, it is known that the asymptotic transmission rate of memory channels can be higher than the maximal rate achieved by separable (unentangled) channel uses. It is said that the capacity of such channels is 'superadditive'. For a bosonic memory channel with additive Gauss-Markov noise [13], it was previously shown that the generation of such entangled carriers can be performed sequentially (i.e. without creating all channel entries at once) while achieving near-optimal enhancement of the capacity [14]. Our model achieves the same result, while having a simpler encoding scheme and being more versatile, as it can adapt to hardware imperfections. Moreover, we show that a QORNN can also compensate for unwanted memory effects in quantum channels (the so-called quantum channel equalization or QCE task). Existing work on this task required the availability of redundantly encoded input signals [1]. This undermines the practicality of the method. Moreover, such a redundant encoding is impossible without full prior knowledge of the input states (e.g. when they result from a previous quantum experiment that is not exactly repeatable), because quantum states cannot be cloned. Here, we show that the increased flexibility of the QORNN allows us to lift the restriction of redundant encoding. Additionally, we find that the QORNN's performance can be improved by allowing the reconstruction of the channel's input to be performed with some delay. Finally, we run a small-scale version of the QCE task on the recently introduced photonic processor Borealis [2]. Although the results are restricted by the limited tunability of Borealis' phase modulators, we can still demonstrate that our model can be constructed using currently existing hardware. The rest of this paper is structured as follows. In Section II, we introduce our QORNN model. In Section III, we benchmark our model with the STQM task (III.1) and the entangler task (III.2).
In Section IV.1, we show how the QORNN can lead to superadditivity in a bosonic memory channel. In Section IV.2, we show that the QORNN can tackle the QCE task without requiring its input signals to be encoded with redundancy. Finally, the results of the experimental demonstration of the QCE task are given in Section IV.3.

## II Model

Our QORNN model is presented in Fig. 1. It incorporates an \(m\)-mode circuit \(\mathbf{S}\) that generally consists of beam splitters, phase shifters, and optical squeezers. Such a circuit can be described by a symplectic matrix; hence, we will further refer to it as a symplectic circuit. The \(m_{\mathrm{io}}\) upper modes at the left (right) side of \(\mathbf{S}\) are the input (output) modes of the QORNN. The remaining modes of \(\mathbf{S}\) are connected from left to right using \(m_{\mathrm{mem}}=m-m_{\mathrm{io}}\) delay lines. The delay lines are equally long, and we will further denote them as 'memory modes'. To perform a temporal task, we send a time series of quantum states (e.g., obtained from encoding classical information or from a previous quantum experiment) to the input modes of the QORNN. The temporal spacing between the states is chosen equal to the propagation time through \(\mathbf{S}\) and the delay lines, such that we can describe the QORNN operation in discrete time. Because of the memory modes, output states depend on multiple past input states, which grants the QORNN some memory capacity. By training the circuit \(\mathbf{S}\) (essentially training the parameters of its constituent gates), the QORNN can learn to process temporal data. In further sections, we sometimes restrict \(\mathbf{S}\) to be _orthogonal_ symplectic. Such a circuit comprises only beam splitters and phase shifters, with optical squeezers being excluded. When applicable, we will denote it as \(\mathbf{O}\).

Figure 1: QORNN model. Quantum states are repeatedly sent in the upper \(m_{\mathrm{io}}\) modes. These input modes are combined with \(m_{\mathrm{mem}}\) memory modes and sent through a symplectic network \(\mathbf{S}\) (i.e. a network of beam splitters, phase shifters, and optical squeezers). Afterwards, the state on the upper \(m_{\mathrm{io}}\) modes is collected as output, while the remaining \(m_{\mathrm{mem}}\) modes are looped back to the left side of \(\mathbf{S}\).
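In the Gaussian state formalism, one QORNN iteration can be simulated directly on covariance matrices. The NumPy sketch below is our own illustration under simplifying assumptions: it uses vacuum units (\(V_{\rm vac}=I\)) in xxpp ordering, tracks only single-time marginal states, and injects each input without correlations to the memory modes.

```python
import numpy as np

rng = np.random.default_rng(1)
m_io, m_mem = 1, 2
m = m_io + m_mem

# Random orthogonal symplectic circuit in xxpp ordering, built from a random
# unitary U via  O = [[Re U, -Im U], [Im U, Re U]].
z = rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m))
u_mat, _ = np.linalg.qr(z)
O = np.block([[u_mat.real, -u_mat.imag], [u_mat.imag, u_mat.real]])

idx_io = list(range(m_io)) + list(range(m, m + m_io))
idx_mem = list(range(m_io, m)) + list(range(m + m_io, 2 * m))

def qornn_step(v_mem, v_in):
    """One iteration: inject the input state, apply O, split output/memory.
    v_mem: 2*m_mem x 2*m_mem memory covariance; v_in: 2*m_io x 2*m_io input."""
    v = np.eye(2 * m)
    v[np.ix_(idx_io, idx_io)] = v_in
    v[np.ix_(idx_mem, idx_mem)] = v_mem
    v = O @ v @ O.T
    return v[np.ix_(idx_io, idx_io)], v[np.ix_(idx_mem, idx_mem)]

v_mem = np.eye(2 * m_mem)                                # memory starts in vacuum
v_out, v_mem = qornn_step(v_mem, np.diag([0.5, 2.0]))    # squeezed input state
print(np.round(v_out, 3))
```

Tracing out modes of a Gaussian state amounts to keeping the corresponding sub-block of the covariance matrix, which is why the split at the end of `qornn_step` suffices for the marginal output and memory states.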
## III Benchmark Tasks

### Short-term quantum memory task

The goal of the short-term quantum memory (STQM) task is to recall states that were previously fed to the QORNN after a specific number of iterations, denoted by \(D\). This task is visualized in Fig. 2 for the case where \(m_{\mathrm{io}}=2\) and the QORNN consists of an orthogonal circuit. Note that if we were to use a general symplectic network instead of an orthogonal one, optical squeezers could be added and optimized, such that the results would be at least equally good. However, we will show that we can reach improved performance without including optical squeezers in the QORNN, which is beneficial for an experimental setup.

Figure 2: Setup for the STQM task with \(m_{\rm io}=2\). The QORNN consists of an orthogonal symplectic network \({\bf O}\). Pulses of different colors represent a time series of quantum states. A state that is sent into the QORNN at iteration \(k\) should appear at the output at iteration \(k+D\).

We focus our attention on the case where \(D=1\). The input states are chosen randomly from a set of squeezed thermal states (more details in Section A of Methods). Fig. 3(a) shows the average fidelity between an input state at iteration \(k\) and an output state at iteration \(k+D\), as a function of \(m_{\rm mem}\) and \(m_{\rm io}\). We see that the QORNN perfectly solves the STQM task if \(m_{\rm io}\leq m_{\rm mem}\). This is easy to understand, as \({\bf O}\) can be trained to perform several SWAP operations (i.e. operations that interchange the states on two different modes). More specifically, the QORNN can learn to swap every input mode with a different memory mode, such that the input state is memorized for a single iteration before being swapped back to the corresponding output mode. For \(m_{\rm io}>m_{\rm mem}\), such a SWAP-based circuit is not possible, leading to less than optimal behavior of the QORNN. In Fig. 3(b), the fidelity values obtained by the RC-inspired model of Ref. [1] are subtracted from our results. Across all values of \(m_{\rm mem}\) and \(m_{\rm io}\), we observe that the QORNN scores equally well or better. Although Ref. [1] also achieves a fidelity of 1 for certain combinations of \(m_{\rm io}\) and \(m_{\rm mem}\), the set of these combinations is smaller than for the QORNN. Moreover, it is important to note that the RC-inspired design limits the number of trainable parameters, making a SWAP-based solution impossible in general. As a result, prior to training, it is more challenging to guarantee optimal performance of the RC-inspired model, while this is not the case for the QORNN.

Figure 3: STQM performance for \(D=1\) and for different values of \(m_{\rm io}\) and \(m_{\rm mem}\). Fig. (a) shows the average fidelity between a desired output state and a state resulting from the QORNN. In Fig. (b), the corresponding results achieved in Ref. [1] are subtracted from our results.

### Entangler task

The objective of the entangler task is to entangle different states of a time series that were initially uncorrelated. The performance of this task is evaluated based on the average logarithmic negativity between output states at iterations \(k\) and \(k+S\). Negativity [12] is an entanglement measure for which higher values indicate greater levels of entanglement between the states. Note that if we consider output states with spacing \(S=1\), then we aim to entangle nearest-neighbor states. This last task is visualized in Fig. 4 for the case where \(m_{\rm io}=1\). We choose vacuum states as input, and hence the circuit \({\bf S}\) should _not_ be orthogonal, as we want to generate states with nonzero average photon numbers. For \(m_{\rm io}=1\), Fig. 5(a) displays the average logarithmic negativity obtained by the QORNN for various values of \(m_{\rm mem}\) and \(S\). For a given spacing, the performance increases with \(m_{\rm mem}\). This can be attributed to the fact that a higher value of \(m_{\rm mem}\) leads to a bigger circuit \({\bf S}\), such that more entangling operations can be applied. It can also be seen that the performance roughly stays the same along the diagonal (\(S=m_{\rm mem}\)) and along lines parallel to the diagonal. This can be explained by the findings of Section III.1, which indicate that increasing \(m_{\rm mem}\) can effectively address the increased linear memory requirements of the task that arise from increasing \(S\). Finally, our comparison with the RC-inspired model proposed in Ref. [1], as shown in Fig. 5(b), indicates that
## IV Quantum communication tasks ### Superadditivity In this section, we show that the QORNN can enhance the transmission rate of a quantum channel that exhibits memory effects. When a state is transmitted through such a'memory channel', it interacts with the channel's environment. As subsequent input states also interact with the environment, correlations arise between different channel uses. Contrary to memoryless channels, it is known that the transmission rate of memory channels can be enlarged by providing them with input states that are entangled over subsequent channel uses [15], a phenomenon that is better known as'superadditivity'. Here, we aim to create such entangled input states using our QORNN. Note the similarity with the definition of the entangler task. Now however, the goal is not to create maximal entanglement between the different states, but rather a specific type of entanglement that depends on the memory effects of the channel and that will increase the transmission rate. The setup for the'superadditivity task' is shown in Fig. 6. A QORNN with \(m_{\rm io}=1\) transforms vacuum states into an entangled quantum time series. Information is encoded by displacing each individual state of the series over a continuous distance in phase space. These distances are provided by a classical complex-valued information stream. Their probabilities follow a Gaussian distribution with zero mean and covariance matrix \(\mathbf{\gamma}_{\rm mod}\). The resulting time series is then sent through a memory channel. A number of \(K\) consecutive uses of the channel are modeled as a single parallel K-mode channel. The memory effects we consider here are modeled by correlated noise emerging from a Gauss-Markov process [13]. The environment has the following classical noise covariance matrix \(\mathbf{\gamma}_{\rm env}\): \[\mathbf{\gamma}_{\rm env}=\left(\begin{array}{cc}\mathbf{M}(\phi)&0\\ 0&\mathbf{M}(-\phi)\end{array}\right), \tag{1}\] \[M_{ij}(\phi)=N\phi^{|i-j|}. \tag{2}\] Here, \(\phi\in[0,1)\) denotes the strength of the nearest-neighbor correlations and \(N\in\mathbb{R}\) is the variance of the noise. In Eq. (1), \(\mathbf{M}(\phi)\) correlates the \(q\) quadratures, while \(\mathbf{M}(-\phi)\) anti-correlates the \(p\) quadratures. The transmission rate of the channel is calculated from the von Neumann entropy of the states that pass through the channel (i.e. from the Holevo information). Here we adopt the approach and the parameter values outlined in Ref. [13]. Note that the average photon number that is transmitted per channel use (\(\bar{n}\)) has a contribution from both the QORNN (i.e. from its squeezers) and from the displacer. Given a value for \(\bar{n}\), the transmission rate is maximized by training both the circuit \(\mathbf{S}\) and \(\mathbf{\gamma}_{\rm mod}\) under the energy constraint imposed by \(\bar{n}\). Nonzero squeezing val Figure 4: Setup for the entangler task for \(m_{\rm io}=1\) and spacing \(S=1\). Circles of different colors represent an input time series of vacuum states. Pulses of different colors are entangled output states. Figure 5: Entangler task performance for \(m_{\rm io}=1\) and for different values of \(S\) and \(m_{\rm mem}\). Fig. (a) shows the logarithmic negativity resulting from the QORNN. In Fig. (b), the corresponding results achieved in Ref. [1] are subtracted from our results.[1]. ues are obtained, leading to an information carrier. 
This highlights the counter-intuitive quantum nature of the superadditivity phenomenon: by spending a part of the available energy on the carrier generation rather than on classical modulation, one can reach higher transmission rates, something that has no classical analog. We now define a quality measure for the superadditivity task. The gain \(G\) is the ratio of the achieved transmission rate to the optimal transmission rate for separable (i.e. unentangled) input states. For 30 channel uses, Fig. 7 shows \(G\) as a function of the average photon number \(\bar{n}\) per use of the channel and for different values of the correlation parameter \(\phi\). We take the signal-to-noise ratio \(\mathrm{SNR}=\bar{n}/N=3\), where \(N\) is defined in Eq. (2). We observe that superadditivity is achieved, as the gain is higher than 1 and can reach as high as 1.10. These results agree with the optimal gain values that were analytically derived in prior studies of this memory channel (cfr. Fig. 7 of Ref. [16]). While a scheme already exists to generate carriers sequentially (i.e., generating carriers without creating all channel entries simultaneously) [14], our model also provides a simpler and more versatile alternative. Unlike the existing scheme, our model eliminates the need for Bell measurements, while achieving the same near-optimal gains. Additionally, our model is able to adapt to hardware imperfections, as they can be taken into account during training. ### Quantum channel equalization In this section, we use the QORNN as a model for a quantum memory channel. This time, we assume its memory effects to be unwanted (unlike Section IV.1) and compensate for them by sending the channel's output through a second QORNN instance. Fig. 8 shows the setup for the quantum channel equalization (QCE) task in more detail. An 'encoder' QORNN acts as a model for a memory channel. Because such channels normally do not increase the average photon number of transmitted states, we restrict the encoder's symplectic circuit to be orthogonal and denote it as \(\mathbf{O}_{\mathrm{enc}}\). This circuit is initialized randomly and will not be trained later. A second 'decoder' QORNN is trained to invert the transformation caused by the encoder. Similar to the STQM task, we will show that an orthogonal symplectic circuit \(\mathbf{O}_{\mathrm{dec}}\) is enough to lead to the desired performance, without requiring optical squeezers, which is Figure 6: Setup for the superadditivity task. A QORNN (with \(m_{\mathrm{io}}=1\)) transforms vacuum states into a quantum information carrier that is entangled across different time bins. A displacer (D) modulates this carrier to encode classical input information. The resulting signal is sent to a bosonic memory channel [13]. A number of \(K\) consecutive uses of the channel are modeled as a single parallel K-mode channel. The channel’s environment introduces noise (\(\blacklozenge\)) that leads to correlations between the successive channel uses. As a result of the entangled carrier, the transmission rate of the channel can be enhanced. Figure 7: Performance of the superadditivity task for 30 channel uses. The gain in transmission rate is plotted as a function of the average photon number per use of the channel (\(\bar{n}\)) and for different values of the noise correlation parameter (\(\phi\)). Additional parameters are chosen as follows: \(m_{\mathrm{io}}=m_{\mathrm{mem}}=1\), \(N=\bar{n}/3\). beneficial for experimental realizations. 
We will further denote the number of memory modes of the encoder and decoder as \(m_{\text{mem,enc}}\) and \(m_{\text{mem,dec}}\) respectively. Finally, we introduce a delay of \(D\) iterations between the input and output time series, similar to the definition of the STQM task (see Fig. 2). Assume for a moment that the input time series of the encoder only consists of a single state, i.e. we are looking at an impulse response of the system. By taking \(D>0\), we allow the decoder to receive multiple output states from the encoder before it has to reconstruct the original input state. The longer \(D\), the more information that was stored in the memory modes of the encoder will have reached the decoder by the time it needs to start the reconstruction process. A similar reasoning applies when the input time series consists of multiple states. This approach effectively addresses the challenge posed by the no-cloning principle, which prevents the decoder from accessing information stored in the encoder's memory or in the correlations between the encoder's memory and output. For the RC-inspired model of Ref. [1], only the case where \(D=0\) was considered. The no-cloning problem was addressed by redundantly encoding the input signals of the encoder. I.e., multiple copies of the same state were generated based on _classical_ input information and subsequently fed to the model through different modes ('spatial multiplexing') or at subsequent iterations ('temporal multiplexing'). Here, we show that taking \(D>0\) allows us to solve the QCE task without such redundancy, ultimately using each input state only once. This not only simplifies the operation procedure but also enables us to perform the QCE task without prior knowledge of the input states, which is often missing in real-world scenarios such as quantum key distribution. As these input states cannot be cloned, our approach significantly increases the practical use of the QCE task. It is worth noting that such an approach where \(D>0\) was also attempted for the RC-inspired model [17], but this was unsuccessful, which we attribute here to its limited number of trainable interactions. Additionally, we will show that the QCE performance of the QORNN is influenced by two key factors: the memory capacity of the decoder (as determined by the value of \(m_{\text{mem,dec}}\)), and the response of the encoder to a single input state (observed at the encoder's output modes). More formally, we measure the impulse response of the encoder by sending in a single squeezed vacuum state (with an average photon number of \(\bar{n}_{\text{impulse}}\)) and subsequently tracking the average photon number \(h_{\text{enc}}\) in its output modes over time. We denote the impulse response at iteration \(k\) by \(h_{\text{enc}}^{k}\). We now define: \[I_{\text{enc}}=\frac{1}{\bar{n}_{\text{impulse}}}\sum_{k=0}^{D}h_{\text{enc}}^ {k} \tag{3}\] \(I_{\text{enc}}\) is a re-normalized cumulative sum that represents the fraction of \(\bar{n}_{\text{impulse}}\) that leaves the encoder before the decoder has to reconstruct the original input state. We now consider 20 randomly initialized encoders with \(m_{\text{mem,enc}}=2\). The input states are randomly sampled from a set of squeezed thermal states (more details in Section D of Methods). Fig. 
We now consider 20 randomly initialized encoders with \(m_{\text{mem,enc}}=2\). The input states are randomly sampled from a set of squeezed thermal states (more details in Section D of Methods). Fig. 9(a) shows the average fidelity between an input state of the encoder at iteration \(k\) and an output state of the decoder at iteration \(k+D\), as a function of \(I_{\text{enc}}\) and for different values of \(D-m_{\text{mem,dec}}\). We see that if \(D\leq m_{\text{mem,dec}}\) (blueish dots), the decoder potentially has enough memory, and the quality of reconstructing the input states increases as the decoder receives more information from the encoder (i.e. as \(I_{\text{enc}}\) increases). If \(D>m_{\text{mem,dec}}\) (reddish dots), we ask the decoder to wait for a longer time before starting to reconstruct the input. This explains why these dots are clustered on the right side of the diagram: more information about the input will have been received, so \(I_{\text{enc}}\) will be higher. On the other hand, if the delay is too long, it exceeds the memory of the decoder, and the input starts to be forgotten. This explains why the darkest dots, with the longest delay, have the worst performance. Note that \(D\) is a hyperparameter that can be chosen freely. Also note that the optimal choice for the value of \(D\) is not necessarily \(D=m_{\text{mem,dec}}\) (light grey dots); the actual optimum depends on the exact configuration of the encoder.

Figure 8: Setup for the QCE task when \(m_{\text{io}}=1\) and for a delay \(D\). Pulses of different colors represent a time series of quantum states. The encoder and decoder respectively consist of orthogonal symplectic networks \(\mathbf{O}_{\text{enc}}\) and \(\mathbf{O}_{\text{dec}}\). \(\mathbf{O}_{\text{enc}}\) is initialized randomly and kept fixed. \(\mathbf{O}_{\text{dec}}\) is trained such that an input state that is sent into the encoder at iteration \(k\) appears at the output of the decoder at iteration \(k+D\).
Note that while evaluating a certain point of the cost function landscape, i.e. while processing a single time series, the parameters of the beam splitters and phase shifters are kept fixed. Hence, in Fig. 10, the PNR results are not influenced by the phase shifters outside of the loops (i.e. outside of the memory modes). These components can be disregarded. Consequently, we can parameterize the encoder (decoder) using only a beam splitter angle \(\theta_{\rm enc}\) (\(\theta_{\rm dec}\)) and a phase shift angle \(\phi_{\rm enc}\) (\(\phi_{\rm dec}\)). We detect output states with a photon-number-resolving (PNR) detector. In contrast to Section IV.2, we will not use fidelity as a quality measure, but will instead estimate the performance of the QORNN using the following cost function: \[\mathrm{cost}=\sum_{k=0}^{K}|\bar{n}_{\rm out}^{k}-\bar{n}_{\rm target}^{k}| \tag{6}\] where \(\bar{n}_{\rm out}^{k}\) and \(\bar{n}_{\rm target}^{k}\) are the average photon numbers of the _actual_ output state and the _target_ output state at iteration \(k\) respectively, and \(K\) is the total number of states in the input time series. The small-scale version of the QCE task (depicted in Fig. 10) is run on the photonic processor Borealis [2]. The Borealis setup is depicted in Fig. 11. It consists of a single optical squeezer that generates squeezed vacuum states. These states are sent to a sequence of three dynamically programmable loop-based interferometers whose delay lines have different lengths, corresponding with propagation times of \(T\), \(6T\), and \(36T\) (\(T=36\mu s\)). For our experiment, we only use the two leftmost loop-based interferometers. More formally, we choose \(\theta=0\) for the rightmost BS and \(\phi=0\) for the rightmost PS.

Figure 9: QCE performance for 20 randomly initialized encoders that consist of 2 memory modes. The results are shown as a function of \(I_{\rm enc}\), i.e. the fraction of the photon number of a certain input state that reaches the decoder within \(D\) iterations. In Fig. (a), we consider \(D\in\{0,1,...,m_{\rm mem,dec}+m_{\rm mem,enc}+1\}\) and \(m_{\rm mem,dec}\in\{1,2,...,5\}\). In Fig. (b), the optimal value of \(D\) is chosen (given an encoder and \(m_{\rm mem,dec}\)).

As is explained in more detail in Section E of Methods, we can virtually make the lengths of Borealis' delay lines equal. We do so by lowering the frequency at which we send input states and by setting \(\theta=0\) for the leftmost beam splitter in between input times. We first consider the case where \(\phi_{\rm enc}=\phi_{\rm dec}=0\), such that all phase shifters can be disregarded. Fig. 12 compares the experimental and numerical performance of the QCE task for \(D=1\). We observe that the task is solved perfectly when either the encoder or the decoder delays states by \(D=1\) and the other QORNN transmits states without delay. The performance is worst when both the encoder and decoder delay states with an equal number of iterations (either \(D=0\) or \(D=1\)). Indeed, the cost of Eq. (6) is then evaluated between randomly squeezed vacuum states. For beam splitter angles between \(0\) and \(\pi/2\), we find that the cost follows a hyperbolic surface. We observe good agreement between simulation and experiment. We now consider the case where \(\phi_{\rm enc}\) and \(\phi_{\rm dec}\) are not restricted to \(0\). Note that the Borealis setup (Fig. 11) does not have phase shifters inside the loops.
However, as is explained in more detail in Section E of Methods, we can _virtually_ apply the phase shifts \(\phi_{\rm enc}\) and \(\phi_{\rm dec}\) _inside_ Borealis' loops by dynamically adjusting the phase shifters positioned _before_ those loops over time. Unfortunately, this procedure is hampered by hardware restrictions: the range of Borealis' phase shifters is restricted to \([-\pi/2,\pi/2]\). This is problematic, since in order to apply a single virtual phase shift value within a specific loop, the phase shifters preceding that loop need to be tuned dynamically across the complete \([-\pi,\pi]\) range. As a result, about half of the parameter values of the phase shifters cannot be applied correctly. When such a value falls outside of the allowed range, an additional phase shift of \(\pi\) is added artificially to ensure proper hardware operation.

Figure 11: Borealis setup from Ref. [2]. A squeezer (S) periodically produces single-mode squeezed states, resulting in a train of 216 states that are separated in time by \(T=36\mu s\). These states are sent through programmable phase shifters (PS) and beam splitters (BS). Each BS is connected to a delay line. From left to right in the figure, the lengths of these delay lines are \(T\), \(6T\), and \(36T\). The output states are measured by a photon-number-resolving (PNR) detector.

Figure 10: Setup for the QCE task where both the encoder and decoder have a single memory mode. The matrices \({\bf O}_{\rm enc}\) and \({\bf O}_{\rm dec}\) are decomposed using beam splitters (BS) and phase shifters (PS). A squeezer (S) is used to generate input states, while the output states are measured using a photon-number-resolving (PNR) detector. As the PSs outside of the loops do not influence the PNR results, they can be disregarded.

Fig. 13 shows the QCE (\(D=1\)) performance for three different encoders (corresponding to the three columns). We compare classical simulation results (rows 1 and 2) with experimental results (row 3). Whereas the restricted range of Borealis' phase shifters is taken into account in rows 2 and 3, it is not in row 1. While Fig. 12 (for \(\phi_{\mathrm{enc}}=\phi_{\mathrm{dec}}=0\)) showed that the decoder can easily be optimized by considering only \(\theta_{\mathrm{dec}}=0\) and \(\theta_{\mathrm{dec}}=\pi/2\), this optimization is less trivial when \(\phi_{\mathrm{enc}}\neq 0\) and \(\phi_{\mathrm{dec}}\neq 0\). We observe that the general trends of the cost function landscapes agree between rows 2 and 3, although row 3 is affected by experimental imperfections.

## V Conclusions

We have introduced a new model to process time series of quantum states in real time. We have probed two of its key functionalities: the linear memory capacity and the capability to entangle distinct states within a time series. By comparing with an existing RC-inspired model, we showed that these functionalities benefit from an RNN structure where all internal interactions can be trained. Moreover, the QORNN showed the ability to tackle two _quantum communication_ tasks, a domain where optics naturally is the leading hardware platform. First, we demonstrated that the QORNN can enhance the transmission rate of a quantum memory channel with Gauss-Markov noise by providing it with entangled input states. The enhancement proved to be near-optimal compared to previous analytical studies, while the generation scheme of the input states is simpler and can more easily adapt to hardware imperfections than previous approaches.

Figure 13: QCE performance of Borealis when \(D=1\). (a) Simulation where phase shifters are tunable without range restriction. (b) Simulation where phase shifters are tunable within \([0,\pi)\). (c) Experiment where phase shifters are tunable within \([0,\pi)\). The columns correspond with different encoders. Each plot shows the QCE cost (as defined in Eq. (6)) as a function of the decoder parameters \(\theta_{\mathrm{dec}}\) and \(\phi_{\mathrm{dec}}\).
Second, we showed that the QORNN can mitigate undesired memory effects in quantum channels. A small-scale experiment of this last task demonstrated the readiness of quantum optics to implement models like the QORNN.

## Methods

The classical simulations of our QORNN were carried out using the open-source photonic optimization library MrMustard [18]. This allows us to perform gradient-based optimizations of symplectic circuits and orthogonal symplectic circuits.

### STQM

As is explained in Ref. [1], a special-purpose cost function \(f\) can be used to solve the STQM task. \(f\) can be defined as a function of the submatrices of the orthogonal symplectic matrix that is associated with the circuit \(\mathbf{O}\). With slight abuse of notation, we write the orthogonal symplectic _matrix_ that is associated with the _circuit_ \(\mathbf{O}\) as: \[\mathbf{O}=\left(\begin{array}{cc}\mathbf{A}&\mathbf{B}\\ \mathbf{C}&\mathbf{D}\end{array}\right). \tag{7}\] In contrast to Eq. (1), the quadratures are ordered here as follows: \((q_{1},p_{1},q_{2},p_{2},q_{3},p_{3},...)\), where \(q_{i}\) and \(p_{i}\) are the quadratures of mode \(i\). When the delay is nonzero (\(D>0\)), Ref. [1] shows that the goal of the STQM task can be redefined as the following system of equations: \[\begin{cases}\mathbf{D}\approx\mathbf{0},\\ \mathbf{C}\mathbf{A}^{D-1}\mathbf{B}\approx\mathbf{I},\\ \mathbf{C}\mathbf{A}^{t}\mathbf{B}\approx\mathbf{0},\quad\forall\,t\neq D-1.\end{cases} \tag{8}\] Hence, we choose the following cost function: \[\begin{split}f\left(\mathbf{O}\right)&=0.5\|\mathbf{D}\|+5\left\|\mathbf{C}\mathbf{A}^{D-1}\mathbf{B}-\mathbf{I}\right\|\\ &+0.5\sum_{\begin{subarray}{c}0\leq t<T\\ t\neq D-1\end{subarray}}\|\mathbf{C}\mathbf{A}^{t}\mathbf{B}\|,\end{split} \tag{9}\] where \(\|\cdot\|\) is the Frobenius norm. Note that the last sum in Eq. (9) was not used in Ref. [1], as these terms appeared to worsen the results there. However, we have found that their inclusion is beneficial in our work. A numerical parameter sweep showed that the factors \(0.5\), \(5\), and \(0.5\) for the terms in Eq. (9), together with a value of \(T=10\), yield good results. The learning rate is initialized at a value of \(0.01\) and decreased throughout the training procedure until the cost function converges. For each value of \((m_{\text{io}},m_{\text{mem}})\) in Fig. 3, the training is repeated for an ensemble of \(100\) initial conditions of the QORNN. After training, a test score is calculated as the average fidelity over a test set of \(100\) states. Only the lowest test score in the ensemble of different initial conditions is kept, as this corresponds to a degree of freedom that is exploitable in practice. In order to evaluate the test score, squeezed thermal states are used as input. In the case of a single mode, we first generate a thermal state with an expected number of photons \(\bar{n}_{\text{th}}\). Afterwards, the state is squeezed according to a squeezing magnitude \(r_{\text{sq}}\) and a squeezing phase \(\phi_{\text{sq}}\).
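As an illustration of this two-step construction, the following minimal numpy sketch (our own, independent of MrMustard) builds the covariance matrix of a single-mode squeezed thermal state in the convention where the vacuum covariance is the identity:

```python
import numpy as np

def squeezed_thermal_cov(n_th, r_sq, phi_sq):
    """Covariance matrix of a single-mode squeezed thermal state
    (convention: vacuum covariance = identity)."""
    V_th = (2 * n_th + 1) * np.eye(2)        # thermal state, n_th mean photons
    ch, sh = np.cosh(r_sq), np.sinh(r_sq)
    # Symplectic matrix of a single-mode squeezer with phase phi_sq.
    S = np.array([[ch - np.cos(phi_sq) * sh, -np.sin(phi_sq) * sh],
                  [-np.sin(phi_sq) * sh,      ch + np.cos(phi_sq) * sh]])
    return S @ V_th @ S.T
```

The sampling ranges for these parameters are given next.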
The relevant parameters of this state generation process are chosen uniformly within the following intervals: \(\bar{n}_{\text{th}}\in[0,10)\), \(r_{\text{sq}}\in[0,1)\) and \(\phi_{\text{sq}}\in[0,2\pi)\). If \(m_{\text{io}}>1\), the state generation is altered as follows. We first generate a product state of single-mode thermal states. The \(M\)-mode product state is then transformed by a random orthogonal symplectic circuit, followed by single-mode squeezers on all modes. A second random orthogonal symplectic circuit transforms the multi-mode state to the final input state.

### Entangler task

We evaluate the cost function as follows. We send vacuum states to the QORNN for \(10\) iterations, such that the model undergoes a transient process. We numerically check that this transient process has ended by monitoring the convergence of the average photon number in the output mode. We then calculate the logarithmic negativity of the \(2\)-mode state formed by the output at iteration \(10\) and \(10+S\). The logarithmic negativity of a \(2\)-mode state is calculated from the determinants of the state's covariance matrix (and its submatrices) as described in Ref. [12]. Note that we do not calculate the negativity from the symplectic eigenvalues of the covariance matrix (as is done for example in Ref. [19]); this choice is beneficial for numerical optimization. The cost function is defined as the negative of the logarithmic negativity. The training is performed using \(4000\) updates of \(\mathbf{S}\) and with a learning rate of \(0.01\). For each value of \((S,m_{\text{mem}})\) in Fig. 5, the training is repeated for an ensemble of \(100\) initial conditions of \(\mathbf{S}\). Again, only the lowest score in the ensemble is kept.

### Superadditivity task

For more details on the calculation of the optimal transmission rate of the quantum channel with Gauss-Markov noise, both with and without entangled input states, we refer to Ref. [16]. For the case of entangled input, the optimization problem defined by Eq. 9 of that reference is solved under the restriction that the covariance matrix \(\mathbf{\gamma}_{\text{in}}\) is produced by the QORNN. \(\mathbf{\gamma}_{\rm mod}\) is a general covariance matrix that takes into account the required input energy constraint (Eq. 13 in Ref. [16]). The cost function is defined as the negative of the transmission rate. The training is performed using 1000 updates of \(\mathbf{S}\) and with a learning rate of 0.5.

### Quantum channel equalization

In contrast to the STQM task, here we use the negative of the fidelity (between the desired output and the actual output) as a cost function, both during training and testing. The input states are single-mode squeezed thermal states where \(\bar{n}_{\rm th}\), \(r_{\rm sq}\), and \(\phi_{\rm sq}\) are chosen uniformly within the following ranges: \(\bar{n}_{\rm th}\in[0,10)\), \(r_{\rm sq}\in[0,1)\) and \(\phi_{\rm sq}\in[0,2\pi)\). \(\mathbf{O}_{\rm enc}\) and \(\mathbf{O}_{\rm dec}\) are initialized randomly. Every epoch, 30 squeezed thermal states are sent through the QORNNs. In order to account for the transient process at the start of each epoch, the fidelity is only averaged over the last 20 output states. The training is performed using 2000 updates of \(\mathbf{O}_{\rm dec}\) and with a learning rate of 0.05. In Fig. 9, the training is repeated for an ensemble of 20 initial conditions of \(\mathbf{O}_{\rm dec}\) for every value of \((D,m_{\rm mem,dec})\) and for each encoder initialization.
After training, a test score is calculated by simulating a transient process during 10 iterations and averaging the fidelity over 100 subsequent iterations. The lowest test score in each ensemble is kept.

### Experimental demonstration of quantum channel equalization

This section describes the technical details of both the simulations and the experiments shown in Section IV.3. We first describe how the QORNN can be mapped onto the Borealis setup. Afterwards, we describe the input state generation, the post-processing of the PNR results, and some other relevant parameters for the simulations and experiments.

#### iv.5.1 Mapping the QORNN model on Borealis

We map our model on the Borealis setup of Fig. 11. We choose \(\phi=0\) and \(\theta=0\) for the rightmost phase shifter and beam splitter respectively, such that these components and their corresponding delay line can be disregarded. After applying these changes, it is clear that the Borealis setup differs from the setup in Fig. 10 in two ways. First, the remaining delay lines of the Borealis setup have different lengths, which is not the case in Fig. 10. Second, Borealis does not have phase modulators in the delay lines. We can circumvent both of these problems as follows. First, we can virtually increase the length of the leftmost delay line (i.e. the encoder delay line) from \(T\) to \(6T\) by performing two changes: (1) we lower the frequency at which we input states from \(\frac{1}{T}\) to \(\frac{1}{6T}\) and (2) we store the encoder memory state as long as we do not input a new state. More formally, we do the following. Before generating a new input state, we put the values of the beam splitter angles to \(\theta_{\rm enc}\) (for the encoder) and \(\theta_{\rm dec}\) (for the decoder). Once a state enters the encoder, it interacts with the memory state, leading to an output state and a new memory state for the encoder. The output state of the encoder is sent to the decoder. We then put \(\theta_{\rm enc}=0\) for a period of \(6T\), such that no new inputs can enter the loop. Hence, the new memory state is stored in the delay line of the encoder. During this period, no states are generated and no interactions occur between the encoder's memory mode and its input/output mode. Afterward, a new state is generated and the process is repeated. Second, we can apply virtual phase shifts in the delay lines of Fig. 11 by using the phase shifters in front of the loops. To do this, we utilize an existing functionality of Borealis' control software. This functionality (implemented in StrawberryFields [20] as compilers.tdm.Borealis.update_params) is normally used to compensate for unwanted phase shifts in the delay lines. Given such a static unwanted phase shift, the compensation is performed by adjusting the phase shifters in front of the loops _over time_, such that they apply different phase shifts at different iterations. The actual unwanted phase shifts that are present in the hardware can be measured before running an experiment. We now choose to operate the setup _as if_ there were phase offsets \(\phi_{\rm 1,unwanted}-\phi_{\rm enc}\) and \(\phi_{\rm 2,unwanted}-\phi_{\rm dec}\) in the first two delay lines respectively. Hence, we apply net virtual phase shifts in the loops with values \(\phi_{\rm enc}\) and \(\phi_{\rm dec}\). Unfortunately, this procedure is undercut by a hardware restriction of Borealis: the range of its phase shifters is limited to \([-\pi/2,\pi/2]\), which means that about half of the desired phase shifts cannot be implemented correctly. When a phase shift falls outside of the allowed range, an additional phase shift of \(\pi\) is added artificially to ensure proper hardware operation. Listing 1 shows the Python code for this process.
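A minimal sketch of the same wrapping rule (our illustration of the described logic, not the original Listing 1; it assumes the desired phase has already been normalized to \([-\pi,\pi)\)):

```python
import numpy as np

def wrap_phase(phi):
    """Map a desired virtual phase shift phi in [-pi, pi) into the
    allowed phase-shifter range [-pi/2, pi/2] by artificially adding
    (or subtracting) pi when phi falls outside that range."""
    if phi > np.pi / 2:
        return phi - np.pi
    if phi < -np.pi / 2:
        return phi + np.pi
    return phi
```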
#### iv.5.2 Generating input states, post-processing PNR results, and simulation parameters

Both for the computer simulations and for the experiments presented in Section IV.3, the input states are generated by a single-mode optical squeezer that operates on a vacuum state. In every iteration, the squeezing magnitude is chosen randomly as either 0 or 1, effectively encoding bit strings into the quantum time series. It is worth noting that public online access to Borealis does not allow for multiple squeezing magnitudes within one experiment. Our experiments were therefore performed on-site. The output of Borealis consists of the average photon numbers gathered over 216 iterations. Due to the mapping described in the previous section, only 36 of these average photon numbers can be used. To account for the transient process, the results of the first 10 of these 36 output states are disregarded. The PNR results of the remaining 26 output states are used to estimate the decoder's QCE performance. For the _simulations_, 10 runs (each using a different series of input states) are performed per set of gate parameter values of the QORNNs. The cost is therefore averaged over 260 output states. For the _experiments_ shown in Fig. 12 and Fig. 13, the number of runs per data point is 7 and 2 respectively. In hardware experiments, non-idealities occur that are not considered in the classical simulations. We compensate for two such non-idealities by re-scaling the experimentally obtained average photon numbers. On the one hand, there are optical losses. On the other hand, the optical squeezer suffers from hardware imperfections that lead to effective squeezing magnitudes that decrease over time when the pump power is kept constant. The re-scaling is performed as follows. At each iteration, we calculate the average of the number of detected photons over all experiments. We fit a polynomial of degree 2 to these averages. Given an experimental result, i.e. a series of average photon numbers, we then divide each value by the value of the polynomial at that iteration. The resulting series is rescaled such that the average of the entire series matches the average value of the entire series of input states.

###### Acknowledgements.

We would like to thank Filippo M. Miatto for his insights, feedback, and assistance with software-related challenges. We are also grateful to Johannes Nokkala for sharing his expertise, and to Lars S. Madsen, Fabian Laudenbach, and Jonathan Lavoie for making the hardware experiment possible. This work was performed in the context of the Flemish FWO project G006020N and the Belgian EOS project G0H1422N. It was also co-funded by the European Union in the Prometheus Horizon Europe project.
2301.13845
Interpreting Robustness Proofs of Deep Neural Networks
In recent years numerous methods have been developed to formally verify the robustness of deep neural networks (DNNs). Though the proposed techniques are effective in providing mathematical guarantees about the DNNs behavior, it is not clear whether the proofs generated by these methods are human-interpretable. In this paper, we bridge this gap by developing new concepts, algorithms, and representations to generate human understandable interpretations of the proofs. Leveraging the proposed method, we show that the robustness proofs of standard DNNs rely on spurious input features, while the proofs of DNNs trained to be provably robust filter out even the semantically meaningful features. The proofs for the DNNs combining adversarial and provably robust training are the most effective at selectively filtering out spurious features as well as relying on human-understandable input features.
Debangshu Banerjee, Avaljot Singh, Gagandeep Singh
2023-01-31T18:41:28Z
http://arxiv.org/abs/2301.13845v1
# Interpreting Robustness Proofs of Deep Neural Networks

###### Abstract

In recent years numerous methods have been developed to formally verify the robustness of deep neural networks (DNNs). Though the proposed techniques are effective in providing mathematical guarantees about the DNNs behavior, it is not clear whether the proofs generated by these methods are human-interpretable. In this paper, we bridge this gap by developing new concepts, algorithms, and representations to generate human understandable interpretations of the proofs. Leveraging the proposed method, we show that the robustness proofs of standard DNNs rely on spurious input features, while the proofs of DNNs trained to be provably robust filter out even the semantically meaningful features. The proofs for the DNNs combining adversarial and provably robust training are the most effective at selectively filtering out spurious features as well as relying on human-understandable input features.

Machine Learning, Robustness, Deep Neural Networks

## 1 Introduction

The black-box nature and lack of robustness of deep neural networks (DNNs) are major obstacles to their real-world deployment in safety-critical applications like autonomous driving (Bojarski et al., 2016) or medical diagnosis (Amato et al., 2013). To mitigate the lack of trust caused by black-box behaviors, there has been a large amount of work on interpreting individual DNN predictions to gain insights into their internal workings. Orthogonally, the field of DNN verification has emerged to formally prove or disprove the robustness of neural networks in a particular region capturing an infinite set of inputs. Verification can be leveraged during training for constructing more robust models. We argue that while these methods do improve trust to a certain degree, the insights and guarantees derived from their independent applications are not enough to build sufficient confidence for enabling the reliable real-world deployment of DNNs. Existing DNN interpretation methods (Sundararajan et al., 2017) explain the model behavior on individual inputs, but they often do not provide human-understandable insights into the workings of the model on an infinite set of inputs handled by verifiers. Similarly, the DNN verifiers (Singh et al., 2019; Zhang et al., 2018) can generate formal proofs capturing complex invariants sufficient to prove network robustness, but it is not clear whether these proofs are based on any meaningful input features learned by the DNN that are necessary for correct classification. This is in contrast to standard program verification tasks, where proofs capture the semantics of the program and property. In this work, to improve trust, we propose for the first time the problem of interpreting the invariants captured by DNN robustness proofs.

**Key Challenges.** The proofs generated by state-of-the-art DNN verifiers encode high-dimensional complex convex shapes defined over thousands of neurons in the DNN. It is not clear how to map these shapes to human-understandable interpretations. Further, certain parts of the proof may be more important for it to hold than the rest. Thus we need to define a notion of importance for different parts of the proof and develop methods to identify them.

**Our Contributions.** We make the following contributions to overcome these challenges and develop a new method for interpreting DNN robustness proofs.
* We introduce a novel concept of proof features that can be analyzed independently by generating the corresponding interpretations. A priority function is then associated with the proof features that signifies their importance in the complete proof.
* We design a general algorithm called _SuPFEx_ (**S**ufficient **P**roof **F**eature **E**xtraction) that extracts a set of proof features that retain only the more important parts of the proof while still proving the property.
* We compare interpretations of the proof features for standard DNNs and state-of-the-art robustly trained DNNs for the MNIST and CIFAR10 datasets. We observe that the proof features corresponding to the standard networks rely on spurious input features, while the proofs of adversarially trained DNNs (Madry et al., 2018) filter out some of the spurious features. In contrast, the networks trained with certifiable training (Zhang et al., 2020) produce proofs that do not rely on any spurious features, but they also miss out on some meaningful features. Proofs for training methods that combine both empirical robustness and certified robustness (Balunovic and Vechev, 2020) provide a common ground. They not only rely on human-interpretable features but also selectively filter out the spurious ones. We also empirically show that these observations are not contingent on any specific DNN verifier.

## 2 Related Work

We discuss prior works related to ours.

**DNN interpretability.** There has been an extensive effort to develop interpretability tools for investigating the internal workings of DNNs. These include feature attribution techniques like saliency maps (Sundararajan et al., 2017; Smilkov et al., 2017), using surrogate models to interpret local decision boundaries (Ribeiro et al., 2016), finding influential (Koh and Liang, 2017), prototypical (Kim et al., 2016), or counterfactual inputs (Goyal et al., 2019), training sparse decision layers (Wong et al., 2021), and utilizing robustness analysis (Hsieh et al., 2021). Most of these interpretability tools focus on generating local explanations that investigate how DNNs work on individual inputs. Another line of work, rather than explaining individual inputs, tries to identify specific concepts associated with a particular neuron (Simonyan et al., 2014; Bau et al., 2020). However, to the best of our knowledge, there is no existing work that allows us to interpret DNN robustness proofs.

**DNN verification.** Unlike DNN interpretability methods, prior works in DNN verification focus on formally proving whether the given DNN satisfies desirable properties like robustness (Singh et al., 2019; Wang et al., 2021), fairness (Mazzucato and Urban, 2021), etc. DNN verifiers are broadly categorized into three main categories - (i) sound but incomplete verifiers which may not always prove a property even if it holds (Gehr et al., 2018; Singh et al., 2018, 2019; Ma et al., 2018; Xu et al., 2020; Salman et al., 2019), (ii) complete verifiers that can always prove the property if it holds (Wang et al., 2018; Gehr et al., 2018; Bunel et al., 2020; Bak et al., 2020; Ehlers, 2017; Ferrari et al., 2022; Fromherz et al., 2021; Wang et al., 2021; Palma et al., 2021; Anderson et al., 2020; Zhang et al., 2022), and (iii) verifiers with probabilistic guarantees (Cohen et al., 2019).
**Robustness and interpretability.** Existing works (Madry et al., 2018; Balunovic and Vechev, 2020; Zhang et al., 2020) on robust training methods for neural networks provide a framework to produce networks that are inherently immune to adversarial perturbations of the input. Recent works (Tsipras et al., 2019; Zhang et al., 2019) also show that there may be a robustness-accuracy tradeoff that prevents highly robust models from achieving high accuracy. Further, in (Tsipras et al., 2019) the authors show that networks trained with adversarial training methods learn fundamentally different input feature representations than standard networks, where the adversarially trained networks capture more human-aligned data characteristics.

## 3 Preliminaries

In this section, we provide the necessary background on DNN verification and existing works on traditional DNN interpretability with sparse decision layers. While our method is applicable to general architectures, for simplicity, we focus on an \(l\)-layer feedforward DNN \(N:\mathbb{R}^{d_{0}}\rightarrow\mathbb{R}^{d_{l}}\) for the rest of this paper. Each layer \(i\) except the final one applies the transformation \(X_{i}=\sigma_{i}(W_{i}\cdot X_{i-1}+B_{i})\) where \(W_{i}\in\mathbb{R}^{d_{i}\times d_{i-1}}\) and \(B_{i}\in\mathbb{R}^{d_{i}}\) are respectively the weights and biases of the affine transformation and \(\sigma_{i}\) is a non-linear activation like ReLU, Sigmoid, etc. corresponding to layer \(i\). The final layer only applies the affine transformation, and the network output is a vector \(Y=W_{l}\cdot X_{l-1}+B_{l}\).

**DNN verification.** At a high level, DNN verification involves proving that the network outputs \(Y=N(X)\) corresponding to all inputs \(X\) from an input region specified by \(\phi\) satisfy a logical specification \(\psi\). A common property is local robustness, where the output specification \(\psi\) is defined as a linear inequality over the elements of the output vector of the neural network. The output specification, in this case, is written as \(\psi(Y)=(C^{T}Y\geq 0)\) where \(C\in\mathbb{R}^{d_{l}}\) defines the linear inequality for encoding the robustness property. For the rest of the paper, we refer to the input region \(\phi\) and output specification \(\psi\) together as _property_ \((\phi,\psi)\). Next, we briefly discuss how DNN robustness verifiers work. A DNN verifier \(\mathcal{V}\) symbolically computes a possibly over-approximated output region \(\mathcal{A}\subseteq\mathbb{R}^{d_{l}}\) containing all possible outputs of \(N\) corresponding to \(\phi\). Let \(\Lambda(\mathcal{A})=\min_{Y\in\mathcal{A}}C^{T}Y\) denote the minimum value of \(C^{T}Y\) where \(Y\in\mathcal{A}\). Then \(N\) satisfies the property \((\phi,\psi)\) if \(\Lambda(\mathcal{A})\geq 0\). Most existing DNN verifiers (Singh et al., 2018, 2019; Zhang et al., 2018) are exact for affine transformations. However, for non-linear activation functions, these verifiers compute convex regions that over-approximate the output of the activation function. Note that, due to the over-approximations, DNN verifiers are sound but not complete - the verifier may not always prove a property even if it holds. For piecewise-linear activation functions like ReLU, complete verifiers exist that handle the activation exactly and in theory can always prove a property if it holds. Nevertheless, complete verification in the worst case takes exponential time, making it practically infeasible.
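To make this concrete, the following is a minimal sketch of one of the simplest sound-but-incomplete analyses, interval bound propagation (our own numpy illustration, not the implementation of any cited verifier); it propagates elementwise bounds through a ReLU network and then lower-bounds \(C^{T}Y\):

```python
import numpy as np

def interval_affine(lo, hi, W, b):
    # Exact interval propagation through the affine map x -> W x + b.
    center, radius = (lo + hi) / 2, (hi - lo) / 2
    c = W @ center + b
    r = np.abs(W) @ radius
    return c - r, c + r

def verify_interval(weights, biases, x, eps, C):
    """Sound-but-incomplete check that C^T Y >= 0 holds for every input
    in the L_inf ball of radius eps around x."""
    lo, hi = x - eps, x + eps
    for W, b in zip(weights[:-1], biases[:-1]):
        lo, hi = interval_affine(lo, hi, W, b)
        lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)  # ReLU is monotone
    lo, hi = interval_affine(lo, hi, weights[-1], biases[-1])
    # Lower bound Lambda(A) of C^T Y over the output box [lo, hi].
    lam = np.maximum(C, 0.0) @ lo + np.minimum(C, 0.0) @ hi
    return lam >= 0.0
```

The over-approximation here is a box; more precise verifiers use richer convex shapes (e.g., zonotopes in DeepZ), but the soundness argument is the same.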
In the rest of the paper, we focus on deterministic, sound, and incomplete verifiers, which are more scalable than complete verifiers.

**DNN interpretation with sparse decision layer.** DNNs considered in this paper have complex multi-layer structures, making them harder to interpret. Instead of interpreting what each layer of the network is doing, recent works (Wong et al., 2021; Liao and Cheung, 2022) treat DNNs as the composition of a _deep feature extractor_ and an _affine decision layer_. The output of each neuron of the penultimate layer represents a single deep feature, and the final affine layer linearly combines these deep features to compute the network output. This perspective enables us to identify the set of features used by the network to compute its output and to investigate their semantic meaning using existing feature visualization techniques (Ribeiro et al., 2016; Simonyan et al., 2014). However, visualizing each feature is practically infeasible for large DNNs where the penultimate layer can contain hundreds of neurons. To address this, the work of Wong et al. (2021) tries to identify a smaller set of features that are sufficient to maintain the performance of the network. This smaller but sufficient feature set retains only the most important features corresponding to a given input. It is shown empirically (Wong et al., 2021) that a subset of these features of size less than 10 is sufficient to maintain the accuracy of state-of-the-art models.

## 4 Interpreting DNN Proofs

Next, we describe our approach for interpreting DNN robustness proofs.

**Proof features.** Similar to the traditional DNN interpretation described above, for proof interpretation we propose to segregate the final decision layer from the network and look at the features extracted at the penultimate layer. However, DNN verifiers work on an input region (\(\phi\)) consisting of infinitely many inputs instead of a single input as handled by existing work. In this case, for a given input region \(\phi\), we look at the symbolic shape (for example - intervals, zonotopes, polytopes, etc.) computed by the verifier at the penultimate layer and then compute its projection on each dimension of the penultimate layer. These projections yield an interval \([l_{n},u_{n}]\) which contains all possible output values of the corresponding neuron \(n\) with respect to \(\phi\).

**Definition 1** (Proof Features).: _Given a network \(N\), input region \(\phi\) and neural network verifier \(\mathcal{V}\), for each neuron \(n_{i}\) at the penultimate layer of \(N\), the proof feature \(\mathcal{F}_{n_{i}}\) extracted at that neuron \(n_{i}\) is an interval \([l_{n_{i}},u_{n_{i}}]\) such that \(\forall X\in\phi\), the output of \(n_{i}\) always lies in the range \([l_{n_{i}},u_{n_{i}}]\)._

Note that the computation of the proof features is verifier-dependent, i.e., for the same network and input region, different verifiers may compute different values \(l_{n}\) and \(u_{n}\) for a particular neuron \(n\). For any input region \(\phi\), the first \((l-1)\) layers of \(N\), along with the verifier \(\mathcal{V}\), act as the **proof feature extractor**. For the rest of this paper, we use \(\mathcal{F}\) to denote the set of all proof features at the penultimate layer and \(\mathcal{F}_{S}\) to denote the proof features corresponding to \(S\subseteq[d_{l-1}]\). \[\mathcal{F}_{S}=\{\mathcal{F}_{n_{i}}\mid i\in S\}\] Suppose \(N\) is formally verified by the verifier \(\mathcal{V}\) to satisfy the property (\(\phi\), \(\psi\)).
Then, in order to gain insights about the proof generated by \(\mathcal{V}\), we can directly investigate (as described in Section 4.3) the extracted proof features \(\mathcal{F}\). However, the number of proof features for contemporary networks can be very large (in the hundreds). Many of these features may be spurious and not important for the proof. Similar to how network interpretations are generated when classifying individual inputs, we want to identify a smaller set of proof features that are more important for the proof of the property (\(\phi\), \(\psi\)). The key challenge here is defining the most important set of proof features w.r.t the property \((\phi,\psi)\).

### Sufficient Proof Features

We argue that a _minimum_ set of proof features \(\mathcal{F}_{S_{0}}\subseteq\mathcal{F}\) that can prove the property \((\phi,\psi)\) with verifier \(\mathcal{V}\) contains an important set of proof features w.r.t \((\phi,\psi)\). The minimality of \(\mathcal{F}_{S_{0}}\) enforces that it can only retain the proof features that are essential to prove the property. Otherwise, it would be possible to construct a smaller set of proof features that preserves the property, violating the minimality of \(\mathcal{F}_{S_{0}}\). Leveraging this hypothesis, we can model extracting a set of important proof features as computing a minimum proof feature set capable of preserving the property \((\phi,\psi)\) with \(\mathcal{V}\). To identify a minimum proof feature set, we introduce the novel concepts of proof feature pruning and sufficient proof features below:

**Definition 2** (Proof feature pruning).: _Pruning any proof feature \(\mathcal{F}_{n_{i}}\in\mathcal{F}\) corresponding to neuron \(n_{i}\) in the penultimate layer involves setting the weights of all its outgoing connections to 0 so that, given any input \(X\in\phi\), the final output of \(N\) no longer depends on the output of \(n_{i}\)._

Once a proof feature \(\mathcal{F}_{n_{i}}\) is pruned, the verifier \(\mathcal{V}\) no longer uses \(\mathcal{F}_{n_{i}}\) to prove the property \((\phi,\psi)\).

**Definition 3** (Sufficient proof features).: _For the proof of property \((\phi,\psi)\) on DNN \(N\) with verifier \(\mathcal{V}\), a nonempty set \(\mathcal{F}_{S}\subseteq\mathcal{F}\) of proof features is sufficient if the property still holds with verifier \(\mathcal{V}\) even if all the proof features **not in \(\mathcal{F}_{S}\)** are pruned._

**Definition 4** (Minimum proof features).: _A minimum proof feature set \(\mathcal{F}_{S_{0}}\subseteq\mathcal{F}\) for a network \(N\) verified with \(\mathcal{V}\) on \((\phi,\psi)\) is a sufficient proof feature set containing the minimum number of proof features._

Extracting a minimum set of proof features \(\mathcal{F}_{S_{0}}\) from \(\mathcal{F}\) is equivalent to pruning the maximum number of proof features from \(\mathcal{F}\) without violating the property \((\phi,\psi)\). Let \(W_{l}[:,i]\in\mathbb{R}^{d_{l}}\) denote the \(i\)-th column of the weight matrix \(W_{l}\) of the final layer \(N_{l}\). Pruning any proof feature \(\mathcal{F}_{n_{i}}\) results in setting all weights in \(W_{l}[:,i]\) to 0. Therefore, to compute \(\mathcal{F}_{S_{0}}\), it is sufficient to devise an algorithm that can prune the maximum number of columns from \(W_{l}\) while still preserving the property \((\phi,\psi)\).
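As a concrete illustration, pruning reduces to zeroing columns of \(W_{l}\), and sufficiency checking to one verifier call on the pruned layer; a minimal sketch (ours; `verify_fn` is a stand-in for an arbitrary verifier invocation such as the interval check above):

```python
import numpy as np

def prune_columns(W_last, S):
    """Zero out all columns of the decision layer except those indexed
    by S, mirroring the pruned weight matrix W_l(S) formalized in Eq. (1)."""
    keep = sorted(S)
    W_pruned = np.zeros_like(W_last)
    W_pruned[:, keep] = W_last[:, keep]
    return W_pruned

def is_sufficient(verify_fn, W_last, S):
    """F_S is sufficient iff the verifier still proves (phi, psi)
    after pruning every feature outside S."""
    return verify_fn(prune_columns(W_last, S))
```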
For any proof feature set \(\mathcal{F}_{S}\subseteq\mathcal{F}\), let \(W_{l}(S)\in\mathbb{R}^{d_{l}\times d_{l-1}}\) be the weight matrix of the pruned final layer that only retains the proof features corresponding to \(\mathcal{F}_{S}\). Then the columns of \(W_{l}(S)\) are defined as follows, where \(\underline{0}\in\mathbb{R}^{d_{l}}\) denotes a constant all-zero vector: \[W_{l}(S)[:,i]=\begin{cases}W_{l}[:,i]&i\in S\\ \underline{0}&\text{otherwise}\end{cases} \tag{1}\] The proof feature set \(\mathcal{F}_{S}\) is sufficient iff the property \((\phi,\psi)\) can be verified by \(\mathcal{V}\) on \(N\) with the pruned weight matrix \(W_{l}(S)\). As described in Section 3, for property verification the verifier \(\mathcal{V}\) computes an over-approximated output region \(\mathcal{A}\) of \(N\) over the input region \(\phi\). Given that we never change the input region \(\phi\) or the proof feature extractor composed of the first \(l-1\) layers of \(N\) and the verifier \(\mathcal{V}\), the output region \(\mathcal{A}\) only depends on the pruning done at the final layer. Now let \(\mathcal{A}(W_{l},S)\) denote the over-approximated output region corresponding to \(W_{l}(S)\). The neural network \(N\) can be verified by \(\mathcal{V}\) on the property \((\phi,\psi)\) with \(W_{l}(S)\) iff the lower bound \(\Lambda(\mathcal{A}(W_{l},S))\geq 0\). Therefore, finding \(S_{0}\) corresponding to a minimum proof feature set \(\mathcal{F}_{S_{0}}\) can be formulated as below, where for any \(S\subseteq[d_{l-1}]\), \(|S|\) denotes the number of elements in \(S\): \[\operatorname*{arg\,min}_{S\neq\emptyset,\ S\subseteq[d_{l-1}]}|S|\quad\text{s.t. }\ \Lambda(\mathcal{A}(W_{l},S))\geq 0 \tag{2}\]

### Approximate Minimum Proof Feature Extraction

The search space for finding \(\mathcal{F}_{S_{0}}\) is prohibitively large and contains \(2^{d_{l-1}}\) possible candidates. So, computing a minimum solution with an exhaustive search is infeasible. Even checking the sufficiency of any arbitrary proof feature set \(\mathcal{F}_{S}\) (Definition 3) is not trivial and requires expensive verifier invocations. We note that even making \(O(d_{l-1})\) verifier calls is too expensive for the network sizes considered in our evaluation. Given the large DNN size, exponential search space, and high verifier cost, efficiently computing a _minimum_ sufficient proof feature set is computationally intractable. We design a practically efficient approximation algorithm based on a greedy heuristic that can generate a smaller (though not always minimum) sufficient feature set with only \(O(\log(d_{l-1}))\) verifier calls. At a high level, for each proof feature \(\mathcal{F}_{n_{i}}\) contained in a sufficient feature set, the heuristic tries to estimate whether pruning \(\mathcal{F}_{n_{i}}\) violates the property \((\phi,\psi)\) or not. Subsequently, we prioritize pruning of those proof features \(\mathcal{F}_{n_{i}}\) that, if pruned, will likely preserve the proof of the property (\(\phi\),\(\psi\)) with the verifier \(\mathcal{V}\). For any proof feature \(\mathcal{F}_{n_{i}}\in\mathcal{F}_{S}\), where \(\mathcal{F}_{S}\) is sufficient and proves the property \((\phi,\psi)\), we estimate the change \(\Delta(\mathcal{F}_{n_{i}},\mathcal{F}_{S})\) that occurs to \(\Lambda(\mathcal{A}(W_{l},S))\) if \(\mathcal{F}_{n_{i}}\) is pruned from \(\mathcal{F}_{S}\).
Let the over-approximated output region computed by verifier \(\mathcal{V}\) corresponding to \(\mathcal{F}_{S}\setminus\{\mathcal{F}_{n_{i}}\}\) be \(\mathcal{A}(W_{l},S\setminus\{i\})\); then \(\Delta(\mathcal{F}_{n_{i}},\mathcal{F}_{S})\) is defined as follows: \[\Delta(\mathcal{F}_{n_{i}},\mathcal{F}_{S})=|\Lambda(\mathcal{A}(W_{l},S))-\Lambda(\mathcal{A}(W_{l},S\setminus\{i\}))|\] Intuitively, proof features \(\mathcal{F}_{n_{i}}\) with higher values of \(\Delta(\mathcal{F}_{n_{i}},\mathcal{F}_{S})\) for some sufficient feature set \(\mathcal{F}_{S}\subseteq\mathcal{F}\) are responsible for large changes to \(\Lambda(\mathcal{A}(W_{l}(S)))\) and are likely to break the proof if pruned. Note that \(\Delta(\mathcal{F}_{n_{i}},\mathcal{F}_{S})\) depends on the particular sufficient proof set \(\mathcal{F}_{S}\) and does not estimate the global importance of \(\mathcal{F}_{n_{i}}\) independent of the choice of \(\mathcal{F}_{S}\). To mitigate this issue, while defining the priority \(P(\mathcal{F}_{n_{i}})\) of a proof feature \(\mathcal{F}_{n_{i}}\) we take the maximum of \(\Delta(\mathcal{F}_{n_{i}},\mathcal{F}_{S})\) across all sufficient feature sets \(\mathcal{F}_{S}\) containing \(\mathcal{F}_{n_{i}}\). Let \(\mathbb{S}(\mathcal{F}_{n_{i}})\) denote the set of all sufficient \(\mathcal{F}_{S}\) containing \(\mathcal{F}_{n_{i}}\). Then \(P(\mathcal{F}_{n_{i}})\) can be formally defined as follows: \[P(\mathcal{F}_{n_{i}})=\max_{\mathcal{F}_{S}\in\mathbb{S}(\mathcal{F}_{n_{i}})}\Delta(\mathcal{F}_{n_{i}},\mathcal{F}_{S}) \tag{3}\] Given that the set \(\mathbb{S}(\mathcal{F}_{n_{i}})\) can be exponentially large, finding the maximum value of \(\Delta(\mathcal{F}_{n_{i}},\mathcal{F}_{S})\) over \(\mathbb{S}(\mathcal{F}_{n_{i}})\) is practically infeasible. Instead, we compute a reasonably tight upper bound \(P_{ub}(\mathcal{F}_{n_{i}})\) on \(P(\mathcal{F}_{n_{i}})\) by estimating a global upper bound on \(\Delta(\mathcal{F}_{n_{i}},\mathcal{F}_{S})\) that holds \(\forall\mathcal{F}_{S}\in\mathbb{S}(\mathcal{F}_{n_{i}})\). The proposed upper bound is independent of the choice of \(\mathcal{F}_{S}\in\mathbb{S}(\mathcal{F}_{n_{i}})\) and therefore removes the need to iterate over \(\mathbb{S}(\mathcal{F}_{n_{i}})\), enabling efficient computation. For the network \(N\) and input region \(\phi\), let \(\mathcal{A}_{l-1}\) denote the over-approximate symbolic region computed by \(\mathcal{V}\) at the penultimate layer. Then \(\forall\mathcal{F}_{S}\in\mathbb{S}(\mathcal{F}_{n_{i}})\) the global upper bound on \(\Delta(\mathcal{F}_{n_{i}},\mathcal{F}_{S})\) can be computed as follows, where for any vector \(X\in\mathbb{R}^{d_{l-1}}\), \(x_{i}\) denotes its \(i\)-th coordinate: \[\begin{split}\Delta(\mathcal{F}_{n_{i}},\mathcal{F}_{S})&\leq\max_{X\in\mathcal{A}_{l-1}}|C^{T}W_{l}(S)X-C^{T}W_{l}(S\setminus\{i\})X|\\ &=\max_{X\in\mathcal{A}_{l-1}}|(C^{T}W_{l}[:,i])\cdot x_{i}|\\ P(\mathcal{F}_{n_{i}})&\leq\max_{X\in\mathcal{A}_{l-1}}|(C^{T}W_{l}[:,i])\cdot x_{i}|\end{split}\] Now, any proof feature \(\mathcal{F}_{n_{i}}=[l_{n_{i}},u_{n_{i}}]\) computed by \(\mathcal{V}\) contains all possible values of \(x_{i}\) where \(X\in\mathcal{A}_{l-1}\). Leveraging this observation, we can further simplify the upper bound \(P_{ub}(\mathcal{F}_{n_{i}})\) of \(P(\mathcal{F}_{n_{i}})\) as shown below.
\[P(\mathcal{F}_{n_{i}})\leq\max_{x_{i}\in[l_{n_{i}},u_{n_{i}}]}|(C^{T}W_{l}[:,i])\cdot x_{i}|\] \[P_{ub}(\mathcal{F}_{n_{i}})=|C^{T}W_{l}[:,i]|\cdot\max(|l_{n_{i}}|,|u_{n_{i}}|) \tag{4}\] This simplification ensures that \(P_{ub}(\mathcal{F}_{n_{i}})\) for all \(\mathcal{F}_{n_{i}}\) can be computed with \(O(d_{l-1})\) elementary vector operations and a single verifier call that computes the intervals \([l_{n_{i}},u_{n_{i}}]\). Next, we describe how we compute an approximate feature set using the feature priorities \(P_{ub}(\mathcal{F}_{n_{i}})\). For any feature \(\mathcal{F}_{n_{i}}\), \(P_{ub}(\mathcal{F}_{n_{i}})\) estimates the importance of \(\mathcal{F}_{n_{i}}\) in preserving the proof. So, a trivial first step is to prune all the proof features from \(\mathcal{F}\) whose \(P_{ub}\) is 0. These features do not have any contribution to the proof of the property \((\phi,\psi)\) by the verifier \(\mathcal{V}\). This step alone forms a trivial algorithm. However, it is not enough: we can further prune some more proof features, leading to a yet smaller set. For this, we propose an iterative algorithm **SuPFEx**, shown in Algorithm 1 (\(\mathbb{A}\)), which maintains two sets, namely \(\mathcal{F}^{(\mathbb{A})}_{S_{0}}\) and \(\mathcal{F}^{(\mathbb{A})}_{S}\). \(\mathcal{F}^{(\mathbb{A})}_{S_{0}}\) contains the features guaranteed to be included in the final answer computed by SuPFEx, and \(\mathcal{F}^{(\mathbb{A})}_{S}\) contains the candidate features to be pruned by the algorithm. At every step, the algorithm ensures that the set \(\mathcal{F}^{(\mathbb{A})}_{S}\cup\mathcal{F}^{(\mathbb{A})}_{S_{0}}\) is sufficient and iteratively reduces its size by pruning proof features from \(\mathcal{F}^{(\mathbb{A})}_{S}\). The algorithm iteratively prunes the features \(\mathcal{F}_{n_{i}}\) with the lowest values of \(P_{ub}(\mathcal{F}_{n_{i}})\) from \(\mathcal{F}^{(\mathbb{A})}_{S}\) to maximize the likelihood that \(\mathcal{F}^{(\mathbb{A})}_{S}\cup\mathcal{F}^{(\mathbb{A})}_{S_{0}}\) remains sufficient at each step. At line 8 of the algorithm, \(\mathcal{F}^{(\mathbb{A})}_{S_{0}}\) and \(\mathcal{F}^{(\mathbb{A})}_{S}\) are initialized as \(\{\}\) (the empty set) and \(\mathcal{F}\) respectively. Removing a single feature in each iteration and checking the sufficiency of the remaining features would in the worst case lead to \(O(d_{l-1})\) verifier calls, which is infeasible. Instead, at each step, our algorithm greedily picks the top-\(|S|/2\) features \(\mathcal{F}^{(\mathbb{A})}_{S_{1}}\) from \(\mathcal{F}^{(\mathbb{A})}_{S}\) based on their priority (line 10) and invokes the verifier \(\mathcal{V}\) to check the sufficiency of \(\mathcal{F}^{(\mathbb{A})}_{S_{0}}\cup\mathcal{F}^{(\mathbb{A})}_{S_{1}}\) (line 12). If the feature set \(\mathcal{F}^{(\mathbb{A})}_{S_{0}}\cup\mathcal{F}^{(\mathbb{A})}_{S_{1}}\) is sufficient (line 13), \(\mathbb{A}\) removes all features in \(\mathcal{F}^{(\mathbb{A})}_{S}\setminus\mathcal{F}^{(\mathbb{A})}_{S_{1}}\) from \(\mathcal{F}^{(\mathbb{A})}_{S}\), and therefore \(\mathcal{F}^{(\mathbb{A})}_{S}\) is updated as \(\mathcal{F}^{(\mathbb{A})}_{S_{1}}\) in this step (line 14).
Otherwise, if \(\mathcal{F}^{(\mathbb{A})}_{S_{0}}\cup\mathcal{F}^{(\mathbb{A})}_{S_{1}}\) does not preserve the property (\(\phi\),\(\psi\)) (line 15), \(\mathbb{A}\) adds all features in \(\mathcal{F}^{(\mathbb{A})}_{S_{1}}\) to \(\mathcal{F}^{(\mathbb{A})}_{S_{0}}\) (line 16) and replaces \(\mathcal{F}^{(\mathbb{A})}_{S}\) with \(\mathcal{F}^{(\mathbb{A})}_{S}\setminus\mathcal{F}^{(\mathbb{A})}_{S_{1}}\) (line 17). The algorithm (\(\mathbb{A}\)) terminates after all features in \(\mathcal{F}^{(\mathbb{A})}_{S}\) are exhausted. Since at every step the algorithm reduces the size of \(\mathcal{F}^{(\mathbb{A})}_{S}\) by half, it always terminates within \(O(\log(d_{l-1}))\) verifier calls.

**Limitations.** We note that the scalability of our method is ultimately limited by the scalability of the existing verifiers, so SuPFEx currently cannot handle networks for larger datasets like ImageNet. Nonetheless, SuPFEx is general and compatible with any verification algorithm, and it will therefore benefit from any future advances that enable neural network verifiers to scale to larger datasets. Next, we derive mathematical guarantees about the correctness and efficacy of Algorithm 1. For correctness, we prove that the feature set \(\mathcal{F}^{(\mathbb{A})}_{S_{0}}\) is always sufficient (Definition 3). For efficacy, we derive a non-trivial upper bound on the size of \(\mathcal{F}^{(\mathbb{A})}_{S_{0}}\).

**Theorem 1**.: _If the verifier \(\mathcal{V}\) can prove the property \((\phi,\psi)\) on the network \(N\), then \(\mathcal{F}^{(\mathbb{A})}_{S_{0}}\) computed by Algorithm 1 is sufficient (Definition 3)._

This follows from the fact that the SuPFEx algorithm ensures at each step that \(\mathcal{F}^{(\mathbb{A})}_{S_{0}}\cup\mathcal{F}^{(\mathbb{A})}_{S}\) is sufficient. Hence, at termination, the feature set \(\mathcal{F}^{(\mathbb{A})}_{S_{0}}\) is sufficient. The complete proof of Theorem 1 is in Appendix A. Next, we find a non-trivial upper bound on the size of \(\mathcal{F}^{(\mathbb{A})}_{S_{0}}\) computed by the algorithm.

**Definition 5**.: _For \(\mathcal{F}\), the zero proof feature set \(Z(\mathcal{F})\) denotes the proof features \(\mathcal{F}_{n_{i}}\in\mathcal{F}\) with \(P_{ub}(\mathcal{F}_{n_{i}})=0\)._

Note that any proof feature \(\mathcal{F}_{n_{i}}\in Z(\mathcal{F})\) can be trivially removed without breaking the proof. Further, we show that some additional proof features will be filtered out from the original proof feature set. So, the size of the proof feature set \(\mathcal{F}^{(\mathbb{A})}_{S_{0}}\) extracted by SuPFEx is guaranteed to be less than the value computed in Theorem 2.

**Theorem 2**.: _Let \(P_{max}\) denote the maximum of all priorities \(P_{ub}(\mathcal{F}_{n_{i}})\) over \(\mathcal{F}\). Given any network \(N\) verified on \((\phi,\psi)\) with verifier \(\mathcal{V}\), then \(|\mathcal{F}^{(\mathbb{A})}_{S_{0}}|\leq d_{l-1}-|Z(\mathcal{F})|-\lfloor\frac{\Lambda(\mathcal{A})}{P_{max}}\rfloor\)._

The exact proof of Theorem 2 can be found in Appendix A.

```
1:Input: network \(N\), property \((\phi,\psi)\), verifier \(\mathcal{V}\).
2:Output: approximate minimal proof features \(\mathcal{F}^{(\mathbb{A})}_{S_{0}}\).
3:if \(\mathcal{V}\) can not verify \(N\) on \((\phi,\psi)\) then
4:  return
5:endif
6:Calculate all proof features for input region \(\phi\).
7:Calculate the priority \(P_{ub}(\mathcal{F}_{n_{i}})\) for all proof features \(\mathcal{F}_{n_{i}}\).
8:Initialization: \(\mathcal{F}^{(\mathbb{A})}_{S_{0}}=\{\}\), \(\mathcal{F}^{(\mathbb{A})}_{S}=\mathcal{F}\)
9:while \(\mathcal{F}^{(\mathbb{A})}_{S}\) is not empty do
10:  \(\mathcal{F}^{(\mathbb{A})}_{S_{1}}=\) top-\(|S|/2\) features selected based on \(P_{ub}(\mathcal{F}_{n_{i}})\)
11:  \(\mathcal{F}^{(\mathbb{A})}_{S_{2}}=\mathcal{F}^{(\mathbb{A})}_{S}\setminus\mathcal{F}^{(\mathbb{A})}_{S_{1}}\)
12:  Check sufficiency of \(\mathcal{F}^{(\mathbb{A})}_{S_{0}}\cup\mathcal{F}^{(\mathbb{A})}_{S_{1}}\) with \(\mathcal{V}\) on \((\phi,\psi)\)
13:  if \(\mathcal{F}^{(\mathbb{A})}_{S_{0}}\cup\mathcal{F}^{(\mathbb{A})}_{S_{1}}\) is sufficient then
14:    \(\mathcal{F}^{(\mathbb{A})}_{S}=\mathcal{F}^{(\mathbb{A})}_{S_{1}}\) {all features in \(\mathcal{F}_{S_{2}}\) are pruned}
15:  else
16:    \(\mathcal{F}^{(\mathbb{A})}_{S_{0}}=\mathcal{F}^{(\mathbb{A})}_{S_{0}}\cup\mathcal{F}^{(\mathbb{A})}_{S_{1}}\)
17:    \(\mathcal{F}^{(\mathbb{A})}_{S}=\mathcal{F}^{(\mathbb{A})}_{S_{2}}\)
18:  endif
19:endwhile
20:return proof features \(\mathcal{F}^{(\mathbb{A})}_{S_{0}}\).
```
**Algorithm 1** Approximate minimum proof feature computation

### Interpreting proof features

For interpreting proofs of DNN robustness, we now develop methods to analyze the semantic meaning of the extracted proof features. There exists a plethora of works that compute local DNN explanations (Sundararajan et al., 2017; Smilkov et al., 2017). However, these techniques are insufficient to generate an explanation w.r.t an input region. To mitigate this, we adapt the existing local visualization techniques for visualizing the extracted proof features. Given a proof feature \(\mathcal{F}_{n_{i}}\), we intend to compute \(\mathcal{G}(\mathcal{F}_{n_{i}},\phi)=\mathbb{E}_{X\sim\phi}\,\mathcal{G}(n_{i},X)\), which is the mean gradient of the output of \(n_{i}\) w.r.t the inputs from \(\phi\). For each input dimension (pixel in case of images) \(j\in[d_{0}]\), the \(j\)-th component of \(\mathcal{G}(\mathcal{F}_{n_{i}},\phi)\) estimates its relevance w.r.t the proof feature \(\mathcal{F}_{n_{i}}\) - the higher the gradient value, the higher the relevance. Considering that the input region \(\phi\) contains infinitely many inputs, exactly computing \(\mathcal{G}(\mathcal{F}_{n_{i}},\phi)\) is impossible. Rather, we statistically estimate \(\mathcal{G}(\mathcal{F}_{n_{i}},\phi)\) with a reasonably large sample \(X_{S}\) drawn uniformly from \(\phi\).

## 5 Experimental Evaluation

### Experimental setup

For evaluation, we use convolutional networks trained on two popular datasets - MNIST (LeCun et al., 1989) and CIFAR-10 (Krizhevsky, 2009) - shown in Table 1. The networks are trained with standard training and three state-of-the-art robust training methodologies - adversarial training (PGD training) (Madry et al., 2018), certified robust training (CROWN-IBP) (Zhang et al., 2020), and a combination of both adversarial and certified training (COLT) (Balunovic and Vechev, 2020). For our experiments, we use pre-trained publicly available networks - the standard and PGD-trained networks are taken from the ERAN project (Singh et al., 2019), COLT-trained networks from the COLT website (Balunovic and Vechev, 2020), and CROWN-IBP trained networks from the CROWN-IBP repository (Zhang et al., 2020). Similar to most of the existing works on neural network verification (Carlini and Wagner, 2017; Singh et al., 2019), we use \(L_{\infty}\)-based local robustness properties. Here, the input region \(\phi\) contains all images obtained by perturbing the intensity of each pixel in the input image independently within a bound \(\epsilon\in\mathbb{R}\), and \(\psi\) specifies a region where the network output for the correct class is higher than that for all other classes.
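Concretely, such a property can be encoded as one \(C\) vector per competing class; a minimal sketch (ours, intended to be combined with a bound computation such as the interval `verify_interval` sketch in Section 3):

```python
import numpy as np

def robustness_specs(label, num_classes):
    """One C vector per competing class j, encoding psi as
    C^T Y = Y[label] - Y[j] >= 0 (a sketch of the encoding)."""
    for j in range(num_classes):
        if j == label:
            continue
        C = np.zeros(num_classes)
        C[label], C[j] = 1.0, -1.0
        yield C

# The input region phi is the box lo = clip(x - eps), hi = clip(x + eps),
# clipped to the valid pixel-intensity range.
```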
We use \(\epsilon_{train}=0.3\) for all robustly trained MNIST networks and \(\epsilon_{train}=8/255\) for all robustly trained CIFAR-10 networks. Unless specified otherwise, the proofs are generated by running the popular DeepZ (Singh et al., 2019) verifier. We perform all our experiments on a 16-core 12th-gen i7 machine with 16 GB of RAM.

### Efficacy of SuPFEx Algorithm

In this section, we experimentally evaluate the efficacy of SuPFEx based on the size of the output sufficient feature sets. Given that there is no existing work for pruning proof feature sets, we use the upper bound computed in Theorem 2 as the baseline. Note that this bound is better than the size of the proof feature set extracted by the trivial algorithm that only removes "zero" features, which include the proof features \([l,u]\) where both \(l=u=0\) (Definition 5). For each network, we use \(500\) randomly picked images from their corresponding test sets. The \(\epsilon\) used for MNIST networks is \(0.02\) and that for CIFAR-10 networks is \(0.2/255\). We note that although the robustly trained networks can be verified robust for higher values of \(\epsilon\), it is not possible to verify standard networks with such high values. To achieve common ground, we use small \(\epsilon\) values for experiments involving standard networks and conduct separate experiments on only robustly trained networks with higher values of \(\epsilon\) (\(0.1\) for MNIST, \(2/255\) for CIFAR-10 networks). As shown in Table 1, we do not observe any significant change in the performance of SuPFEx w.r.t different \(\epsilon\) values. In Table 1, we show the value of \(\epsilon\) used to define the region \(\phi\) in column 3 and the total number of properties proved out of 500 in column 4. The size of the original proof feature set corresponding to each network is shown in column 5, the mean and median of the proof feature set size computed using Theorem 2 in columns 6 and 7 respectively, and the mean and median of the proof feature set size computed using SuPFEx in columns 8 and 9 respectively. We note that the feature sets obtained by SuPFEx are significantly smaller than the upper bound provided by Theorem 2. For example, in the case of the PGD-trained MNIST network with \(1000\) neurons in the penultimate layer, the average size computed from Theorem 2 is \(218.02\), while that obtained using SuPFEx is only \(5.57\). In the last two columns of Table 1, we summarise the percentage of cases where we are able to achieve a proof feature set of size less than or equal to \(5\) and \(10\) respectively. Figures 1(a) and 1(b) display histograms where the x-axis is the size of the extracted proof feature set using SuPFEx and the y-axis is the number of local robustness properties for COLT-trained DNNs. Histograms for other DNNs are presented in Appendix B. These histograms are skewed towards the left, which means that for most of the local properties we are able to generate a small set of proof features using SuPFEx.

Figure 1: Distribution of the size of the proof feature set computed by the SuPFEx algorithm on COLT-trained networks.
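The qualitative analysis in the next subsection visualizes proof features through the gradient maps of Section 4.3; a minimal PyTorch-style sketch (function and variable names are ours) of this estimation:

```python
import torch

def proof_feature_gradient_map(feature_extractor, i, x, eps, n_samples=100):
    """Estimate G(F_{n_i}, phi): the mean gradient of penultimate neuron i
    w.r.t. inputs drawn uniformly from the L_inf ball of radius eps around x.
    A sketch; assumes `feature_extractor` maps one image to the penultimate
    activation vector."""
    grads = []
    for _ in range(n_samples):
        delta = (2 * torch.rand_like(x) - 1) * eps       # uniform in [-eps, eps]
        sample = (x + delta).clamp(0, 1).detach().requires_grad_(True)
        feature_extractor(sample)[i].backward()          # d n_i / d input
        grads.append(sample.grad.detach().clone())
    return torch.stack(grads).mean(dim=0)                # mean gradient map
```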
### Qualitative comparison of robustness proofs

It has been observed in (Tsipras et al., 2019) that standardly trained networks rely on some of the spurious features in the input in order to gain higher accuracy and, as a result, are not very robust against adversarial attacks. On the other hand, empirically robustly trained networks rely more on human-understandable features and are, therefore, more robust against attacks. This empirical robustness comes at the cost of reduced accuracy. So, there is an inherent dissimilarity between the types of input features that the standard and adversarially trained networks rely on while classifying a single input. Also, certified robust trained networks are even more robust than the empirically trained ones; however, they report even lower accuracy (Muller et al., 2021). In this section, we interpret proof features obtained with SuPFEx and use these interpretations to qualitatively check whether the dissimilarities are also evident in the invariants captured by the different proofs of the same robustness property on standard and robustly trained networks. We also study the effect on the proof features of certified robust training methods like CROWN-IBP (Zhang et al., 2020), empirically robust training methods like PGD (Madry et al., 2018), and training methods that combine adversarial and certified training like COLT (Balunovic and Vechev, 2020). For a local input region \(\phi\), we say that a robustness proof is semantically meaningful if it focuses on the relevant features of the output class for images contained inside \(\phi\) and not on the spurious features. In the case of MNIST or CIFAR-10 images, spurious features are the pixels that form part of the background of the image, whereas important features are the pixels that are part of the actual object being identified by the network. The gradient map of the extracted proof features w.r.t. the input region \(\phi\) gives us an idea of the input pixels that the network focuses on. We obtain the gradient maps by calculating the mean gradient over 100 uniformly drawn samples from \(\phi\) as described in Section 4.3. As done in (Tsipras et al., 2019), to avoid introducing any inherent bias in proof feature visualization, no preprocessing (other than scaling and clipping for visualization) is applied to the gradients obtained for each individual sample. In Fig. 2, we compare the gradient maps corresponding to the top proof feature (the one having the highest priority \(P_{ub}(\mathcal{F}_{n_{i}})\)) on networks from Table 1 on representative images of different output classes in the MNIST and CIFAR-10 test sets. The experiments lead us to interesting observations - even if some property is verified for both the standard network and the robustly trained network, there is a difference in the human interpretability of the types of input features that the proofs rely on. The standard networks and the provably robust trained networks like CROWN-IBP are the two extremes of the spectrum. For the networks obtained with standard training, we observe that although the top proof feature does depend on some of the semantically meaningful regions of the input image, the gradient at several spurious features is also non-zero.

Figure 2: The top proof feature corresponding to DNNs trained using different methods relies on different input features.
On the other hand, the top proof feature corresponding to the state-of-the-art provably robust training method CROWN-IBP filters out most of the spurious features, but it also misses out on some meaningful features. The proofs of PGD-trained networks filter out the spurious features and are, therefore, more semantically aligned than those of the standard networks. The proofs of training methods that combine both empirical and provable robustness, like COLT, in a way provide the best of both worlds by not only selectively filtering out the spurious features but also highlighting the more human-interpretable features, unlike the certifiably trained networks. So, as the training methods tend to regularize more for robustness, their proofs become more conservative in relying on the input features. To further support our observation, we show additional plots for the top proof feature visualization in Appendix B.2 and visualizations for multiple proof features in Appendix B.4. We also conduct experiments for different values of \(\epsilon\) used for defining \(\phi\). The extracted proof feature sets w.r.t. high \(\epsilon\) values (\(\epsilon=0.1\) for MNIST and \(\epsilon=2/255\) for CIFAR-10) are similar to those generated with smaller \(\epsilon\). The gradient maps corresponding to the top feature for higher \(\epsilon\) values are also similar, as shown in Appendix B.3. For COLT-trained MNIST networks, in Appendix B.5 we compare the gradients of the top proof features retained by SuPFEx with those of the pruned low-priority proof features. As expected, the gradients of the pruned low-priority proof features contain spurious input features.

### Sensitivity analysis on training parameters

It is expected that DNNs trained with larger \(\epsilon_{train}\) values are more robust. So, we analyze the sensitivity of the extracted proof features to \(\epsilon_{train}\). We use the DNNs trained with PGD and COLT and \(\epsilon_{train}\in\{0.1,0.3\}\) on the MNIST dataset. Fig. 3 visualizes proof features for these DNNs; additional plots are available in Appendix B.6. We observe that by increasing the value of \(\epsilon_{train}\), the top proof feature filters out more input features. This is aligned with our observation in Section 5.3 that more robustly trained neural networks are more conservative in using the input features.

### Comparing proofs of different verifiers

The proof features extracted by SuPFEx are specific to the proof generated by the verifier. In this experiment, we compare proof features generated by two popular verifiers, IBP (Zhang et al., 2020; Mirman et al., 2018) and DeepZ, on the networks shown in Table 1 for the same properties as before. Note that, although IBP is computationally efficient, it is less precise than DeepZ. For standard DNNs, most of the properties cannot be proved by IBP. Hence, in this experiment, we omit standard DNNs and consider only the properties that can be verified by both DeepZ and IBP. Table 2 presents the % of cases where the top proof feature computed by both verifiers is the same (column 2), the % of cases where the top-5 proof features computed by both verifiers are the same, and the % of cases where the complete proof feature sets computed by both verifiers are the same. We observe that for the MNIST dataset, in 100% of the cases for PGD-trained and COLT-trained networks and in 99.79% of the cases for the CROWN-IBP trained networks, the top feature computed by both verifiers is the same. A detailed table is available in Appendix B.7.
## 6 Conclusion

In this work, we develop a novel method called SuPFEx to interpret neural network robustness proofs. We empirically establish that even if a property holds for a DNN, the proof for the property may rely on spurious or semantically meaningful features depending on the training method used to train the DNN. We believe that SuPFEx can be applied to diagnose the trustworthiness of DNNs inside their development pipeline.

\begin{table}
\begin{tabular}{l c c c c c c}
\hline \hline
Training & \multicolumn{2}{c}{\% proofs with the} & \multicolumn{2}{c}{\% proofs with the} & \multicolumn{2}{c}{\% proofs with the} \\
Method & \multicolumn{2}{c}{same top feature} & \multicolumn{2}{c}{same top-5 features} & \multicolumn{2}{c}{same feature set} \\
 & MNIST & CIFAR10 & MNIST & CIFAR10 & MNIST & CIFAR10 \\
\hline
PGD Trained & 100\% & 100\% & 92.0\% & 98.31\% & 92.0\% & 96.87\% \\
COLT & 100\% & 97.87\% & 87.17\% & 92.53\% & 82.05\% & 89.36\% \\
CROWN-IBP & 99.79\% & 100\% & 96.26\% & 97.92\% & 93.15\% & 95.89\% \\
\hline \hline
\end{tabular}
\end{table}
Table 2: Comparing proofs of IBP & DeepZ

Figure 3: Visualization of gradients of the top proof feature for PGD and COLT networks trained using different values of \(\epsilon_{train}\).
2309.07948
Complex-Valued Neural Networks for Data-Driven Signal Processing and Signal Understanding
Complex-valued neural networks have emerged boasting superior modeling performance for many tasks across the signal processing, sensing, and communications arenas. However, developing complex-valued models currently demands development of basic deep learning operations, such as linear or convolution layers, as modern deep learning frameworks like PyTorch and TensorFlow do not adequately support complex-valued neural networks. This paper overviews a package built on PyTorch with the intention of implementing light-weight interfaces for common complex-valued neural network operations and architectures. Similar to natural language understanding (NLU), which has recently made tremendous leaps towards text-based intelligence, RF Signal Understanding (RFSU) is a promising field extending conventional signal processing algorithms using a hybrid approach of signal mechanics-based insight with data-driven modeling power. Notably, we include efficient implementations for linear, convolution, and attention modules in addition to activation functions and normalization layers such as batchnorm and layernorm. Additionally, we include efficient implementations of manifold-based complex-valued neural network layers that have shown tremendous promise but remain relatively unexplored in many research contexts. Although there is an emphasis on 1-D data tensors, due to a focus on signal processing, communications, and radar data, many of the routines are implemented for 2-D and 3-D data as well. Specifically, the proposed approach offers a useful set of tools and documentation for data-driven signal processing research and practical implementation.
Josiah W. Smith
2023-09-14T16:55:28Z
http://arxiv.org/abs/2309.07948v1
# Complex-Valued Neural Networks for Data-Driven Signal Processing and Signal Understanding

###### Abstract

Complex-valued neural networks have emerged boasting superior modeling performance for many tasks across the signal processing, sensing, and communications arenas. However, developing complex-valued models currently demands development of basic deep learning operations, such as linear or convolution layers, as modern deep learning frameworks like PyTorch and TensorFlow do not adequately support complex-valued neural networks. This paper overviews a package built on PyTorch with the intention of implementing light-weight interfaces for common complex-valued neural network operations and architectures. Similar to natural language understanding (NLU), which has recently made tremendous leaps towards text-based intelligence, _RF Signal Understanding (RFSU)_ is a promising field extending conventional signal processing algorithms using a hybrid approach of signal mechanics-based insight with data-driven modeling power. Notably, we include efficient implementations for linear, convolution, and attention modules in addition to activation functions and normalization layers such as batchnorm and layernorm. Additionally, we include efficient implementations of manifold-based complex-valued neural network layers that have shown tremendous promise but remain relatively unexplored in many research contexts. Although there is an emphasis on 1-D data tensors, due to a focus on signal processing, communications, and radar data, many of the routines are implemented for 2-D and 3-D data as well. Specifically, the proposed approach offers a useful set of tools and documentation for data-driven signal processing research and practical implementation.

**Keywords:** Complex-Valued Neural Networks (CVNNs); Deep Learning; Artificial Intelligence; Signal Processing; Radar; Synthetic Aperture Radar

## 1 Introduction

Although much work has already been done for complex-valued neural networks (CVNNs), starting back in the 1990s [1], many common operations for complex-valued tensors remain unsupported by modern deep learning frameworks like PyTorch and TensorFlow. In this paper, we introduce a lightweight wrapper built on PyTorch with two main objectives as follows:

* Provide an efficient interface for complex-valued deep learning using PyTorch.
* Provide open-source implementations and documentation for complex-valued operations such as activation functions, normalization layers, and attention modules to increase the rate of research on CVNNs.

Fundamentally, complex-valued data have two degrees of freedom (DOF), which are commonly modeled as real and imaginary parts or magnitude and phase as

\[\mathbf{z}=\mathbf{x}+j\mathbf{y}=|\mathbf{z}|e^{j\angle\mathbf{z}}, \tag{1}\]

where \(j=\sqrt{-1}\) is known as the complex unit, \(\mathbf{x}\) and \(\mathbf{y}\) are the real and imaginary parts of \(\mathbf{z}\), and \(|\mathbf{z}|\) and \(\angle\mathbf{z}\) are the magnitude and phase of \(\mathbf{z}\).

### Existing CVNN Work

Recent surveys of CVNNs and their history can be found in [2, 3]. Starting in 2018, Trabelsi _et al._ introduced _deep complex networks_ (DCN), extending many common deep learning approaches to complex-valued data [4]. As mentioned in [5], some attempts have been made by PyTorch and TensorFlow to incorporate complex values into the core architecture; however, much work remains to be done. The authors of [5] implement a CVNN framework in TensorFlow, with impressive functionality.
However, recent studies have shown a drastic decline in TensorFlow usage among engineers, while PyTorch dominates industry and academia. The code publicized as part of this work was used extensively throughout the dissertation [6] in addition to [7, 8, 9, 10, 11, 12, 13, 14, 15]. One paramount issue for CVNNs is complex-valued backpropagation and the computation of complex-valued gradients. Mathematical investigations of the Liouville theorem and Wirtinger calculus can be found in [5, 4] and are omitted here for brevity. Whereas [5] develops Wirtinger-based proper backpropagation for TensorFlow, PyTorch natively supports complex-valued backpropagation.

### Installation Notes

**IMPORTANT:** Prior to installation, install PyTorch to your environment using your preferred method with the compute platform (CPU/GPU) settings for your machine. PyTorch will not be automatically installed with the installation of complextorch and MUST be installed manually by the user.

#### 1.2.1 Install using PyPI

complextorch is available on the Python Package Index (PyPI) and can be installed using the following command:

pip install complextorch

#### 1.2.2 Install using GitHub

Useful if you want to modify the source code.

git clone [https://github.com/josiahwsmith10/complextorch.git](https://github.com/josiahwsmith10/complextorch.git)

## 2 Complex-Valued Layers

In this section, we overview the complex-valued layers supported by complextorch, which can be found at [https://complextorch.readthedocs.io/en/latest/nn/modules.html](https://complextorch.readthedocs.io/en/latest/nn/modules.html). Nearly all modules introduced in this section closely follow their PyTorch counterparts to improve integration and streamline the user experience (UX) for new users.

### Gauss' Multiplication Trick

As pointed out in [16, 17, 18], many complex-valued operations can be more efficiently implemented by leveraging Gauss' multiplication trick. Suppose \(\mathcal{L}(\cdot)=\mathcal{L}_{\mathbb{R}}(\cdot)+j\mathcal{L}_{\mathbb{I}}(\cdot)\) is a linear operator, such as multiplication or convolution, and \(\mathbf{z}=\mathbf{x}+j\mathbf{y}\). Hence,

\[\mathcal{L}(\mathbf{z})=\mathcal{L}(\mathbf{x})+j\mathcal{L}(\mathbf{y})=\mathcal{L}_{\mathbb{R}}(\mathbf{x})-\mathcal{L}_{\mathbb{I}}(\mathbf{y})+j\left(\mathcal{L}_{\mathbb{R}}(\mathbf{y})+\mathcal{L}_{\mathbb{I}}(\mathbf{x})\right). \tag{2}\]

This is the common implementation of complex-valued operations in deep learning applications. However, it requires four computations, such as multiplications or convolutions, which can become computationally costly. Gauss' trick reduces the number of computations to 3 as

\[t_{1}\triangleq\mathcal{L}_{\mathbb{R}}(\mathbf{x}),\qquad t_{2}\triangleq\mathcal{L}_{\mathbb{I}}(\mathbf{y}),\qquad t_{3}\triangleq(\mathcal{L}_{\mathbb{R}}+\mathcal{L}_{\mathbb{I}})(\mathbf{x}+\mathbf{y}), \tag{3}\]

\[\mathcal{L}(\mathbf{z})=t_{1}-t_{2}+j(t_{3}-t_{2}-t_{1}). \tag{4}\]

This technique is leveraged throughout complextorch to improve computational efficiency whenever applicable.

### Complex-Valued Linear Layers

We extend linear layers, the backbone of perceptron neural networks, for CVNNs.
Similar to Section 2.1, we define the complex-valued linear layer CVLinear as

\[H_{\mathsf{CVLinear}}(\cdot)=H_{\mathsf{CVLinear},\mathbb{R}}(\cdot)+jH_{\mathsf{CVLinear},\mathbb{I}}(\cdot), \tag{5}\]

where \(H_{\mathsf{CVLinear},\mathbb{R}}(\cdot)\) and \(H_{\mathsf{CVLinear},\mathbb{I}}(\cdot)\) can be implemented in PyTorch as real-valued Linear layers, as detailed in Table 1. Using (4), the linear layer can be efficiently computed with the composition \(H_{\mathsf{CVLinear},\mathbb{R}}(\cdot)+H_{\mathsf{CVLinear},\mathbb{I}}(\cdot)\) being a linear layer with the weights and biases of \(H_{\mathsf{CVLinear},\mathbb{R}}(\cdot)\) and \(H_{\mathsf{CVLinear},\mathbb{I}}(\cdot)\) summed.

### Complex-Valued Convolution Layers

Similarly, we define a general complex-valued convolution layer CVConv as

\[H_{\mathsf{CVConv}}(\cdot)=H_{\mathsf{CVConv},\mathbb{R}}(\cdot)+jH_{\mathsf{CVConv},\mathbb{I}}(\cdot), \tag{6}\]

where \(H_{\mathsf{CVConv},\mathbb{R}}(\cdot)\) and \(H_{\mathsf{CVConv},\mathbb{I}}(\cdot)\) can be implemented in PyTorch as various real-valued convolution layers, as detailed in Table 2. Using (4), the convolution layer can be efficiently computed with the composition \(H_{\mathsf{CVConv},\mathbb{R}}(\cdot)+H_{\mathsf{CVConv},\mathbb{I}}(\cdot)\) being a convolution layer with the kernel weights and biases of \(H_{\mathsf{CVConv},\mathbb{R}}(\cdot)\) and \(H_{\mathsf{CVConv},\mathbb{I}}(\cdot)\) summed.

### Complex-Valued Attention Layers

Whereas attention-based models, such as transformers, have gained significant attention for natural language processing (NLP) and image processing, their potential for complex-valued problems such as signal processing remains relatively untapped. Here, we include complex-valued variants of several attention-based techniques.

#### 2.4.1 Complex-Valued Scaled Dot-Product Attention

The ever-popular scaled dot-product attention is the backbone of many attention-based methods [19], most notably the transformer [20, 21, 22, 23, 24, 11]. Given complex-valued _query, key, and value_ tensors \(Q,K,V\), the complex-valued scaled dot-product attention can be computed as

\[\text{Attention}(Q,K,V)=\mathcal{S}(QK^{T}/t)V, \tag{7}\]

where \(t\) is known as the temperature (typically \(t=\sqrt{d_{attn}}\)) and \(\mathcal{S}\) is the softmax function. It is important to note that, unlike real-valued scaled dot-product attention, the complex-valued version detailed above must employ a complex-valued version of the softmax function, as the real-valued softmax is unsuited for complex-valued data. We implement several complex-valued softmax function options detailed in Section 2.5. Additionally, we include an implementation of multi-head attention, which is commonly employed in transformer and attention-based models [22].

\begin{table}
\begin{tabular}{c|c}
complextorch & PyTorch \\ \hline
CVLinear & Linear \\
\end{tabular}
\end{table}
Table 1: PyTorch equivalent of the complex-valued linear layer.

\begin{table}
\begin{tabular}{c|c}
complextorch & PyTorch \\ \hline
CVConv1d & Conv1d \\
CVConv2d & Conv2d \\
CVConv3d & Conv3d \\
CVConvTranspose1d & ConvTranspose1d \\
CVConvTranspose2d & ConvTranspose2d \\
CVConvTranspose3d & ConvTranspose3d \\
\end{tabular}
\end{table}
Table 2: PyTorch equivalents of complex-valued convolution layers.
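To make the construction concrete, below is a minimal sketch (assumed names, not the library's exact API) of a complex-valued 1-D convolution implemented with Gauss' trick from (3)-(4), so that only three real convolutions are performed per forward pass instead of four:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaussCVConv1d(nn.Module):
    """Sketch of a complex-valued Conv1d using Gauss' trick (Eqs. (3)-(4))."""

    def __init__(self, in_channels, out_channels, kernel_size):
        super().__init__()
        self.conv_r = nn.Conv1d(in_channels, out_channels, kernel_size)  # L_R
        self.conv_i = nn.Conv1d(in_channels, out_channels, kernel_size)  # L_I

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        x, y = z.real, z.imag
        t1 = self.conv_r(x)                       # t1 = L_R(x)
        t2 = self.conv_i(y)                       # t2 = L_I(y)
        # t3 = (L_R + L_I)(x + y): one convolution with summed kernels/biases
        t3 = F.conv1d(x + y,
                      self.conv_r.weight + self.conv_i.weight,
                      self.conv_r.bias + self.conv_i.bias)
        return torch.complex(t1 - t2, t3 - t2 - t1)

# Example: a batch of 8 complex signals with 4 channels and 128 samples
z = torch.randn(8, 4, 128, dtype=torch.complex64)
out = GaussCVConv1d(4, 16, kernel_size=3)(z)
```

The real part \(t_{1}-t_{2}\) and imaginary part \(t_{3}-t_{2}-t_{1}\) recover exactly the four-convolution result of (2) at the cost of three convolutions and a few additions.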
#### 2.4.2 Complex-Valued Efficient Channel Attention (CV-ECA)

Efficient Channel Attention (ECA) was first introduced in [25]. Here, we extend ECA to complex-valued data for CV-ECA. Following the construction of ECA, we define CV-ECA as

\[\texttt{CV-ECA}(\mathbf{z})=\mathcal{M}\left(H_{\texttt{CVConv1d}}(H_{\texttt{CVAdaptiveAvgPoolNd}}(\mathbf{z}))\right)\odot\mathbf{z}, \tag{8}\]

where \(\mathcal{M}(\cdot)\) is the masking function (some options are implemented in Sections 2.5 and 2.6), \(H_{\texttt{CVConv1d}}(\cdot)\) is the 1-D complex-valued convolution layer defined in Section 2.3, and \(H_{\texttt{CVAdaptiveAvgPoolNd}}(\cdot)\) is the complex-valued global adaptive average pooling layer for the \(N\)-D input tensor detailed in Section 2.11. It is important to note that the 1-D convolution is computed along the channel dimension of the pooled data. This is the notable difference between ECA and MCA.

#### 2.4.3 Complex-Valued Masked Channel Attention (CV-MCA)

Complex-valued masked channel attention (CV-MCA) is similar to CV-ECA but employs a slightly different implementation. Generally, the masked attention module implements the following operation

\[\texttt{CV-MCA}(\mathbf{z})=\mathcal{M}(H_{\texttt{ConvUp}}(\mathcal{A}(H_{\texttt{ConvDown}}(\mathbf{z}))))\odot\mathbf{z}, \tag{9}\]

where \(\mathcal{M}(\cdot)\) is the masking function (some options are implemented in Sections 2.5 and 2.6), \(H_{\texttt{ConvDown}}(\cdot)\) and \(H_{\texttt{ConvUp}}(\cdot)\) are \(N\)-D convolution layers with kernel sizes of 1 that reduce and then restore the channel dimension by a factor \(r\), and \(\mathcal{A}(\cdot)\) is the complex-valued non-linear activation layer (see Section 2.8). The implementations of complex-valued masked attention are included for 1-D, 2-D, and 3-D data. For more information on complex-valued masked channel attention, see the paper that introduced it [26].

### Complex-Valued Softmax Layers

Softmax is an essential function for several tasks throughout deep learning. Complex-valued softmax functions have not been explored thoroughly in the literature at the time of this paper. However, a similar route, known as masking, has seen some attention in recent research [26]. Here, we introduce several softmax layers suitable for complex-valued tensors.

#### 2.5.1 Split Type-A Complex-Valued Softmax Layer

Using the definition of a split Type-A function from Section 2.8.1, we define the complex-valued split Type-A softmax layer (CVSoftMax) as

\[\texttt{CVSoftMax}(\mathbf{z})=\texttt{SoftMax}(\mathbf{x})+j\texttt{SoftMax}(\mathbf{y}), \tag{10}\]

where \(\mathbf{z}=\mathbf{x}+j\mathbf{y}\) and \(\texttt{SoftMax}\) is the PyTorch real-valued softmax function.

#### 2.5.2 Phase Preserving Softmax

Similar to a Type-B function from Section 2.8.2, we define the complex-valued phase-preserving softmax layer (PhaseSoftMax) as

\[\texttt{PhaseSoftMax}(\mathbf{z})=\texttt{SoftMax}(|\mathbf{z}|)\odot\frac{\mathbf{z}}{|\mathbf{z}|}, \tag{11}\]

where \(\mathbf{z}=\mathbf{x}+j\mathbf{y}\) and \(\texttt{SoftMax}\) is the PyTorch real-valued softmax function.

#### 2.5.3 Magnitude Softmax

We define the magnitude softmax layer, which simply computes the softmax over the magnitude of the input and ignores the phase, (MagSoftMax) as

\[\texttt{MagSoftMax}(\mathbf{z})=\texttt{SoftMax}(|\mathbf{z}|), \tag{12}\]

where \(\mathbf{z}=\mathbf{x}+j\mathbf{y}\) and \(\texttt{SoftMax}\) is the PyTorch real-valued softmax function.
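As a concrete reference for Eqs. (11)-(12), the following is a minimal sketch (assumed names, not the library's exact API) of the phase-preserving and magnitude softmax operations:

```python
import torch

def phase_softmax(z: torch.Tensor, dim: int = -1) -> torch.Tensor:
    """Eq. (11): softmax over the magnitude, with the original phase
    reattached. The clamp guards against division by zero at z = 0."""
    mag = z.abs().clamp_min(1e-12)
    return torch.softmax(mag, dim=dim) * (z / mag)

def mag_softmax(z: torch.Tensor, dim: int = -1) -> torch.Tensor:
    """Eq. (12): softmax over the magnitude only; the phase is discarded."""
    return torch.softmax(z.abs(), dim=dim)
```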
### Complex-Valued Masking Layers

#### 2.6.1 Complex Ratio Mask (cRM) or Phase Preserving Sigmoid

Detailed in Eq. (23) of [26], the complex ratio mask (cRM) applies the traditional sigmoid function to the magnitude of the signal while leaving the phase information unchanged as

\[\texttt{ComplexRatioMask}(\mathbf{z})=\texttt{Sigmoid}(|\mathbf{z}|)\odot\frac{\mathbf{z}}{|\mathbf{z}|}, \tag{13}\]

where \(\mathbf{z}=\mathbf{x}+j\mathbf{y}\) and Sigmoid is the PyTorch real-valued sigmoid function.

#### 2.6.2 Magnitude Min-Max Normalization Layer

The min-max norm, which has proven useful in complex-valued data normalization [13], is another option for a complex-valued masking function:

\[\texttt{MagMinMaxNorm}(\mathbf{z})=\frac{\mathbf{z}-\mathbf{z}_{\text{min}}}{\mathbf{z}_{\text{max}}-\mathbf{z}_{\text{min}}}, \tag{14}\]

where \(\mathbf{z}=\mathbf{x}+j\mathbf{y}\) and \(\mathbf{z}_{\text{min}}\) and \(\mathbf{z}_{\text{max}}\) indicate the minimum and maximum of \(|\mathbf{z}|\).

### Complex-Valued Normalization Layers

Normalization layers are a crucial aspect of modern deep learning algorithms, facilitating improved convergence and model robustness. As discussed in [4], traditional standardization of complex-valued data is not sufficient to translate and scale the data to unit variance and zero mean. Rather, a whitening procedure is necessary to ensure a circular distribution with equal variance for the real and imaginary parts of the signal. The whitening algorithm is derived in greater detail in [4], but we employ the same procedure for both batch normalization and layer normalization.

### Complex-Valued Activation Layers

Complex-valued activation functions are a key element of CVNNs that differs significantly from real-valued neural networks. The popular real-valued activation layers (such as ReLU and GeLU) cannot be directly applied to complex-valued data without some modification. Complex-valued activation functions must take into account the two degrees of freedom inherent to complex-valued data, typically represented as real and imaginary parts or magnitude and phase. We highlight four categories of complex-valued activation layers:

* Split Type-A Activation Layers
* Split Type-B Activation Layers
* Fully Complex Activation Layers
* ReLU-Based Complex-Valued Activation Layers

Split _Type-A_ and _Type-B_ activation layers apply real-valued activation functions to either the real and imaginary parts or the magnitude and phase, respectively, of the input signal [2, 5]. Fully complex activation layers are entirely complex-valued, while ReLU-based complex-valued activation layers are the family of complex-valued activation functions that extend the ever-popular ReLU to the complex plane.

#### 2.8.1 Split Type-A Complex-Valued Activation Layers

_Type-A_ activation functions consist of two real-valued functions, \(G_{\mathbb{R}}(\cdot)\) and \(G_{\mathbb{I}}(\cdot)\), which are applied to the real and imaginary parts of the input tensor, respectively, as

\[G(\mathbf{z})=G_{\mathbb{R}}(\mathbf{x})+jG_{\mathbb{I}}(\mathbf{y}). \tag{15}\]

In most cases, \(G_{\mathbb{R}}(\cdot)=G_{\mathbb{I}}(\cdot)\); however, \(G_{\mathbb{R}}(\cdot)\) and \(G_{\mathbb{I}}(\cdot)\) can also be distinct functions. Table 3 details the Type-A activation functions included in complextorch. Additionally, a generalized Type-A activation function is included, allowing the user to adopt any set of \(G_{\mathbb{R}}(\cdot)\) and \(G_{\mathbb{I}}(\cdot)\) desired.
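A generalized split Type-A layer reduces to a few lines in PyTorch. This sketch (class and argument names are assumptions for illustration) applies arbitrary real-valued activations \(G_{\mathbb{R}}\) and \(G_{\mathbb{I}}\) to the real and imaginary parts per Eq. (15):

```python
import torch
import torch.nn as nn

class GeneralizedSplitTypeA(nn.Module):
    """Sketch of Eq. (15): G(z) = G_R(Re z) + j G_I(Im z)."""

    def __init__(self, g_real: nn.Module, g_imag: nn.Module):
        super().__init__()
        self.g_real, self.g_imag = g_real, g_imag

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return torch.complex(self.g_real(z.real), self.g_imag(z.imag))

# Example: the split tanh of Table 3
split_tanh = GeneralizedSplitTypeA(nn.Tanh(), nn.Tanh())
```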
#### 2.8.2 Polar Type-B Complex-Valued Activation Layers

Similarly, _Type-B_ activation functions consist of two real-valued functions, \(G_{||}(\cdot)\) and \(G_{\angle}(\cdot)\), which are applied to the magnitude (modulus) and phase (angle, argument) of the input tensor, respectively, as

\[G(\mathbf{z})=G_{||}(|\mathbf{z}|)\odot\exp(jG_{\angle}(\angle\mathbf{z})). \tag{16}\]

Table 4 details the Type-B activation functions included in complextorch. Where \(G_{\angle}(\cdot)\) is omitted, the phase information is unchanged and the activation is effectively a masking function as in Section 2.6. Additionally, a generalized Type-B activation function is included, allowing the user to adopt any set of \(G_{||}(\cdot)\) and \(G_{\angle}(\cdot)\) desired.

#### 2.8.3 Fully Complex Activation Layers

Fully complex activation layers employ activation functions specifically designed for complex-valued data and hence do not have a general form. Table 5 details the fully complex activation functions included in complextorch.

#### 2.8.4 ReLU-Based Complex-Valued Activation Layers

The ReLU is the most popular activation function in modern deep learning, and it has garnered significant attention in its extension to the complex domain. Most take a similar form to Type-A activation functions as in Section 2.8.1, operating on the real and imaginary parts of the input signal. However, some functions, like the Type-B modReLU and the fully complex Guberman ReLU (zReLU), apply ReLU-like operations in other ways. Some efforts have been made to develop insights into how ReLU-based complex-valued activation functions "activate" across different regions and quadrants of the complex plane [35, 32]. Table 6 details the ReLU-based activation functions included in complextorch, where \(\mathbf{z}=\mathbf{x}+j\mathbf{y}\).

### Complex-Valued Loss Functions

In this section, we overview some common complex-valued loss functions. Whereas we emphasize regression losses, complex-valued classification and other loss functions are further explored in [5, 2]. Similar to activation functions, two general types of loss functions have similar forms to Type-A and Type-B activations, operating on the real and imaginary parts or magnitude and phase, respectively.

\begin{table}
\begin{tabular}{c|c|c|c}
complextorch Activation Layer & \(G_{||}(|\mathbf{z}|)\) & \(G_{\angle}(\angle\mathbf{z})\) & Reference \\ \hline
CVPolarTanh & \(\tanh(|\mathbf{z}|)\) & - & Eq. (8) [27] \\
CVPolarSquash & \(\frac{|\mathbf{z}|^{2}}{1+|\mathbf{z}|^{2}}\) & - & Section III-C [27] \\
CVPolarLog & \(\ln(|\mathbf{z}|+1)\) & - & Section III-C [29] \\
modReLU & \(\text{ReLU}(|\mathbf{z}|+b)\) & - & Eq. (8) [30] \\
\end{tabular}
\end{table}
Table 4: Type-B activation functions.

\begin{table}
\begin{tabular}{c|c|c}
complextorch Activation Layer & \(G_{\mathbb{R}}(\mathbf{z})=G_{\mathbb{I}}(\mathbf{z})\) & Reference \\ \hline
CVSplitTanh & \(\tanh(\mathbf{z})\) & Eq. (15) [27] \\
CTanh & \(\tanh(\mathbf{z})\) & Eq. (15) [27] \\
CVSplitSigmoid & \(\sigma(\mathbf{z})\) & - \\
CSigmoid & \(\sigma(\mathbf{z})\) & - \\
CVSplitAbs & \(|\mathbf{z}|\) & Section III-C [28] \\
\end{tabular}
\end{table}
Table 3: Type-A activation functions.
\begin{table}
\begin{tabular}{c|c|c}
complextorch Activation Layer & \(G(\mathbf{z})\) & Reference \\ \hline
CVSigmoid & \(\frac{1}{1+\exp(-\mathbf{z})}\) & Eq. (71) [31] \\
zReLU & \(\begin{cases}\mathbf{z}&\text{if }\mathbf{x}\geq 0\text{ and }\mathbf{y}\geq 0\\ 0&\text{otherwise}\end{cases}\) & - \\
\end{tabular}
\end{table}
Table 5: Fully complex activation functions.

#### 2.9.1 Split Loss Functions

Split loss functions apply two real-valued loss functions to the real and imaginary parts of the estimated (\(\mathbf{x}\)) and ground truth (\(\mathbf{y}\)) labels as

\[\mathcal{L}(\mathbf{x},\mathbf{y})=\mathcal{L}_{\mathbb{R}}(\mathbf{x}_{\mathbb{R}},\mathbf{y}_{\mathbb{R}})+\mathcal{L}_{\mathbb{I}}(\mathbf{x}_{\mathbb{I}},\mathbf{y}_{\mathbb{I}}), \tag{17}\]

where the total loss is computed as the sum of the real and imaginary losses. Generally, \(\mathcal{L}_{\mathbb{R}}(\cdot,\cdot)=\mathcal{L}_{\mathbb{I}}(\cdot,\cdot)\); however, we include a generalized split loss function allowing the user to specify any combination of \(\mathcal{L}_{\mathbb{R}}(\cdot,\cdot)\) and \(\mathcal{L}_{\mathbb{I}}(\cdot,\cdot)\). Table 7 details the split loss functions included in complextorch.

#### 2.9.2 Polar Loss Functions

Similarly, polar loss functions apply two real-valued loss functions to the magnitude and phase of the estimated and ground truth labels as

\[G(\mathbf{x},\mathbf{y})=w_{||}G_{||}(|\mathbf{x}|,|\mathbf{y}|)+w_{\angle}G_{\angle}(\angle\mathbf{x},\angle\mathbf{y}), \tag{18}\]

where \(w_{||}\) and \(w_{\angle}\) are scalar weights applied based on _a priori_ understanding of the problem to scale the magnitude and phase losses, particularly as the phase loss will always be less than \(2\pi\) by definition. Tuning the loss weights may improve modeling performance. To the author's understanding, at the time of this paper, there have been no efforts to apply polar loss functions. However, they may, in conjunction with split loss functions, improve modeling performance by imposing additional loss on the phase accuracy of the algorithm.

#### 2.9.3 Other Complex-Valued Loss Functions

Several complex-valued loss functions are implemented and detailed in Table 8. Additionally, PerpLossSSIM (Eq. (5), Fig. 1 [38]) is implemented. Its mathematical formulation is omitted here but can be found in [38].

### Complex-Valued Manifold-Based Layers

In [16, 18], a complex-valued convolution operator offering similar equivariance properties to the spatial equivariance of the traditional real-valued convolution operator is introduced. By approaching the complex domain as a Riemannian homogeneous space consisting of the product of planar rotation and non-zero scaling, they define a convolution operator equivariant to phase shift and amplitude scaling. Although their paper shows promising results in reducing the number of parameters of a complex-valued network for several problems, their work has not gained mainstream support. However, some initial work has shown significant promise in reducing model sizes and improving modeling capacity

\begin{table}
\begin{tabular}{c|c|c}
complextorch Activation Layer & \(G(\mathbf{z})\) & Reference \\ \hline
CVSplitReLU & \(\text{ReLU}(\mathbf{x})+j\text{ReLU}(\mathbf{y})\) & Eq. (5) [36] \\
CReLU & \(\text{ReLU}(\mathbf{x})+j\text{ReLU}(\mathbf{y})\) & Eq. (5) [36] \\
CPReLU & \(\text{PReLU}(\mathbf{x})+j\text{PReLU}(\mathbf{y})\) & Eq. (2) [35] \\
\end{tabular}
\end{table}
Table 6: ReLU-based activation functions.
\begin{table}
\begin{tabular}{c|c}
complextorch Loss Function & \(\mathcal{L}_{\mathbb{R}}(\cdot,\cdot)=\mathcal{L}_{\mathbb{I}}(\cdot,\cdot)\) \\ \hline
SplitL1 & \(\text{L1}(\cdot,\cdot)\) \\
SplitMSE & \(\text{MSE}(\cdot,\cdot)\) \\
SplitSSIM & \(\text{SSIM}(\cdot,\cdot)\) \\
\end{tabular}
\end{table}
Table 7: Split loss functions.

\begin{table}
\begin{tabular}{c|c|c}
complextorch Loss Function & \(\mathcal{L}(\mathbf{x},\mathbf{y})\) & Reference \\ \hline
CVQuadError & \(\frac{1}{2}\text{sum}(|\mathbf{x}-\mathbf{y}|^{2})\) & Eq. (11) [37] \\
CVFourthPowError & \(\frac{1}{2}\text{sum}(|\mathbf{x}-\mathbf{y}|^{4})\) & Eq. (12) [37] \\
CVCauchyError & \(\frac{1}{2}\text{sum}(c^{2}/2\,\ln(1+|\mathbf{x}-\mathbf{y}|^{2}/c^{2}))\) & Eq. (13) [37] \\
CVLogCoshError & \(\text{sum}(\ln(\cosh(|\mathbf{x}-\mathbf{y}|^{2})))\) & Eq. (14) [37] \\
CVLogError & \(\text{sum}(|\ln(\mathbf{x})-\ln(\mathbf{y})|^{2})\) & Eq. (10) [3] \\
\end{tabular}
\end{table}
Table 8: Other complex-valued loss functions.

for smaller models [39, 40, 41]. Incorporating manifold-based complex-valued deep learning is a promising research area for future efforts. For the full derivation and alternative implementations, please refer to [16, 17, 18]. As the authors mention in the final bullet point of Section IV-A1,

* "If \(d\) is the manifold distance in (2) for the Euclidean space that is also Riemannian, then wFM has exactly the weighted average as its closed-form solution. That is, our wFM convolution on the Euclidean manifold is reduced to the standard convolution, although with the additional convexity constraint on the weights."

Hence, the implementation closely follows the conventional convolution operator with the exception of the weight normalization. We would like to note that the weight normalization, although consistent with the authors' implementation, lacks adequate explanation in the literature and could be improved for further clarity. complextorch contains implementations for 1-D and 2-D versions of the proposed wFM-based convolution operator introduced in [16, 18], dubbed wFMConv1d and wFMConv2d, respectively.

### Complex-Valued Pooling Layers

Complex-valued average pooling can be computed similarly to real-valued pooling, where the average is computed over the real and imaginary parts of the signal separately as

\[\texttt{CVAdaptiveAvgPoolNd}(\mathbf{z})=\texttt{AdaptiveAvgPoolNd}(\mathbf{x})+j\texttt{AdaptiveAvgPoolNd}(\mathbf{y}), \tag{19}\]

where \(\mathbf{z}=\mathbf{x}+j\mathbf{y}\). We include implementations for CVAdaptiveAvgPool1d, CVAdaptiveAvgPool2d, and CVAdaptiveAvgPool3d.

### Complex-Valued Dropout Layers

Similarly, complex-valued dropout can be computed as real-valued dropout applied to the real and imaginary parts of the signal separately as

\[\texttt{CVDropout}(\mathbf{z})=\texttt{Dropout}(\mathbf{x})+j\texttt{Dropout}(\mathbf{y}), \tag{20}\]

where \(\mathbf{z}=\mathbf{x}+j\mathbf{y}\).

## 3 Conclusion

In this paper, we introduced a PyTorch wrapper for complex-valued neural network modeling. The proposed framework enables rapid development of deep learning models for signal processing and signal understanding tasks relying on complex-valued data. We detailed the implementation of deep learning layers spanning convolution, linear, activation, attention, and loss functions.
2301.00636
New Designed Loss Functions to Solve Ordinary Differential Equations with Artificial Neural Network
This paper investigates the use of artificial neural networks (ANNs) to solve differential equations (DEs) and the construction of loss functions that satisfy both a differential equation and its initial/boundary conditions. In Section 2, the loss function is generalized to the \(n^\text{th}\) order ordinary differential equation (ODE). Other methods of construction are examined in Section 3 and applied to three different models to assess their effectiveness.
Xiao Xiong
2022-12-29T11:26:31Z
http://arxiv.org/abs/2301.00636v1
# New Designed Loss Functions to Solve Ordinary Differential Equations with Artificial Neural Network

###### Abstract

This paper investigates the use of artificial neural networks (ANNs) to solve differential equations (DEs) and the construction of loss functions that satisfy both a differential equation and its initial/boundary conditions. In Section 2, the loss function is generalized to the \(n^{\text{th}}\) order ordinary differential equation (ODE). Other methods of construction are examined in Section 3 and applied to three different models to assess their effectiveness.

**Keywords:** loss function; artificial neural network; ordinary differential equations; models; function reconstruction

## 1 Introduction

Differential equations are used in the modeling of various phenomena in academic fields. Most differential equations do not have analytical solutions and are instead solved using domain-discretization methods[1][2][3] such as boundary elements, finite differences, or finite volumes to obtain approximate solutions. However, discretization of the domain into mesh points is only practical for low-dimensional differential equations on regular domains. Furthermore, approximate solutions at points other than mesh points must be obtained through additional techniques such as interpolation. Monte Carlo methods and radial basis functions[4] have also been proposed as alternatives for solving differential equations without the need for mesh discretization. These methods allow for the easy generation of collocation points within the domain, but they are not as stable or efficient as mesh-based methods. In this article, we introduce the use of artificial neural networks (ANNs) as an alternative method for solving differential equations. This approach does not require complex meshing and can be used as a universal function approximator[5] to produce a continuous and differentiable solution over the entire domain. To obtain an exact solution for a differential equation, both the main equation and the constraint equations (initial/boundary conditions) must be taken into account. The key challenge is figuring out how to use a single network to satisfy these equations at the same time. One method is the DGM algorithm[6] shown in Figure 1, which minimizes the direct sum of three individual losses from the main differential equation, the initial conditions, and the boundary conditions in a single neural network.

Figure 1: DGM algorithm for solving PDEs

However, the solution obtained through this algorithm is not an accurate approximation to the solution of the main and constraint equations, as summing the losses allows the individual terms to degrade one another's accuracy. Lagaris[7] proposed a method in which the loss of the neural network can be reconstructed to obtain an exact solution to both the differential equation and its constraint conditions. The details of this method will be introduced and modified in the next section, where the general formula of the loss functions for ordinary differential equations (ODEs) of any order is also established. In Section 3, different forms of loss functions are designed and applied to three practical models, with related code on GitHub. Some possible future works based on this paper are provided in the final section.

## 2 Construction of Loss Function

This section explains the concept of differential equations and the process of solving them using multi-layer perceptrons.
It also describes how to construct loss functions for these equations during the training of neural networks, including examples for first- and second-order ODEs. Finally, the section discusses how the formula for the loss function can be generalized to \(n^{\text{th}}\)-order ODEs.

Firstly, the definition of a general differential equation is

\[F(\mathbf{x},u,Du,D^{2}u,...,D^{m}u)=0,\quad\mathbf{x}\in\Omega\subset\mathbb{R}^{n} \tag{1}\]

where \(u\) is the unknown solution to be determined and

\[D^{m}u=\left\{\frac{\partial^{|\alpha|}u}{\partial x_{1}^{\alpha_{1}}\dots\partial x_{n}^{\alpha_{n}}}\;\middle|\;\alpha\in\mathbb{N}^{n},\;|\alpha|=m\right\}\]

The inputs to the neural network for solving the differential equation are discretized points of the domain \(\Omega\). The differential equation then becomes a system of equations \(F(\mathbf{x}_{i},u(\mathbf{x}_{i}),Du,D^{2}u,...,D^{m}u)=0\) for all \(\mathbf{x}_{i}\in\hat{\Omega}\). When a specific neural network is trained, \(u_{NN}(\mathbf{x},\mathbf{p})\) is used to represent the output, where \(\mathbf{p}\) are weights and biases and \(\mathbf{x}\) are inputs. The general loss used for gradient descent is \(\text{min}_{\mathbf{p}}\sum_{\mathbf{x}_{i}\in\hat{\Omega}}(F(\mathbf{x}_{i},u_{NN}(\mathbf{x}_{i}),Du_{NN},D^{2}u_{NN},...,D^{m}u_{NN}))^{2}\), subject to initial and boundary conditions. In Lagaris's approach[7] mentioned in Section 1, the solution \(u_{NN}(\mathbf{x},\mathbf{p})\) in the loss function is constructed to satisfy the differential equation and its initial or boundary conditions as

\[u_{NN}(\mathbf{x},\mathbf{p})=A(\mathbf{x})+G(\mathbf{x},N(\mathbf{x},\mathbf{p})) \tag{2}\]

where N is the output of a feed-forward neural network, the term \(A(\mathbf{x})\) corresponds to the constraint conditions, and the term G satisfies the differential equation. After implementing the proposed method, solving certain differential equations becomes an unconstrained problem that only involves a single equation, which means that we only need to calculate a single loss in the neural network, compared with the DGM algorithm, to obtain the solution. Note that the gradient computation in this neural network process involves derivatives of the output with respect to any of its inputs (used in the calculation of losses) and its parameters (used in gradient descent). The procedure of gradient computation is derived in [7].

### First order ODE

To specialize equation (2) to the first-order ODE

\[\frac{du}{dt}=f(t,u) \tag{3}\]

with an initial condition \(u(a)=A\), we let

\[u_{NN}(t)=A+(t-a)N(t,\mathbf{p}) \tag{4}\]

in which \(u_{NN}(a)=A\); thus \(u_{NN}\) satisfies the initial condition. After generating n inputs \(t_{i}\) in the domain, we get n corresponding outputs by training a feed-forward neural network with the same weights and biases for each input value in a single epoch. Then the loss we need to minimize for gradient descent is \(L[\mathbf{p}]=\sum_{i}\left\{\frac{du_{NN}(t_{i})}{dt}-f(t_{i},u_{NN}(t_{i}))\right\}^{2}\).
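To make the procedure concrete, here is a minimal PyTorch sketch of training the trial solution (4) for a first-order ODE. The example ODE, the network architecture, and the hyperparameters are illustrative assumptions; the derivative \(du_{NN}/dt\) is obtained with automatic differentiation:

```python
import torch

# Example ODE (an assumption for illustration): u' = -u with u(0) = 1
a, A = 0.0, 1.0
f = lambda t, u: -u

net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for epoch in range(5000):
    t = torch.linspace(a, 2.0, 50).unsqueeze(1).requires_grad_(True)
    u = A + (t - a) * net(t)          # Eq. (4): u_NN(a) = A by construction
    du_dt, = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)
    loss = ((du_dt - f(t, u)) ** 2).sum()   # the loss L[p] from the text
    opt.zero_grad(); loss.backward(); opt.step()
```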
### Second order ODE

There are three loss construction cases corresponding to three different constraint condition cases for the second-order ODE:

\[\frac{d^{2}u}{dx^{2}}=f(x,u,\frac{du}{dx}) \tag{5}\]

The constraints in the first case are \(u(a)=A\) and \(\frac{du}{dx}\big{|}_{x=b}=B\), which results in the constructed function

\[u_{NN}(x)=A+B(x-a)+(x-a)(x-b)^{2}N(x,\mathbf{p}) \tag{6}\]

\(u^{\prime}_{NN}(b)=B\) can be verified by differentiating the above function with respect to x, giving \(u^{\prime}_{NN}(x)=B+[(x-b)^{2}+2(x-a)(x-b)]N(x,\mathbf{p})+(x-a)(x-b)^{2}N^{\prime}(x,\mathbf{p})\), in which both correction terms vanish at \(x=b\). The second case involves two conditions, \(u(a)=A\) and \(u(b)=B\), while the third case contains conditions on \(\frac{du}{dx}\), namely \(\frac{du}{dx}\big{|}_{x=a}=A\) and \(\frac{du}{dx}\big{|}_{x=b}=B\), with corresponding parts of the loss function

\[u_{NN}(x)=A\Big{(}\frac{x-b}{a-b}\Big{)}+B\Big{(}\frac{x-a}{b-a}\Big{)}+(x-a)(x-b)N(x,\mathbf{p}) \tag{7}\]

and

\[u_{NN}(x)=\frac{A(x-b)^{2}}{2(a-b)}+\frac{B(x-a)^{2}}{2(b-a)}+(x-a)^{2}(x-b)^{2}N(x,\mathbf{p}). \tag{8}\]

Finally, the loss function derived from these constructed \(u_{NN}\) functions for the second-order ODE is built in the same way as in the first-order case.

### \(n^{\text{th}}\) order ODE

After clarifying how to construct functions in the \(1^{\text{st}}\) and \(2^{\text{nd}}\) order cases, I extend the formula to the \(n^{\text{th}}\) order ODE. The general \(n^{\text{th}}\) order ODE is written as

\[\frac{d^{n}u}{dx^{n}}=f(x,u,\frac{du}{dx},...,\frac{d^{n-1}u}{dx^{n-1}}) \tag{9}\]

with different constraint conditions listed as follows:

#### Case 1

In this case, conditions on \(u\) are considered, namely

\[u(x_{i})=C_{i}\quad\text{for}\quad\{i\in\mathbb{Z}|1\leq i\leq n\} \tag{10}\]

where \(x_{i}\neq x_{j}\) if \(i\neq j\). We reconstruct the solution function as

\[u_{NN}(x)=\sum_{i=1}^{n}\bigg{(}C_{i}\prod_{j=1,j\neq i}^{n}\frac{x-x_{j}}{x_{i}-x_{j}}\bigg{)}+\prod_{i=1}^{n}(x-x_{i})N(x,\mathbf{p}) \tag{11}\]

#### Case 2

The conditions on the \((n-1)^{\text{th}}\) order derivatives are given in the forms

\[\frac{d^{n-1}u}{dx^{n-1}}\bigg{|}_{x=x_{i}}=C_{i}\quad\text{for}\quad\{i\in\mathbb{Z}|1\leq i\leq n\} \tag{12}\]

which leads to the constructed solution

\[u_{NN}(x)=\sum_{i=1}^{n}C_{i}\frac{M_{i}(x)}{\frac{d^{n-1}M_{i}}{dx^{n-1}}\big{|}_{x=x_{i}}}+\prod_{i=1}^{n}(x-x_{i})^{n}N(x,\mathbf{p}) \tag{13}\]

where \(M_{i}(x)=\prod_{j=1,j\neq i}^{n}(x-x_{j})^{n}\).

#### Case 3

Case 3 involves conditions on derivatives up to order \(n-1\):

\[\frac{d^{i}u}{dx^{i}}\bigg{|}_{x=x_{i}}=C_{i}\quad\text{for}\quad\{i\in\mathbb{Z}|0\leq i\leq n-1\} \tag{14}\]

We first define a series of functions \(M_{i}(x)=M_{i-1}(x)\,(x-x_{i-1})^{i}\) with \(M_{0}(x)=1\) and the coefficients \(N_{i}=\bigg{(}C_{i}-\sum_{j=0}^{i-1}(M_{j}^{(i)}(x)|_{x=x_{i}}\times N_{j})\bigg{)}\times\frac{1}{M_{i}^{(i)}|_{x=x_{i}}}\) with \(N_{0}=C_{0}\). Then the reconstructed part of the loss function becomes

\[u_{NN}=\sum_{i=0}^{n-1}N_{i}\times M_{i}(x)+M_{n}N(x,\mathbf{p}) \tag{15}\]

#### Case 4: General Condition Case

In this part, I will introduce the general formula for the general condition cases. The format of the conditions is

\[\frac{d^{i}u}{dx^{i}}\bigg{|}_{x=x_{i\alpha}}=C_{i\alpha}\quad\text{with}\quad\{i\in\mathbb{Z}|0\leq i\leq n-1\}\quad\text{and}\quad\{\alpha\in\mathbb{Z}|1\leq\alpha\leq\alpha_{i}\} \tag{16}\]

where \(\sum_{i=0}^{n-1}\alpha_{i}=n\). The derivative order of the conditions in this scenario can vary, and conditions of the same order can be imposed at multiple points.
However, the total number of conditions must be equal to n. Similar to Case 3, functions and coefficients are defined in advance by

\[M_{i\alpha}(x)=(x-x_{i\alpha})^{i+1},\quad M_{i}(x)=\prod_{\alpha=1}^{\alpha_{i}}M_{i\alpha},\quad F_{i\alpha}(x)=\frac{1}{M_{i\alpha}}\prod_{j=0}^{i}M_{j}\]

and

\[N_{0\alpha}=C_{0\alpha}\times\frac{1}{F_{0\alpha}(x)|_{x=x_{0\alpha}}},\quad N_{i\alpha}=\bigg{(}C_{i\alpha}-\sum_{j=0}^{i-1}\sum_{\beta=1}^{\alpha_{j}}N_{j\beta}F_{j\beta}^{(i)}|_{x=x_{i\alpha}}-\sum_{\beta=1,\beta\neq\alpha}^{\alpha_{i}}N_{i\beta}F_{i\beta}^{(i)}|_{x=x_{i\alpha}}\bigg{)}\times\frac{1}{F_{i\alpha}^{(i)}(x)|_{x=x_{i\alpha}}}\]

Finally, the general reconstructed solution function formula for the \(n^{\text{th}}\) order ODE with general constraint conditions is

\[u_{NN}=\sum_{i=0}^{n-1}\sum_{\alpha=1}^{\alpha_{i}}N_{i\alpha}\times F_{i\alpha}(x)+\prod_{i=0}^{n-1}M_{i}(x)N(x,\mathbf{p}) \tag{17}\]

### System of K ODEs

Now, a system of ODEs involving K equations is explored. We start from the first-order ODE system

\[\frac{du_{k}}{dx}=f_{k}(x,u_{1},...,u_{K})\quad\text{for}\quad k\in 1,2,...,K \tag{18}\]

together with the conditions \(u_{k}(x_{k})=C_{k}\). In the ANN approach for solving the ODE system, a total of K multi-layer perceptrons work in parallel to process the K equations. For each first-order equation, we get a reconstructed function

\[u_{NN_{k}}(x)=C_{k}+(x-x_{k})N_{k}(x,\mathbf{p}_{k}) \tag{19}\]

and the loss for each neural network is \(L[\mathbf{p}_{k}]=\sum_{i}\bigg{(}\frac{du_{NN_{k}}}{dx}\bigg{|}_{x=x_{i}}-f_{k}(x_{i},u_{NN_{1}},u_{NN_{2}},...,u_{NN_{K}})\bigg{)}^{2}\). To generalize, the system of K \(n^{\text{th}}\)-order ODEs is written as

\[\frac{d^{n}u_{k}}{dx^{n}}=f_{k}(x,S_{0},...,S_{n-1})\quad\text{for}\quad k\in 1,2,...,K \tag{20}\]

with the notation \(S_{i}=\frac{d^{i}u_{1}}{dx^{i}},\frac{d^{i}u_{2}}{dx^{i}},...,\frac{d^{i}u_{K}}{dx^{i}}\) for \(i\in 0,...,n-1\). The constraint conditions are \(\frac{d^{i}u_{k}}{dx^{i}}\bigg{|}_{x=x_{k\alpha}}=C_{k\alpha}\), where the \(i\)'s are integers in the range \(0\leq i\leq n-1\) and the \(\alpha\)'s are integers in the range \(1\leq\alpha\leq\alpha_{ki}\) with \(\sum_{i=0}^{n-1}\alpha_{ki}=n\). The reconstructed function for each equation in the system is exactly the same as for the \(n^{\text{th}}\) order single ODE in equation (17).
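A minimal sketch of the parallel-network setup for a first-order system per Eqs. (18)-(19) follows; the example system, architecture, and hyperparameters are assumptions for illustration:

```python
import torch

# Example (assumed): u1' = u2, u2' = -u1 with u1(0) = 1, u2(0) = 0
C, x0, K = [1.0, 0.0], 0.0, 2
f = [lambda x, u1, u2: u2, lambda x, u1, u2: -u1]

nets = [torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                            torch.nn.Linear(32, 1)) for _ in range(K)]
opt = torch.optim.Adam([p for n in nets for p in n.parameters()], lr=1e-3)

for epoch in range(5000):
    x = torch.linspace(x0, 3.0, 50).unsqueeze(1).requires_grad_(True)
    u = [C[k] + (x - x0) * nets[k](x) for k in range(K)]  # Eq. (19)
    loss = x.new_zeros(())
    for k in range(K):
        du, = torch.autograd.grad(u[k], x, torch.ones_like(u[k]),
                                  create_graph=True)
        loss = loss + ((du - f[k](x, u[0], u[1])) ** 2).sum()
    opt.zero_grad(); loss.backward(); opt.step()
```

Note that each network's residual couples to the outputs of all K networks, mirroring the loss \(L[\mathbf{p}_{k}]\) above.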
The average of the 100 lowest losses within a certain number of epochs will be calculated to evaluate their performance. Additionally, the trends of loss value decreasing during epochs will be plotted and compared. Finally, the solution of the reconstructed function with the lowest loss will be compared with the analytical solution to state the accuracy of the ANN approach. ### Newton's Law of Cooling #### Model Description Newton's law of cooling[9] states that the rate of change of the temperature of an object is proportional to the difference between its own temperature and the temperature of its surroundings. This relationship is described by the equation: \[\frac{dT}{dt}=r(T_{env}-T(t)) \tag{21}\] where * \(\frac{dT}{dt}\) is the rate of change of temperature with respect to time (also known as the cooling rate) * \(T\) is the temperature of an object * \(T_{env}\) is the temperature of the environment * \(r\) is the cooling coefficient, which is a constant that depends on the characteristics of the object and the environment it is in In this study, we will predict how the temperature of the boiling water will change over time, given an initial temperature of \(100^{\circ}C\) and a cooling coefficient of 0.5, in an environment with a temperature of \(10^{\circ}C\). The analytical solution to the initial-value problem described above, solved using separation of variables, is as follows: \[T(t)=10+90e^{-0.5t} \tag{22}\] #### Evaluation of Function Performance In this model, the specific neural network was trained for 200000 epochs in order to obtain the average losses shown in Table 2. The loss trends are also plotted in Figure 2. Based on the data presented in the table and \begin{table} \begin{tabular}{||c|c||} \hline Function name & Formula \\ \hline \hline Polynomial & \(u_{NN}(x)=A+c(x-a)N(x,\mathbf{p})\) \\ \hline Exponential & \(u_{NN}(x)=A+(1-e^{-(x-a)})N(x,\mathbf{p})\) \\ \hline Hyperbolic & \(u_{NN}(x)=A+\frac{e^{(t-a)}-e^{(a-t)}}{e^{(t-a)}-e^{(a-t)}}N(x,\mathbf{p})\) \\ \hline Logarithmic & \(u_{NN}(x)=A+log_{c}(t+1-a)N(x,\mathbf{p})\) \\ \hline Logistic & \(u_{NN}(x)=A+(\frac{1}{1+e^{(t-a)}}-\frac{1}{2})N(x,\mathbf{p})\) \\ \hline Sigmoid & \(u_{NN}(x)=A+(\frac{t-a}{1+e^{(t-a)}})N(x,\mathbf{p})\) \\ \hline Softplus & \(u_{NN}(x)=A+(ln(1+e^{(t-a)})-ln(2))N(x,\mathbf{p})\) \\ \hline \end{tabular} \end{table} Table 1: 7 forms of solution reconstruction graph, it can be seen that the exponential function performs the best in this model due to its ability to quickly reach the lower loss value and to achieve the lowest at the end of epochs. The logistic function also performs well in this model and reaches its own lowest loss value at an early point in the epochs. Finally, we compare our ANN solution based on the exponential function and the analytical solution in Figure 3, which demonstrates the effectiveness of the ANN approach. ### Motor Suspension System #### Model Description A motor suspension system[10] is a mechanical system that is used to support and isolate the motorcycle wheels from the rest of a vehicle. A mass-spring-damper model can be used to model the behavior of a motor suspension system by representing the wheels of a motorcycle as a mass, the suspension system as a spring, any damping effects as a damper, and the shock absorber attached to the suspension system as a fixed place. 
When the motorcycle wheels are resting on the ground and the rider applies extra force to them by sitting on the motorcycle, the spring becomes compressed, bringing the system into an equilibrium position. What we are investigating now is the motion (displacement from the equilibrium position) of the wheels after the motorcycle makes a jump. The modeling differential equation is

\[m\frac{d^{2}x}{dt^{2}}+c\frac{dx}{dt}+kx=0 \tag{23}\]

where

* \(m\) is the mass of the motorcycle wheels
* \(c\) is the damping coefficient
* \(k\) is the spring constant
* \(x\) is the displacement of the motor from its equilibrium position
* \(\frac{dx}{dt}\) is the velocity of the motor when hitting the ground
* \(\frac{d^{2}x}{dt^{2}}\) is the acceleration of the motor

In the English system, the total mass of the wheels and a rider is chosen to be \(m=12\) slugs, the spring constant is \(k=1152\), the damping constant is \(c=240\), the initial displacement is \(x_{0}=\frac{1}{3}\) ft, and the initial velocity is \(\frac{dx_{0}}{dt}=10\) ft/sec.

\begin{table}
\begin{tabular}{||c|c|c|c|c|c|c|c||}
\hline
 & polynomial & exponential & hyperbolic & logarithm & logistic & sigmoid & softplus \\ \hline \hline
Average Loss & 2.19e-05 & 1.55e-07 & 1.03e-06 & 1.75e-06 & 6.67e-07 & 1.84e-05 & 1.78e-05 \\ \hline
\end{tabular}
\end{table}
Table 2: Average losses of 7 constructed functions in the model of Newton's law of cooling

Figure 2: Comparing losses of 7 constructed functions in the model of Newton's law of cooling

Figure 3: Comparing ANN and analytical solutions in the model of Newton's law of cooling

Solving the above differential equation using a characteristic equation gives an analytical solution

\[x(t)=3.5e^{-8t}-\frac{19}{6}e^{-12t} \tag{24}\]

#### Evaluation of Function Performance

In this model, the neural network is run for 200000 epochs, giving the average lowest losses for the seven reconstructed functions in Table 3. According to the table, the logarithmic and polynomial functions have lower losses compared to the other functions shown. This is also evident in Figure 4. Figure 5 demonstrates that the numerical solution produced by our artificial neural network aligns perfectly with the analytical solution in this model.

#### Evaluation of polynomial functions with different coefficients

In this section, we will investigate how the coefficient of the polynomial function impacts the loss performance of a neural network in this motor suspension system. Notice in Figure 6 that when the coefficients are very small or large, the loss that is achieved after completing all epochs tends to be higher. The impact of coefficient values between 5 and 30 is roughly the same. In other words, we can select any coefficient within the above range in order to obtain an accurate solution.

### Home Heating

#### Model Description

The heat transfer in a house can be modeled using a system of first-order ordinary differential equations[11]. This is useful for understanding how the temperature of a house changes over time and for designing heating systems that maintain a comfortable temperature inside the house. Here, we examine the variations in temperature of the attic, basement, and insulated main floor of a house using Newton's cooling law. It is assumed that the temperature outside, in the attic, and on the main floor is constantly \(35^{\circ}F\) during the day in the winter, and the temperature in the basement is \(45^{\circ}F\) before the heater is turned on. When a heater starts to work at noon (t=0) and is set to \(100^{\circ}F\), it increases the temperature by \(20^{\circ}F\) per hour.
This could be modeled by the following system of differential equations: \[\frac{dx}{dt}=k_{0}(T_{earth}-x)+k_{1}(y-x) \tag{25}\] \[\frac{dy}{dt}=k_{1}(x-y)+k_{2}(T_{out}-y)+k_{3}(z-y)+Q_{heater} \tag{26}\] \[\frac{dz}{dt}=k_{3}(y-z)+k_{4}(T_{out}-z) \tag{27}\] where * \(x\), \(y\), and \(z\) represent the temperatures of the basement, main living area, and attic, respectively * \(T_{earth}\) and \(T_{out}\) represent the initial temperatures of the basement and outside, respectively * the variable \(Q_{heater}\) represents the heating rate of the heater * the constants \(k_{0},\dots,k_{4}\) are cooling constants that describe the rate of heat exchange between adjacent areas over time. For the model described above, the initial temperatures at noon (t = 0) are \(T_{earth}=x(0)=45\), \(T_{out}=y(0)=z(0)=35\) and \(Q_{heater}=20\). We set the cooling constants as \(k_{0}\) = \(\frac{1}{2}\), \(k_{1}\) = \(\frac{1}{2}\), \(k_{2}\) = \(\frac{1}{4}\), \(k_{3}\) = \(\frac{1}{4}\), \(k_{4}\) = \(\frac{3}{4}\). #### Evaluation of Function Performance This time, we train the neural network for 20000 epochs to achieve a low loss value at the end. The average of the 100 lowest losses, and the way the losses decrease during the training process for the different constructed functions, are displayed in Table 4 and Figure 7. As shown in the table and graph, the exponential, logarithmic, and hyperbolic functions all have small losses, with the hyperbolic and exponential functions converging the fastest in the first 5000 epochs. In Figure 8, the analytical solution of this system of ordinary differential equations is calculated using the "odeint" Python module, which is then compared to the solution obtained using artificial neural networks. #### Evaluation of logarithmic functions with different bases In this section, the effect of the logarithm base on the loss performance of the ANN approach in the home heating model will be explored. To clearly demonstrate the effect of the base on the performance, we trained the neural network for 50000 epochs and displayed the results in Figure 9. It can be observed that the convergence speed is faster when the base is smaller in the first 10000 epochs. Additionally, the logarithm with base 4 results in the lowest loss value in this model, thus being the best choice of base. \begin{table} \begin{tabular}{||c|c|c|c|c|c|c|c||} \hline & polynomial & exponential & hyperbolic & logarithm & logistic & sigmoid & softplus \\ \hline \hline Average Loss & 3.41e-04 & 1.23e-06 & 1.19e-05 & 4.48e-06 & 2.36e-04 & 2.69e-03 & 2.12e-03 \\ \hline \end{tabular} \end{table} Table 4: Average losses of 7 constructed functions in the Home Heating model Figure 6: Comparing losses of polynomial functions with different coefficients ## 4 Conclusion and Future Research With the artificial neural network approach, solving differential equations becomes more accurate and efficient due to the easy generation of domain points and the good approximation capabilities of the neural network. In this paper, the solution function is reconstructed to satisfy the initial/boundary conditions during the artificial neural network process, transforming the constrained problem into an unconstrained one and resulting in a good approximation of the solution. This paper explains how to reconstruct the solution function and extends the construction formula to \(n^{\text{th}}\) order ODEs. In addition, the different forms of construction are tested in three realistic models to evaluate their effectiveness. 
The general \(n^{\text{th}}\)-order formula is currently based only on the polynomial form of reconstruction for ordinary differential equations. As future work, the other six forms of reconstruction mentioned in Section 3 could be extended to \(n^{\text{th}}\)-order ODEs. Meanwhile, a general construction formula for partial differential equations could be developed to fit a wider range of models in various academic fields.
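To make the reconstruction procedure above concrete, the following is a minimal sketch, not the authors' implementation, of the ANN approach applied to the Newton's-law-of-cooling model, using the exponential form of reconstruction from Table 1 with \(a=0\) and \(A=T(0)=100\). The network size, optimizer, learning rate, and epoch count are illustrative assumptions chosen for brevity.

```python
# Minimal sketch of the reconstructed-solution approach for dT/dt = r(T_env - T),
# with the exponential form u_NN(t) = A + (1 - exp(-t)) N(t, p), so u_NN(0) = A.
import torch

torch.manual_seed(0)
r, T_env, A = 0.5, 10.0, 100.0                      # problem constants from the model above
t = torch.linspace(0.0, 10.0, 100).reshape(-1, 1).requires_grad_(True)  # domain points

net = torch.nn.Sequential(                          # N(t, p): an assumed small network
    torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1)
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def u(x):                                           # reconstructed solution
    return A + (1.0 - torch.exp(-x)) * net(x)

for epoch in range(5000):                           # far fewer epochs than the paper's 200000
    opt.zero_grad()
    T = u(t)
    dT = torch.autograd.grad(T, t, torch.ones_like(T), create_graph=True)[0]
    loss = torch.mean((dT - r * (T_env - T)) ** 2)  # unconstrained residual loss
    loss.backward()
    opt.step()

t_test = torch.linspace(0.0, 10.0, 5).reshape(-1, 1)
print(u(t_test).detach().squeeze())                 # ANN solution
print((10.0 + 90.0 * torch.exp(-0.5 * t_test)).squeeze())  # analytical solution (22)
```

Because the initial condition holds by construction, the loss only needs to penalize the residual of the differential equation, which is precisely the unconstrained formulation described in the conclusion above.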
2309.08474
VulnSense: Efficient Vulnerability Detection in Ethereum Smart Contracts by Multimodal Learning with Graph Neural Network and Language Model
This paper presents the VulnSense framework, a comprehensive approach to efficiently detect vulnerabilities in Ethereum smart contracts using a multimodal learning approach on graph-based and natural language processing (NLP) models. Our proposed framework combines three types of features from smart contracts comprising source code, opcode sequences, and control flow graph (CFG) extracted from bytecode. We employ Bidirectional Encoder Representations from Transformers (BERT), Bidirectional Long Short-Term Memory (BiLSTM) and Graph Neural Network (GNN) models to extract and analyze these features. The final layer of our multimodal approach consists of a fully connected layer used to predict vulnerabilities in Ethereum smart contracts. Addressing limitations of existing vulnerability detection methods relying on single-feature or single-model deep learning techniques, our method surpasses accuracy and effectiveness constraints. We assess VulnSense using a collection of 1,769 smart contracts derived from the combination of three datasets: Curated, SolidiFI-Benchmark, and Smartbugs Wild. We then make a comparison with various unimodal and multimodal learning techniques contributed by GNN, BiLSTM and BERT architectures. The experimental outcomes demonstrate the superior performance of our proposed approach, achieving an average accuracy of 77.96\% across all three categories of vulnerable smart contracts.
Phan The Duy, Nghi Hoang Khoa, Nguyen Huu Quyen, Le Cong Trinh, Vu Trung Kien, Trinh Minh Hoang, Van-Hau Pham
2023-09-15T15:26:44Z
http://arxiv.org/abs/2309.08474v1
VulnSense: Efficient Vulnerability Detection in Ethereum Smart Contracts by Multimodal Learning with Graph Neural Network and Language Model ###### Abstract This paper presents the VulnSense framework, a comprehensive approach to efficiently detect vulnerabilities in Ethereum smart contracts using a multimodal learning approach on graph-based and natural language processing (NLP) models. Our proposed framework combines three types of features from smart contracts comprising source code, opcode sequences, and control flow graph (CFG) extracted from bytecode. We employ Bidirectional Encoder Representations from Transformers (BERT), Bidirectional Long Short-Term Memory (BiLSTM) and Graph Neural Network (GNN) models to extract and analyze these features. The final layer of our multimodal approach consists of a fully connected layer used to predict vulnerabilities in Ethereum smart contracts. Addressing limitations of existing vulnerability detection methods relying on single-feature or single-model deep learning techniques, our method surpasses accuracy and effectiveness constraints. We assess VulnSense using a collection of 1,769 smart contracts derived from the combination of three datasets: Curated, SolidiFI-Benchmark, and Smartbugs Wild. We then make a comparison with various unimodal and multimodal learning techniques contributed by GNN, BiLSTM and BERT architectures. The experimental outcomes demonstrate the superior performance of our proposed approach, achieving an average accuracy of 77.96% across all three categories of vulnerable smart contracts. keywords: Vulnerability Detection, Smart Contract, Deep Learning, Graph Neural Networks, Multimodal + Footnote †: journal: Knowledge-Based Systems ## 1 Introduction The Blockchain keyword has become increasingly popular in the era of Industry 4.0, with many applications for a variety of purposes, both good and bad. For instance, in the field of finance, Blockchain is utilized to create new, faster, and more secure payment systems, examples of which include Bitcoin and Ethereum. However, Blockchain can also be exploited for money laundering, as it enables anonymous money transfers, as exemplified by cases like Silk Road [1]. The number of keywords associated with blockchain is growing rapidly, reflecting the increasing interest in this technology. A typical example is the smart contract deployed on Ethereum. Smart contracts are programmed in Solidity, which is a new language that has been developed in recent years. When deployed on a blockchain system, smart contracts often execute transactions related to cryptocurrency, specifically the ether (ETH) token. However, smart contracts still have many vulnerabilities, which have been pointed out by Zou et al. [2]. Alongside the immutable and transparent properties of Blockchain, the presence of vulnerabilities in smart contracts deployed within the Blockchain ecosystem enables attackers to exploit flawed smart contracts, thereby affecting the assets of individuals and organizations, as well as the stability of the Blockchain ecosystem. In more detail, the DAO attack [3] presented by Mehar et al. is a clear example of the severity of these vulnerabilities, as it resulted in significant losses of up to $50 million. To address these issues, Kushwaha et al. [4] conducted a research survey on the different types of vulnerabilities in smart contracts and provided an overview of the existing tools for detecting and analyzing these vulnerabilities. 
Developers have created a number of tools to detect vulnerabilities in smart contract source code, such as Oyente [5], Slither [4], Conkas [6], Mythril [7], Securify [8], etc. These tools use static and dynamic analysis to seek vulnerabilities, but they may not cover all execution paths, leading to false negatives. Additionally, exploring all execution paths in complex smart contracts can be time-consuming. Current endeavors in contract security analysis heavily depend on pre-defined rules established by specialists, a process that demands significant labor and lacks scalability. Meanwhile, the emergence of Machine Learning (ML) methods in the detection of vulnerabilities in software has also been explored. This is also applicable to smart contracts, where numerous tools and studies have been developed to identify security bugs, such as ESCORT by Lutz [9], ContractWard by Wang [10] and Qian [11]. The ML-based methods have significantly improved performance over static and dynamic analysis methods, as indicated in the study by Jiang [12]. However, the current studies do exhibit certain limitations, primarily centered around the utilization of only a singular type of feature from the smart contract as the input for ML models. To elaborate, it is noteworthy that a smart contract's representation and subsequent analysis can be approached through its source code, employing techniques such as NLP, as demonstrated in the study conducted by Khodadadi et al. [13]. Conversely, an alternative approach, as showcased by Chen et al. [14], involves the usage of the runtime bytecode of a smart contract published on the Ethereum blockchain. Additionally, Wang and colleagues [10] addressed vulnerability detection using opcodes extracted through the employment of the Solc tool [15] (Solidity compiler), based on either the contract's source code or bytecode. In practical terms, these methodologies fall under the categorization of unimodal or monomodal models, designed to exclusively handle one distinct type of data feature. Extensively investigated and proven beneficial in domains such as computer vision, natural language processing, and network security, these unimodal models do exhibit impressive performance characteristics. However, their inherent drawback lies in their limited perspective, resulting from their exclusive focus on singular data attributes, missing characteristics that could enable more in-depth analysis. This limitation has prompted the emergence of multimodal models, which offer a more comprehensive outlook on data objects. The works of Jabeen and colleagues [16], Tadas et al. [17], Nam et al. [18], and Xu [19] underscore this trend. Specifically, multimodal learning harnesses distinct ML models, each accommodating diverse input types extracted from an object. This approach facilitates the acquisition of holistic and intricate representations of the object, a concerted effort to surmount the limitations posed by unimodal models. By leveraging multiple input sources, multimodal models endeavor to enrich the understanding of the analyzed data objects, resulting in more comprehensive and accurate outcomes. Recently, multimodal vulnerability detection models for smart contracts have emerged as a new research area, combining different techniques to process diverse data, including source code, bytecode and opcodes, to enhance the accuracy and reliability of AI systems. 
Numerous studies have demonstrated the effectiveness of using multimodal deep learning models to detect vulnerabilities in smart contracts. For instance, Yang et al. [20] proposed a multimodal AI model that combines source code, bytecode, and execution traces to detect vulnerabilities in smart contracts with high accuracy. Chen et al. [13] proposed a new hybrid multimodal model called the HyMo Framework, which combines static and dynamic analysis techniques to detect vulnerabilities in smart contracts. Their framework uses multiple methods and outperforms other methods on several test datasets. Recognizing that these features accurately reflect smart contracts, and recognizing the potential of multimodal learning, we employ a multimodal approach to build a vulnerability detection tool for smart contracts called VulnSense. Different features can provide unique insights into vulnerabilities in smart contracts. Source code offers a high-level understanding of contract logic, bytecode reveals low-level execution details, and opcode sequences capture the execution flow. By fusing these features, the model can extract a richer set of features, potentially leading to more accurate detection of vulnerabilities. The main contributions of this paper are summarized as follows: * First, we propose a multimodal learning approach consisting of BERT, BiLSTM and GNN to analyze the smart contract under a multi-view strategy by leveraging the capability of NLP algorithms, corresponding to three types of features, including source code, opcodes, and CFG generated from bytecode. * Then, we extract and leverage three types of features from smart contracts to make a comprehensive feature fusion. More specifically, our smart contract representations, which are created from real-world smart contract datasets, including Smartbugs Curated, SolidiFI-Benchmark and Smartbugs Wild, can help the model capture semantic relationships of characteristics in the analysis phase. * Finally, we evaluate the performance of the VulnSense framework on real-world vulnerable smart contracts to indicate its capability of detecting security defects such as Reentrancy and Arithmetic on smart contracts. Additionally, we also compare our framework with unimodal models and other multimodal ones to prove the superior effectiveness of VulnSense. The remaining sections of this article are organized as follows. Section 2 gives a brief background of the applied components, and Section 3 introduces related works on smart contract vulnerability detection. Next, the methodology is discussed in Section 4. Section 5 describes the experimental settings and scenarios with the result analysis of our work. Finally, we conclude the paper in Section 6. ## 2 Background ### Bytecode of Smart Contracts Bytecode is a sequence of hexadecimal machine instructions generated from high-level programming languages such as C/C++, Python, and similarly, Solidity. In the context of deploying smart contracts using Solidity, bytecode serves as the compiled version of the smart contract's source code and is executed on the blockchain environment. Bytecode encapsulates the actions that a smart contract can perform. It contains statements and necessary information to execute the contract's functionalities. Bytecode is commonly derived from Solidity or other languages used in smart contract development. When deployed on Ethereum, bytecode is categorized into two types: creation bytecode and runtime bytecode. 1. 
**Creation Bytecode:** The creation bytecode runs only once during the deployment of the smart contract onto the system. It is responsible for initializing the contract's initial state, including initializing variables and constructor functions. Creation bytecode does not reside within the deployed smart contract on the blockchain network. 2. **Runtime Bytecode:** Runtime bytecode contains executable information about the smart contract and is deployed onto the blockchain network. Once a smart contract has been compiled into bytecode, it can be deployed onto the blockchain and executed by nodes within the network. Nodes execute the bytecode's statements to determine the behavior and interactions of the smart contract. Bytecode is highly deterministic and remains immutable after compilation. It provides participants in the blockchain network the ability to inspect and verify smart contracts before deployment. In summary, bytecode serves as a bridge between high-level programming languages and the blockchain environment, enabling smart contracts to be deployed and executed. Its deterministic nature and pre-deployment verifiability contribute to the security and reliability of smart contract implementations. ### Opcode of Smart Contracts Opcode in smart contracts refers to the executable machine instructions used in a blockchain environment to perform the functions of the smart contract. Opcodes are low-level machine commands used to control the execution process of the contract on a blockchain virtual machine, such as the Ethereum Virtual Machine (EVM). Each opcode represents a specific task within the smart contract, including logical operations, arithmetic calculations, memory management, data access, calling and interacting with other contracts in the Blockchain network, and various other tasks. Opcodes define the actions that a smart contract can perform and specify how the contract's data and state are processed. These opcodes are listed and defined in the bytecode representation of the smart contract. The use of opcodes provides flexibility and standardization in implementing the functionalities of smart contracts. Opcodes ensure consistency and security during the execution of the contract on the blockchain, and play a significant role in determining the behavior and logic of the smart contract. ### Control Flow Graph CFG is a powerful data structure in the analysis of Solidity source code, used to understand and optimize the control flow of a program extracted from the bytecode of a smart contract. The CFG helps determine the structure and interactions between code blocks in the program, providing crucial information about how the program executes and links elements in the control flow. Specifically, CFG identifies jump points and conditions in Solidity bytecode to construct a control flow graph. This graph describes the basic blocks and control branches in the program, thereby creating a clear understanding of the structure of the Solidity program. With CFG, we can identify potential issues in the program such as infinite loops, incorrect conditions, or security vulnerabilities. By examining control flow paths in CFG, we can detect logic errors or potential unwanted situations in the Solidity program. Furthermore, CFG supports the optimization of Solidity source code. By analyzing and understanding the control flow structure, we can propose performance and correctness improvements for the Solidity program. 
This is particularly crucial in the development of smart contracts on the Ethereum platform, where performance and security play essential roles. In conclusion, CFG is a powerful representation that allows us to analyze, understand, and optimize the control flow in Solidity programs. By constructing control flow graphs and analyzing the control flow structure, we can identify errors, verify correctness, and optimize Solidity source code to ensure performance and security. ## 3 Related work This section will review existing works on smart contract vulnerability detection, including conventional methods, single learning model and multimodal learning approaches. ### Static and dynamic method There are many efforts in vulnerability detection in smart contracts through both static and dynamic analysis. These techniques are essential for scrutinizing both the source code and the execution process of smart contracts to uncover syntax and logic errors, including assessments of input variable validity and string length constraints. Dynamic analysis evaluates the control flow during smart contract execution, aiming to unearth potential security flaws. In contrast, static analysis employs approaches such as symbolic execution and tainting analysis. Taint analysis, specifically, identifies instances of injection vulnerabilities within the source code. Recent research studies have prioritized control flow analysis as the primary approach for smart contract vulnerability detection. Notably, Kushwaha et al. [21] have compiled an array of tools that harness both static analysis techniques--such as those involving source code and bytecode--and dynamic analysis techniques via control flow scrutiny during contract execution. A prominent example of static analysis is Oyente [22], a tool dedicated to smart contract examination. Oyente employs control flow analysis and static checks to detect vulnerabilities like Reentrancy attacks, faulty token issuance, integer overflows, and authentication errors. Similarly, Slither [23], a dynamic analysis tool, utilizes control flow analysis during execution to pinpoint security vulnerabilities, encompassing Reentrancy attacks, Token Issuance Bugs, Integer Overflows, and Authentication Errors. It also adeptly identifies concerns like Transaction Order Dependence (TOD) and Time Dependence. Beyond static and dynamic analysis, another approach involves fuzzy testing. In this technique, input strings are generated randomly or algorithmically to feed into smart contracts, and their outcomes are verified for anomalies. Both Contract Fuzzer [24] and xFuzz [25] pioneer the use of fuzzing for smart contract vulnerability detection. Contract Fuzzer employs concolic testing, a hybrid of dynamic and static analysis, to generate test cases. Meanwhile, xFuzz leverages a genetic algorithm to devise random test cases, subsequently applying them to smart contracts for vulnerability assessment. Moreover, symbolic execution stands as an additional method for in-depth analysis. By executing control flow paths, symbolic execution allows the generation of generalized input values, addressing challenges associated with randomness in fuzzing approaches. This approach holds potential for overcoming limitations and intricacies tied to the creation of arbitrary input values. However, the aforementioned methods often have low accuracy and are not flexible between vulnerabilities as they rely on expert knowledge, fixed patterns, and are time-consuming and costly to implement. 
They also have limitations such as only detecting pre-defined fixed vulnerabilities and lacking the ability to detect new vulnerabilities. ### Machine Learning method ML methods often use features extracted from smart contracts and employ supervised learning models to detect vulnerabilities. Recent research has indicated that research groups primarily rely on supervised learning. The common approaches usually utilize feature extraction methods to obtain the CFG and Abstract Syntax Tree (AST) through dynamic and static analysis tools on source code or bytecode. These studies [26; 27] used a sequential model of Graph Neural Network to process opcodes and employed LSTM to handle the source code. Besides, a team led by Nguyen Hoang has developed Mando Guru [28], a GNN model to detect vulnerabilities in smart contracts. Their team applied additional methods such as Heterogeneous Graph Neural Network, Coarse-Grained Detection, and Fine-Grained Detection. They leveraged the control flow graph (CFG) and call graph (CG) of the smart contract to detect 7 vulnerabilities. Their approach is capable of detecting multiple vulnerabilities in a single smart contract. The results are represented as nodes and paths in the graph. Additionally, Zhang Lejun et al. [29] also utilized ensemble learning to develop a 7-layer convolutional model that combined various neural network models such as CNN, RNN, RCN, DNN, GRU, Bi-GRU, and Transformer. Each model was assigned a different role in each layer of the model. ### Multimodal Learning The HyMo Framework [30], introduced by Chen et al. in 2020, is a multimodal deep learning model for smart contract vulnerability detection. This framework utilizes two attributes of smart contracts, including source code and opcodes. After preprocessing these attributes, the HyMo framework employs FastText for word embedding and utilizes two Bi-GRU models to extract features from these two attributes. Another framework, the HYDRA framework, proposed by Chen and colleagues [31], utilizes three attributes, including API, bytecode, and opcode, as input for three branches in the multimodal model to classify malicious software. Each branch processes the attributes using basic neural networks, and then the outputs of these branches are connected through fully connected layers and finally passed through the Softmax function to obtain the final result. Most recently, Jie Wanqing and colleagues published a study [32] utilizing four attributes of smart contracts (SC), including source code, Static Single Assignment (SSA), CFG, and bytecode. With these four attributes, they construct three layers: SC, BB, and EVMB. Among these, the SC layer employs source code for attribute extraction using Word2Vec and BERT, the BB layer uses SSA and CFG generated from the source code, and finally, the EVMB layer employs assembly code and CFG derived from bytecode. Additionally, the authors combine these layers through various methods involving several distinct steps. These models yield promising results in terms of Accuracy, with HyMo [30] achieving approximately 79%, HYDRA [31] surpassing it with around 98%, and the multimodal AI of Jie et al. [32] achieving high-performance results ranging from 94% to 99% across various test cases. With these achieved results, these studies have demonstrated the power of multimodal models compared to unimodal models in classifying objects with multiple attributes. 
However, the limitations of that work stem more from implementation than from design choices: it utilized word2vec, which lacks support for out-of-vocabulary words. To address this constraint, the authors proposed substituting word2vec with the fastText NLP model. Subsequently, their vulnerability detection framework was modeled as a binary classification problem within a supervised learning paradigm. In that work, their primary focus was on determining whether a contract contains a vulnerability or not. A subsequent task could delve into investigating specific vulnerability types through multi-class classification. From the evaluations presented in this section, we have identified the strengths and limitations of the existing literature. It is evident that previous works have not fully optimized the utilization of Smart Contract data and lack the incorporation of a diverse range of deep learning models. While unimodal approaches have not adequately explored data diversity, multimodal ones have traded off construction time for a narrow classification focus, solely determining whether a smart contract is vulnerable or not. In light of these insights, we propose a novel framework that leverages the advantages of three distinct deep learning models including BERT, GNN, and BiLSTM. Each model forms a separate branch, contributing to the creation of a unified architecture. Our approach adopts a multi-class classification task, aiming to collectively improve the effectiveness and diversity of vulnerability detection. By synergistically integrating these models, we strive to overcome the limitations of the existing literature and provide a more comprehensive solution. ## 4 Methodology This section provides the outline of our proposed approach for vulnerability detection in smart contracts. Additionally, by employing multimodal learning, we generate a comprehensive view of the smart contract, which allows us to represent the smart contract with more relevant features and boost the effectiveness of the vulnerability detection model. ### An overview of architecture Our proposed approach, VulnSense, is constructed upon a multimodal deep learning framework consisting of three branches, including BERT, BiLSTM, and GNN, as illustrated in **Figure 1**. More specifically, the first branch is the BERT model, which is built upon the Transformer architecture and employed to process the source code of the smart contract. Secondly, to handle and analyze the opcode context, the BiLSTM model is applied in the second branch. Lastly, the GNN model is utilized for representing the CFG of bytecode in the smart contract. This integrative methodology leverages the strengths of each component to comprehensively assess potential vulnerabilities within smart contracts. The fusion of linguistic, sequential, and structural information allows for a more thorough and insightful evaluation, thereby fortifying the security assessment process. This approach presents a robust foundation for identifying vulnerabilities in smart contracts and holds promise for significantly reducing risks in blockchain ecosystems. ### Bidirectional Encoder Representations from Transformers (BERT) In this study, to capture high-level semantic features from the source code and enable a more in-depth understanding of its functionality, we designed a BERT network as one branch of our multimodal model. As in **Figure 2**, the BERT component consists of 3 blocks: Preprocessor, Encoder and Neural network. 
More specifically, the Preprocessor processes the inputs, i.e., the source code of smart contracts. The inputs are transformed into vectors through the input embedding layer, and then pass through the _positional_encoding_ layer to add positional information to the words. Then, the **preprocessed** values are fed into the encoding block to compute relationships between words. The entire encoding block consists of 12 identical encoding layers stacked on top of each other. Each encoding layer comprises two main parts: a self-attention layer and a feed-forward neural network. The output **encoded** forms a vector space of length 768. Subsequently, the **encoded** values are passed through a simple neural network. The resulting **bert_output** values constitute the output of this branch in the multimodal model. Thus, the whole BERT component could be demonstrated as follows: \[\textbf{preprocessed}=positional\_encoding(\textbf{e}(input)) \tag{1}\] \[\textbf{encoded}=Encoder(\textbf{preprocessed}) \tag{2}\] \[\textbf{bert\_output}=NN(\textbf{encoded}) \tag{3}\] where (1), (2) and (3) represent the Preprocessor block, Encoder block and Neural Network block, respectively. ### Bidirectional long-short term memory (BiLSTM) Toward the opcode, we applied the BiLSTM, which is another branch of our multimodal approach, to analyze the contextual relations of opcodes and contribute crucial insights into the code's execution flow. By processing opcodes sequentially, we aimed to capture potential vulnerabilities that might be overlooked by solely considering structural information. In detail, as in **Figure 3**, we first tokenize the opcodes and convert them into integer values. The tokenized opcode features are embedded into a dense vector space using an _embedding_ layer which has 200 dimensions. Figure 1: The overview of VulnSense framework. Figure 2: The architecture of BERT component in VulnSense. \[\begin{split}\textbf{token}=Tokenize(\textbf{opcode})\\ \textbf{vector\_space}=Embedding(\textbf{token})\end{split} \tag{4}\] Then, the opcode vector is fed into two BiLSTM layers with 128 and 64 units respectively. Moreover, to reduce overfitting, a Dropout layer is applied after the first BiLSTM layer as in (5). \[\begin{split}\textbf{bi\_lstm1}=Bi\_LSTM(128)(\textbf{vector\_space})\\ \textbf{r}=Dropout(dense(\textbf{bi\_lstm1}))\\ \textbf{bi\_lstm2}=Bi\_LSTM(64)(\textbf{r})\end{split} \tag{5}\] Finally, the output of the last BiLSTM layer is then fed into a dense layer with 64 units and a ReLU activation function as in (6). \[\textbf{lstm\_output}=Dense(64,relu)(\textbf{bi\_lstm2}) \tag{6}\] ### Graph Neural Network (GNN) To offer insights into the structural characteristics of smart contracts based on bytecode, we present a CFG-based GNN model which is the third branch of our multimodal model, as shown in **Figure 4**. In this branch, we firstly extract the CFG from the bytecode, and then use OpenAI's embedding API to encode the nodes and edges of the CFG into vectors, as in (7). \[\textbf{encode}=Encoder(edges,nodes) \tag{7}\] The encoded vectors have a length of 1536. These vectors are then passed through 3 GCN layers with ReLU activation functions (8), with the first layer having an input length of 1536 and an output length of a custom hidden_channels (_hc_) variable. 
\[\begin{split}\textbf{GCN1}=GCNConv(1536,relu)(\textbf{encode})\\ \textbf{GCN2}=GCNConv(hc,relu)(\textbf{GCN1})\\ \textbf{GCN3}=GCNConv(hc)(\textbf{GCN2})\end{split} \tag{8}\] Finally, to feed into the multimodal deep learning model, the output of the GCN layers is fed into 2 dense layers with 3 and 64 units respectively, as described in (9). \[\begin{split}\textbf{d1\_gnn}=Dense(3,relu)(\textbf{GCN3})\\ \textbf{gnn\_output}=Dense(64,relu)(\textbf{d1\_gnn})\end{split} \tag{9}\] ### Multimodal Each of these branches contributes a unique dimension of analysis, allowing us to capture intricate patterns and nuances present in the smart contract data. Therefore, we adopt an innovative approach by synergistically concatenating the outputs of the three distinctive models, BERT **bert_output** (3), BiLSTM **lstm_output** (6), and GNN **gnn_output** (9), to enhance the accuracy and depth of our predictive model, as shown in (10): \[\begin{split}\textbf{c}=\text{Concatenate}([\textbf{bert\_output},\\ \textbf{lstm\_output},\textbf{gnn\_output}])\end{split} \tag{10}\] Then the output \(\mathbf{c}\) is transformed into a 3D tensor with dimensions (batch_size, 194, 1) using the Reshape layer (11): \[\textbf{c\_reshaped}=\text{Reshape}((194,1))(\textbf{c}) \tag{11}\] Next, the transformed tensor **c_reshaped** is passed through a 1D convolutional layer (12) with 64 filters and a kernel size of 3, utilizing the rectified linear activation function: \[\textbf{conv\_out}=\text{Conv1D}(64,3,\text{relu})(\textbf{c\_reshaped}) \tag{12}\] The output from the convolutional layer is then flattened (13) to generate a 1D vector: \[\textbf{f\_out}=\text{flatten}()(\textbf{conv\_out}) \tag{13}\] The flattened tensor **f_out** is subsequently passed through a fully connected layer with 32 units and a rectified linear activation function as in (14): \[\textbf{d\_out}=\text{Dense}(32,\text{relu})(\textbf{f\_out}) \tag{14}\] Finally, the output is passed through the softmax activation function (15) to generate a probability distribution across the three output classes: \[\widetilde{\textbf{y}}=\text{Dense}(3,\text{softmax})(\textbf{d\_out}) \tag{15}\] This architecture forms the final stages of our model, culminating in the generation of predicted probabilities for the three output classes. Figure 4: The architecture of GNN component in VulnSense. Figure 3: The architecture of BiLSTM component in VulnSense. ## 5 Experiments and Analysis ### Experimental Settings and Implementation In this work, we utilize a virtual machine (VM) with an Intel Xeon(R) CPU E5-2660 v4 @ 2.00GHz x 24, 128 GB of RAM, and Ubuntu 20.04 for our implementation. Furthermore, all experiments are evaluated under the same experimental conditions. The proposed model is implemented using the Python programming language with well-established libraries such as TensorFlow and Keras. For all the experiments, we have utilized the fine-tuning strategy to improve the performance of these models during the training stage. We set the batch size to 32 and the learning rate of the Adam optimizer to 0.001. Additionally, to avoid overfitting, the dropout ratio has been set to 0.03. ### Performance Metrics We evaluate our proposed method via the 4 following metrics: Accuracy, Precision, Recall and F1-Score. 
Since our work conducts experiments on multi-class classification tasks, the value of each metric is computed based on a 2D confusion matrix which includes True Positive (TP), True Negative (TN), False Positive (FP) and False Negative (FN). _Accuracy_ is the ratio of correct predictions (\(TP\) and \(TN\)) over all predictions. _Precision_ measures the proportion of \(TP\) over all samples classified as positive. _Recall_ is defined as the proportion of \(TP\) over all positive instances in a testing dataset. _F1-Score_ is the harmonic mean of \(Precision\) and \(Recall\). ### Dataset and Preprocessing Our dataset combines three datasets: Smartbugs Curated [33; 34], SolidiFI-Benchmark [35], and Smartbugs Wild [33; 34]. For the Smartbugs Wild dataset, we collect smart contracts containing a single vulnerability (either an Arithmetic vulnerability or a Reentrancy vulnerability). The identification of vulnerable smart contracts is confirmed by at least two currently available vulnerability detection tools. In total, our dataset includes 547 Non-Vulnerability, 631 Arithmetic Vulnerability, and 591 Reentrancy Vulnerability smart contracts, as shown in **Table 1**. #### 5.3.1 Source Code Smart Contract When programming, developers often have the habit of writing comments to explain their source code, aiding both themselves and other programmers in understanding the code snippets. BERT, a natural language processing model, takes the source code of smart contracts as its input. From the source code of smart contracts, BERT calculates the relevance of words within the code. Comments present within the code can introduce noise to the BERT model, causing it to compute unnecessary information about the smart contract's source code. Hence, preprocessing of the source code before feeding it into the BERT model is necessary. Moreover, removing comments from the source code also helps reduce the length of the input when fed into the model. To further reduce the source code length, we also eliminate extra blank lines and unnecessary whitespace. **Figure 5** provides an example of an unprocessed smart contract from our dataset. This contract contains comments following the '//' syntax, blank lines, and excessive white spaces that do not adhere to programming standards. **Figure 6** represents the smart contract after undergoing processing. #### 5.3.2 Opcode of Smart Contracts We proceed with bytecode extraction from the source code of the smart contract, followed by opcode extraction from the bytecode. The opcodes within the contract are categorized into 10 functional groups, totaling 135 opcodes, according to the Ethereum Yellow Paper [36]. However, we have condensed them based on **Table 2**. \begin{table} \begin{tabular}{|c|c|} \hline Vulnerability Type & Contracts \\ \hline Arithmetic & 631 \\ Re-entrancy & 591 \\ Non-Vulnerability & 547 \\ \hline \end{tabular} \end{table} Table 1: Distribution of Labels in the Dataset Figure 5: An example of Smart Contract Prior to Processing During the preprocessing phase, unnecessary hexadecimal characters were removed from the opcodes. The purpose of this preprocessing is to utilize the opcodes for vulnerability detection in smart contracts using the BiLSTM model. In addition to opcode preprocessing, we also performed other preprocessing steps to prepare the data for the BiLSTM model. Firstly, we tokenized the opcodes into sequences of integers. Subsequently, we applied padding to create opcode sequences of the same length. 
The maximum length of the opcode sequences was set to 200, which is the maximum length that the BiLSTM model can handle. After the padding step, we employ a Word Embedding layer to transform the encoded opcode sequences into fixed-size vectors, serving as inputs for the BiLSTM model. This enables the BiLSTM model to better learn the representations of opcode sequences. In general, the preprocessing steps we performed are crucial in preparing the data for the BiLSTM model and enhancing its performance in detecting vulnerabilities in Smart Contracts. #### 5.3.3 Control Flow Graph First, we extract the bytecode from the smart contract, then extract the CFG from the bytecode into .cfg.gy files as shown in **Figure 7**. From this .cfg.gy file, a CFG of a Smart Contract through bytecode can be represented as shown in **Figure 8**. The nodes in the CFG typically represent code blocks or states of the contract, while the edges represent control flow connections between nodes. To train the GNN model, we encode the nodes and edges of the CFG into numerical vectors. One approach is to use embedding techniques to represent these entities as vectors. In this case, we utilize the OpenAI embedding API to encode nodes and edges into vectors of length 1536. This could be a customized approach based on OpenAI's pre-trained deep learning models. Once the nodes and edges of the CFG are encoded into vectors, we employ them as inputs for the GNN model. ### Experimental Scenarios To prove the efficiency of our proposed model and the compared models, we conducted training with a total of 7 models, categorized into two types: unimodal deep learning models and multimodal deep learning models. On the one hand, the unimodal deep learning models consist of the models within each branch of VulnSense. On the other hand, the multimodal deep learning models are the pairwise (2-way) combinations of the three unimodal deep learning models, together with VulnSense itself. Specifically: * Multimodal BERT - BiLSTM (**M1**) * Multimodal BERT - GNN (**M2**) * Multimodal BiLSTM - GNN (**M3**) * VulnSense (as mentioned in **Section 4**) Furthermore, to illustrate the fast convergence and stability of the multimodal method, we train and validate the 7 models with 3 different numbers of training epochs, namely 10, 20 and 30. \begin{table} \begin{tabular}{|c|c|} \hline Substituted Opcodes & Original Opcodes \\ \hline DUP & DUP1-DUP16 \\ SWAP & SWAP1-SWAP16 \\ PUSH & PUSH1-PUSH32 \\ LOG & LOG1-LOG4 \\ \hline \end{tabular} \end{table} Table 2: The simplified opcode methods Figure 8: Visualized graph from the .cfg.gy file Figure 7: Graph extracted from bytecode ### Experimental Results The experimentation process for the models was carried out on the dataset as detailed in **Section 5.3**. #### 5.5.1 Models performance evaluation Through the visualizations in **Table 3**, it can be intuitively seen that multimodal deep learning models detect vulnerabilities in smart contracts more effectively than unimodal deep learning models in these experiments. Specifically, when testing the 3 multimodal models M1, M3 and VulnSense across the 3 numbers of training epochs, the results indicate that the performance is always higher than 75.09% on the 4 metrics mentioned above. Meanwhile, the testing performance of M2 and the 3 unimodal models, BERT, BiLSTM and GNN, is lower than 75% on all 4 metrics. 
Moreover, across all 3 numbers of training epochs, the VulnSense model has achieved the highest F1-Score, at more than 77%, and Accuracy, at more than 77.96%. In addition, **Figure 9** provides a more detailed view of the performances of all 7 models at the last training epoch. It can be seen from **Figure 9** that, among the multimodal models, VulnSense performs the best, achieving accuracies on the Arithmetic, Reentrancy, and Clean labels of 84.44%, 64.08% and 84.48%, respectively, followed by the M3, M1 and M2 models. Even though the GNN model, which is an unimodal model, managed to attain an accuracy rate of 85.19% for the Arithmetic label and 82.76% for the Reentrancy label, its performance in terms of the Clean label accuracy was merely 1.94%. Similarly, among the unimodal models, BiLSTM and BERT both give relatively low accuracy, below 80%, on all 3 labels. Furthermore, the results shown in **Figure 10** have demonstrated the superior convergence speed and stability of the VulnSense model compared to the other 6 models. In detail, through testing after 10 training epochs, the VulnSense model has gained the highest performance, with greater than 77.96% on all 4 metrics. Although VulnSense, M1 and M3 give high performance after 30 training epochs, the VulnSense model only needs to be trained for 10 epochs to achieve better convergence than the M1 and M3 models, which require 30 epochs. Besides, throughout the 30 training epochs, the M3, M1, BiLSTM, and M2 models exhibited similar performance to the VulnSense model, yet they demonstrated some instability. While the VulnSense model maintains a consistent performance level within the range of 75-79%, the M3 model experienced a severe decline in both Accuracy and F1-Score values, declining by over 20% by the 15\({}^{th}\) epoch, indicating significant disturbance in its performance. These findings indicate that our proposed model, VulnSense, is more efficient in identifying vulnerabilities in smart contracts compared to these other models. Furthermore, by harnessing the advantages of multimodal over unimodal learning, VulnSense also exhibited consistent performance and rapid convergence. #### 5.5.2 Comparisons of Time **Figure 11** illustrates the training time for 30 epochs of each model. Concerning the training time of the unimodal models, on the one hand, the training time for the GNN model is very short, at only 7.114 seconds; on the other hand, the BERT model requires a significantly longer training time of 252.814 seconds. For the BiLSTM model, the training time is about 10 times longer than that of the GNN model. Furthermore, when comparing the multimodal models, the shortest training time belongs to the M3 model (the multimodal combination of BiLSTM and GNN) at 81.567 seconds. Besides, M1, M2, and VulnSense involve the BERT model, resulting in relatively longer training times of over 270 seconds for 30 epochs. It is evident that a unimodal model significantly impacts the training time of the multimodal model it contributes to. Although VulnSense takes more time compared to the 6 other models, it only requires 10 epochs to converge, which greatly reduces its training time, by 66%, compared to the other 6 models. In addition, **Figure 12** illustrates the prediction time on the same testing set for each model. 
\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline **Score** & **Epoch** & **BERT** & **BiLSTM** & **GNN** & **M1** & **M2** & **M3** & **VulnSense** \\ \hline \multirow{3}{*}{**Accuracy**} & **E10** & 0.5875 & 0.7316 & 0.5960 & 0.7429 & 0.6468 & 0.7542 & **0.7796** \\ \cline{2-9} & **E20** & 0.5903 & 0.6949 & 0.5988 & 0.7796 & 0.6553 & 0.7768 & **0.7796** \\ \cline{2-9} & **E30** & 0.6073 & 0.7146 & 0.6016 & 0.7796 & 0.6525 & 0.7683 & **0.7796** \\ \hline \multirow{3}{*}{**Precision**} & **E10** & 0.5818 & 0.7540 & 0.4290 & 0.7749 & 0.6616 & 0.7790 & **0.7940** \\ \cline{2-9} & **E20** & 0.6000 & 0.7164 & 0.7209 & 0.7834 & 0.6800 & 0.7800 & **0.7922** \\ \cline{2-9} & **E30** & 0.6000 & 0.7329 & 0.5784 & 0.7800 & 0.7000 & 0.7700 & **0.7800** \\ \hline \multirow{3}{*}{**Recall**} & **E10** & 0.5876 & 0.7316 & 0.5960 & 0.7429 & 0.6469 & 0.7542 & **0.7797** \\ \cline{2-9} & **E20** & 0.5900 & 0.6949 & 0.5989 & 0.7797 & 0.6600 & **0.7800** & 0.7797 \\ \cline{2-9} & **E30** & 0.6100 & 0.7147 & 0.6017 & 0.7700 & 0.6500 & 0.7700 & **0.7700** \\ \hline \multirow{3}{*}{**F1**} & **E10** & 0.5785 & 0.7360 & 0.4969 & 0.7509 & 0.6520 & 0.7602 & **0.7830** \\ \cline{2-9} & **E20** & 0.5700 & 0.6988 & 0.5032 & 0.7809 & 0.6600 & 0.7792 & **0.7800** \\ \cline{2-9} & **E30** & 0.6000 & 0.7185 & 0.5107 & 0.7700 & 0.6500 & 0.7700 & **0.7750** \\ \hline \end{tabular} \end{table} Table 3: The performance of 7 models It is evident that the multimodal models M1, M2, and VulnSense, which incorporate BERT, as well as the unimodal BERT model, exhibit extended testing durations, surpassing 5.7 seconds for a set of 354 samples. Meanwhile, the testing durations for the GNN, BiLSTM, and M3 models are remarkably brief, approximately 0.2104, 1.4702, and 2.0056 seconds, respectively. It is noticeable that the presence of the unimodal models has a direct influence on the prediction time of the multimodal models in which they are involved. In the context of the 2 most effective multimodal models, M3 and VulnSense, the M3 model gave the shortest testing time, about 2.0056 seconds. On the contrary, the VulnSense model exhibits the lengthiest prediction time, extending to about 7.4964 seconds, which is roughly four times that of the M3 model. While the M3 model outperforms the VulnSense model in terms of training and testing duration, the VulnSense model surpasses the M3 model in accuracy. Nevertheless, in the context of detecting vulnerabilities in smart contracts, increasing accuracy is more important than reducing execution time. Consequently, the VulnSense model decidedly outperforms the M3 model. ## 6 Conclusion In conclusion, our study introduces a pioneering approach, VulnSense, which harnesses the potency of multimodal deep learning, incorporating graph neural networks and natural language processing, to effectively detect vulnerabilities within Ethereum smart contracts. By synergistically leveraging the strengths of diverse features and cutting-edge techniques, our framework surpasses the limitations of traditional single-modal methods. The results of comprehensive experiments underscore the superiority of our approach in terms of accuracy and efficiency, outperforming conventional deep learning techniques. This affirms the potential and applicability of our approach in bolstering Ethereum smart contract security. The significance of this research extends beyond its immediate applications. 
It contributes to the broader discourse on enhancing the integrity of blockchain-based systems. As the adoption of smart contracts continues to grow, the vulnerabilities associated with them pose considerable risks. Our proposed methodology not only addresses these vulnerabilities but also paves the way for future research in the realm of multimodal deep learning and its diversified applications. In closing, VulnSense not only marks a significant step towards securing Ethereum smart contracts but also serves as a stepping stone for the development of advanced techniques in blockchain security. As the landscape of cryptocurrencies and blockchain evolves, our research remains poised to contribute to the ongoing quest for enhanced security and reliability in decentralized systems. ## Acknowledgment This research was supported by The VNUHCM-University of Information Technology's Scientific Research Support Fund.
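For readers who wish to reproduce the fusion stage of Section 4.5, the following is a minimal Keras sketch of equations (10)-(15). The individual branch output lengths (66, 64 and 64, chosen here only so that their concatenation matches the length 194 used in (11)) and the loss configuration are assumptions for illustration; the branch models themselves are replaced by plain input tensors.

```python
# Sketch of the multimodal fusion head from equations (10)-(15).
import tensorflow as tf
from tensorflow.keras import layers, Model

bert_output = layers.Input(shape=(66,), name="bert_output")   # assumed branch sizes
lstm_output = layers.Input(shape=(64,), name="lstm_output")
gnn_output = layers.Input(shape=(64,), name="gnn_output")

c = layers.Concatenate()([bert_output, lstm_output, gnn_output])  # (10)
c = layers.Reshape((194, 1))(c)                                   # (11)
c = layers.Conv1D(64, 3, activation="relu")(c)                    # (12)
c = layers.Flatten()(c)                                           # (13)
c = layers.Dense(32, activation="relu")(c)                        # (14)
y = layers.Dense(3, activation="softmax")(c)                      # (15): 3 classes

model = Model([bert_output, lstm_output, gnn_output], y)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),           # settings from Section 5.1
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
```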
2309.09018
Real-time optimal control for attitude-constrained solar sailcrafts via neural networks
This work is devoted to generating optimal guidance commands in real time for attitude-constrained solar sailcrafts in coplanar circular-to-circular interplanetary transfers. Firstly, a nonlinear optimal control problem is established, and necessary conditions for optimality are derived by Pontryagin's Minimum Principle. Under some assumptions, the attitude constraints are rewritten as control constraints, which are replaced by a saturation function so that a parameterized system is formulated to generate an optimal trajectory via solving an initial value problem. This approach allows for the efficient generation of a dataset containing optimal samples, which are essential for training Neural Networks (NNs) to achieve real-time implementation. However, the optimal guidance command may suddenly change from one extreme to another, resulting in discontinuous jumps that generally impair the NN's approximation performance. To address this issue, we use two co-states that the optimal guidance command depends on, to detect discontinuous jumps. A procedure for preprocessing these jumps is then established, thereby ensuring that the preprocessed guidance command remains smooth. Meanwhile, the sign of one co-state is found to be sufficient to revert the preprocessed guidance command back into the original optimal guidance command. Furthermore, three NNs are built and trained offline, and they cooperate together to precisely generate the optimal guidance command in real time. Finally, numerical simulations are presented to demonstrate the developments of the paper.
Kun Wang, Fangmin Lu, Zheng Chen, Jun Li
2023-09-16T15:12:59Z
http://arxiv.org/abs/2309.09018v2
# Real-time optimal control for attitude-constrained solar sailcrafts via neural networks ###### Abstract This work is devoted to generating optimal guidance commands in real time for attitude-constrained solar sailcrafts in coplanar circular-to-circular interplanetary transfers. Firstly, a nonlinear optimal control problem is established, and necessary conditions for optimality are derived by Pontryagin's Minimum Principle. Under some mild assumptions, the attitude constraints are rewritten as control constraints, which are replaced by a saturation function so that a parameterized system is formulated. This allows one to generate an optimal trajectory via solving an initial value problem, making it efficient to collect a dataset containing optimal samples, which are essential for training Neural Networks (NNs) to achieve real-time implementation. However, the optimal guidance command may suddenly change from one extreme to another, resulting in discontinuous jumps that generally impair the NN's approximation performance. To address this issue, we use two co-states that the optimal guidance command depends on, to detect discontinuous jumps. A procedure for preprocessing these jumps is then established, thereby ensuring that the preprocessed guidance command remains smooth everywhere. Meanwhile, the sign of one co-state is found to be sufficient to revert the preprocessed guidance command back into the original optimal guidance command. Furthermore, three NNs are built and trained offline, and they cooperate together to precisely generate the optimal guidance command in real time. Finally, numerical simulations are presented to demonstrate the developments of the paper. keywords: Solar sailing, Attitude constraints, Real-time optimal control, Neural networks ## 1 Introduction Unlike conventional spacecrafts that use chemical or electric propellant to produce thrust during a flight mission, solar sailing exploits the Solar Radiation Pressure (SRP) to propel the solar sailcraft. Although the resulting propulsive force is smaller than that of chemical- or electric-based propulsion systems, the SRP offers an almost "infinite" energy source that can be used to orient the solar sailcraft [1]. This makes it a very promising technology, especially for long-duration interplanetary transfer missions [2; 3]. While the study of solar sailing dates back to the 1920s, substantial progress in the development of solar sailing was not achieved until recently. Following the first demonstrations of solar sailcraft in orbit by JAXA's IKAROS [4] and NASA's NanoSail-D2 [5] in 2010, the successes of LightSail 1 and 2 [6] have sparked renewed interest in this technology. These achievements have led to the planning of numerous solar sailcraft-based missions, including Solar Cruiser [6] and OKEANOS [7]. Designing the transfer trajectory for solar sailcrafts is one of the most fundamental problems. As the acceleration provided by the SRP is extremely small, the transfer time of the solar sailcraft in space is usually very long. Thus, it is fundamentally important to find the time-optimal control so that the solar sailcraft can be steered to its targeted orbit within minimum time. This is essentially equivalent to addressing a minimum-time control problem, which is a conventional Optimal Control Problem (OCP). Numerical methods for OCPs include indirect and direct methods [8]. 
The indirect method, based on the calculus of variations or Pontryagin's Minimum Principle (PMP), transforms the OCP into a Two-Point Boundary-Value Problem (TPBVP) according to the necessary conditions for optimality. The resulting TPBVP is then typically solved by Newton-like iterative methods. The indirect method has been applied to solve the time-optimal orbital transfer problem for solar sailcrafts in Refs. [9; 10; 11]. Mengali and Quarta [12] employed the indirect method to solve the three-dimensional time-optimal transfer problem for a non-ideal solar sailcraft with optical and parametric force models. Sullo _et al._[13] embedded the continuation method into the indirect method, so that the time-optimal transfer of the solar sailcraft was found from a previously designed low-thrust transfer trajectory. Recently, the solar sail primer vector theory, combined with the indirect method, was employed to design the flight trajectory that minimizes the solar angle over a transfer with a fixed time of flight [14]. On the other hand, direct methods reformulate the OCP (usually via direct shooting or pseudospectral methods) as a nonlinear programming problem, which can be solved using interior-point or sequential quadratic programming methods [8]. Solar sailcraft transfers to hybrid pole and quasi pole sitters on Mars and Venus were studied using a direct pseudospectral algorithm in Ref. [15]. In Ref. [16], the Gaussian quadrature pseudospectral optimization was combined with a slerped control parameterization method, and the transfer time was drastically reduced. Furthermore, a comparison study on the application of indirect and direct methods to the time-optimal transfer of solar sailcrafts was conducted in Ref. [17]. In addition to these conventional methods, heuristic methods have also been utilized in the literature to solve the time-optimal transfer problem; see, e.g., Ref. [18]. Although the techniques mentioned in the previous paragraph are widely used, they are usually time-consuming and need appropriate initial guesses of the co-state or state vector. Therefore, these methods are typically not suitable for onboard systems with limited computational resources. To overcome this issue, shape-based methods have been developed. The basic idea of the shape-based methods is to describe the geometric shape of the trajectory using a set of mathematical expressions with some tunable parameters. The parameters are usually adjusted so that the mathematical expressions match the required boundary conditions. In this regard, Peloni _et al._[19] expressed the trajectory as a function of four parameters for a multiple near-Earth-asteroid mission. A Radau pseudospectral method was leveraged to solve the resulting problem in terms of these four parameters. Since then, different shaping functions, such as Bezier curves [20] and Fourier series [21; 22], have been proposed for transfer problems. In order to further cope with constraints on the propulsive acceleration, some additional shape-based functions have been developed [23; 24]. It is worth mentioning that shape-based methods, albeit computationally efficient, often provide suboptimal solutions. Consequently, they are usually employed to provide initial guesses for direct methods. According to the preceding review, existing approaches for solar sailcraft transfer suffer from computational burden, convergence issues, or solution suboptimality. 
In fact, solution optimality plays an important role in space exploration missions [25], and it is demanding for onboard systems to generate real-time solutions. Therefore, it is worth exploiting more effective methods capable of providing optimal solutions in real time. Thanks to the emerging achievements of artificial intelligence in recent decades, machine learning techniques have become a viable approach. Generally, there are two quite distinct machine-learning-based approaches. The first class, known as Reinforcement Learning (RL), involves training an agent to behave in a potentially changing environment via repeated interactions, and the goal is to maximize a carefully designed reward function. Owing to the excellent generalization abilities of deep Neural Networks (NNs), deep RL algorithms have demonstrated great success in learning policies to guide spacecraft in transfer missions; see, e.g., Refs. [26; 27; 28]. Although the trained agent is generalizable to non-nominal conditions and robust to uncertainties, significant effort may be required to design an appropriate reward function in order to derive a solution that is very close to optimal [29]. In contrast to RL, Behavioral Cloning (BC) aims to clone the observed expert behavior by training an NN on optimal state-action pairs. Such pairs are usually obtained by solving deterministic OCPs via indirect or direct methods. Recently, BC has been widely used to generate optimal guidance commands in real time, or to reduce the computational time in aerospace applications, such as spacecraft landing [30; 31], hypersonic vehicle reentry [32; 33], missile interception [34], low-thrust transfer [35], the end-to-end human-Mars entry, powered-descent, and landing problem [36], minimum-fuel geostationary station-keeping [37], time-optimal interplanetary rendezvous [38], and solar sailcraft transfer [39]. Specifically, in Ref. [39], some minimum-time orbital transfer problems with random initial conditions were solved by the indirect method to formulate the dataset of state-guidance command pairs. Then, the dataset was used to train NNs within a supervised learning framework. Finally, the trained NNs were able to generate the optimal guidance command in real time. However, constraints on the solar sailcraft attitude were not considered in that paper. In practice, factors such as power generation and thermal control generally reduce the admissible variation range of the cone angle, which is equal to the sail attitude for a perfectly flat ideal sail [40; 41; 42]. In addition, the optimal guidance command for the solar sailcraft transfer problem may be discontinuous, which generally degrades the approximation performance of NNs. In fact, approximating discontinuous controls via NNs, even with many hidden layers and/or neurons, can be quite challenging, as shown by the numerical simulations in Refs. [43; 44; 45]. As a continuation of the work in Ref. [39], this paper considers a scenario where the solar sailcraft's attitude is constrained, and an NN-based method will be developed for real-time generation of the optimal guidance command. Firstly, the time-optimal problem for the solar sailcraft with constraints on the attitude is formulated. Then, necessary conditions for optimal trajectories are established by employing the PMP. These conditions allow using a saturation function to approximate the optimal guidance law. 
Unlike conventional optimization-based approaches, such as indirect and direct methods, which often suffer from convergence issues [46], we formulate a parameterized system to facilitate the generation of datasets for training NNs. In this method, an optimal trajectory can be generated by solving an initial value problem instead of a TPBVP. As a consequence, a large number of optimal trajectories can be readily obtained. However, discontinuous jumps in the optimal guidance command may prevent one from obtaining a good approximation. To resolve this issue, one viable method is to use NNs to approximate smooth co-states instead, as was done in Ref. [47]. Unfortunately, using co-states to train NNs is not reliable, as the relationship between co-states and the optimal guidance command is highly nonlinear; even small errors in the predicted co-states may result in magnified propagation errors [48]. To avoid this difficulty, we propose to employ two co-states to preprocess the guidance command, smoothing out the discontinuous jumps. After preprocessing, the guidance command can be reverted to its original optimal form by examining the sign of one specific co-state. To achieve real-time generation of the optimal guidance command, we employ a system comprising three NNs that predict the optimal time of flight, the preprocessed guidance command, and one co-state. These three NNs cooperate to generate the optimal guidance command in real time. The remainder of the paper is structured as follows. The OCP is formulated in Section 2. Section 3 presents the optimality conditions derived from the PMP, and a parameterized system for optimal solutions is established. In Section 4, procedures for generating the dataset and preprocessing discontinuous jumps are introduced, and the scheme for generating the optimal guidance command in real time is detailed. Section 5 provides some numerical examples to demonstrate the developments of the paper. Section 6 concludes the paper. ## 2 Problem Formulation ### Solar Sailcraft Dynamics We consider a two-dimensional interplanetary transfer of an ideal solar sailcraft. Before proceeding, we present some units widely used in astrodynamics for better numerical conditioning. Distance is expressed in Astronomical Units (AU, the mean distance between the Earth and the Sun), and time in Time Units (TU, defined such that an object in a circular orbit at 1 AU has a speed of 1 AU/TU). For clearer presentation, a heliocentric-ecliptic inertial frame \((O,X,Y)\) and a polar reference frame \((O,r,\theta)\) are used, as shown in Fig. 1. The origin \(O\) is located at the Sun's center; the \(Y\) axis points in the direction opposite to the vernal equinox, and the \(X\) axis is specified by rotating the \(Y\) axis 90 degrees clockwise in the ecliptic plane. \(r\in\mathbb{R}_{0}^{+}\) denotes the distance from the origin to the solar sailcraft, and \(\theta\in[0,2\pi]\) is the rotation angle of the Sun-solar sailcraft line measured counterclockwise in the polar reference frame from the positive \(X\) axis. In the context of preliminary mission design, it is reasonable to make some assumptions. The Sun and solar sailcraft are treated as point masses. The motion of the solar sailcraft is only influenced by the Sun's gravitational attraction and the propulsive acceleration due to the SRP acting on the solar sailcraft. Other disturbing factors, such as the solar wind and light aberration, are neglected. 
Then, the dimensionless dynamics of the solar sailcraft is given by [11] \[\begin{cases}\dot{r}(t)=u(t),\\ \dot{\theta}(t)=\frac{v(t)}{r(t)},\\ \dot{u}(t)=\frac{\beta\cos^{3}\alpha(t)}{r^{2}(t)}+\frac{v^{2}(t)}{r(t)}- \frac{1}{r^{2}(t)},\\ \dot{v}(t)=\frac{\beta\sin\alpha(t)\cos^{2}\alpha(t)}{r^{2}(t)}-\frac{u(t)v(t) }{r(t)},\end{cases} \tag{1}\] where \(t\geq 0\) is the time; \(u\in\mathbb{R}\) and \(v\in\mathbb{R}\) are the radial and transverse speeds in units of AU/TU, respectively; \(\beta\) is the normalized characteristic acceleration; \(\alpha\) represents the cone angle, which is the angle between the normal direction of the sail plane \(\hat{n}\) and the direction of incoming rays \(\hat{i}_{u}\), measured positive in the clockwise direction from the normal direction of the sail plane \(\hat{n}\). ### Attitude Constraints Note that the radial component \(a_{u}\) and transverse component \(a_{v}\) of the ideal propulsive acceleration vector acting on the solar sailcraft are given by [41] \[\begin{cases}a_{u}(t):=\frac{\beta\cos^{3}\alpha(t)}{r^{2}(t)},\\ a_{v}(t):=\frac{\beta\sin\alpha(t)\cos^{2}\alpha(t)}{r^{2}(t)}.\end{cases} \tag{2}\] Figure 1: Dynamics of an ideal solar sailcraft. Based on the assumption that the ideal sail is approximated with a flat and rigid reflecting surface, the attitude constraints can be rewritten in terms of the cone angle as \[\alpha\in[-\phi_{max},\phi_{max}], \tag{3}\] in which \(\phi_{max}\in(0,\frac{\pi}{2})\) is a given parameter depending on the minimum acceptable level of the electric power generated by the solar sailcraft [41]. It essentially sets the limits for the solar sailcraft's orientation with respect to the incoming solar rays. The "force bubble" of the ideal solar sail mentioned in Ref. [49] due to attitude constraints is displayed in Fig. 2. The colormap represents the value of \(\beta\) in \([0.01892,0.3784]\) (corresponding to a characteristic acceleration domain of \([0.1,2]\) mm/s\({}^{2}\)). Then, for a given \(r\), it is clear that \(\beta\) defines the size of the force bubble, whereas \(\phi_{max}\) determines its actual shape, as shown by the pink-dashed lines. Specifically, the propulsive acceleration is constrained to the radial direction if \(\phi_{max}=0\). ### Formulation of the OCP Without loss of generality, consider an initial condition for the solar sailcraft at the initial time \(t_{0}=0\) given by \[r(0)=r_{0},\theta(0)=\theta_{0},u(0)=u_{0},v(0)=v_{0}. \tag{4}\] The mission purpose is, by orienting the cone angle \(\alpha\) subject to the constraints in Eq. (3), to drive the solar sailcraft governed by Eq. (1) into a coplanar circular orbit of radius \(r_{f}\) with the terminal condition given by \[r(t_{f})=r_{f},u(t_{f})=0,v(t_{f})=\frac{1}{\sqrt{r_{f}}}. \tag{5}\] Figure 2: Shape of the ideal sail force bubble. The functional cost \(J\) is to minimize the arrival time (final time) \(t_{f}\), i.e., \[J=\int_{0}^{t_{f}}1\ dt. \tag{6}\] ## 3 Parameterization of Optimal Trajectories In this section, we first derive the necessary conditions for optimal trajectories. Then, we formulate a parameterized system that enables the generation of an optimal trajectory via solving an initial value problem. For simplicity of presentation, important results and claims are written in lemmas with their proofs postponed to the appendix. ### Optimality Conditions Denote by \(\mathbf{\lambda}=[\lambda_{r},\lambda_{\theta},\lambda_{u},\lambda_{v}]\) the co-state vector of the state vector \(\mathbf{x}=[r,\theta,u,v]\). 
Then, the Hamiltonian for the OCP is expressed as \[\mathscr{H}=1+\lambda_{r}u+\lambda_{\theta}\frac{v}{r}+\lambda_{u}(\frac{ \beta\cos^{3}\alpha}{r^{2}}+\frac{v^{2}}{r}-\frac{1}{r^{2}})+\lambda_{v}(\frac {\beta\sin\alpha\cos^{2}\alpha}{r^{2}}-\frac{uv}{r}). \tag{7}\] According to the PMP [50], we have \[\begin{cases}\dot{\lambda}_{r}(t)=\lambda_{\theta}(t)\frac{v(t)}{r^{2}(t)}+ \lambda_{u}(t)[2\beta\frac{\cos^{3}\alpha(t)}{r^{3}(t)}+\frac{v^{2}(t)}{r^{2}( t)}-\frac{2}{r^{3}(t)}]+\lambda_{v}(t)[2\beta\cos^{2}\alpha(t)\frac{\sin \alpha(t)}{r^{3}(t)}-\frac{u(t)v(t)}{r^{2}(t)}],\\ \dot{\lambda}_{\theta}(t)=0,\\ \dot{\lambda}_{u}(t)=-\lambda_{r}(t)+\frac{\lambda_{v}(t)v(t)}{r(t)},\\ \dot{\lambda}_{v}(t)=-\frac{\lambda_{\theta}(t)}{r(t)}-2\frac{\lambda_{u}(t)v (t)}{r(t)}+\frac{\lambda_{v}(t)u(t)}{r(t)}.\end{cases} \tag{8}\] Because \(t_{f}\) is free, it holds that \[\mathscr{H}(t)\equiv 0,\ \forall\ t\in[0,t_{f}]. \tag{9}\] In addition, the terminal rotation angle \(\theta(t_{f})\) is not constrained, leading to \[\lambda_{\theta}(t_{f})=0. \tag{10}\] By the following lemma, we present the optimal guidance law. **Lemma 1**: _The optimal guidance law that minimizes \(\mathscr{H}\) in Eq. (7) is_ \[\alpha^{*}=\text{Median}[-\phi_{max},\bar{\alpha},\phi_{max}],\text{where }\bar{\alpha}=\arctan\ \frac{-3\lambda_{u}-\sqrt{9\lambda_{u}^{2}+8\lambda_{v}^{2}}}{4\lambda_{v}}. \tag{11}\] The proof of this lemma is postponed to Appendix A. Notice that the optimal guidance law in Eq. (11) may not be smooth at some isolated points. In fact, the optimal guidance law in Eq. (11) can be approximated in a compact form as [51] \[\alpha^{*}\approx\alpha^{*}(\delta)=\frac{1}{2}\ [\sqrt{\left(\bar{\alpha}+\phi_{ max}\right)^{2}+\delta}-\sqrt{\left(\bar{\alpha}-\phi_{max}\right)^{2}+\delta}], \tag{12}\] where \(\delta\) is a small non-negative number. It is clear that Eq. (12) is equivalent to Eq. (11) if \(\delta=0\). If \(\delta>0\) is sufficiently small, Eq. (12) acts like a smoothing filter function. Note that the initial rotation angle \(\theta_{0}\) has no effect on the solutions because the transfer trajectory is rotatable [39]. For brevity, a triple \((r(t),u(t),v(t))\) for \(t\in[0,t_{f}]\) is said to be an optimal trajectory if all the necessary conditions in Eqs. (8), (9), (10), and (11) are met. In order to generate the optimal guidance command in real time via NNs, a training dataset containing a large number of optimal trajectories is required. In this regard, one viable approach is to employ some root-finding algorithms to solve the TPBVP \[[r(t_{f})-r_{f},u(t_{f}),v(t_{f})-\frac{1}{\sqrt{r_{f}}},\mathscr{H}(t_{f})]= \mathbf{0}. \tag{13}\] Nevertheless, this procedure is usually time-consuming and suffers from convergence issues. In the next subsection, we present a parameterized system so that an optimal trajectory can be readily obtained by solving an initial value problem instead. ### Parameterized System Define a new independent variable \(\tau\) as below \[\tau=t_{f}-t,\quad t\in[0,t_{f}]. 
\tag{14}\] Let us establish a first-order ordinary differential system \[\begin{cases}\dot{R}(\tau)=-U(\tau),\\ \dot{U}(\tau)=-\frac{\beta\cos^{3}\hat{\alpha}(\tau)}{R^{2}(\tau)}-\frac{V^{2} (\tau)}{R(\tau)}+\frac{1}{R^{2}(\tau)},\\ \dot{V}(\tau)=-\frac{\beta\sin\hat{\alpha}(\tau)\cos^{2}\hat{\alpha}(\tau)}{R ^{2}(\tau)}+\frac{U(\tau)V(\tau)}{R(\tau)},\\ \dot{\lambda}_{R}(\tau)=-\lambda_{U}(\tau)[2\beta\frac{\cos^{3}\hat{\alpha}( \tau)}{R^{3}(\tau)}+\frac{V^{2}(\tau)}{R^{2}(\tau)}-\frac{2}{R^{3}(\tau)}]- \lambda_{V}(\tau)[2\beta\cos^{2}\hat{\alpha}(\tau)\frac{\sin\hat{\alpha}(\tau )}{R^{3}(\tau)}-\frac{U(\tau)V(\tau)}{R^{2}(\tau)}],\\ \dot{\lambda}_{U}(\tau)=\lambda_{R}(\tau)-\frac{\lambda_{V}(\tau)V(\tau)}{R( \tau)},\\ \dot{\lambda}_{V}(\tau)=2\frac{\lambda_{U}(\tau)V(\tau)}{R(\tau)}-\frac{ \lambda_{V}(\tau)U(\tau)}{R(\tau)},\end{cases} \tag{15}\] where \((R,U,V,\lambda_{R},\lambda_{U},\lambda_{V})\in\mathbb{R}_{0}^{+}\times\mathbb{ R}^{5}\), and \(\hat{\alpha}\) is defined as \[\hat{\alpha}=\frac{1}{2}\ [\sqrt{\left(\bar{\alpha}+\phi_{max}\right)^{2}+ \delta}-\sqrt{\left(\bar{\alpha}-\phi_{max}\right)^{2}+\delta}]\ \text{with}\ \bar{\alpha}=\arctan\ \frac{-3\lambda_{U}-\sqrt{9\lambda_{U}^{2}+8\lambda_{V}^{2}}}{4\lambda_{V}}. \tag{16}\] The initial value at \(\tau=0\) for the system in Eq. (15) is set as \[R(0)=R_{0},U(0)=0,V(0)=\frac{1}{\sqrt{R_{0}}},\lambda_{R}(0)=\lambda_{R_{0}}, \lambda_{U}(0)=\lambda_{U_{0}},\lambda_{V}(0)=\lambda_{V_{0}}. \tag{17}\] The value of \(\lambda_{V_{0}}\) satisfies the following equation \[\begin{split} 1+\lambda_{R}(0)U(0)+\lambda_{U}(0)(\frac{\beta\cos^{3} \hat{\alpha}(0)}{R^{2}(0)}+\frac{V^{2}(0)}{R(0)}-\frac{1}{R^{2}(0)})\\ +\lambda_{V}(0)(\frac{\beta\sin\hat{\alpha}(0)\cos^{2}\hat{ \alpha}(0)}{R^{2}(0)}-\frac{U(0)V(0)}{R(0)})=0.\end{split} \tag{18}\] Substituting Eq. (17) into Eq. (18) leads to \[1+\lambda_{U_{0}}\frac{\beta\cos^{3}\hat{\alpha}(0)}{R_{0}^{2}}+\lambda_{V_{0} }\frac{\beta\sin\hat{\alpha}(0)\cos^{2}\hat{\alpha}(0)}{R_{0}^{2}}=0. \tag{19}\] In view of Eqs. (16) and (19), it is clear that for a given \(\lambda_{U_{0}}\), the value for \(\lambda_{V_{0}}\) can be numerically determined by solving Eq. (19). By the following lemma, we shall show that an optimal trajectory can be generated by solving an initial value problem based on the parameterized system in Eq. (15). **Lemma 2**: _For any given constants \(\lambda_{R_{0}}\) and \(\lambda_{U_{0}}\), a fixed final time \(t_{f}\), and a parameter \(\lambda_{V_{0}}\) determined by solving Eq. (19), denote by_ \[\begin{split}\mathcal{F}:=(R(\tau,\lambda_{R_{0}},\lambda_{U_{0} }),U(\tau,\lambda_{R_{0}},\lambda_{U_{0}}),V(\tau,\lambda_{R_{0}},\lambda_{U_ {0}}),\\ \lambda_{R}(\tau,\lambda_{R_{0}},\lambda_{U_{0}}),\lambda_{U}( \tau,\lambda_{R_{0}},\lambda_{U_{0}}),\lambda_{V}(\tau,\lambda_{R_{0}}, \lambda_{U_{0}}))\in\mathbb{R}_{0}^{+}\times\mathbb{R}^{5}\end{split}\] _the solution of the \((\lambda_{R_{0}},\lambda_{U_{0}})\)-parameterized system in Eq. (15) with the initial value specified in Eq. (17)._ 
_Define \(\mathcal{F}_{p}\) as_ \[\begin{split}&\mathcal{F}_{p}:=\{R(\tau,\lambda_{R_{0}},\lambda_{U_ {0}}),U(\tau,\lambda_{R_{0}},\lambda_{U_{0}}),V(\tau,\lambda_{R_{0}},\lambda_{U _{0}}),\tau|\\ &(R(\tau,\lambda_{R_{0}},\lambda_{U_{0}}),U(\tau,\lambda_{R_{0}},\lambda_{U_{0}}),V(\tau,\lambda_{R_{0}},\lambda_{U_{0}}),\tau)\in\mathcal{F} \}.\end{split}\] _Then \(\mathcal{F}_{p}\) represents the solution space of an optimal trajectory starting from a circular orbit with a radius of \(R_{0}\)._ The proof of this lemma is postponed to Appendix A. With the parameterized system established, obtaining an optimal trajectory becomes a straightforward process that involves solving an initial value problem, rather than tackling a TPBVP. Once the optimal trajectory \(\mathcal{F}_{p}\) is obtained, the corresponding optimal guidance command \(\hat{\alpha}\) can be determined through the definition of \(\mathcal{F}\) and reference to Eq. (16). To maintain clarity and prevent any ambiguity in notation, we continue to employ \((r,u,v)\) to represent the flight state, and we use \(\tau\) to represent the optimal time of flight along the optimal trajectory \(\mathcal{F}_{p}\). Since the optimal time of flight is of great importance for preliminary mission design, evaluating the optimal time of flight without the need for the exact guidance law is quite attractive, as was done in Refs. [31; 35]. In this paper, we include \(\tau\) in the flight state, and it can be evaluated via a trained NN, as shown by our scheme in Subsection 4.3. Likewise, the optimal guidance command is again denoted by \(\alpha\). Then we shall define \(f\) as the nonlinear mapping from the flight state \((r,u,v,\tau)\) to the optimal guidance command \(\alpha\), i.e., \[f:(r,u,v,\tau)\rightarrow\alpha.\] According to the universal approximation theorem [52], employing a large number of sampled data that map the flight state \((r,u,v,\tau)\) to the optimal guidance command \(\alpha\) to train feedforward NNs enables the trained networks to accurately approximate the nonlinear mapping \(f\). Before generating the dataset, a _nominal trajectory_ is typically required. Then, by applying perturbations over \(\lambda_{R_{0}}\) and \(\lambda_{U_{0}}\) on the _nominal trajectory_, a large number of optimal trajectories containing optimal state-guidance command pairs can be aggregated. With this dataset, the optimal guidance command can be generated in real time within the supervised learning framework, as we will detail in the next section. ## 4 Generation of Optimal Guidance Commands in Real Time via NNs In this section, we begin by outlining the procedure for generating the training dataset. As our analysis progresses, we observe a notable characteristic of the optimal guidance command: it may abruptly change from one extreme to another, resulting in discontinuous jumps. This behavior poses a challenge, as the approximation performance of the NN typically deteriorates under such circumstances, a phenomenon seen in Refs. [43; 44; 45]. To address this issue, we introduce a preprocessing method that employs two co-states to detect the discontinuous jump. With the saturation function defined in Eq. (12), the preprocessed guidance command becomes smooth everywhere. Following this procedure, we present a scheme for generating optimal guidance commands in real time via NNs. 
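To make this generation procedure concrete, the following minimal Python sketch (our illustration, assuming NumPy and SciPy are available; the root-finder's initial guess is an assumption that may need tuning) produces one optimal trajectory by solving Eq. (19) for \(\lambda_{V_{0}}\) and then integrating the parameterized system in Eq. (15) from the initial value in Eq. (17):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

beta, phi_max, delta = 0.16892, np.pi / 3, 1e-3   # Earth-to-Mars values of Sec. 5

def alpha_hat(lam_U, lam_V):
    """Smoothed optimal guidance law, Eqs. (12) and (16)."""
    if abs(lam_V) < 1e-12:                  # guard the singular denominator
        lam_V = 1e-12
    a_bar = np.arctan((-3*lam_U - np.sqrt(9*lam_U**2 + 8*lam_V**2)) / (4*lam_V))
    return 0.5 * (np.sqrt((a_bar + phi_max)**2 + delta)
                  - np.sqrt((a_bar - phi_max)**2 + delta))

def rhs(tau, y):
    """Right-hand side of the parameterized system, Eq. (15)."""
    R, U, V, lR, lU, lV = y
    a = alpha_hat(lU, lV)
    ca, sa = np.cos(a), np.sin(a)
    return [-U,
            -beta*ca**3/R**2 - V**2/R + 1/R**2,
            -beta*sa*ca**2/R**2 + U*V/R,
            -lU*(2*beta*ca**3/R**3 + V**2/R**2 - 2/R**3)
                - lV*(2*beta*ca**2*sa/R**3 - U*V/R**2),
            lR - lV*V/R,
            2*lU*V/R - lV*U/R]

def trajectory(lR0, lU0, R0=1.524, tf=10.0):
    """One optimal trajectory: solve Eq. (19) for lV0, then the IVP of Eq. (15).

    R0 is the radius of the terminal circular orbit (Mars' orbit here),
    since tau runs backward from the final time.
    """
    def transversality(lV0_arr):            # Eq. (19), implicit in lV0
        lv = np.atleast_1d(lV0_arr)[0]
        a = alpha_hat(lU0, lv)
        return (1 + lU0*beta*np.cos(a)**3/R0**2
                + lv*beta*np.sin(a)*np.cos(a)**2/R0**2)
    lV0 = fsolve(transversality, x0=-10.0)[0]
    y0 = [R0, 0.0, 1/np.sqrt(R0), lR0, lU0, lV0]
    return solve_ivp(rhs, (0.0, tf), y0, dense_output=True, rtol=1e-10)
```

Each call replaces a TPBVP solve by a single forward integration, which is what makes large-scale dataset generation tractable.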
### Generation of the Training Dataset #### 4.1.1 Nominal Trajectory Consider a solar sailcraft subject to attitude constraints \(\alpha\in[-\frac{\pi}{3},\frac{\pi}{3}]\), starting from Earth's orbit around the Sun, which is a circular orbit with a radius of 1 AU (1 AU = \(1.4959965\times 10^{11}\) m), i.e., \[r(0)=1,\theta(0)=0,u(0)=0,v(0)=1, \tag{20}\] and the mission is to fly into Mars' orbit around the Sun, which is a circular orbit with a radius of 1.524 AU, i.e., \[r(t_{f})=1.524,u(t_{f})=0,v(t_{f})=\frac{1}{\sqrt{1.524}}. \tag{21}\] The normalized characteristic acceleration \(\beta\) is set as a constant of 0.16892, corresponding to a characteristic acceleration of 1 mm/s\({}^{2}\). To demonstrate the effects of the saturation function in Eq. (12), the optimal guidance laws in Eqs. (11) and (12) are embedded into the shooting function defined by Eq. (13). To enhance the convergence of the indirect method, we initially consider cases without attitude constraints and subsequently apply a homotopy technique to accommodate attitude constraints. Fig. 3 shows the optimal guidance command profiles for the unconstrained and constrained cases with two smoothing constants. It is evident that the constrained cases contain rapid changes in the guidance command. Additionally, the guidance command with \(\delta=1\times 10^{-3}\) displays smoother behavior compared to the case with \(\delta=0\). Besides, the arrival time for the unconstrained case is 7.01204 TU (1 TU \(=5.0226757\times 10^{6}\) s), whereas this time span extends to 7.01818 TU for the constrained case with \(\delta=0\). Regarding \(\delta=1\times 10^{-3}\), the arrival time is 7.01838 TU, indicating that a smoothing constant of \(\delta=1\times 10^{-3}\) leads to a minor functional penalty of merely \(2\times 10^{-4}\) TU. Consequently, we designate the transfer trajectory with \(\delta=1\times 10^{-3}\) as the _nominal trajectory_ henceforth. Figure 3: Optimal guidance command profiles for the unconstrained and constrained cases. #### 4.1.2 Dataset Denote by \(\lambda_{r_{f}}^{*}\), \(\lambda_{u_{f}}^{*}\) the final values along the _nominal trajectory_ for the co-states \(\lambda_{r}\) and \(\lambda_{u}\), respectively. They are computed as \(\lambda_{r_{f}}^{*}=-21.51\) and \(\lambda_{u_{f}}^{*}=8.54\). Consider a new set of values given by \[\lambda_{R_{0}}^{\prime}=\lambda_{r_{f}}^{*}+\delta\lambda_{R}, \lambda_{U_{0}}^{\prime}=\lambda_{u_{f}}^{*}+\delta\lambda_{U}, \tag{22}\] where the perturbations \(\delta\lambda_{R}\) and \(\delta\lambda_{U}\) are chosen such that the resulting \(\lambda_{R_{0}}^{\prime}\) and \(\lambda_{U_{0}}^{\prime}\) are uniformly distributed in the intervals \([-23,-20]\) and \([5,11]\), respectively. Set \(\lambda_{R_{0}}=\lambda_{R_{0}}^{\prime}\), \(\lambda_{U_{0}}=\lambda_{U_{0}}^{\prime}\), and calculate the value of \(\lambda_{V_{0}}\) by solving Eq. (19). Then, \(\mathcal{F}\) can be obtained by propagating the \((\lambda_{R_{0}},\lambda_{U_{0}})\)-parameterized system with the initial value specified in Eq. (17). Define an empty set \(\mathcal{D}\), and insert \(\mathcal{F}_{p}\), depending on the pair \((\lambda_{R_{0}},\lambda_{U_{0}})\), into \(\mathcal{D}\) until the perturbation process described in Eq. (22) ends. In this context, we can amass a dataset \(\mathcal{D}\) comprising the essential optimal state-guidance command pairs required for NN training. 
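To illustrate, the loop below (a minimal sketch reusing `trajectory()` and `alpha_hat()` from the previous sketch; the trajectory count and sampling density are scaled down, and the exact perturbation schedule is an assumption) collects such pairs, applies the region-of-interest filter described next, and already performs the jump preprocessing of Eq. (23) introduced in Subsection 4.2:

```python
import numpy as np

rng = np.random.default_rng(0)
dataset = []   # entries: ((r, u, v, tau), alpha_pre, lambda_v)

for _ in range(2000):                      # the paper collects 18,324 trajectories
    lR0 = rng.uniform(-23.0, -20.0)        # perturbed lambda_rf^*, Eq. (22)
    lU0 = rng.uniform(5.0, 11.0)           # perturbed lambda_uf^*, Eq. (22)
    sol = trajectory(lR0, lU0)             # from the previous sketch
    taus = np.linspace(0.0, 10.0, 200)
    R, U, V, lR, lU, lV = sol.sol(taus)
    if np.any(R > 1.54):                   # beyond the region of interest
        continue
    alphas = np.array([alpha_hat(lU[k], lV[k]) for k in range(len(taus))])
    # Preprocess at most one discontinuous jump (Eq. 23): a jump occurs where
    # lambda_v crosses zero while lambda_u > 0 (see Appendix A); negate one
    # side of the crossing so that alpha_pre is smooth along the trajectory.
    jumps = np.where((np.sign(lV[:-1]) != np.sign(lV[1:])) & (lU[:-1] > 0))[0]
    alpha_pre = alphas.copy()
    if len(jumps) == 1:
        alpha_pre[jumps[0] + 1:] *= -1.0
    for k in range(len(taus)):
        dataset.append(((R[k], U[k], V[k], taus[k]), alpha_pre[k], lV[k]))
```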
Because we are specifically dealing with orbit transfer missions from Earth to Mars in this study, any optimal trajectory with \(R(\tau)>1.54\) AU for \(\tau\in[0,t_{f}]\) (\(t_{f}\) is fixed at \(10\)) is classified as beyond the region of interest and excluded from \(\mathcal{D}\). Moreover, to explore the NN's generalization ability, the _nominal trajectory_ is excluded from \(\mathcal{D}\), as was done in Ref. [35]. Ultimately, a total of \(18,324\) optimal trajectories are acquired, and they are evenly sampled, resulting in \(6,416,613\) pairs. ### Preprocessing Discontinuous Jumps While generating the dataset, we encounter instances where the optimal guidance command displays discontinuous jumps. Remember that these jumps can be detected using \(\lambda_{u}\) and \(\lambda_{v}\), as elaborated in Appendix A. For a representative optimal trajectory defined in \([0,t_{f}]\), spotting a discontinuous jump point \(t_{j}\) is straightforward, as illustrated in Fig. 4. The optimal trajectory is then split into two segments, i.e., \([0,t_{j}]\) and \([t_{j},t_{f}]\). Assuming that there is only one jump in this scenario, we can derive a preprocessed smooth guidance command, denoted by \(\alpha_{pre}\), using the following approach \[\alpha_{pre}=\begin{cases}-\alpha,\text{for }t\in[0,t_{j}]\\ \phantom{-}\alpha,\text{for }t\in[t_{j},t_{f}]\end{cases} \tag{23}\] A representative example is provided in Fig. 5. It is evident that, thanks to the saturation function in Eq. (12) and the preprocessing procedure outlined in Eq. (23), \(\alpha_{pre}\) now demonstrates smooth behavior along the entire optimal trajectory. Conversely, if no discontinuous jump appears along the optimal trajectory, it is clear that \(\alpha_{pre}=\alpha\) always holds for \(t\in[0,t_{f}]\). Moreover, we observe that multiple discontinuous jumps are rare and primarily occur in orbital transfers with unusually long times of flight. Hence, in this study, we limit our consideration to orbital transfers with a maximum of one discontinuous jump. Figure 4: Two co-state profiles along an optimal trajectory. With the parameterized system, a sample set \(\mathcal{D}_{\alpha}:=\{(r,u,v,\tau),\alpha\}\) can be extracted from the dataset \(\mathcal{D}\). Thanks to the aforementioned preprocessing procedure, we can transform the dataset \(\mathcal{D}_{\alpha}\), which could contain discontinuous jumps, into a refined dataset \(\mathcal{D}_{\alpha_{pre}}:=\{(r,u,v,\tau),\alpha_{pre}\}\) exclusively consisting of smooth guidance commands. Consequently, a well-trained NN \(\mathcal{N}_{\alpha_{pre}}\) based on \(\mathcal{D}_{\alpha_{pre}}\) is anticipated to yield reduced approximation errors compared to an NN \(\mathcal{N}_{\alpha}\) trained on \(\mathcal{D}_{\alpha}\)[25]. For the sake of completeness, we will elucidate the process of reverting the output \(\alpha_{pre}\) from \(\mathcal{N}_{\alpha_{pre}}\) back into the original optimal guidance command \(\alpha\) for practical implementation. It can be demonstrated that the sign of the original optimal guidance command \(\alpha\) is always opposite to that of \(\lambda_{v}\), i.e., \[4\lambda_{v}\cdot\tan\bar{\alpha}=-3\lambda_{u}-\sqrt{9\lambda_{u}^{2}+8\lambda_ {v}^{2}}<0. 
\tag{24}\] Since \(\alpha\) in Eq. (11) shares the sign of \(\bar{\alpha}\), the original optimal guidance command \(\alpha\) can be obtained by simply comparing the signs of the preprocessed guidance command \(\alpha_{pre}\) and \(\lambda_{v}\), i.e., \[\alpha=\begin{cases}-\alpha_{pre},\text{if }\alpha_{pre}\cdot\lambda_{v}>0\\ \alpha_{pre},\ \text{ otherwise}\end{cases} \tag{25}\] Figure 5: Preprocessed and original optimal guidance command profiles along an optimal trajectory. ### Scheme for Generating Optimal Guidance Commands in Real Time To facilitate real-time implementation, a system composed of three NNs is established, as depicted in Fig. 6. To elaborate, \(\mathcal{D}\) is divided into three sample sets, i.e., \(\mathcal{D}_{\tau}:=\{(r,u,v),\tau\}\), \(\mathcal{D}_{\alpha_{pre}}:=\{(r,u,v,\tau),\alpha_{pre}\}\), and \(\mathcal{D}_{\lambda_{v}}:=\{(r,u,v,\tau),\lambda_{v}\}\); the corresponding NNs are denoted by \(\mathcal{N}_{\tau}\), \(\mathcal{N}_{\alpha_{pre}}\), and \(\mathcal{N}_{\lambda_{v}}\), respectively. \(\mathcal{N}_{\tau}\) is designed to forecast the optimal time of flight \(\tau\) given a flight state \((r,u,v)\). Additionally, this network facilitates preliminary mission design by providing time of flight evaluations without necessitating an exact solution. The output of \(\mathcal{N}_{\tau}\) is subsequently used as part of the input for \(\mathcal{N}_{\alpha_{pre}}\), which yields the preprocessed guidance command \(\alpha_{pre}\). Concurrently, \(\mathcal{N}_{\lambda_{v}}\) predicts the co-state \(\lambda_{v}\), whose sign \(sgn(\lambda_{v})\) is used to revert the preprocessed guidance command \(\alpha_{pre}\) back into the original guidance command \(\alpha\). Once these three NNs are trained offline, they enable the generation of the optimal guidance command \(\alpha\) given an initial condition \((r_{0},u_{0},v_{0})\). Consequently, the trained NNs offer closed-form solutions to the OCP. This endows the proposed method with robustness and generalization abilities, as shown in Subsections 5.2 and 5.3. ### NN Training Now, our focus shifts to the implementation of the NN training. In addition to the three aforementioned NNs, i.e., \(\mathcal{N}_{\tau}\), \(\mathcal{N}_{\lambda_{v}}\), and \(\mathcal{N}_{\alpha_{pre}}\), we also include the training of \(\mathcal{N}_{\alpha}\) based on the sample \(\mathcal{D}_{\alpha}=\{(r,u,v,\tau),\alpha\}\). This is done to highlight the enhancement in approximation resulting from the preprocessing procedure. All the networks considered are feedforward NNs with multiple hidden layers. Prior to training, the dataset samples are shuffled to establish a split of 70% for training, 15% for validation, and 15% for testing. All input and output data are normalized by min-max scaling to the range [0, 1]. Selecting an appropriate NN structure, particularly the number of hidden layers and neurons per layer, is a non-trivial task. A structure that is too simplistic tends to lead to underfitting. Conversely, overfitting often arises when the structure is overly complex. Striking a balance between time consumption and the overfitting issue, we adopt a structure with three hidden layers, each containing 20 neurons. Subsequently, the sigmoid function serves as the activation function. For the output layer, a linear function is utilized. The crux of the training lies in minimizing the loss function, quantified as the mean squared error (MSE) between the predicted values from the trained NNs and the actual values within the dataset samples. 
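As one concrete realization of this setup (a sketch under stated assumptions: scikit-learn is used for brevity, the `dataset` list from the earlier collection sketch is assumed to exist, and since scikit-learn does not offer the Levenberg-Marquardt optimizer adopted in the paper below, Adam with patience-based early stopping serves as a stand-in), \(\mathcal{N}_{\alpha_{pre}}\) could be built and used with the reversion rule of Eq. (25) as follows:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from sklearn.neural_network import MLPRegressor

# Inputs: flight states (r, u, v, tau); targets: preprocessed commands alpha_pre.
X = np.array([state for state, a_pre, lv in dataset])
y = np.array([a_pre for state, a_pre, lv in dataset])

# Shuffled 70% / 15% / 15% split into training, validation, and test sets
# (MLPRegressor additionally keeps its own internal validation split).
X_tr, X_rest, y_tr, y_rest = train_test_split(X, y, train_size=0.70, shuffle=True)
X_val, X_te, y_val, y_te = train_test_split(X_rest, y_rest, test_size=0.50)

# Min-max scaling of inputs and outputs to [0, 1].
sx, sy = MinMaxScaler(), MinMaxScaler()
X_tr_s = sx.fit_transform(X_tr)
y_tr_s = sy.fit_transform(y_tr.reshape(-1, 1)).ravel()

# Three hidden layers of 20 sigmoid neurons, linear output, MSE loss.
net_alpha_pre = MLPRegressor(hidden_layer_sizes=(20, 20, 20),
                             activation="logistic", solver="adam",
                             max_iter=1000, early_stopping=True,
                             n_iter_no_change=20)   # patience-style stopping
net_alpha_pre.fit(X_tr_s, y_tr_s)

# Online use: predict alpha_pre, then revert it with Eq. (25). Here lam_v
# stands for the prediction of N_lambda_v, trained in the same fashion.
x_query = sx.transform(X_te[:1])
a_pre = sy.inverse_transform(net_alpha_pre.predict(x_query).reshape(-1, 1))[0, 0]
lam_v = -2.0                                        # dummy stand-in value
alpha = -a_pre if a_pre * lam_v > 0 else a_pre      # reversion, Eq. (25)
```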
We employ the Levenberg-Marquardt algorithm in Ref. [53] for training the NNs, and the training is terminated after 1,000 epochs or when the loss function drops below \(1\times 10^{-8}\). To prevent overfitting, we employ an early stopping criterion based on the concept of patience. Other hyperparameters utilized in training are set to their default values. Figure 6: Generation of optimal guidance commands in real time via NNs. Fig. 7 illustrates the training progression of the four NNs. As seen in Fig. 7(a), the MSEs for the training, validation, and test sets decrease to \(1\times 10^{-8}\) in less than 300 epochs. Table 1 displays the MSEs of the four NNs upon completion of the training process. Notably, the validation and test errors of \(\mathcal{N}_{\alpha_{pre}}\) are smaller than those of \(\mathcal{N}_{\alpha}\), indicating that our preprocessing procedure enhances the approximation accuracy. This is further confirmed through simulations detailed in the next section. \begin{table} \begin{tabular}{c c c c c} \hline & \(\mathcal{N}_{\tau}\) & \(\mathcal{N}_{\lambda_{v}}\) & \(\mathcal{N}_{\alpha_{pre}}\) & \(\mathcal{N}_{\alpha}\) \\ \hline Training & \(1\times 10^{-8}\) & \(4.94\times 10^{-6}\) & \(7.53\times 10^{-7}\) & \(1.72\times 10^{-6}\) \\ Validation & \(1\times 10^{-8}\) & \(5.02\times 10^{-6}\) & \(6.82\times 10^{-7}\) & \(3.53\times 10^{-6}\) \\ Test & \(1\times 10^{-8}\) & \(5.07\times 10^{-6}\) & \(7.70\times 10^{-7}\) & \(2.55\times 10^{-6}\) \\ \hline \end{tabular} \end{table} Table 1: MSEs of four NNs Figure 7: Training histories of four NNs. ## 5 Numerical Simulations In this section, we present some numerical simulations to demonstrate the efficacy and performance of the proposed methodology. We commence by showing its optimality performance. Subsequently, the robustness against perturbations and the generalization ability of the proposed method are investigated. In addition, we evaluate the real-time execution of the guidance command generation. It is worth noting that the proposed method has been implemented on a desktop equipped with an Intel Core i9-10980XE CPU @3.00 GHz and 128 GB of RAM. ### Optimality Performance This subsection is devoted to examining whether or not the proposed approach can produce optimal solutions compared with existing methods. Table 2 outlines the initial conditions for two orbital transfers. For comparison, two strategies are employed to steer the solar sailcraft. The first strategy employs three NNs, i.e., \(\mathcal{N}_{\tau}\), \(\mathcal{N}_{\alpha_{pre}}\), and \(\mathcal{N}_{\lambda_{v}}\), while the second one uses two NNs, \(\mathcal{N}_{\tau}\) and \(\mathcal{N}_{\alpha}\). The indirect method, solving the shooting function in Eq. (13), will be used for comparison. For Case 1, the solar sailcraft embarks on a journey from Earth's orbit around the Sun, and the mission is to steer the solar sailcraft into Mars' orbit around the Sun. Fig. 8 presents the outcomes yielded by the trained NNs and the indirect method. The optimal time of flight obtained via the indirect method is 7.0184 TU, while the prediction from \(\mathcal{N}_{\tau}\) is 7.0192 TU for the given initial condition. Remarkably, \(\mathcal{N}_{\tau}\) exhibits precise approximation of the optimal time of flight throughout the entire transfer, as shown in Fig. 8a. On the other hand, Fig. 8b shows a less accurate approximation of the optimal co-state \(\lambda_{v}\), especially at the end of the transfer. 
Fortunately, the sign of the predicted \(\lambda_{v}\), rather than its exact value, is sufficient for reverting the preprocessed guidance command back to the original guidance command. This observation is depicted in Fig. 8c, where the guidance commands derived from the first strategy closely align with the indirect method, except at the end of the transfer. Since the chattering guidance command derived from the proposed method at the end of the transfer is not practically implementable, the numerical simulation is terminated once the predicted time of flight \(\tau\) from \(\mathcal{N}_{\tau}\) drops below 0.005 TU (equivalent to 0.2907 days) hereafter. In contrast, the guidance command derived from the second strategy exhibits lower accuracy, especially in the presence of rapid changes in the optimal guidance command. The degradation of approximation performance caused by the second strategy is demonstrated in Fig. 8d. It is worth emphasizing that this problem of performance degradation may not be well fixed by simply adjusting the hyperparameters for training NNs, as shown by the numerical simulations in Ref. [43]. In that work, a random search for the hyperparameters was employed, and the trained NNs, even with the best approximation performance, were not able to capture the rapid change in the guidance command. In contrast, thanks to the preprocessing procedure in this paper, the well-trained NNs are capable of precisely generating the optimal guidance command. \begin{table} \begin{tabular}{c c c c c} \hline \hline & \(r_{0}\) & \(\theta_{0}\) & \(u_{0}\) & \(v_{0}\) \\ \hline Case 1 & 1 & 0 & 0 & 1 \\ Case 2 & 1.05 & 0 & 0.15 & 1.03 \\ \hline \hline \end{tabular} \end{table} Table 2: Initial conditions for two orbital transfer cases To assess the performance of fulfilling the terminal condition, we define the flight error \(\Delta\Phi\) as the difference between the NN-driven terminal flight state \((r^{\mathcal{N}}(t_{f}),u^{\mathcal{N}}(t_{f}),v^{\mathcal{N}}(t_{f}))\) and the desired terminal flight state \((1.524,0,\frac{1}{\sqrt{1.524}})\). This error is computed as follows \[\Delta\Phi=(|r^{\mathcal{N}}(t_{f})-1.524|,|u^{\mathcal{N}}(t_{f})|,|v^{ \mathcal{N}}(t_{f})-\frac{1}{\sqrt{1.524}}|).\] Then, the flight error caused by the first strategy is \(\Delta\Phi=(5.3703\times 10^{-7},3.0121\times 10^{-5},3.6943\times 10^{-5})\), which is lower than that of the second strategy with \(\Delta\Phi=(2.2220\times 10^{-5},4.1852\times 10^{-4},2.5968\times 10^{-4})\). Although both strategies can guide the solar sailcraft into Mars' orbit with negligible flight errors, the guidance command derived from the first strategy is more feasible and accurate than that of the second strategy, as shown in Figs. 8c and 8d. In addition, the transfer trajectories derived from the first strategy and the indirect method are illustrated in Fig. 9. Given the superiority of the first strategy over the second, our focus will exclusively be on the first strategy, which will be called the NN-based approach hereafter. Shifting to Case 2, the output of \(\mathcal{N}_{\tau}\) for the initial condition stands at 6.1643 TU, while the optimal solution attained via the indirect method is 6.1552 TU, resulting in a marginal functional penalty of merely 0.5290 days. Furthermore, Fig. 10 illustrates that the NN-based approach precisely captures the discontinuous jump and effectively approximates the optimal guidance command, except at the end of the transfer. 
Figure 8: Comparison of results from the trained NNs and the indirect method for Case 1. The transfer trajectories derived from the NN-based approach and the indirect method are depicted in Fig. 11. As a consequence, the flight error caused by the NN-based approach is \(\Delta\Phi=(5.2011\times 10^{-6},3.3526\times 10^{-4},1.5393\times 10^{-4})\), implying that the NN-based approach capably steers the solar sailcraft into Mars' orbit with an acceptable flight error. Figure 9: Transfer trajectories derived from the first strategy and the indirect method for Case 1. Figure 10: Guidance command profiles derived from the NN-based approach and the indirect method for Case 2. ### Robustness Analysis In reality, the normalized characteristic acceleration \(\beta\) is influenced by diverse factors, such as solar sailing efficiency and solar radiation pressure [54]. Consequently, the solar sailcraft must continually adjust its flight trajectory based on its current flight state to rectify any flight errors. Unfortunately, this process is quite challenging for onboard systems with limited computational resources. Additionally, solving OCPs featuring parameters under perturbations tends to be problematic for conventional direct and indirect methods that rely on precise system models. For this reason, we evaluate the proposed method's robustness against perturbations concerning \(\beta\). The initial condition for the solar sailcraft is set as \[r_{0}=1.1,\theta_{0}=\frac{\pi}{2},u_{0}=0.18,v_{0}=0.93,\] and the mission is to fly into Mars' orbit. The normalized characteristic acceleration \(\beta\) is subject to perturbations uniformly distributed within the range of \((-15\%,15\%)\). Figs. 12 and 13 depict the transfer trajectory and the corresponding guidance command derived from the NN-based approach, respectively. It can be seen that the colormap-identified speed exhibits a gradual reduction for most of the transfer, followed by a gradual increase towards the end of the transfer, as depicted in Fig. 12. Figure 11: Transfer trajectories derived from the NN-based approach and the indirect method for Case 2. Figure 12: NN-based transfer trajectory with \(\beta\) under perturbations. Regarding Fig. 13, the guidance command is generally smooth during the initial phase of the transfer. Subsequently, to uphold precise flight, the NN-based approach acts like an error corrector by dynamically adapting the guidance command in the latter phase of the transfer. A remarkable observation is that even when the solar sailcraft's dynamics is under perturbations, the attitude constraints are never violated throughout the entire transfer. Furthermore, it takes 5.9152 TU for the solar sailcraft to accomplish its journey with a minor flight error of \(\Delta\Phi=(2.0160\times 10^{-5},9.8708\times 10^{-5},8.4738\times 10^{-5})\). ### Generalization Analysis The assessment of generalization ability pertains to evaluating the predictive performance for inputs that lie outside the training dataset. Define a state space \(\mathcal{A}\) that is not contained in \(\mathcal{D}\) as below \[r_{0}\in[1,1.15],\theta_{0}\in[0,2\pi],u_{0}\in[0,0.1],v_{0}\in[0.8,1.2].\] Within this space, the solar sailcraft's initial conditions are randomly chosen. Subsequently, the proposed approach is employed to guide the solar sailcraft from the selected initial condition to Mars' orbit. A total of 30 tests are conducted, resulting in 20 successful transfers, as depicted in Fig. 14, in which the small circles denote the solar sailcraft's initial positions. 
Among these successful cases, the average flight error is \(\Delta\Phi=(4.5091\times 10^{-5},1.4726\times 10^{-4},1.0147\times 10^{-4})\). This indicates that the proposed method is able to steer the spacecraft to Mars' orbit accurately even with some initial states that are not in the training set. Figure 13: NN-based guidance command profile with \(\beta\) under perturbations. ### Real-Time Performance Since the output of a feedforward NN is essentially a composition of linear mappings and elementwise activations of the input vector, a trained NN can produce its output within a constant time. Recall that our method requires three trained NNs with simple structures. 10,000 trials of the proposed method across various flight states are run in a C-based computational environment, and the mean execution time for generating a guidance command is 0.0177 ms. This translates to approximately 0.5310 ms on a typical flight processor operating at 100 MHz [55]. Conversely, indirect methods typically necessitate time-consuming processes of integration and iteration. We utilize the _fsolve_ function to solve the relevant TPBVP and observe a convergence time of approximately 1.22 seconds, even when an appropriate initial guess is provided. Consequently, the computational burden associated with optimization-based indirect methods can be substantially alleviated by adopting the proposed method. ## 6 Conclusions Real-time optimal control for attitude-constrained solar sailcrafts in coplanar circular-to-circular interplanetary transfers was studied in this paper. The necessary conditions governing optimal trajectories were deduced through the PMP. Different from the conventional methods that rely on optimization-based solvers to collect a training dataset, we formulated a parameterized system. This enabled us to generate an optimal trajectory by simply solving an initial value problem, and the optimal state-guidance command pairs could be readily obtained. Regarding the challenge posed by potential discontinuities in the optimal guidance command, which could undermine the NN's approximation performance, we developed a technique to preprocess the guidance command. With the saturation function introduced in the optimal guidance law, this preprocessing technique smoothed the guidance command. In this way, the approximation performance of the preprocessed guidance command was enhanced, and another NN was used to predict one co-state, whose sign was adopted to revert the preprocessed guidance command into the potentially discontinuous original guidance command. As a result, the well-trained NNs were capable of not only generating the optimal guidance command precisely in real time but also predicting the optimal time of flight given a flight state. Figure 14: NN-based transfer trajectories with initial conditions randomly chosen in \(\mathcal{A}\). ## Acknowledgement This research was supported by the National Natural Science Foundation of China under Grant No. 62088101. ## Appendix A Proof of Lemmas in Section 3 Proof of Lemma 1. In view of \(\dot{\lambda}_{\theta}(t)=0\) in Eqs. (8) and (10), it is easy to see that \(\lambda_{\theta}(t)\) remains zero for \(t\in[0,t_{f}]\) along an optimal trajectory. Then, Eq. (7) reduces to \[\mathscr{H}=1+\lambda_{r}u+\lambda_{u}(\frac{\beta\cos^{3}\alpha}{r^{2}}+\frac {v^{2}}{r}-\frac{1}{r^{2}})+\lambda_{v}(\frac{\beta\sin\alpha\cos^{2}\alpha}{r ^{2}}-\frac{uv}{r}). 
\tag{A1}\] To obtain stationary values of \(\mathscr{H}\), we have \[\frac{\partial\mathscr{H}}{\partial\alpha}=\frac{\beta}{r^{2}}\cos^{3}\alpha [-3\lambda_{u}\tan\alpha+\lambda_{v}(1-2\tan^{2}\alpha)]=0. \tag{A2}\] Assume that \(\lambda_{u}\) and \(\lambda_{v}\) are not both zero [10]. Because \(\beta>0\), and \(\cos\alpha\neq 0\) for \(\alpha\in[-\phi_{max},\phi_{max}]\), two different roots, denoted by \(\alpha_{1}\) and \(\alpha_{2}\), can be obtained by solving Eq. (A2) as \[\begin{cases}\alpha_{1}=\arctan\ \frac{-3\lambda_{u}+\sqrt{9\lambda_{u}^{2}+8 \lambda_{v}^{2}}}{4\lambda_{v}},\\ \alpha_{2}=\arctan\ \frac{-3\lambda_{u}-\sqrt{9\lambda_{u}^{2}+8\lambda_{v}^{2}}}{4 \lambda_{v}}.\end{cases} \tag{A3}\] Now we proceed to check the sign of the second derivative. According to Eq. (A2), taking the second partial derivative of \(\mathscr{H}\) w.r.t. \(\alpha\) yields \[\frac{\partial^{2}\mathscr{H}}{\partial\alpha^{2}}=\frac{\beta}{r^{2}}\{-3 \cos^{2}\alpha\sin\alpha[-3\lambda_{u}\tan\alpha+\lambda_{v}(1-2\tan^{2} \alpha)]+\cos^{3}\alpha[-3\lambda_{u}\sec^{2}\alpha-4\lambda_{v}\tan\alpha \sec^{2}\alpha]\}. \tag{A4}\] Substituting Eq. (A3) into Eq. (A4) leads to \[\begin{cases}\left.\frac{\partial^{2}\mathscr{H}}{\partial\alpha^{2}}\right|_{ \alpha=\alpha_{1}}=\frac{\beta}{r^{2}}\cos\alpha_{1}(-\sqrt{9\lambda_{u}^{2}+8\lambda_{v}^{2}}),\\ \left.\frac{\partial^{2}\mathscr{H}}{\partial\alpha^{2}}\right|_{\alpha=\alpha_{2}}=\frac{\beta}{r^{2}} \cos\alpha_{2}(+\sqrt{9\lambda_{u}^{2}+8\lambda_{v}^{2}}).\end{cases} \tag{A5}\] Clearly, \(\frac{\partial^{2}\mathscr{H}}{\partial\alpha^{2}}>0\) holds if and only if \(\alpha=\alpha_{2}\). Therefore, the local minimum solution \(\bar{\alpha}\) is given by \[\bar{\alpha}=\alpha_{2}=\arctan\ \frac{-3\lambda_{u}-\sqrt{9\lambda_{u}^{2}+8 \lambda_{v}^{2}}}{4\lambda_{v}}. \tag{A6}\] Notice that Eq. (A6) will become singular if \(\lambda_{v}(t)=0\). Then \(\lambda_{v}(t)=0\) can hold at some isolated points for \(t\in[0,t_{f}]\), or \(\lambda_{v}(t)\equiv 0\) holds in a time interval \([t_{a},t_{b}]\subseteq[0,t_{f}]\). We now analyze the first case. If \(\lambda_{u}(t)<0\), then \[\begin{split}\lim_{\lambda_{u}<0,\lambda_{v}\to 0}\frac{-3 \lambda_{u}-\sqrt{9\lambda_{u}^{2}+8\lambda_{v}^{2}}}{4\lambda_{v}}=\lim_{ \lambda_{u}<0,\lambda_{v}\to 0}\frac{-3\lambda_{u}+3\lambda_{u}\sqrt{1+\frac{8 \lambda_{v}^{2}}{9\lambda_{u}^{2}}}}{4\lambda_{v}}\\ \approx\lim_{\lambda_{u}<0,\lambda_{v}\to 0}\frac{-3 \lambda_{u}+3\lambda_{u}(1+\frac{1}{2}\frac{8\lambda_{v}^{2}}{9\lambda_{u}^{2} })}{4\lambda_{v}}=\lim_{\lambda_{u}<0,\lambda_{v}\to 0}\frac{\lambda_{v}}{3 \lambda_{u}}=0.\end{split}\] (A7) Analogously, if \(\lambda_{u}(t)>0\), we have \[\begin{split}\lim_{\lambda_{u}>0,\lambda_{v}\to 0}\frac{-3 \lambda_{u}-\sqrt{9\lambda_{u}^{2}+8\lambda_{v}^{2}}}{4\lambda_{v}}=\lim_{ \lambda_{u}>0,\lambda_{v}\to 0}\frac{-3\lambda_{u}-3\lambda_{u}\sqrt{1+ \frac{8\lambda_{v}^{2}}{9\lambda_{u}^{2}}}}{4\lambda_{v}}\\ \approx\lim_{\lambda_{u}>0,\lambda_{v}\to 0}\frac{-3 \lambda_{u}-3\lambda_{u}(1+\frac{1}{2}\frac{8\lambda_{v}^{2}}{9\lambda_{u}^{2} })}{4\lambda_{v}}=\lim_{\lambda_{u}>0,\lambda_{v}\to 0}-(\frac{3 \lambda_{u}}{2\lambda_{v}}+\frac{\lambda_{v}}{3\lambda_{u}})=\lim_{\lambda_{u} >0,\lambda_{v}\to 0}-\frac{3\lambda_{u}}{2\lambda_{v}}.\end{split}\] (A8) Therefore, from Eq. (A7), it is clear that the local minimum solution \(\bar{\alpha}\) in Eq. (A6) will automatically reduce to zero if \(\lambda_{u}(t)<0\) and \(\lambda_{v}(t)=0\) at some isolated points, indicating that Eq. 
(A6) still holds in such a case. On the other hand, from Eq. (A8), the sign of the local minimum solution \(\bar{\alpha}\) will change if \(\lambda_{v}(t)\) crosses zero and \(\lambda_{u}(t)>0\) at the isolated points, which will result in the appearance of a discontinuous jump. As a result, a discontinuous jump can be detected using the signs of \(\lambda_{v}\) and \(\lambda_{u}\). Now we prove that the second case, that is, \(\lambda_{v}(t)\equiv 0\) in a time interval \(t\in[t_{a},t_{b}]\), does not hold along an optimal trajectory. By contradiction, we assume that \(\lambda_{v}(t)\equiv 0\) for \(t\in[t_{a},t_{b}]\). Recall that \(\lambda_{\theta}(t)\equiv 0\) along an optimal trajectory. Then, in view of Eq. (8), we have \[\dot{\lambda}_{v}(t)=-2\frac{\lambda_{u}(t)v(t)}{r(t)}\equiv 0,\forall\ t \in[t_{a},t_{b}].\] (A9) Note that if \(v(t)\equiv 0\) in any time interval, it will result in \(\theta\) being constant in such time interval, as shown by Eq. (1). This is obviously impossible during an orbital transfer. Thus, to make Eq. (A9) true, \(\lambda_{u}(t)\) must be kept zero for \(t\in[t_{a},t_{b}]\). In this case, \(\dot{\lambda}_{u}(t)\equiv 0\) together with \(\lambda_{v}(t)\equiv 0\) in Eq. (8) implies \[\lambda_{r}(t)\equiv 0,\forall\ t\in[t_{a},t_{b}].\] (A10) Clearly, if \(\lambda_{v}(t)\equiv 0\) for \(t\in[t_{a},t_{b}]\) does hold, the following equation will be valid \[\boldsymbol{\lambda}(t)=[\lambda_{r}(t),\lambda_{\theta}(t),\lambda_{u}(t), \lambda_{v}(t)]\equiv\boldsymbol{0},\forall\ t\in[t_{a},t_{b}],\] (A11) which contradicts the PMP. Thus, \(\lambda_{v}(t)\) can only be zero at some isolated points. Remember that the optimal guidance law is still ambiguous for the case that \(\lambda_{v}(t)\) crosses zero and \(\lambda_{u}(t)>0\), as shown in Eq. (A8). Because a continuous function takes its global minimum at one of its local minima or at one of the endpoints of its feasible domain [42], we have \[\alpha^{*}=\operatorname*{argmin}_{\alpha\in\{-\phi_{max},\bar{\alpha},\phi_{max} \}}\mathscr{H}. \tag{A12}\] To further resolve the ambiguity in terms of \(\alpha\), rewrite Eq. (A1) as \[\mathscr{H}(\alpha,\mathbf{\lambda},\mathbf{x})=\mathscr{H}_{1}(\alpha,\mathbf{\lambda}, \mathbf{x})+\mathscr{H}_{2}(\mathbf{\lambda},\mathbf{x}), \tag{A13}\] where \(\mathscr{H}_{1}\) is the part of \(\mathscr{H}\) related to \(\alpha\), and \(\mathscr{H}_{2}\) is the rest of \(\mathscr{H}\), independent of \(\alpha\). Then, \(\mathscr{H}_{1}\) satisfies \[\mathscr{H}_{1}(\alpha,\mathbf{\lambda},\mathbf{x})=\frac{\beta}{r^{2}}(\lambda_{u} \cos^{3}\alpha+\lambda_{v}\sin\alpha\cos^{2}\alpha). \tag{A14}\] Recalling Eq. (3) and \(\phi_{max}\in(0,\frac{\pi}{2})\), we obtain \[\mathscr{H}_{1}(\frac{\pi}{2},\mathbf{\lambda},\mathbf{x})=\mathscr{H}_{1}(-\frac{ \pi}{2},\mathbf{\lambda},\mathbf{x})=0. \tag{A15}\] Substituting Eq. (A3) into Eq. (A14) yields \[\begin{cases}\mathscr{H}_{1}(\alpha_{1})=\frac{\beta}{4r^{2}}\cos^{3}\alpha_{1} \left(\lambda_{u}+\sqrt{9\lambda_{u}^{2}+8\lambda_{v}^{2}}\right)\geq 0,\\ \mathscr{H}_{1}(\alpha_{2})=\frac{\beta}{4r^{2}}\cos^{3}\alpha_{2}\left(\lambda_{u }-\sqrt{9\lambda_{u}^{2}+8\lambda_{v}^{2}}\right)\leq 0,\end{cases} \tag{A16}\] which indicates that \(\alpha_{1}\) and \(\alpha_{2}\) are the global maximum and global minimum points of \(\mathscr{H}\) in \([-\frac{\pi}{2},\frac{\pi}{2}]\), respectively. Define a variable \(\Delta\mathscr{H}\) as \[\Delta\mathscr{H}=\mathscr{H}_{1}(\phi_{max},\mathbf{\lambda},\mathbf{x})-\mathscr{H} _{1}(-\phi_{max},\mathbf{\lambda},\mathbf{x})=\frac{2\beta}{r^{2}}\cos^{2}\phi_{max} \sin\phi_{max}\lambda_{v}. 
\tag{A17}\] Without loss of generality, we consider the case that \(\lambda_{v}<0\), which leads to a negative local maximum point \(\alpha_{1}\) and a positive local minimum point \(\alpha_{2}\). If \(\alpha_{2}<\phi_{max}\), then \(\alpha_{2}\) is the global minimum point in \([-\phi_{max},\phi_{max}]\). If \(\alpha_{2}>\phi_{max}\), then \(\phi_{max}\) will be the global minimum in \([-\phi_{max},\phi_{max}]\) since \(\Delta\mathscr{H}<0\), as shown in Fig. 15. Hence, in view of Eq. (A12), we have that the optimal guidance law \(\alpha^{*}\) minimizing the Hamiltonian \(\mathscr{H}\) w.r.t. \(\alpha\) is \[\alpha^{*}=\text{Median}[-\phi_{max},\bar{\alpha},\phi_{max}],\text{where }\bar{\alpha}=\arctan\ \frac{-3\lambda_{u}-\sqrt{9\lambda_{u}^{2}+8\lambda_{v}^{2}}}{4\lambda_{v}}. \tag{A18}\] Proof of Lemma 2. Taking Eq. (17) into consideration, it is easy to see that the solution to the parameterized system in Eq. (15) at \(\tau=0\) represents a circular orbit with a radius of \(R_{0}\). Moreover, regarding Eq. (10), we have that Eq. (9) holds at \(\tau=0\) for the pair \((\lambda_{R_{0}},\lambda_{U_{0}})\) and \(\lambda_{V_{0}}\) determined by Eq. (19). By propagating the parameterized system in Eq. (15) for \(\tau\in[0,t_{f}]\), \(\mathcal{F}\) meets all the necessary conditions for an optimal trajectory. By the definition of \(\mathcal{F}_{p}\), it is obvious that \(\mathcal{F}_{p}\) represents the solution space of an optimal trajectory, and \(\tau\in[0,t_{f}]\) defines the time of flight. In other words, an optimal trajectory can be readily obtained simply by arbitrarily choosing two parameters \(\lambda_{R_{0}}\) and \(\lambda_{U_{0}}\) and solving an initial value problem governed by Eqs. (15) and (17).
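As a quick numerical sanity check of Lemma 1 (our addition, not part of the original development; NumPy is assumed and the parameter values are arbitrary), the median law of Eq. (A18) can be compared against a brute-force minimization of \(\mathscr{H}_{1}\) over a fine grid of admissible cone angles:

```python
import numpy as np

rng = np.random.default_rng(1)
beta, r, phi_max = 0.16892, 1.2, np.pi / 3          # arbitrary test values
grid = np.linspace(-phi_max, phi_max, 20001)

def H1(a, lu, lv):   # alpha-dependent part of the Hamiltonian, Eq. (A14)
    return beta / r**2 * (lu * np.cos(a)**3 + lv * np.sin(a) * np.cos(a)**2)

for _ in range(1000):
    lu, lv = 10 * rng.standard_normal(2)
    a_bar = np.arctan((-3*lu - np.sqrt(9*lu**2 + 8*lv**2)) / (4*lv))
    a_star = np.clip(a_bar, -phi_max, phi_max)      # Median[-phi, a_bar, phi]
    a_brute = grid[np.argmin(H1(grid, lu, lv))]
    assert abs(H1(a_star, lu, lv) - H1(a_brute, lu, lv)) < 1e-6
```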
2309.10987
SpikingNeRF: Making Bio-inspired Neural Networks See through the Real World
Spiking neural networks (SNNs) have been thriving on numerous tasks to leverage their promising energy efficiency and exploit their potentialities as biologically plausible intelligence. Meanwhile, Neural Radiance Fields (NeRF) render high-quality 3D scenes with massive energy consumption, but few works delve into energy-saving solutions with a bio-inspired approach. In this paper, we propose SpikingNeRF, which aligns the radiance ray with the temporal dimension of SNN, to naturally accommodate the SNN to the reconstruction of Radiance Fields. Thus, the computation turns into a spike-based, multiplication-free manner, reducing energy consumption. In SpikingNeRF, each sampled point on the ray is matched onto a particular time step, and represented in a hybrid manner where the voxel grids are maintained as well. Based on the voxel grids, sampled points are determined whether to be masked for better training and inference. However, this operation also incurs irregular temporal length. We propose the temporal padding strategy to tackle the masked samples and maintain regular temporal length, i.e., regular tensors, and the temporal condensing strategy to form a denser data structure for hardware-friendly computation. Extensive experiments on various datasets demonstrate that our method reduces energy consumption by 70.79% on average and obtains comparable synthesis quality with the ANN baseline.
Xingting Yao, Qinghao Hu, Tielong Liu, Zitao Mo, Zeyu Zhu, Zhengyang Zhuge, Jian Cheng
2023-09-20T01:04:57Z
http://arxiv.org/abs/2309.10987v3
# SpikingNeRF: Making Bio-inspired Neural Networks See through the Real World ###### Abstract Spiking neural networks (SNNs) have been thriving on numerous tasks to leverage their promising energy efficiency and exploit their potentialities as biologically plausible intelligence. Meanwhile, Neural Radiance Fields (NeRF) render high-quality 3D scenes with massive energy consumption, but few works delve into energy-saving solutions with a bio-inspired approach. In this paper, we propose SpikingNeRF, which aligns the radiance ray with the temporal dimension of SNN, to naturally accommodate the SNN to the reconstruction of radiance fields. Thus, the computation turns into a spike-based, multiplication-free manner, reducing energy consumption. In SpikingNeRF, each sampled point on the ray is matched to a particular time step, and represented in a hybrid manner where the voxel grids are maintained as well. Based on the voxel grids, sampled points are determined whether to be masked for better training and inference. However, this operation also incurs irregular temporal length. We propose the temporal padding strategy to tackle the masked samples and maintain regular temporal length, _regular tensors_, and the temporal condensing strategy to form a denser data structure for hardware-friendly computation. Extensive experiments on a variety of datasets demonstrate that our method reduces energy consumption by 70.79% on average and obtains comparable synthesis quality with the ANN baseline. ## 1 Introduction Spiking neural networks (SNNs) are considered the third generation of neural networks, and their bionic modeling attracts much research attention to explore the prospective biological intelligence that supports multiple tasks as the human brain does[27, 35]. Although much dedication has been devoted to SNN research, the gap between the expectation of SNNs boosting a wider range of intelligent tasks and the fact that artificial neural networks (ANNs) dominate deep learning in the majority of tasks still exists. Recently, more research interest has been invested to shrink the gap, acquiring notable achievements in various tasks, including image classification[49], object detection[46], graph prediction[52], natural language processing[51], _etc_. Beyond multi-task support, SNN research is also thriving in performance improvement and energy-efficiency exploration. However, we have not yet witnessed the establishment of SNNs in the real 3D reconstruction task with advanced performance. This naturally raises a question: _could bio-inspired spiking neural networks reconstruct the real 3D scene with an advanced quality at low energy consumption_? In this paper, we investigate the rendering of neural radiance fields with a spiking approach to answer this question. Figure 1: Comparisons of our SpikingNeRF with other NeRF-based works in synthesis quality and model rendering energy. Different colors represent different works, and our works with two different frameworks are denoted in red and violet, respectively. Detailed notation explanation is specified in Sec. 5. Different testing datasets are denoted in different shapes. We propose SpikingNeRF to reconstruct a volumetric scene representation from a set of images. Learning from the huge success of NeRF[31] and its follow-ups[2, 3, 33, 34, 37], we utilize the voxel grids and the spiking multilayer perceptron (sMLP) to jointly model the volumetric representations. 
In such hybrid representations, voxel grids explicitly store the volumetric parameters, and the spiking multilayer perceptron (sMLP) implicitly transforms the parameters into volumetric information, _i.e_., the density and color information of the radiance fields. After the whole model is established, we follow the classic ray-sample accumulation method of radiance fields to render the colors of novel scenes[29]. To accelerate training and inference, we use the voxel grids to predefine borders and free space, and mask those samples that fall outside the borders, lie in free space, or carry a low density value, as proposed in [37]. Inspired by the imaging process of the primate fovea in the retina, which accumulates the intensity flow over time to stimulate the photoreceptor cell[28, 39], we associate the accumulation process of rendering with the temporal accumulation process of SNNs, which ultimately stimulates the spiking neurons to fire. Guided by this insight, we align the ray with the temporal dimension of the sMLP, and match each sampled point on the ray to a time step one-to-one during rendering. Therefore, unlike the original rendering process, where all sampled points are queried separately to retrieve the corresponding volumetric information, sampled points in SpikingNeRF are queried sequentially along each ray, and the geometric consecutiveness of the ray is thus transformed into the temporal continuity of the SNN. As a result, SpikingNeRF captures the shared nature of both worlds and performs NeRF rendering in a spiking manner. However, the number of sampled points on different rays varies due to the aforementioned masking operation, which causes the temporal lengths of different rays to become irregular. Consequently, the querying for the color information can hardly be parallelized on graphics processing units (GPUs), severely hindering the rendering process. To solve this issue, we first investigate the temporal padding (TP) method to attain a regular temporal length within a querying batch, _i.e_., a regular-shaped tensor, thus ensuring parallelism. Furthermore, we propose temporal condensing-and-padding (TCP), which totally removes the temporal effect of masked points, to fully constrain the tensor size and condense the data distribution, which is hardware-friendly to domain-specific accelerators. Our thorough experimentation shows that TCP maintains the energy merits and good performance of SpikingNeRF, as shown in Fig. 1. To sum up, our main contributions are as follows:

* We propose SpikingNeRF, which aligns the radiance rays in NeRF with the temporal dimension of SNNs. To the best of our knowledge, this is the first work to accommodate spiking neural networks to the reconstruction of real 3D scenes.
* We propose TP and TCP to solve the irregular temporal lengths, ensuring training and inference parallelism on GPUs while keeping the computation hardware-friendly.
* Our experiments demonstrate the effectiveness of SpikingNeRF on four mainstream inward-facing 3D reconstruction tasks, achieving advanced energy efficiency as shown in Fig. 1. As a further example, SpikingNeRF-D can reduce rendering energy consumption by 72.95% with a 0.33 PSNR drop on Tanks&Temples.
## 2 Related work **NeRF-based 3D reconstruction.** Different from traditional 3D reconstruction methods that mainly rely on explicit and discrete volumetric representations, NeRF[31] utilizes a coordinate neural network to implicitly represent the 3D radiance field, and synthesizes novel views by accumulating the density and color information along view-dependent rays with the ray tracing algorithm[17]. With this new paradigm of 3D reconstruction, NeRF achieves huge success in improving the quality of novel view synthesis. Follow-up works further enhance the rendering quality[1, 6, 38], and many others focus on accelerating the training[7, 12, 37] or the rendering process[24, 34, 37, 45]. In contrast, we concentrate on exploring the potential integration of spike-based low-energy communication and NeRF-based high-quality 3D synthesis, seeking ways toward energy-efficient real 3D reconstruction. **Fast NeRF synthesis.** The accumulation process of rendering in NeRF[31] requires a huge number of MLP queries, which incurs heavy floating-point-operation and memory-access burdens, slowing down synthesis. Recent studies further combine traditional explicit volumetric representations, _e.g_., voxels[15, 26, 37] and MPIs[40], with MLP-dependent implicit representations to obtain more efficient volumetric representations. Thus, the redundant MLP queries for points in free space can be avoided. In SpikingNeRF, we adopt the voxel grids to mask the irrelevant points with low density, and discard unimportant points with low weights, thus reducing the synthesis overhead. **Spiking neural networks.** With their high sparsity and multiplication-free operation, SNNs outstrip ANNs in the competition of energy efficiency[5, 21, 23], but fall behind in task performance. Therefore, great efforts have been dedicated to making SNNs deeper[10, 48], converge faster[8, 42], and ultimately high-performing[49]. With the merits of energy efficiency and advanced performance, recent SNN research sheds light on exploring versatile SNNs, _e.g_., Spikeformer[49], Spiking GCN[52], SpikeGPT[51]. In this paper, we seize the analogous nature of NeRF and SNNs to make bio-inspired spiking neural networks reconstruct the real 3D scene with advanced quality at low energy consumption. **SNNs in 3D reconstruction.** Applying SNNs to 3D reconstruction has received very limited attention. As far as we know, only two works exist in this scope. EVSNN [50] is the first attempt to develop a deep SNN for the image reconstruction task, achieving performance comparable to an ANN. E2P [30] strives to solve the event-to-polarization problem and uses SNNs to achieve better polarization reconstruction performance. However, both focus solely on the reconstruction of event-based images with traditional methods, neglecting the rich RGB world. In contrast, we are the first to explore the reconstruction of the real RGB world with SNNs.
## 3 Preliminaries **Neural radiance field.** To reconstruct the scene for a given view, NeRF[31] first utilizes an MLP, which takes in the location coordinate \(\mathbf{p}\in\mathbb{R}^{3}\) and the view direction \(\mathbf{v}\in\mathbb{R}^{2}\) and yields the density \(\sigma\in\mathbb{R}\) and the color \(\mathbf{c}\in\mathbb{R}^{3}\), to implicitly maintain continuous volumetric representations: \[\mathbf{e},\sigma =MLP_{\theta}(\mathbf{p}), \tag{1}\] \[\mathbf{c} =MLP_{\gamma}(\mathbf{e},\mathbf{v}), \tag{2}\] where \(\theta\) and \(\gamma\) denote the parameters of the two separate parts of the MLP, and \(\mathbf{e}\) is the embedded features. Next, NeRF renders the pixel of the expected scene by casting a ray \(\mathbf{r}\) from the camera origin point in the direction of the pixel, then sampling \(K\) points along the ray. Through querying the MLP as in Eq. (1-2) \(K\) times, \(K\) color values and \(K\) density values can be retrieved. Finally, following the principles of the discrete volume rendering proposed in [29], the expected pixel RGB \(\hat{C}(\mathbf{r})\) can be rendered: \[\alpha_{i}=1-\exp(-\sigma_{i}\delta_{i}),\quad T_{i}=\prod_{j=1}^{i-1}(1-\alpha_{j}), \tag{3}\] \[\hat{C}(\mathbf{r})\approx\sum_{i=1}^{K}T_{i}\alpha_{i}\mathbf{c}_{i}, \tag{4}\] where \(\mathbf{c}_{i}\) and \(\sigma_{i}\) denote the color and the density values of the \(i\)-th point respectively, and \(\delta_{i}\) is the distance between the adjacent points \(i\) and \(i+1\). After rendering all the pixels, the expected scene is reconstructed. With the ground-truth pixel color \(C(\mathbf{r})\), the parameters of the MLP can be trained end-to-end by minimizing the MSE loss: \[\mathcal{L}=\frac{1}{|\mathcal{R}|}\sum_{r\in\mathcal{R}}\|\hat{C}(\mathbf{r})-C(\mathbf{r})\|_{2}^{2}, \tag{5}\] where \(\mathcal{R}\) is the mini-batch containing the sampled rays. **Hybrid volumetric representation.** The number of sampled points \(K\) in Eq. (4) is usually large, leading to the heavy MLP querying burden displayed in Eq. (1-2). To alleviate this problem, a voxel grid representation is utilized to contain the volumetric parameters directly, _e.g_., the embedded feature \(\mathbf{e}\) and the density \(\sigma\) in Eq. (1), as the values of the voxel grid. Thus, querying the MLP in Eq. (1) is replaced by querying the voxel grids and interpolating, which is much cheaper: \[\sigma =\mathrm{act}(\mathrm{interp}(\mathbf{p},\mathbf{V}_{\sigma})), \tag{6}\] \[\mathbf{e} =\mathrm{interp}(\mathbf{p},\mathbf{V}_{\mathbf{f}}), \tag{7}\] where \(\mathbf{V}_{\sigma}\) and \(\mathbf{V}_{\mathbf{f}}\) are the voxel grids related to the volumetric density and features, respectively. "interp" denotes the interpolation operation, and "act" refers to the activation function, _e.g_., ReLU or the shifted softplus[1]. Furthermore, those irrelevant points with low density or unimportant points with low weight can be **masked** through predefined thresholds \(\lambda\), and Eq. (4) turns into: \[A\triangleq\{i:T_{i}>\lambda_{1},\alpha_{i}>\lambda_{2}\}, \tag{8}\] \[\hat{C}(\mathbf{r})\approx\sum_{i\in A}T_{i}\alpha_{i}\mathbf{c}_{i}. \tag{9}\] Thus, the MLP queries for sampled points in Eq. (2) are significantly reduced. **Spiking neuron.** The spiking neuron is the most fundamental unit in spiking neural networks, and it fundamentally distinguishes SNNs from ANNs. The modeling of the spiking neuron commonly adopts the leaky integrate-and-fire (LIF) model: \[\mathbf{U}^{t} =\mathbf{V}^{t-1}+\frac{1}{\tau}(\mathbf{X}^{t}-\mathbf{V}^{t-1}+V_{reset}), \tag{10}\] \[\mathbf{S}^{t} =\mathbb{H}(\mathbf{U}^{t}-V_{th}), \tag{11}\] \[\mathbf{V}^{t} =\mathbf{U}^{t}\odot(1-\mathbf{S}^{t})+V_{reset}\mathbf{S}^{t}. \tag{12}\] Here, \(\odot\) denotes the Hadamard product. \(\mathbf{U}^{t}\) is the intermediate membrane potential at time-step \(t\) and can be updated through Eq. (10), where \(\mathbf{V}^{t-1}\) is the actual membrane potential at time-step \(t-1\) and \(\mathbf{X}^{t}\) denotes the input vector at time-step \(t\), _e.g_., the activation vector from the previous layer in MLPs. The output spike vector \(\mathbf{S}^{t}\) is given by the Heaviside step function \(\mathbb{H}(\cdot)\) in Eq. (11), indicating that a spike is fired when the membrane potential exceeds the potential threshold \(V_{th}\). Depending on whether a spike is fired at time-step \(t\), the membrane potential \(\mathbf{V}^{t}\) is set to \(\mathbf{U}^{t}\) or the reset potential \(V_{reset}\) through Eq. (12). Since the Heaviside step function \(\mathbb{H}(\cdot)\) is not differentiable, the surrogate gradient method is utilized to solve this issue, defined as: \[\frac{d\mathbb{H}(x)}{dx}=\frac{1}{1+\exp(-\alpha x)}, \tag{13}\] where \(\alpha\) is a predefined hyperparameter. Thus, spiking neural networks can be optimized end-to-end.
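A minimal PyTorch sketch of Eq. (10-13) may clarify the update rule; the tensor shapes, the value of \(\alpha\), and the function names are illustrative assumptions rather than the authors' released implementation:

```python
import torch

class SpikeFn(torch.autograd.Function):
    """Heaviside step in the forward pass (Eq. 11); the surrogate
    gradient dH/dx = sigmoid(alpha * x) of Eq. (13) in the backward pass."""
    alpha = 4.0  # illustrative choice; the paper leaves alpha as a hyperparameter

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return (x >= 0.0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        return grad_out * torch.sigmoid(SpikeFn.alpha * x)

def lif_step(x_t, v_prev, tau=2.0, v_th=1.0, v_reset=0.0):
    """One LIF update for a batch of neurons, following Eq. (10-12)."""
    u_t = v_prev + (x_t - v_prev + v_reset) / tau   # charge the membrane (Eq. 10)
    s_t = SpikeFn.apply(u_t - v_th)                 # fire when the threshold is crossed (Eq. 11)
    v_t = u_t * (1.0 - s_t) + v_reset * s_t         # hard reset for fired neurons (Eq. 12)
    return s_t, v_t
```

Iterating `lif_step` over the time dimension and backpropagating through `SpikeFn` is what "optimized end-to-end" amounts to in practice.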
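Both schemes reduce to a few tensor operations; a minimal sketch, with function names chosen here for illustration:

```python
import torch

def direct_encode(x, T):
    """Duplicate the real-valued input T times along a new, leading temporal dimension."""
    return x.unsqueeze(0).repeat(T, *([1] * x.dim()))

def poisson_encode(x, T):
    """Treat each input value in [0, 1] as a firing probability and
    draw an independent Bernoulli spike at every time step."""
    probs = direct_encode(x.clamp(0.0, 1.0), T)
    return torch.bernoulli(probs)

def mean_decode(out):
    """Mean decoding over the temporal dimension, the scheme adopted in this paper."""
    return out.mean(dim=0)
```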
## 4 Methodology ### Data encoding In this subsection, we explore two naive data encoding approaches for converting the input data to SNN-tailored formats, _i.e_., direct-encoding and Poisson-encoding. Both have proven to perform well in the direct learning of SNNs[4, 11, 14, 22, 36]. As described in Sec. 3, spiking neurons receive data with an additional dimension called the temporal dimension, which is indexed by the time-step \(t\) in Eq. (10-12). Consequently, original ANN-formatted data need to be encoded to fill the temporal dimension, as illustrated in Fig. 2. In the direct-encoding scheme, the original data is duplicated \(T\) times to fill the temporal dimension, where \(T\) represents the total length of the temporal dimension. The Poisson-encoding scheme, besides the duplication operation, perceives each input value as a probability and generates a spike according to that probability at each time step. Additionally, a decoding method is required for the subsequent rendering operations, and the mean [22] and the voting[11] decoding operations are commonly considered. We employ the former approach since the latter is designed for classification tasks[9, 42]. Figure 2: Conventional data encoding schemes. For direct-encoding, operation #1 is necessary: the input data is duplicated \(T\) times to fit the length of the temporal dimension. For Poisson-encoding, both operations #1 and #2 are utilized to generate the input spike train. The “Mean” or “Voting” operation decodes the SNN output. Thus, with the above two encoding methods, we build two naive versions of SpikingNeRF, and conduct experiments on various datasets to verify the feasibility. ### Time-ray alignment This subsection further explores the potential of accommodating the SNN to the NeRF rendering process in a more natural and novel way, where we retain the real-valued input data as direct-encoding does, but do not fill the temporal dimension with the duplication-based approach. We first consider the MLP querying process in the ANN philosophy. For an expected scene to reconstruct, the volumetric parameters of sampled points, _i.e_., \(\mathbf{e}\) and \(\mathbf{v}\) in Eq. (2), are packed as input data with the shape of \([batch,c_{e}]\) or \([batch,c_{v}]\), where \(batch\) represents the sample index and \(c\) is the channel index of the volumetric parameters. Thus, the MLP can query these data and output the corresponding color information in parallel. However, from the geometric view, the input data should be packed as \([ray,pts,c]\), where \(ray\) is the ray index and \(pts\) is the index of the sampled points. Obviously, the ANN-based MLP querying process cannot reflect such geometric relations between the ray and the sampled points. Then, we consider the computation modality of SNNs. As illustrated in Eq. (10-12), SNNs naturally entail the temporal dimension to process sequential signals. This means a spiking MLP naturally accepts input data with the shape of \([batch,time,c]\), where \(time\) is the temporal index. Therefore, we can reshape the volumetric parameters back to \([ray,pts,c]\), and intuitively match each sample along the ray to the corresponding time step: \[\begin{split}\text{Input}_{MLP}&:=[batch,c]\\ &\Rightarrow[ray,pts,c]\\ &\Rightarrow[batch,time,c]:=\text{Input}_{sMLP},\end{split} \tag{14}\] which is also illustrated in Fig. 3(b). Such an alignment does not require any input data pre-processing such as duplication[49] or Poisson generation[13], as prior arts commonly do. ### TCP Yet the masking operation on sampled points, as illustrated in Sec. 3, makes the time-ray alignment intractable. Although the masking operation improves the rendering speed and quality by curtailing the computation cost of redundant samples, it also causes the number of queried samples on different rays to be irregular, which means the reshape operation of Eq. (14), _i.e_., shaping into a regular tensor, is infeasible on GPUs after the masking operation. To ensure computation parallelism on GPUs, we first propose to retain the indices of the masked samples but discard their values. As illustrated in Fig. 3(c) Left, we arrange both unmasked and masked samples sequentially into the corresponding \(ray\)-indexed vector, and pad zeros into the vacant tensor elements. In this way, a regular-shaped input tensor is built. We refer to this simple approach as the temporal padding (TP) method. However, TP does not handle the masked samples effectively, because the padded zeros still get involved in the following computation and cause the membrane potential of the sMLP to decay, implicitly affecting the outcomes of the unmasked samples in the posterior segment of the ray. Even for a sophisticated hardware accelerator that can skip those zeros, the sparse data structure still causes computation inefficiency such as imbalanced workload[47]. To solve this issue, we design the temporal condensing-and-padding (TCP) scheme, illustrated in Fig. 3(c) Right. Different from TP, TCP completely discards the parameters and indices of the masked samples, and adjacently rearranges the unmasked sampled points into the corresponding \(ray\) vector. For the vacant tensor elements, zeros are filled as in TP. Consequently, valid data is condensed to the left side of the tensor. Notably, the \(ray\) dimension can be sorted according to the number of valid samples to further increase the density. As a result, TCP fully eliminates the impact of the masked samples and makes SpikingNeRF more hardware-friendly.
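The difference between the two packing schemes can be sketched in a few lines of PyTorch; the function below is an illustrative reading of Fig. 3(c), not the released code:

```python
import torch

def pack_rays(feats, keep, condense):
    """Pack per-ray samples into a regular [ray, time, c] tensor.

    feats: list of [n_i, c] tensors, the sampled features of ray i
    keep:  list of [n_i] boolean masks from Eq. (8)
    condense=False -> TP:  masked slots keep their time index but carry zeros
    condense=True  -> TCP: masked samples are dropped and valid ones left-packed
    """
    if condense:
        feats = [f[k] for f, k in zip(feats, keep)]  # discard masked samples entirely
    T = max(f.shape[0] for f in feats)               # regular temporal length of the batch
    out = torch.zeros(len(feats), T, feats[0].shape[1])
    for r, f in enumerate(feats):
        if condense:
            out[r, : f.shape[0]] = f                                 # valid data condensed left
        else:
            out[r, : f.shape[0]] = f * keep[r].unsqueeze(1).float()  # zeros at masked positions
    return out
```

Under TCP the surviving samples are left-packed, so the resulting tensor is both smaller and denser than the TP one, which is what makes it friendlier to accelerators.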
Therefore, we choose TCP as the main proposed method for the following study. ### Temporal flip Moreover, with the alignment between the temporal dimension and the camera ray, the samples on the same ray are queried by the sMLP sequentially rather than in parallel. This raises an issue: which querying direction is better for SpikingNeRF, the camera ray direction or the opposite one? In this work, we rely on empirical experiments to choose the querying direction. As illustrated in Fig. 4, the temporal flip operation is utilized to make the directions of the temporal dimension and the camera ray opposite. The experimental results in Sec. 5.3 indicate that the direction of the camera ray is better for SpikingNeRF. Therefore, we choose this direction in our main method. Figure 4: Temporal flip. The direction of the temporal dimension is consistent with the ray #1 but opposite to the ray #2. “Pl.” is the abbreviation of “point”. Figure 3: Overview of the proposed SpikingNeRF. (a) The rendering process of SpikingNeRF. The whole 3D volumetric parameters are stored in the voxel grids. The irrelevant or unimportant samples are masked before the sMLP querying. The expected scenes are rendered with the volumetric information yielded by the sMLP. (b) Alignment between the temporal dimension and the ray. The sMLP queries each sampled point step-by-step to yield the volumetric information. (c) Proposed temporal padding (left) and temporal condensing-and-padding (right) methods. For simplification, the channel length of the volumetric parameters is set to 1. ### Overall algorithm This section summarizes the overall algorithm of SpikingNeRF based on the DVGO[37] framework; the pseudocode is given in Algorithm 1. As illustrated in Fig. 3(a), SpikingNeRF first establishes the voxel grids filled with learnable volumetric parameters. In the case of the DVGO implementation, two groups of voxel grids are built as the input of Algorithm 1: the density and the feature voxel grids. Given an expected scene with \(N\) pixels to render, Step 1 samples \(M\) points along each ray shot from the camera origin toward each pixel. With the \(N\times M\) sampled points, Step 2 queries the density grids to compute the weight coefficients, and Step 3 uses these coefficients to mask the irrelevant points. Then, Step 4 queries the feature grids for the filtered points and returns each point's volumetric parameters. Steps 5 and 6 prepare the volumetric parameters in a receivable data format for the \(sMLP\) with TP or TCP. Steps 7 and 8 compute the RGB values for the expected scene. If a backward process is required, Step 9 calculates the MSE loss between the expected and the ground-truth scenes. Notably, the proposed methods can also be applied to other voxel-grid based NeRF frameworks, _e.g_., TensoRF[2] and NSVF[25], since the core idea is to replace the MLP with the sMLP and exploit the proposed TCP approach. Therefore, we also implement SpikingNeRF on TensoRF for further verification, and defer the corresponding implementation details to the appendix. ``` 0: The density and the feature voxel grids \(V_{\sigma}\) and \(V_{f}\), the spiking MLP \(sMLP(\cdot)\), the view direction of the camera \(v\), the rays from the camera origin to the directions of \(N\) pixels of the expected scene \(R_{\{N\}}=\{r_{1},r_{2},...,r_{N}\}\), the number of sampled points per ray \(M\), the ground-truth RGB \(\mathbf{C}=\{C_{1},C_{2},...,C_{N}\}\) of the expected \(N\) pixels.
0: The expected RGB \(\mathbf{\hat{C}}=\{\hat{C}_{1},\hat{C}_{2},...,\hat{C}_{N}\}\) of the expected \(N\) pixels, the training loss \(\mathcal{L}\). 1: The location coordinates of the sampled points \(P_{\{N\times M\}}=\{p_{1,1},p_{1,2},...,p_{N,M}\}\gets Sample(R)\). 2:\(\alpha_{\{N\times M\}},T_{\{N\times M\}}\gets Weigh(P,V_{\sigma})\) as in Eq. (6) and (3). 3: Filtered coordinates \(P^{\prime}\gets Mask(P,\alpha,T)\) as in Eq. (8). 4: Input\({}_{MLP}\gets QueryFeatures(P^{\prime},V_{f},v)\) as described in Eq. (7) and (2). 5: The temporal length \(T\leftarrow\) the maximum point number among the batched rays. 6: Input\({}_{sMLP}\leftarrow\) the TP or TCP transformation on Input\({}_{MLP}\) as described in Sec. 4.3 and Eq. (14). 7: The RGB values \(c_{\{N,T\}}\gets sMLP(\text{Input}_{sMLP})\). 8:\(\mathbf{\hat{C}}\gets Accumulate(P^{\prime},\alpha,T,c)\) as in Eq. (9). 9:\(\mathcal{L}\gets MSE(\mathbf{C},\mathbf{\hat{C}})\) as in Eq. (5). ``` **Algorithm 1** Overall algorithm of the DVGO-based SpikingNeRF (SpikingNeRF-D) in the rendering process. ## 5 Experiments In this section, we demonstrate the effectiveness of our proposed SpikingNeRF. We first build the code on the voxel-grid based DVGO framework[37]1. Secondly, we compare the proposed SpikingNeRF with the original DVGO method along with other NeRF-based works in both rendering quality and energy cost. Then, we conduct ablation studies to evaluate different aspects of our proposed methods. Finally, we extend SpikingNeRF to other 3D tasks, including unbounded inward-facing and forward-facing datasets, and to the TensoRF framework[2]2. The extension results on other 3D tasks are deferred to the appendix. For clarity, we term the DVGO-based SpikingNeRF as **SpikingNeRF-D** and the TensoRF-based as **SpikingNeRF-T**. Footnote 1: [https://github.com/sunset1995/DirectVoxGO](https://github.com/sunset1995/DirectVoxGO) Footnote 2: [https://apchenstu.github.io/TensoRF](https://apchenstu.github.io/TensoRF) ### Experiment settings We conduct experiments mainly on four inward-facing datasets, including Synthetic-NeRF[31], which contains eight objects synthesized from realistic images, Synthetic-NSVF[25], which contains eight objects synthesized by NSVF, BlendedMVS[44], with authentic ambient lighting obtained by blending real images, and the Tanks&Temples dataset[18], which is a real-world dataset. We refer to **DVGO as the ANN counterpart to SpikingNeRF-D**, and keep all the hyper-parameters consistent with the original DVGO implementation[37], except for the training iteration in the fine stage being set to 40000. The grid resolutions for all the above datasets are set to \(160^{3}\). We also refer to **TensoRF as the ANN counterpart to SpikingNeRF-T**. In terms of the energy computation, we follow the prior arts[16, 19, 20, 43, 49] to estimate the theoretical rendering energy cost in most of our experiments, except for those in Tab. 3, whose results are produced by SpikeSim[32]. More details about SpikingNeRF-T and the SpikeSim evaluation are specified in the appendix. ### Comparisons **Quantitative evaluation on the synthesized novel view.** As shown in Tab. 1, our SpikingNeRF-D achieves 70.79\(\pm\)1.2% energy saving with a 0.53\(\pm\)0.19 PSNR drop on average over the ANN counterpart. Such a trade-off between synthesis quality and energy cost is reasonable because a significant part of inference is conducted with the addition operations in the sMLP of SpikingNeRF-D rather than the floating-point multiply-accumulate operations in the original DVGO.
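The theoretical estimation convention referenced above boils down to counting operations and weighting them with per-operation energy costs. A minimal sketch, assuming the 45 nm CMOS figures (roughly 4.6 pJ per 32-bit floating-point multiply-accumulate and 0.9 pJ per accumulate) commonly adopted in the SNN literature; the constants and the counting scheme are assumptions here, not values taken from this paper:

```python
E_MAC = 4.6e-12  # J per 32-bit float multiply-accumulate (45 nm CMOS estimate)
E_AC = 0.9e-12   # J per 32-bit float accumulate

def ann_energy(n_ops):
    """ANN layers: every synaptic operation is a full multiply-accumulate."""
    return n_ops * E_MAC

def snn_energy(n_ops, firing_rate):
    """SNN layers: binary spikes gate additions, so only fired synapses cost an accumulate."""
    return n_ops * firing_rate * E_AC
```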
On the one hand, compared with the methods[1, 6, 31] that do not perform the masking operation, SpikingNeRF-D reaches orders of magnitude lower energy consumption. On the other hand, compared with the methods[2, 25, 37, 41] that significantly exploit the masking operation, SpikingNeRF-D still obtains better energy efficiency and comparable synthesis quality. Moreover, SpikingNeRF-T also reduces energy consumption by 62.80\(\pm\)3.9% with a 0.69\(\pm\)0.23 PSNR drop on average. Notably, SpikingNeRF-T only uses two FC layers, as TensoRF does. One layer encodes data with full precision, and the other spikes with binary computation, so only half of the computation burden is tackled with addition operations. This accounts for why SpikingNeRF-T performs a little worse than SpikingNeRF-D in energy reduction ratio. In conclusion, these results demonstrate the effectiveness of our proposed SpikingNeRF in improving energy efficiency. **Qualitative comparisons.** Fig. 5 compares SpikingNeRF-D with its ANN counterpart on three different scenes. Basically, SpikingNeRF-D shares analogous issues with the ANN counterpart regarding texture distortion and blur. ### Ablation study **Feasibility of the conventional data encodings.** As described in Sec. 4.1, we propose two naive versions of SpikingNeRF-D that adopt two different data encoding schemes: direct-encoding and Poisson-encoding. Tab. 2 lists the experimental results. \begin{table} \begin{tabular}{l|c c c|c c c|c c c|c c c} \hline \hline Dataset & \multicolumn{3}{c|}{Synthetic-NeRF} & \multicolumn{3}{c|}{Synthetic-NSVF} & \multicolumn{3}{c|}{BlendedMVS} & \multicolumn{3}{c}{Tanks\&Temples} \\ Metric & PSNR\(\uparrow\) & SSIM\(\uparrow\) & Energy\(\downarrow\) & PSNR\(\uparrow\) & SSIM\(\uparrow\) & Energy\(\downarrow\) & PSNR\(\uparrow\) & SSIM\(\uparrow\) & Energy\(\downarrow\) & PSNR\(\uparrow\) & SSIM\(\uparrow\) & Energy\(\downarrow\) \\ \hline NeRF[31] & 31.01 & 0.947 & 4.5e5 & 30.81 & 0.952 & 4.5e5 & 24.15 & 0.828 & 3.1e5 & 25.78 & 0.864 & 1.4e6 \\ Mip-NeRF[1] & 33.09 & 0.961 & 4.5e5 & - & - & - & - & - & - & - & - & - \\ JaxNeRF[6] & 31.65 & 0.952 & 4.5e5 & - & - & - & - & - & - & 27.94 & 0.904 & 1.4e6 \\ NSVF[25] & 31.74 & 0.953 & 16801 & 35.13 & 0.979 & 9066 & 26.90 & 0.898 & 15494 & 28.40 & 0.900 & 103753 \\ DIVeR[41] & 32.32 & 0.960 & 343.96 & - & - & - & 27.25 & 0.910 & 548.65 & 28.18 & 0.912 & 1930.67 \\ DVGO*[37] & 31.98 & 0.957 & 374.72 & 35.12 & 0.976 & 187.85 & **28.15** & **0.922** & 320.66 & 28.42 & 0.912 & 2147.86 \\ TensoRF*[2] & **33.14** & **0.963** & 641.17 & **36.74** & **0.982** & 465.09 & - & - & - & **28.50** & **0.920** & 2790.03 \\ \hline SpikingNeRF-D & 31.34 & 0.949 & **110.80** & 34.33 & 0.970 & **56.69** & 27.80 & 0.912 & **96.37** & 28.09 & 0.896 & **581.04** \\ SpikingNeRF-T & 32.45 & 0.956 & 240.81 & 35.76 & 0.978 & 149.98 & - & - & - & 28.09 & 0.904 & 1165.90 \\ \hline \hline \end{tabular} * denotes an ANN counterpart implemented by the official codes. \end{table} Table 1: Comparisons with the ANN counterpart and other NeRF-based methods.
\begin{table} \begin{tabular}{l|c c c|c c c|c c c|c c c} \hline \hline Dataset & \multicolumn{3}{c|}{Synthetic-NeRF} & \multicolumn{3}{c|}{Synthetic-NSVF} & \multicolumn{3}{c|}{BlendedMVS} & \multicolumn{3}{c}{Tanks\&Temples} \\ Metric & PSNR\(\uparrow\) & SSIM\(\uparrow\) & Energy(mJ)\(\downarrow\) & PSNR\(\uparrow\) & SSIM\(\uparrow\) & Energy(mJ)\(\downarrow\) & PSNR\(\uparrow\) & SSIM\(\uparrow\) & Energy(mJ)\(\downarrow\) & PSNR\(\uparrow\) & SSIM\(\uparrow\) & Energy(mJ)\(\downarrow\) \\ \hline \multicolumn{13}{l}{SpikingNeRF-D with time-step 1.} \\ \hline Direct & 31.22 & 0.947 & 113.03 & 34.17 & 0.969 & 53.73 & 27.78 & 0.911 & 92.70 & 27.94 & 0.893 & 446.31 \\ Poisson & 22.03 & 0.854 & **49.61** & 24.83 & 0.893 & **29.64** & 20.74 & 0.759 & **58.04** & 21.53 & 0.810 & **377.92** \\ TRA & **31.34** & **0.949** & 110.80 & **34.33** & **0.970** & 56.69 & **27.80** & **0.912** & 96.37 & **28.09** & **0.896** & 581.04 \\ \hline \multicolumn{13}{l}{SpikingNeRF-D with time-step 2.} \\ \hline Direct & **31.51** & **0.951** & 212.20 & **34.49** & **0.971** & 104.05 & **27.90** & **0.915** & 172.26 & **28.21** & **0.900** & 1076.82 \\ Poisson & 21.98 & 0.855 & **91.97** & 24.83 & 0.893 & **55.58** & 20.74 & 0.759 & 107.77 & 21.57 & 0.814 & 712.24 \\ TRA & 31.34 & 0.949 & 110.80 & 34.33 & 0.970 & 56.69 & 27.80 & 0.912 & **96.37** & 28.09 & 0.896 & **581.04** \\ \hline \multicolumn{13}{l}{SpikingNeRF-D with time-step 4.} \\ \hline Direct & **31.55** & **0.951** & 436.32 & **34.56** & **0.971** & 217.86 & **27.98** & **0.917** & 358.82 & **28.23** & **0.901** & 2296.22 \\ Poisson & 21.90 & 0.856 & 147.49 & 24.83 & 0.893 & 94.36 & 20.74 & 0.759 & 184.82 & 21.60 & 0.818 & 1138.89 \\ TRA & 31.34 & 0.949 & **110.80** & 34.33 & 0.970 & **56.69** & 27.80 & 0.912 & **96.37** & 28.09 & 0.896 & **581.04** \\ \hline \hline \end{tabular} * **TRA** denotes the time-ray alignment with TCP. \end{table} Table 2: Ablation study with the naive versions. On the one hand, direct-encoding obtains good synthesis performance with only one time step, and can achieve higher PSNR as the time step increases, but the energy cost also grows linearly with the time step. On the other hand, Poisson-encoding achieves a lower energy cost, but its synthesis quality is far from acceptable. These results indicate that direct-encoding shows good feasibility in accommodating SNNs to NeRF-based rendering. **Effectiveness of the time-ray alignment.** To demonstrate the effectiveness of the proposed time-ray alignment, we further compare SpikingNeRF-D with the direct-encoding. In Tab. 2, SpikingNeRF-D achieves the same-level energy cost as the direct-encoding with time-step 1, while the synthesis quality is consistently better across all four datasets, which indicates that SpikingNeRF offers a better trade-off between energy consumption and synthesis performance. **Advantages of temporal condensing.** To demonstrate the advantages of the proposed temporal condensing on hardware accelerators, as described in Sec. 4.3, we evaluate TCP and TP on SpikeSim with the SpikeFlow architecture. As listed in Tab. 3, TCP consistently outperforms TP in both inference latency and energy overhead by a significant margin over the four datasets. Specifically, on Synthetic-NSVF and BlendedMVS, the gap between TCP and TP is at least an order of magnitude in both inference speed and energy cost. These results demonstrate the effectiveness of a denser data structure in improving computation efficiency on hardware accelerators. **Importance of the alignment direction.** As described in Sec. 4.4, we propose temporal flip to empirically decide the alignment direction, since the querying direction of the sMLP along the camera ray affects the inference outcome. Tab. 4 lists the experimental results of SpikingNeRF-D with and without temporal flip, _i.e_., with the consistent and the opposite directions.
Distinctly, keeping the direction of the temporal dimension consistent with that of the camera ray outperforms the opposite case over all four datasets in both synthesis performance and energy efficiency. A similar conclusion can also be drawn from the experiments on the TP-based SpikingNeRF-D shown in the appendix. Therefore, the consistent alignment direction is important in SpikingNeRF. \begin{table} \begin{tabular}{l|c c|c c|c c|c c} \hline \hline Dataset & \multicolumn{2}{c|}{Synthetic-NeRF} & \multicolumn{2}{c|}{Synthetic-NSVF} & \multicolumn{2}{c|}{BlendedMVS} & \multicolumn{2}{c}{Tanks\&Temples} \\ SpikingNeRF-D & w/ TCP & w/ TP & w/ TCP & w/ TP & w/ TCP & w/ TP & w/ TCP & w/ TP \\ \hline PSNR\(\uparrow\) & 31.34 & 31.34 & **34.34** & 34.33 & 27.80 & 27.80 & **28.09** & 28.01 \\ Latency(s) & **26.12** & 222.22 & **13.37** & 164.61 & **22.70** & 243.93 & **138.98** & 980.28 \\ Energy\({}^{+}\)(mJ) & **65.78** & 559.45 & **33.68** & 414.37 & **57.16** & 614.13 & **350.03** & 2468.16 \\ \hline \hline \end{tabular} \(+\) denotes the energy result particularly produced by SpikeSim. \end{table} Table 3: Comparisons between TCP and TP. Figure 5: Qualitative comparisons on different challenging parts. **Top:** On _Character_ from BlendedMVS, where the color changes densely and intensely. **Middle:** On _Ignatius_ from Tanks\&Temples, where the textures are distinct and dense. **Bottom:** On _Truck_ from Tanks\&Temples, where detailed information is explicitly displayed. Table 4: Comparisons with temporal flip. ## 6 Conclusion In this paper, we propose SpikingNeRF, which accommodates spiking neural networks to the reconstruction of real 3D scenes for the first time to improve energy efficiency. In SpikingNeRF, we adopt the direct-encoding that directly inputs the volumetric parameters into SNNs, and align the temporal dimension with the camera ray in the consistent direction, combining SNNs with NeRF-based rendering in a natural and effective way. Furthermore, we propose TP to solve the irregular tensors and TCP to condense the valid data. Finally, we validate our SpikingNeRF on various 3D datasets and a hardware benchmark, and also extend it to a different NeRF-based framework, demonstrating the effectiveness of our proposed methods. Despite the gain in energy efficiency, the spike-based computation still incurs a performance decrease. Moreover, we only show the usage of our methods in voxel-grid based NeRF. It would also be interesting to extend SpikingNeRF to other 3D reconstruction paradigms, _e.g_., point-based reconstruction.
2309.15592
Pulsar Classification: Comparing Quantum Convolutional Neural Networks and Quantum Support Vector Machines
Well-known quantum machine learning techniques, namely quantum kernel assisted support vector machines (QSVMs) and quantum convolutional neural networks (QCNNs), are applied to the binary classification of pulsars. In this comparative study it is illustrated with simulations that both quantum methods successfully achieve effective classification of the HTRU-2 data set that connects pulsar class labels to eight separate features. QCNNs outperform the QSVMs with respect to the time taken to train and predict; however, if the current NISQ-era devices are considered and noise is included in the comparison, then QSVMs are preferred. QSVMs also perform better overall compared to QCNNs when performance metrics are used to evaluate both methods. Classical methods are also implemented to serve as a benchmark for comparison with the quantum approaches.
Donovan Slabbert, Matt Lourens, Francesco Petruccione
2023-09-27T11:46:57Z
http://arxiv.org/abs/2309.15592v1
Pulsar Classification: Comparing Quantum Convolutional Neural Networks and Quantum Support Vector Machines ###### Abstract Well-known quantum machine learning techniques, namely quantum kernel assisted support vector machines (QSVMs) and quantum convolutional neural networks (QCNNs), are applied to the binary classification of pulsars. In this comparative study it is illustrated with simulations that both quantum methods successfully achieve effective classification of the HTRU-2 data set that connects pulsar class labels to eight separate features. QCNNs outperform the QSVMs with respect to the time taken to train and predict; however, if the current NISQ-era devices are considered and noise is included in the comparison, then QSVMs are preferred. QSVMs also perform better overall compared to QCNNs when performance metrics are used to evaluate both methods. Classical methods are also implemented to serve as a benchmark for comparison with the quantum approaches. ## I Introduction In the field of astronomy, where vast amounts of observational data of various types of astronomical objects are collected annually, cataloging these objects into their respective categories is crucial. This comparative study focuses on pulsars, which are a type of neutron star formed when a massive star reaches the end of its life and collapses under gravity, though not enough to overcome neutron degeneracy pressure [1, 2, 3]. This results in a dense stellar remnant with a significantly smaller radius and rapid rotation, with periods often on the order of seconds to milliseconds [4]. Pulsars possess strong magnetic fields that emit particle beams from their poles. If Earth happens to intercept these beams, we observe periodic pulses of observable signals, typically measured using radio telescopes. Neutron stars exhibiting this periodic signal are known as pulsars. Identifying pulsars is essential for distinguishing them from typical main sequence stars and other neutron stars, as binary systems consisting of two pulsars can generate gravitational waves [5, 6]. Pulsars' precise rotational periods also make them useful for detecting gravitational waves passing between them and Earth, by observing deviations in their pulses [7, 8, 9]. This capability enhances our ability to study gravitational waves. With the increasing volume of astronomical data due to improving observational technology, traditional software will eventually struggle to manage and process it effectively. The Square Kilometer Array (SKA), set to come online soon, is expected to produce data on the order of exabytes (\(10^{18}\) bytes) [10]. Real-time data analysis and efficient classification are the ultimate goals in astronomy and data science. While current classical classification methods suffice for now, the problem will become insurmountable as data sizes continue to grow. It is for this reason that machine learning has been thoroughly explored as a solution to this challenge. Classical machine learning has been applied to pulsar classification [11, 12, 13, 14, 15, 16, 17]; however, since classical approaches face limitations with increasing data sizes [18], the problem might have to be approached from a different perspective. Quantum computing (QC), particularly quantum machine learning (QML), which combines the computational power of quantum computers with machine learning, offers such a new perspective. We endeavour to explore the applicability of quantum computers to the machine learning task of classifying pulsars. We do this by comparing two quantum approaches.
Quantum methods (one of which is also called a QSVM) have been used on the HTRU-2 data set before [19]. Another known quantum approach [20] uses a circuit with a depth that would introduce a lot of noise. We improve over the accuracy presented in that case [20] with shorter-depth circuits. Our study uses QML approaches with lower-depth circuits to effectively and accurately classify pulsars in the real-world HTRU-2 data set, using noise-free and noisy simulated quantum algorithms as well as a limited implementation on currently available NISQ devices. The HTRU-2 data has been manually created by experts from a large number of pulsar candidates [21]. This means that the problem falls under the category of supervised machine learning [22; 23], in that we have a ground truth to use in training, but this ground truth was decided on by human experts. There are eight features present in the data set for each candidate. Each of the eight features is encoded into a quantum state using rotational encoding. The quantum approaches under consideration are a basic 8-qubit implementation of a quantum-assisted support vector machine (QSVM) and an 8-qubit quantum convolutional neural network (QCNN). Both approaches will be compared with each other as well as with their classical counterparts. The two methods implemented are variants of commonly used QML archetypes, namely quantum kernel methods and variational quantum algorithms. The goal is to investigate which is currently more suited to real-world binary classification; the variants chosen for this investigation are the QSVM and the QCNN. In [24] the assertion is made that variational quantum algorithms can also be considered quantum kernel methods, but the difference in this variational approach is the use of hierarchical ideas inspired by classical convolutional neural networks, and the training procedure. The QSVM evaluates the kernel using a quantum device, followed by fitting a classical SVM, whereas the QCNN relies on a classical optimization loop to optimize the parameters used in its parameterized circuit. There are theoretical guarantees associated with kernel methods [24], and it is against these guarantees that we want to compare the QCNN, with its more easily manipulated quantum circuit. We show below how the QCNN, despite its larger circuit, outperforms the QSVM with respect to the time taken to train the model and to use it for prediction. When standard confusion matrices [25] and evaluation metrics [26] are used, the best method is less apparent. The QSVM generally performs slightly better depending on the metric used; moreover, of the two methods, the QSVM avoids the most false negatives. In this discovery-focused use case, avoiding false negatives is critical. A short noise test illustrates the expected point that deeper quantum circuits accumulate more noise, because there is an error associated with each quantum logic gate applied [27]. With noise included in the comparison, the QSVM performs better because of its shorter depth. The rest of the paper is structured as follows: Section 2 explains binary classification as a supervised machine learning task and contains the necessary information regarding the two methods that are implemented. Section 3 explains how the HTRU-2 data set is used and how data preprocessing was performed; the specific methodology used for both approaches is also discussed there. Section 4 serves as a comprehensive comparison between the two methods.
This includes noise-free and noisy simulated results as well as limited real quantum device results. Section 5 is a short concluding discussion. ## II Theory ### Binary Classification Binary classification is a supervised machine learning problem where samples in a data set are categorized into one of two classes, in this case as being a pulsar or not. The following is the formal problem statement for binary classification [28]: \[f_{\theta}:\mathbf{X}\rightarrow\mathbf{Y},f_{\theta}(\vec{x})=y,\vec{x}\in\mathbf{X}\:\text{and}\:y\in\mathbf{Y}, \tag{1}\] where \(\mathbf{X}\) and \(\mathbf{Y}\) are the input and output domains respectively, \(\vec{x}\) is an \(n\)-dimensional input vector (also called a feature vector) that contains \(n\) entries or features, and \(y\) is a class label from the set \(\{0,1\}\), where a label of \(1\) indicates the positive class (pulsars) and \(0\) the negative class. The function \(f_{\hat{\theta}}\) is the optimal model from a model family, or set of models \(\{f_{\theta}\}\), and the subscript \(\theta\) indicates that this function has a set of parameters that need to be optimised during training. Training involves the use of a subset of both \(\mathbf{X}\) and \(\mathbf{Y}\), called the training set \(\mathbf{X}_{\text{train}}\) and \(\mathbf{Y}_{\text{train}}\), during an optimisation or fitting process to find the best model from a model family that predicts the testing set \(\mathbf{X}_{\text{test}}\) and \(\mathbf{Y}_{\text{test}}\) labels as accurately as possible. Based on the predictions made, cross-referenced with the original labels, the output can be formulated as a confusion matrix, which is a \(2\times 2\) array containing the prediction outcomes. Confusion matrices are useful for assessing how well a model predicts on new samples by visually representing the distribution of predictions. Once a model has been trained that performs at the required level, it can be used to predict on previously unseen data, where no prior knowledge of the labels is possible. Various metrics are chosen to quantify how well a model performs at prediction and are calculated using values extracted from the confusion matrices. In our case it is more important to identify possible pulsar candidates than to find true non-pulsars. This is to aid the discovery of new pulsars, which implies that true positives, false positives and false negatives are all of interest, since both the false positives and false negatives still have a probability of being a pulsar. It is, however, important to minimize the number of false negatives, since the cost of misclassifying a pulsar is larger than that of misclassifying a non-pulsar. This ties in with the fact that all positive predictions are double-checked after prediction. The true negatives are of the least interest. Eight metrics are included and are defined in Table 1 [29; 26]. Accuracy measures the proportion of correctly classified samples when compared to the known labels. Recall, also known as sensitivity, measures the proportion of positive cases identified as positive. Specificity is the equivalent of recall for the negative class. Precision measures how many of the positively identified samples were in fact actual positives, and the negative prediction value (NPV) does the same for the negative predictions. The higher both the recall and precision are, the better the model is at predicting the positive class. Three more commonly used metrics are also included: balanced accuracy, geometric mean, and informedness. Balanced accuracy is the average of recall and specificity, supplying a way to move away from a possibly inflated accuracy value if imbalanced testing sets lead to biased prediction. Geometric mean also compensates for imbalanced data, but the focus here is placed on the training set specifically. Our training sets will be balanced, but we include the geometric mean for completeness. Finally, informedness provides another measure of how well the model predicts both the positive and negative classes. \begin{table} \begin{tabular}{l c} \hline \hline **Metric** & **Equation** \\ \hline Accuracy & \(\frac{\operatorname{TP+TN}}{\operatorname{TP+TN+FP+FN}}\) \\ Recall & \(\frac{\operatorname{TP}}{\operatorname{TP+FN}}\) \\ Specificity & \(\frac{\operatorname{TN}}{\operatorname{TN+FP}}\) \\ Precision & \(\frac{\operatorname{TP}}{\operatorname{TP+FP}}\) \\ Negative Prediction Value & \(\frac{\operatorname{TN}}{\operatorname{TN+FN}}\) \\ Balanced Accuracy & \(\frac{1}{2}(\operatorname{Recall+Specificity})\) \\ Geometric Mean & \(\sqrt{\operatorname{Recall\cdot Specificity}}\) \\ Informedness & \(\operatorname{Recall+Specificity-1}\) \\ \hline \hline \end{tabular} \end{table} Table 1: The expressions for the evaluation metrics used. TP, TN, FP, and FN represent the true positives, true negatives, false positives, and false negatives respectively after using a binary classification model on a testing set. These values form part of confusion matrices. The best model would be the one that maximizes performance targeted at the positive class (recall and precision), while avoiding as many false negatives as possible (NPV). ### Quantum Kernel Assisted Support Vector Machine Support vector machines are designed as maximal margin classifiers where the data is classified into two classes by a linear decision boundary [30], after calculating a kernel function value, which is simply a chosen distance measure between two feature vectors. Depending on the dimension of the problem, this separating boundary is usually a separating hyperplane and is found by specifying two parameters: the normal vector of the plane and a scalar number, which together determine the offset of the plane. The margin is the distance between the closest two data points from opposing classes, and the support vectors are the most important data points used when determining the hyperplane, as they are the closest points to the plane and have the highest impact on how the plane should be oriented. This assumes that the data is linearly separable in the \(n\)-dimensional input space. If linear separability is impossible, then a feature map \(\phi(\vec{x})\) can be used to map or embed the feature vectors in the input space \(\mathbf{X}\) into a higher-dimensional feature space \(\mathbf{F}\), where they may become linearly separable. In feature space, the kernel function is typically an inner product of two mapped feature vectors, and because of the kernel trick [31, 32], only the kernel function needs to be evaluated, instead of processing the individual vectors in feature space. The quantum version of SVMs uses data embedding to encode the data features into quantum mechanical states that can be prepared on quantum devices [33, 34, 35]. In this way the feature map maps the data into a feature space with the structure of a Hilbert space, where the Hilbert-Schmidt inner product can be used [24]. This implies that the feature map in this case maps real data to quantum mechanical states described by complex density matrices. The new feature map can be described as [24]: \[\phi(\vec{x})=\ket{S(\vec{x})}\bra{S(\vec{x})}=\rho(\vec{x}) \tag{2}\] where the feature map \(\phi(\vec{x})\) is the outer product of a data-encoded feature vector \(S(\vec{x})\) with itself. Quantum kernel methods, of which quantum support vector machines are an example, can be split into two steps: data encoding, which can be done in a multitude of ways [28] (amplitude encoding, angle embedding, etc.), and measurement, which can be understood as a projective operation using some observable matrix \(\hat{O}\). Quantum kernel methods use the inner product in feature space as a distance measure between two feature vectors, and the model is trained on these values by finding a linear decision boundary that separates the two classes.
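All eight metrics follow directly from the four confusion-matrix counts; a minimal sketch (the function name and dictionary layout are illustrative, not taken from the authors' code):

```python
def evaluation_metrics(tp, tn, fp, fn):
    """Compute the eight metrics of Table 1 from confusion-matrix counts."""
    recall = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "recall": recall,
        "specificity": specificity,
        "precision": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "balanced_accuracy": 0.5 * (recall + specificity),
        "geometric_mean": (recall * specificity) ** 0.5,
        "informedness": recall + specificity - 1.0,
    }
```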
Stated more explicitly, training here means that the optimal measurement must be found, as it determines the decision boundary. The inner product of quantum-embedded states is their mutual overlap. This similarity measure is now called the quantum kernel. A matrix of pairwise inner product values can be obtained by evaluating the kernel for each pair of data points. The pairwise inner product matrix calculated for two identical sets of feature vectors results in a matrix called the kernel Gram matrix. The kernel matrix in this case is calculated in feature space, meaning that quantum kernels are functions of the form: \[K(\vec{x},\vec{x}^{\prime})=|\bra{\phi(\vec{x})}\hat{O}\ket{\phi(\vec{x}^{\prime})}|^{2}=\operatorname{tr}\left[\rho(\vec{x})\hat{O}\rho(\vec{x}^{\prime})\right], \tag{3}\] where \(\vec{x}\) and \(\vec{x}^{\prime}\) are two separate \(n\)-dimensional input vectors, each with \(n\) features, \(\phi\) is the feature map determined by the data embedding, and \(\hat{O}\) is any observable, which can be omitted if only the overlap is to be calculated. The resultant kernel function is then used for fitting a classical SVM to the kernel Gram matrix for training. The quantum kernel is used in place of a classical kernel function when the classical SVM is fitted to the training data. This implies that only the kernel matrix, or rather its elements, are calculated using the quantum device. The kernel only depends on the data encoding, which implies that the data encoding defines the minima used in training [24]. We will call this method a QSVM from now on. A major drawback of QSVMs is the fact that, in order to calculate the full kernel matrix, a number of entries that scales quadratically with the number of samples must be calculated [24]: one entry exists for every pair of input feature vectors. The hope is that a classically intractable quantum kernel can be found that improves classification accuracy when compared to classical kernel functions. There is also the hope that an improvement with respect to the runtime can be found, but this might prove difficult given how long quantum circuit evaluations generally take. It is also an added benefit that SVMs generally do not suffer from barren plateaus [24]. ### Quantum Convolutional Neural Networks Convolutional neural networks (CNNs) are commonly used architectures for image-related machine learning tasks, such as image recognition, image segmentation, and object detection [36, 37].
Generic CNN architectures consist of interconnected layers, including input, output, and hidden layers, with activation functions determining neuron activation. These are arranged in a graph-like configuration where the edges between nodes carry connection weights that determine how strongly certain nodes are linked. Training involves passing data through the network, calculating a cost function, and using gradient descent to update the weights via backpropagation during each epoch or iteration. CNNs have three key components: convolutional layers, pooling layers, and fully connected layers. The fully connected layers operate as a standard neural network. Convolutional layers perform matrix multiplications to learn spatial patterns, and pooling layers reduce spatial dimensions to reduce computational costs. While there are many proposals for the quantum analogue of convolutional neural networks [38, 39, 40], we focus on the framework proposed by Cong et al. [41] and follow the design methodology of Lourens et al. [42]. Here the QCNN implements analogous convolution and pooling operations in a quantum circuit setting. The key components are weight sharing, sequential reduction of the system size via pooling, and translational invariance in the convolution steps. These operations are applied on a circuit architectural level, where a convolution consists of unitary operations (also called mappings) applied to all available qubits in a given layer. Identical unitaries are utilised, enabling translational invariance and the sharing of weights. The choice of unitary is arbitrary, but it should be chosen in such a way as to minimize circuit-based noise while retaining the ability to learn patterns in the data. Pooling consists of using a portion of the available qubits as control qubits and applying controlled rotations on the targets. This leads to a reduction in system size and allows the number of parameters to scale logarithmically with circuit depth. Pooling introduces non-linearity to the model while also reducing its computational overhead, and is analogous to coarse-graining in the classical case. Information is lost, but by connecting two qubits through a separate unitary mapping, in our case a simple CNOT gate, some information is retained for use in classification. Convolution and pooling operations are repeated until the system size is sufficiently small, in our case when there is one solitary qubit remaining. The circuit architecture may be changed to find possible relationships in the data, such as changing the solitary qubit. We show an example QCNN in Figure 3 along with its corresponding directed graph representation [42]. In the digraph representation, nodes correspond to qubits and edges to unitaries applied between them. This representation is useful for defining hyperparameters such as strides, steps and offsets that are used to optimise the circuit architecture. One advantage of the QCNN is its flexibility in the choice of architecture, and Lourens et al. [42] showed how changing the layout of unitaries on the circuit can greatly improve model performance. Finally, all variational circuits require a data encoding step, just like kernel methods. The choice of data encoding is crucial for the effectiveness of variational quantum circuits [43].
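To make the construction concrete, the following is a minimal Pennylane sketch of a QCNN-style circuit of this kind on eight qubits. The adjacent-pair convolution layout and the parameter shapes are illustrative simplifications (the circuit actually used, including its stride of five, is specified in the methodology below):

```python
import pennylane as qml
from pennylane import numpy as np

n_qubits = 8
dev = qml.device("default.qubit", wires=n_qubits)

def convolution(wires, theta):
    """The same two-qubit unitary (two RY rotations followed by a CNOT,
    cf. Eq. 12 below) on every pair of qubits -> shared weights."""
    for q1, q2 in zip(wires[0::2], wires[1::2]):
        qml.RY(theta[0], wires=q1)
        qml.RY(theta[1], wires=q2)
        qml.CNOT(wires=[q1, q2])

def pooling(wires):
    """CNOTs from the qubits to be discarded onto the kept qubits, halving the system size."""
    for ctrl, tgt in zip(wires[0::2], wires[1::2]):
        qml.CNOT(wires=[ctrl, tgt])
    return wires[1::2]  # keep only the target qubits

@qml.qnode(dev)
def qcnn(x, thetas):
    # Rotational (angle) encoding: one feature per qubit.
    qml.AngleEmbedding(x, wires=range(n_qubits), rotation="Y")
    wires = list(range(n_qubits))
    for layer in range(3):  # 8 -> 4 -> 2 -> 1 qubits
        convolution(wires, thetas[layer])
        wires = pooling(wires)
    return qml.expval(qml.PauliZ(wires[0]))  # solitary readout qubit
```

Mapping the readout expectation value to a class probability, e.g. \(p=(1-\langle Z\rangle)/2\), makes the output compatible with a binary cross-entropy loss such as the one used for training below.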
## III Methodology ### The HTRU-2 data set and Feature Selection The HTRU-2 data set used for this comparative study has already been formatted for use in machine learning and consists of 8 features or inputs per candidate sample, with a singular classification label of either 0 or 1, where 1 is the label of the positive class (a pulsar). The eight features can be grouped into two groups of four, where each feature is a statistical element of a candidate's integrated profile or DM-SNR curve. The four statistical elements of these two properties are the mean, standard deviation, excess kurtosis, and skewness. The data set contains 16259 non-pulsar candidates and 1639 candidates that have been confirmed to be pulsars manually by experts. To read more about what the integrated profile and DM-SNR curves are and why they are important for pulsar classification, see [21, 44]. The features were all normalized to lie between 0 and \(\pi\) radians. This is done because of the rotational encoding strategy chosen. The full data set was split in two at a ratio of 70:30. A specified number of samples, in this case a balanced number of positive and negative class candidates, was sampled randomly from the 70% portion and used as training sets. Another set of samples was created by randomly sampling from the remaining 30%; this set was used as the testing set. This allows for creating training and testing sets of any size if necessary. A specific case of batching the training set for the QCNN was done as follows: from the balanced training sets already created through random sampling, further random sampling was used to create balanced batches of 10 candidates each, which were used in each epoch. The way in which this was done allowed for sample overlap, meaning some batches might have duplicate entries.
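A minimal sketch of this preprocessing pipeline; the file name, sample counts, and helper names are illustrative assumptions:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

# Hypothetical loading step: HTRU_2.csv holds the 8 feature columns plus a 0/1 label column.
data = np.loadtxt("HTRU_2.csv", delimiter=",")
X, y = data[:, :8], data[:, 8].astype(int)

# Scale every feature into [0, pi] to suit the rotational (angle) encoding.
X = MinMaxScaler(feature_range=(0.0, np.pi)).fit_transform(X)

# 70:30 split; training sets are then drawn from the 70% pool by balanced random sampling.
X_pool, X_test_pool, y_pool, y_test_pool = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

def balanced_sample(X, y, n_per_class, rng):
    """Randomly draw n_per_class candidates from each class."""
    idx = np.concatenate([rng.choice(np.flatnonzero(y == c), n_per_class, replace=False)
                          for c in (0, 1)])
    rng.shuffle(idx)
    return X[idx], y[idx]

rng = np.random.default_rng(0)
X_train, y_train = balanced_sample(X_pool, y_pool, n_per_class=50, rng=rng)
X_test, y_test = X_test_pool[:100], y_test_pool[:100]  # illustrative testing subset
```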
### QSVM Circuit

We kept the quantum circuit depth as low as possible to mitigate decoherence noise. The quantum circuit used is an extension of the circuit used in [45]. The features were embedded as the rotation angles of Y-rotation gates applied to each of the 8 qubits, one qubit per feature. This is called angle embedding and is mathematically represented by applying a Y-rotation gate to the initial state \(\ket{0}\), where the rotation angle is a single feature from a feature vector. The rotation operator can be written as \(R_{Y}(x_{i})=e^{-i\frac{x_{i}}{2}\sigma_{Y}}\), where \(\sigma_{Y}\) is one of the Pauli matrices. \[R_{Y}(x_{i})\ket{0}, \tag{4}\] where \(R_{Y}(x_{i})\) is the Y-rotation gate applied to one of the 8 qubits. Each qubit was rotated by a Y-rotation of \(x_{i}\), where the argument was the \(i^{th}\) entry of a feature vector. Keeping to this simple design, a second set of gates, the complex conjugate of the angle embedding, this time parametrised by the 8 features of a separate feature vector, was applied to the 8 qubits. \[R_{Y}^{\dagger}(x_{i}^{\prime})R_{Y}(x_{i})\ket{0}=e^{-i(\frac{x_{i}-x_{i}^{\prime}}{2})\sigma_{Y}}\ket{0}, \tag{5}\] where we now have a term, \(x_{i}-x_{i}^{\prime}\), referencing the difference between two features from two separate feature vectors. This difference forms part of the distance measure required by SVMs. The combined effect of the two sets of Y-rotation gates can be written as unitary operators \(S(\vec{x})\) and \(S^{\dagger}(\vec{x}^{\prime})\), and their operation on the 8-qubit initial state can be written as: \[S^{\dagger}(\vec{x}^{\prime})S(\vec{x})\ket{00\dots 0}, \tag{6}\] where \(\ket{00\dots 0}\) is the 8-qubit initial state vector. Afterwards, a projective measurement was taken by projecting the state onto the 8-qubit initial state's density matrix, \(\rho=\ket{00\dots 0}\bra{00\dots 0}\). In other words, the density matrix serves as a projector before measurements are taken in the computational basis. This step forms the final part of calculating the inner product; without it, the measurement would only produce the state after the two rotations. It has the effect of calculating the inner product in the computational basis without performing a SWAP test [45], and the result becomes the kernel function that is used to fit a classical support vector machine to the training data for prediction. This can be seen in the following: \[\bra{00\dots 0}S(\vec{x}^{\prime})S^{\dagger}(\vec{x})\hat{O}S^{\dagger}(\vec{x}^{\prime})S(\vec{x})\ket{00\dots 0}, \tag{7}\] where \(\bra{00\dots 0}S(\vec{x}^{\prime})S^{\dagger}(\vec{x})\hat{O}\) is the projector used, yielding the final expression: \[\left|\bra{00\dots 0}S^{\dagger}(\vec{x}^{\prime})S(\vec{x})\ket{00\dots 0}\right|^{2}, \tag{8}\] which is just the kernel discussed before. This was done for the entire training set. As a consequence of how the kernel values are calculated (one value for each possible pair of feature vectors), the scaling of this method is quadratic. This implies that the kernel will become classically intractable at some point. Since the kernel function is hard to simulate classically, the hope is that an actual quantum device would be able to calculate it and reap whatever benefits it may offer. We estimate the training time of the algorithm on a real device by measuring the time one circuit execution takes and multiplying this runtime by the total number of circuit executions, thereby extrapolating the training time to larger runs. From our perspective, runtime is the time it takes the model to train on the training set. The prediction time is not irrelevant though, as for QSVMs it scales with the sizes of both the training and testing sets.

Figure 1: A correlation matrix showing the correlation coefficients between each possible pair of the features in the HTRU-2 data set.

In other words, to estimate the training time we use: \[\Delta t_{\text{Training}}=n_{\text{CE}}\times\Delta t_{\text{CE}}, \tag{9}\] where \(\Delta t_{\text{Training}}\) is the total training time, \(n_{\text{CE}}\) the number of circuit executions, and \(\Delta t_{\text{CE}}\) the runtime of one circuit execution. The total number of circuit executions can be calculated using: \[n_{\text{CE}}=n_{\text{train}}^{2}, \tag{10}\] where \(n_{\text{train}}\) is the number of training samples. The prediction time can be calculated similarly, but with the number of circuit executions given by: \[n_{\text{CE}}=n_{\text{train}}\times n_{\text{test}}. \tag{11}\]
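In code, the kernel circuit and its use inside a classical SVM might be sketched as follows, assuming PennyLane and scikit-learn (the libraries named in this section) and small random placeholder data; building the Gram matrix requires one circuit execution per pair of feature vectors, reproducing the quadratic scaling of equation (10).

```python
import numpy as np
import pennylane as qml
from sklearn.svm import SVC

n_qubits = 8
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def kernel_circuit(x1, x2):
    # S(x): angle embedding, followed by S^dagger(x'): its adjoint with the second vector
    qml.AngleEmbedding(x1, wires=range(n_qubits), rotation="Y")
    qml.adjoint(qml.AngleEmbedding)(x2, wires=range(n_qubits), rotation="Y")
    # Expectation of |0...0><0...0| yields |<0...0|S^dagger(x')S(x)|0...0>|^2, equation (8)
    return qml.expval(qml.Projector(np.zeros(n_qubits, dtype=int), wires=range(n_qubits)))

def gram(A, B):
    # One circuit execution per pair of feature vectors: len(A) * len(B) in total
    return np.array([[kernel_circuit(a, b) for b in B] for a in A])

# Small random placeholder data in [0, pi], standing in for the normalised features
rng = np.random.default_rng(0)
X_train = rng.uniform(0, np.pi, (20, n_qubits))
y_train = rng.integers(0, 2, 20)

svm = SVC(kernel=gram).fit(X_train, y_train)  # one-shot fit; no iterations needed
```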
No attempt was made to batch the training data or to implement incremental learning, as kernel-assisted support vector machines are one-shot solvers of the maximal-margin-classifier type that fit the decision boundary to the training data in one go, meaning no iterations are required. The quantum circuit used, as explained above, is illustrated in Figure 2. This was all done using the Python libraries Pennylane and scikit-learn.

Figure 2: The QSVM circuit. We simply apply angle embedding (rotational encoding) using \(Y\)-rotation gates, followed by the adjoint (complex conjugate transpose) of this embedding, parametrised by the features of a separate feature vector. Measurement is performed in the computational basis only after a projector is applied, which computes the inner product without a SWAP test [45].

### QCNN Circuit

The quantum convolutional neural network architecture, and consequently the architecture of the quantum circuit representing the QCNN, are influenced by the shape of the convolutional and pooling steps. Our circuit makes use of eight qubits, three convolutional steps, and three pooling steps, before measuring the last remaining qubit. A stride of five was chosen for the convolutional steps, meaning qubits 1 and 6 are connected by an edge in the first convolutional step; this choice propagates throughout the circuit. We choose two-qubit unitaries for the convolution steps and attempt to maintain a short circuit depth in our experiments. Convolutional steps have the effect of cross-correlating the two qubits the operator is applied to, which also results in the sharing of weights (parameters) in a neural network representation of the circuit. The chosen unitary is given in equation 12: \[U|q_{1}\rangle\otimes|q_{2}\rangle=\text{CNOT}\cdot R_{Y1}(\theta_{1})\cdot R_{Y2}(\theta_{2})|q_{1}\rangle\otimes|q_{2}\rangle, \tag{12}\] where \(R_{Y1}\) and \(R_{Y2}\) are Y-rotation gates applied to each qubit, and the rotation parameters \(\theta_{1}\) and \(\theta_{2}\) are optimised in every iteration. After the convolutional steps, pooling steps reduce the number of qubits to an eventual solitary qubit that is measured for the output. In the pooling steps we use CNOT gates to represent the directed edges. Figure 3 shows an example QCNN circuit together with its directed graph representation. We considered two cases: (1) using the entire training set for training in every epoch, and (2) batching the training data into smaller batches of 10 candidates each, one batch per epoch. The way the batching was done meant that there was a chance of sample overlap between batches, with overlap being almost guaranteed once the batch size of 10 multiplied by the number of epochs became comparable to the training set size. Ideally it would be best to use large training sets. In both cases the learning rate was set to 0.01 and the iteration limit was set to 150 epochs. The loss function chosen for minimization is the binary cross-entropy loss: \[\begin{split}\text{BCEL}(y_{\text{pred}},y_{\text{test}})=-\Big{(}y_{\text{test}}\cdot\log(y_{\text{pred}})+\\ (1-y_{\text{test}})\cdot\log(1-y_{\text{pred}})\Big{)}.\end{split} \tag{13}\] The QCNN quantum circuit used is illustrated in Figure 3. The Python libraries PyTorch and Pennylane were used in conjunction to iterate over the forward pass, backward pass, and gradient update steps. A consequence of both the circuit architecture and how the method is implemented is a scaling different from that of the QSVM.
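A minimal sketch of this hybrid training loop is given below, assuming PennyLane's PyTorch interface; the variational block (a generic entangling ansatz), the mapping of the measured expectation value to a probability, and the placeholder batch are illustrative stand-ins for the circuit of Figure 3.

```python
import torch
import pennylane as qml

n_qubits = 8
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev, interface="torch")
def circuit(x, weights):
    qml.AngleEmbedding(x, wires=range(n_qubits), rotation="Y")
    qml.BasicEntanglerLayers(weights, wires=range(n_qubits))  # stand-in for the conv/pool stack
    return qml.expval(qml.PauliZ(0))

weights = torch.rand((3, n_qubits), dtype=torch.float64, requires_grad=True)
opt = torch.optim.Adam([weights], lr=0.01)  # learning rate as in the text
loss_fn = torch.nn.BCELoss()                # binary cross-entropy, equation (13)

# Placeholder balanced batch of 10 candidates
X_batch = torch.rand(10, n_qubits, dtype=torch.float64) * torch.pi
y_batch = torch.tensor([0.0, 1.0] * 5, dtype=torch.float64)

for epoch in range(150):  # iteration limit as in the text
    opt.zero_grad()
    # Map the expectation value <Z> in [-1, 1] to a probability in [0, 1]
    preds = torch.stack([(circuit(x, weights) + 1) / 2 for x in X_batch])
    loss = loss_fn(preds, y_batch)
    loss.backward()
    opt.step()
```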
\[n_{\rm CE}=n_{\rm Epochs}\times n_{\rm Train}, \tag{14}\] where the number of epochs and training samples can both be controlled. Estimating the prediction time is much simpler: \[n_{\rm CE}=n_{\rm Test}, \tag{15}\] since only one circuit execution is performed per candidate.

## IV Results

### Simulations

We simulate the above circuits using Pennylane and plot the confusion matrices. This gives us an indication of how the methods compare with respect to performance on the chosen data set. The classical device used for all the noise-free and noisy simulations has the following specifications: a Lenovo laptop with an Intel(R) Core(TM) i7-6700HQ CPU @ 2.60 GHz and 16.0 GB (15.8 GB usable) RAM; its GTX 960M GPU was not used to supplement the computational power. The confusion matrices in Figure 4 (plotted for runs at 200 training samples and 400 testing samples) show that both methods perform well at classifying pulsars correctly. Both methods are good at correctly predicting negative cases. False positives are harder to avoid for the QSVM than for the QCNN, and the reverse is true for false negatives. The QSVM predicted a higher proportion of the positive cases correctly; however, the QCNN still managed to predict most positive cases correctly as well. The implication of the confusion matrices is that if the QSVM model predicts a candidate as a non-pulsar, it is almost certainly a non-pulsar, whereas when the model predicts that a candidate is a pulsar, that candidate might have to be double-checked. The reverse applies to the QCNN. The QCNN performs worse at avoiding false negatives, which is not ideal, as we want to minimise the number of mistakes when predicting the positive case. It is fortunate that in both cases the correctly predicted candidates are dominated by non-pulsars. This reduces the amount of manual confirmation required, as negative predictions will never be double-checked: they form the majority of all predictions, which means there are too many to confirm manually. Even though the amount of manual labour is reduced, to be entirely sure of the positive predictions, all positive predictions will have to be manually checked, especially when applying the model to unseen data. We also simulate the circuits for an increasing number of training samples. The resulting curves of eight performance metrics are plotted in Figure 5. The expected behaviour would be for the performance metrics to increase with sample size, as more samples typically lead to a better optimized model. This is marginally true for the lower sample sizes (fewer than 50 samples), where a slight increase can be observed. It is also observed in the QSVM precision plot and in isolated cases for informedness, geometric mean, balanced accuracy, and recall.

Figure 3: Top: The QCNN circuit. Note the stride of 5 in the convolutional steps. The rotational encoding, convolution, and pooling steps are all illustrated as collections of gates inside coloured blocks. Bottom: The directed graph representation of the entire circuit. Edges in the convolutional steps are the unitary specified earlier and edges in the pooling steps are CNOT gates. Directed edge arrows indicate which node (qubit) is the target qubit.
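For reference, the eight metrics of Figure 5 can all be computed from the confusion-matrix counts; the sketch below states the standard definitions we assume for them, since the text does not spell them out.

```python
import numpy as np

def metrics(tp, fp, tn, fn):
    recall = tp / (tp + fn)        # true positive rate: performance on pulsars
    specificity = tn / (tn + fp)   # true negative rate: performance on non-pulsars
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "balanced accuracy": (recall + specificity) / 2,
        "recall": recall,
        "specificity": specificity,
        "precision": tp / (tp + fp),
        "negative prediction value": tn / (tn + fn),
        "informedness": recall + specificity - 1,
        "geometric mean": np.sqrt(recall * specificity),
    }
```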
The near-constant behaviour throughout the QCNN case indicates that the saturation point for training is reached early; a possible explanation for this is training over many epochs using the entire training set. The uncertainty at lower sample sizes is also more prominent, which implies that training on a limited number of samples is not satisfactory. This is observed in the accuracy, specificity, and precision curves. We observe that as the number of samples is increased, the uncertainty in the accuracy and specificity decreases, indicating improved prediction of the negative case. There is also an increase in the precision curve for the QSVM, implying that more samples result in improved prediction performance on the positive cases. Recall and precision have higher uncertainty and lower values than specificity, which again illustrates that it is easier to predict negative cases correctly. All positive predictions have a higher probability of being incorrect and will thus have to be manually checked. This is consistent with the results from the confusion matrices. Two important points worth noting are the uncertainty and fluctuating behaviour of the balanced accuracy, geometric mean, and informedness. All three metrics depend on recall, which measures the performance on positive cases. Since there are more mistakes for the positive cases, and hence more uncertainty, these values tend to fluctuate more. Finally, from a general perspective, it is clear that the models perform well when measured by accuracy, specificity, and negative prediction value, and slightly worse when considering precision and recall. The other metrics depend on the sample size, but we note that the QSVM outperforms the QCNN on most metrics, which is a strong motivator for its use. If precision is prioritised, then the QCNN can be considered as well, since its precision is high compared with the QSVM's. It is also interesting to note that changing the QCNN's architecture, for example the choice of unitary operators used, may give insight into how certain data features might be linked. In other words, QCNNs might be more interpretable than QSVMs. This is especially true for changing the stride between qubits in the convolutional step and changing the final remaining qubit after all the pooling steps. It might supply a way of gauging which features are more important or more closely related without performing PCA or feature correlation tests. An example of this is a sudden increase in accuracy when using a stride of 5, which can be attributed to an unknown correlation in the features.

### QCNN: Unbatched versus Batched Data

We considered the effect of data batching for the QCNN, which involved randomly batching the training data into balanced groups of 10 samples each (a sketch follows below). This means that in each epoch of the training and optimization step, the model trains on a new set of 10 samples. The two cases can be compared in Figure 4. For the batched case it is clear that the minimization of the loss fluctuated considerably more, since it trained over fewer samples in each iteration. The overall results in the confusion matrices were nearly identical.
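A minimal sketch of this per-epoch batching scheme, assuming numpy; because every batch is drawn independently, duplicate candidates across batches are possible, as described earlier.

```python
import numpy as np

rng = np.random.default_rng(0)

def balanced_batch(X, y, batch_size=10):
    # Draw a fresh balanced batch each epoch; since batches are sampled
    # independently, duplicate candidates across batches are possible.
    pos = rng.choice(np.where(y == 1)[0], batch_size // 2, replace=False)
    neg = rng.choice(np.where(y == 0)[0], batch_size // 2, replace=False)
    idx = np.concatenate([pos, neg])
    rng.shuffle(idx)
    return X[idx], y[idx]
```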
Figure 4: Confusion matrices for runs at 200 training samples and 400 testing samples using QSVMs (green), QCNNs (purple), and batched QCNNs (blue). The confusion matrices of both QCNN approaches are nearly identical, which is an indication that batching should be strongly considered. The two loss curves are each an example of one of the convergent loss optimization processes. An example of a kernel Gram matrix (top right) is provided for 200 training samples. Each entry in the confusion matrices is the average of six distinct runs and the uncertainty is the standard error.

The results in Table 2 show that the gain from training over all samples in each epoch is not worth the increase in overall training time. It is therefore recommended that data batching be used for this method, as it will drastically decrease the amount of time required to train the model. It is also a good idea to increase the number of epochs to cover as many samples as possible and to limit the amount of sample overlap in each batch. Note that this only applies to the way we batched the training data and can be avoided entirely if unique batches are sampled. Since a batched QCNN benefits from faster training and prediction times compared to the standard QCNN approach, the batched QCNN can also be applied to even larger batches or data set sizes. The result should be an increase in performance and a decrease in uncertainty.

### Classical versus Quantum

The runtime plots, which follow later in Figure 7, indicate that there is an intersection in the training time somewhere close to 200 training samples for the two quantum models. Assuming that training time would be comparable at this sample size, generic classical versions of an SVM and a CNN were also trained to compare these methods with their classical equivalents. The results are tabulated in Table 2.

Figure 5: Eight performance metrics plotted versus an increasing training sample size. The plots included (alternating from left to right going downwards) are accuracy, balanced accuracy, recall, specificity, precision, negative prediction value, informedness, and geometric mean. Averages are illustrated using x-markers and the uncertainty (standard error) is indicated as a shaded area. Each data point is the average of twelve distinct runs. Green indicates the QSVM and purple the QCNN.

Comparing the CNNs and SVMs separately shows that the classical SVM performs better than the quantum SVM on every metric; however, the QSVM does perform similarly when considering specificity. This means that the QSVM is on par with the CSVM when predicting negative cases. The QCNN is also outperformed by its classical equivalent, but with the difference that its specificity is actually higher than its classical counterpart's. This implies that if you wanted to solve the current problem with a quantum approach, the QCNN would be the better choice of the two, not only because of its higher specificity but also because of its faster training time. CCNNs are typically used for spatial analysis in image data, which can cause shortcomings in their performance when classifying feature vectors. This is clearly observed in the low precision value in the CCNN column.
It might be possible to improve the classical CNN and achieve better results (specifically, to increase the specificity and the precision); nevertheless, it is promising that, should quantum computers become more error-robust and achieve faster operational and readout times, there are improvements to be had for quantum approaches. Perhaps once we transcend the NISQ era, that is, once runtimes are reduced and noise is limited enough for these methods to become viable compared to classical methods, quantum approaches may become mainstream.

### Real Devices and Noisy Simulations

We compare the runtime performance of both models on real quantum devices. This is made possible by Qiskit Runtime [46; 47], a cloud-based hybrid quantum-classical computational system for implementing quantum calculations with classical optimization. Qiskit Runtime can be interfaced with Pennylane's qiskit-pennylane plugin [48]. It was observed that for both methods, in a comparative run at 200 training samples with all other parameters kept exactly as they were set in the simulations, the overall training time would be completely impractical on real devices. On real devices, one circuit execution took approximately \(142.00\pm 5.10\,s\) for the QCNN and \(3.33\pm 0.47\,s\) for the QSVM. Using the scalability statements made earlier, these per-execution times can be used to approximate a total training time for each method by simply multiplying them by the total number of circuit executions required to finish the entire training and prediction process. The result is that it would take longer than a day for either the QCNN or the QSVM approach just to be trained; this becomes worse if we include real-device prediction and queue times. These times are calculated for a reduced number of epochs, namely 10, in the QCNN case. Both numbers are impractical, and even if a data batching workaround were used along with reducing the number of iterations even further, the time of one circuit execution is still too long compared with the near-instantaneous runtime of one iteration in the classical counterparts. The QSVM would be the better choice here, since its circuit execution time is faster; however, the scalability of the QSVM has to be factored in.
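Plugging the per-execution times quoted above into equations (9), (10), and (14) makes the impracticality concrete; the following small calculation is a sketch using those measured values.

```python
def total_runtime(dt_ce, n_ce):
    # Equation (9): total time = number of circuit executions x time per execution
    return n_ce * dt_ce

n_train, n_epochs = 200, 10
qsvm_days = total_runtime(3.33, n_train**2) / 86400           # eq. (10): n_CE = n_train^2
qcnn_days = total_runtime(142.0, n_epochs * n_train) / 86400  # eq. (14): n_CE = n_epochs * n_train
print(f"QSVM: {qsvm_days:.1f} days, QCNN: {qcnn_days:.1f} days")  # both exceed one day
```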
\begin{table} \begin{tabular}{l c c c c c} \hline \hline **Metric** & (1) **QSVM** & (2) **QCNN** & (3) **Batched QCNN** & (4) **CSVM** & (5) **CCNN** \\ \hline Accuracy & \(0.950\pm 0.006\) & \(\mathbf{0.972\pm 0.002}\) & \(0.971\pm 0.003\) & \(0.972\pm 0.004\) & \(0.942\pm 0.009\) \\ Balanced Accuracy & \(0.902\pm 0.009\) & \(0.865\pm 0.011\) & \(0.863\pm 0.014\) & \(\mathbf{0.944\pm 0.007}\) & \(0.935\pm 0.013\) \\ Recall & \(0.842\pm 0.018\) & \(0.734\pm 0.022\) & \(0.730\pm 0.028\) & \(0.910\pm 0.012\) & \(\mathbf{0.928\pm 0.027}\) \\ Specificity & \(0.961\pm 0.006\) & \(\mathbf{0.996\pm 0.001}\) & \(\mathbf{0.996\pm 0.001}\) & \(0.978\pm 0.004\) & \(0.943\pm 0.010\) \\ Precision & \(0.707\pm 0.033\) & \(\mathbf{0.950\pm 0.010}\) & \(0.948\pm 0.018\) & \(0.817\pm 0.030\) & \(0.638\pm 0.036\) \\ Negative Prediction Value & \(0.984\pm 0.002\) & \(0.974\pm 0.002\) & \(0.973\pm 0.003\) & \(0.991\pm 0.001\) & \(\mathbf{0.992\pm 0.003}\) \\ G-Mean & \(0.899\pm 0.009\) & \(0.854\pm 0.012\) & \(0.852\pm 0.017\) & \(\mathbf{0.943\pm 0.007}\) & \(0.935\pm 0.013\) \\ Informedness & \(0.803\pm 0.018\) & \(0.730\pm 0.021\) & \(0.726\pm 0.028\) & \(\mathbf{0.888\pm 0.014}\) & \(0.871\pm 0.026\) \\ Training Time (s) & \(1241.99\pm 29.75\) & \(1262.85\pm 9.08\) & \(61.25\pm 0.63\) & **Negligible** & \(1.77\pm 0.05\) \\ Prediction Time (s) & \(2492.15\pm 60.53\) & \(12.26\pm 0.13\) & \(12.04\pm 0.03\) & **Negligible** & \(0.14\pm 0.01\) \\ \hline \hline \end{tabular} \end{table} Table 2: The results of both the QSVM (1) and QCNN (2) approaches, as well as the batched QCNN (3), classical SVM (4), and classical CNN (5) methods. All runs were taken for models trained at 200 training samples and testing was performed on 400 testing samples. Each entry is the average \(\pm\) standard error. The best performing method for each metric is indicated in bold font.

To compare the impact of noise on both circuits, we focused on noisy simulations, seeking the relationship between the depth of the quantum circuits and their intrinsic noise. Apart from how the circuits are changed during the compilation process, where a circuit is translated from the designed architecture to what it has to be on the quantum device due to the device's restricted basis gate set, another source of error is gate-based error. Any gate applied in the circuit has a probability of failing due to many factors, such as short coherence times that lead to eventual decoherence of the state, crosstalk between qubits, and imprecise control of the qubits when applying operator gates. The more gates there are in succession, the higher the impact will be. Device-specific qubit connectivity can lead to an overall greater depth during compilation, which accentuates these problems. This happens because certain multi-qubit gates require two qubits to be connected in order to be applied. If they are not connected, the compilation process tries to find another gate combination that delivers the same result. This increases the depth, adding more gates that have a probability of failing, and is the main source of error in larger-depth circuits. If we model the gate-based error of a quantum computer with simple bit flips, then it can be clearly seen in Table 3 that the greater depth of the QCNN circuit yields a more error-prone method, and the eventual measurement results become more random. This serves as an indication that one would rather choose the QSVM, since it is more resilient to noise in the current NISQ era. We increase the probability of a single gate undergoing a bit flip to see just how random the results can get. This probability is the analogue of the median error rates that can be found on the device pages of the IBM Quantum Experience [49]. As the probability of gate-based bit flips is increased for both the QSVM and the QCNN, we see a clear downward trend in the balanced accuracy. This is accentuated in the QCNN case, where the falloff is more rapid. The effect of noise is more pronounced for the QCNN because of its larger depth: more gates in succession have a higher probability of undergoing a bit flip. These measurements were taken for 50 training samples and 100 testing samples to reduce the computational cost of simulating noisy quantum circuits. Finally, at a bit flip probability of 0.5, the measurements become truly random, which is exactly what is expected.
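The gate-based bit-flip noise model of Table 3 can be sketched as follows, assuming PennyLane's density-matrix simulator; where exactly the bit-flip channels are placed relative to the gates is an illustrative assumption.

```python
import numpy as np
import pennylane as qml

n_qubits = 8
dev = qml.device("default.mixed", wires=n_qubits)  # density-matrix simulator

def kernel_with_bitflips(p):
    @qml.qnode(dev)
    def kernel(x1, x2):
        qml.AngleEmbedding(x1, wires=range(n_qubits), rotation="Y")
        for w in range(n_qubits):
            qml.BitFlip(p, wires=w)  # each embedding gate fails with probability p
        qml.adjoint(qml.AngleEmbedding)(x2, wires=range(n_qubits), rotation="Y")
        for w in range(n_qubits):
            qml.BitFlip(p, wires=w)
        return qml.expval(qml.Projector(np.zeros(n_qubits, dtype=int), wires=range(n_qubits)))
    return kernel

x = np.full(n_qubits, 0.5)
for p in (0.0, 0.01, 0.1, 0.5):  # the noise levels of Table 3
    print(p, kernel_with_bitflips(p)(x, x))  # drifts away from 1 as p grows
```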
Balanced accuracy was used, rather than any of the other metrics, because it is not inflated when more negative than positive cases are predicted correctly; it is a metric biased towards neither the positive nor the negative cases. The quantum device used to approximate the total runtime of the two circuits was the IBMQ Guadalupe device, which has 16 qubits, of which the ones marked in red in Figure 6 were used for the QCNN. The black numbers indicate which qubit each node represented in the actual implementation. Qubits 0 to 7 were used for the QSVM, as no qubit connectivity was explicitly required.

### Runtime

Another figure of merit we considered, as discussed earlier, is runtime, which includes both training and prediction time. The runtime plots can be found in Figure 7. The training time scaling of both methods can be seen in Figure 7: the QSVM scales quadratically, while the QCNN scales linearly, with the intersection between the two curves indicating that for a low number of samples the QSVM is faster, whereas the QCNN trains faster at larger sample sizes. The same intersection for the real devices cannot be observed in the figure, but it is clear that the two curves tend towards a similar intersection. The prediction time plots show that for QCNNs the prediction time remains constant, in contrast to the linear scaling of the QSVM. This is because QCNN prediction is done for a constant 400 samples, with no reference made to the training set as in the QSVM case. The prediction time of the QSVM scales linearly with the number of training samples while the number of prediction samples is held constant. This is a significant aspect to consider when QSVMs are used, since using a QSVM on unseen data will take longer.

## V Conclusion

In this comparative study we implemented an 8-qubit quantum-enhanced support vector machine and an 8-qubit quantum convolutional neural network. The two methods were compared by performing noise-free and noisy simulations, as well as a limited real-device implementation that allowed the approximation of a possible fully implemented run on real devices. The training and prediction times of both methods were also compared across many sample sizes. Both methods show improvement over the known quantum approach [20] and are also of lower depth, which helps with noise robustness. \begin{table} \begin{tabular}{l c c} \hline \hline **Noise Level** & (1) QSVM & (2) QCNN \\ \hline **p = 0** & \(0.911\pm 0.023\) & \(0.914\pm 0.026\) \\ **p = 0.01** & \(0.879\pm 0.021\) & \(0.852\pm 0.03\) \\ **p = 0.1** & \(0.836\pm 0.031\) & \(0.758\pm 0.055\) \\ **p = 0.5** & \(0.501\pm 0.018\) & \(0.500\pm 0.000\) \\ \hline \hline \end{tabular} \end{table} Table 3: The effect of circuit-based noise, due to the depth of the quantum circuits, on method performance, reported as balanced accuracy. A decreasing trend is observed for both methods. The QSVM performs better at increasing noise probabilities than the QCNN because of the larger depth of the QCNN circuit. Both methods yield random results at \(p=0.5\). Each table entry is the average of 6 distinct runs and the uncertainties are the standard deviations.

Figure 6: The qubit connectivity diagram for the IBMQ Guadalupe device. The qubits marked with red circles formed part of the QCNN approach and the order in which they were assigned in the circuit is indicated by black numbers. Qubits 0 to 7 (white text) were used for the QSVM.
QSVMs are easy to implement and have a small depth, which is important for avoiding decoherence and noise in real-device runs. QSVMs, however, scale quadratically with sample size, which makes them unattractive for big-data problems. The kernel function used by the SVM is a black box, meaning that there is no way to tell how the algorithm relates features to predictions. They are, however, effective at low numbers of training samples, and in this regime they can definitely be considered. QCNNs, with their larger depth and more complex circuit, are more prone to noise, but do indeed provide more efficient training and prediction than the QSVM. A specificity that improves on our classical approaches is also observed. Although the approach is not completely transparent, it also supplies a way to gain intuition about how the different features are linked to one another. The special case of batching the training data resulted in nearly identical results (the same high specificity was observed there as well), which makes QCNNs viable for larger numbers of samples. It is true that both methods have their downsides and advantages, but it should be clear that the method of choice should depend on the training set size. Judging by its ease of implementation, its shorter depth making it more noise-resistant, its guarantees of performance compared to classical methods, and its shorter runtime at low sample sizes both in simulation and on real devices, the QSVM should be preferred over the QCNN. For sample sizes beyond the intersection of the two training-time curves, QCNNs should be implemented instead, definitely making use of data batching; however, their larger depth will introduce substantial noise into the system, which means that error mitigation techniques will be required. It should be clear that quantum machine learning can be used for astronomical object classification and real-world binary classification in general, but in the current NISQ era, QML methods applied to classical data fall short when compared to classical methods. A broad architecture search was already performed by changing some of the hyperparameters, such as the stride, but it may be possible to find a QCNN circuit architecture that performs even better than the one considered here by using different unitaries in the convolutional or pooling steps. It would be a good idea to do a more comprehensive circuit architecture search over other 8-qubit QCNN architectures that could provide possible improvements to classification accuracy. This is made easier by using the HierarQcal package, which has been used for a similar application in [42]. Circuit-based noise would be the main limiting factor and is the reason why we kept the QCNN circuit as simple as possible. There are many ways in which this work can be extended.

Figure 7: Training and prediction time in seconds versus an increasing number of samples using a testing set of 400 samples for (top) simulated runs and (bottom) approximated real-device runs. Both training plots show the linear scaling of the QCNN and the quadratic scaling of the QSVM described before. The prediction time plots indicate that the QSVM takes longer to perform predictions, because its prediction requires reference to the training set used. Simulated and real-device results are taken and approximated for QCNN runs at 150 and 10 epochs, respectively. Note the scale of the y-axis in the real-device plots.
Multi-dimensional classification, where there are more than two classes, could be an interesting extension; a typical example here would be distinguishing standard from millisecond pulsars. These methods can also be applied to other data sets to find out whether the same behaviour persists. Once better quantum computers are built, whether through fault tolerance, improved qubit connectivity, or faster operational and readout times, the real-device findings can be revisited to confirm whether they still hold. Applying the methods on real devices may or may not require improvement using error mitigation techniques; this would, however, be the perfect indicator of the viability of these methods for pulsar classification on real devices. Trying different approaches to the problem of pulsar classification, such as quantum anomaly detection, may also yield interesting results. This would involve training a quantum model on only the non-pulsar data and then testing it on a set consisting of a mix of pulsar and non-pulsar samples, to see whether the model can effectively pick out the anomalous pulsars.

###### Acknowledgements.

This work was funded by the South African Quantum Technology Initiative (SA QuTi) through the Department of Science and Innovation of South Africa. We also wish to acknowledge Dr. Maria Schuld for clarifying theoretical points regarding support vector machines, and some of the initial work done by Shivani Pillay on the classification of pulsars using quantum kernel methods.
2309.07815
Nonlinear model order reduction for problems with microstructure using mesh informed neural networks
Many applications in computational physics involve approximating problems with microstructure, characterized by multiple spatial scales in their data. However, these numerical solutions are often computationally expensive due to the need to capture fine details at small scales. As a result, simulating such phenomena becomes unaffordable for many-query applications, such as parametrized systems with multiple scale-dependent features. Traditional projection-based reduced order models (ROMs) fail to resolve these issues, even for second-order elliptic PDEs commonly found in engineering applications. To address this, we propose an alternative nonintrusive strategy to build a ROM that combines classical proper orthogonal decomposition (POD) with a suitable neural network (NN) model to account for the small scales. Specifically, we employ sparse mesh-informed neural networks (MINNs), which handle both spatial dependencies in the solutions and model parameters simultaneously. We evaluate the performance of this strategy on benchmark problems and then apply it to approximate a real-life problem involving the impact of microcirculation in transport phenomena through the tissue microenvironment.
Piermario Vitullo, Alessio Colombo, Nicola Rares Franco, Andrea Manzoni, Paolo Zunino
2023-09-14T16:09:29Z
http://arxiv.org/abs/2309.07815v1
Nonlinear model order reduction for problems with microstructure using mesh informed neural networks

###### Abstract

Many applications in computational physics involve approximating problems with microstructure, characterized by multiple spatial scales in their data. However, these numerical solutions are often computationally expensive due to the need to capture fine details at small scales. As a result, simulating such phenomena becomes unaffordable for many-query applications, such as parametrized systems with multiple scale-dependent features. Traditional projection-based reduced order models (ROMs) fail to resolve these issues, even for second-order elliptic PDEs commonly found in engineering applications. To address this, we propose an alternative nonintrusive strategy to build a ROM that combines classical proper orthogonal decomposition (POD) with a suitable neural network (NN) model to account for the small scales. Specifically, we employ sparse mesh-informed neural networks (MINNs), which handle both spatial dependencies in the solutions and model parameters simultaneously. We evaluate the performance of this strategy on benchmark problems and then apply it to approximate a real-life problem involving the impact of microcirculation in transport phenomena through the tissue microenvironment.

keywords: reduced order modeling, finite element approximation, neural networks, deep learning, embedded microstructure, microcirculation

## 1 Introduction

The repeated solution of differential problems required to describe, forecast, and control the behavior of a system in multiple virtual scenarios is a computationally intensive task when relying on classical, high-fidelity full order models (FOMs) such as, e.g., the finite element method, when very fine spatial grids and/or time discretizations are employed to capture the detailed behavior of the solution. The presence of multiple (spatial and/or temporal) scales affecting the problem's data and its input parameters - such as, e.g., material properties or source terms included in the physical model - makes this task even more involved, and is a common issue whenever dealing with, e.g., biological models [1; 2; 3], structural mechanics [4; 5], as well as environmental flows [6; 7], to name a few examples. Reduced order modeling techniques nowadays provide a wide set of numerical strategies to solve parametrized differential problems in a rapid and reliable way. For instance, in the case of problems where the microstructure is characterized by slender fibers immersed into a continuum, reduced order models based on the dimensional reduction of the fibers have been successfully adopted in the framework of mixed dimensional problem formulations; see, e.g., [8; 9; 10]. However, given the increasing complexity of the problems that need to be addressed, mixed dimensional formulations are no longer sufficient; indeed, they actually represent the starting point for a second level of model reduction, which is addressed by our work. Physics-based ROMs such as, e.g., the reduced basis (RB) method [11; 12], provide a mathematically rigorous framework to build ROMs involving a linear trial subspace, or reduced basis - obtained, e.g., through proper orthogonal decomposition (POD) on a set of FOM snapshots - to approximate the solution manifold, and a (Petrov-)Galerkin projection to derive the expression of the ROM.
Provided the problem operators depend affinely on the input parameters, a suitable offline-online splitting ensures a very fast assembly and solution of the ROM during the online testing stage, once the linear subspace and the projected reduced operators have been computed and stored during the offline training stage. The presence of involved, spatially dependent input parameters representing, e.g., diffusivity fields or distributed force fields within the domain where the problem is set usually makes the offline-online splitting less straightforward, because of their high dimension and the (potentially, highly) nonlinear nature of the parameter-to-solution map. To be consistent with the usual formulation of ROMs, we will refer, throughout the paper, to space-varying fields representing some problem inputs as _input parameters_, even though the latter usually denote vectors of quantities. These features might impact the reducibility of problems with microstructure through _linear_ ROMs in at least two ways: _(i)_ linear trial subspaces might have large dimension, thus not really reducing the complexity of the problem; _(ii)_ classical hyper-reduction techniques, such as the (discrete) empirical interpolation method (DEIM) [13; 14; 15; 16] or the energy-conserving sampling and weighting (ECSW) method [17; 18], usually employed to speed up the ROM assembly, can also suffer from severe computational costs and result in intrusive procedures. To overcome these drawbacks, alternative data-driven methods can be used to approximate the RB coefficients without resorting to (Galerkin) projection. In these cases, the FOM solution is projected onto the RB space and the combination coefficients are approximated by means of a surrogate model, exploiting, e.g., radial basis function (RBF) approximation [19], polynomial chaos expansions [20], artificial neural networks (NNs) [21; 22; 23], or Gaussian process regression (GPR) [24; 25; 26; 27]. The high-fidelity solver is thus used only offline, to generate the data required to build the reduced basis and then to train the surrogate model. Non-intrusive POD-based approaches using RBFs to interpolate the ROM coefficients have been proposed in, e.g., [19; 28; 29]. In a seminal contribution, Hesthaven and Ubbiali [21] instead employed NNs to build a regression model for the coefficients of a POD-based ROM in the case of steady PDEs; a further extension to time-dependent nonlinear problems, i.e., unsteady flows, has been addressed in [30]; see also, e.g., [31; 32] for the use of alternative machine learning strategies to approximate the POD coefficients. We will refer hereon to this class of approaches as _POD-NN_ methods. POD with GP regression has been used to build ROMs by Guo et al. in [24] to address steady nonlinear structural analysis, as well as time-dependent problems [25]. A detailed comparison among non-intrusive ROMs employing RBF, NN, and GP regression can be found in, e.g., [33]; see instead [34; 35] for an alternative use of NNs to perform regression in the context of multi-fidelity methods, capable of leveraging models with different fidelities. The latter may involve data-driven projection-based ROMs [36; 37] or more recently developed deep learning-based ROMs [38; 39; 40; 41].
In these latter cases, POD has been replaced by (e.g., convolutional) autoencoders to enhance the dimensionality reduction of the solution manifold, relying on (deep) feedforward NNs to learn the reduced dynamics on the reduced trial manifold. Despite the advantages they provide compared with dense architectures - in terms of cost and size of the optimization problem to be solved during training - convolutional NNs cannot handle general geometries, and they might become inappropriate as soon as the domain where the problem is set is not a hypercube, although some preliminary attempts to generalize CNNs in this direction have recently appeared [42]. In the case of problems with microstructure, this issue usually arises when attempting to reduce the dimensionality of spatially distributed parameters using DNNs rather than POD as, e.g., the DEIM would do - provided a linear subspace built through POD is still employed to reduce the solution manifold. For this reason, in this paper we rely on Mesh-Informed Neural Networks (MINNs), a class of architectures recently introduced in [43] and specifically tailored to handle mesh-based functional data. The driving idea behind MINNs is to embed hidden layers into discrete functional spaces of increasing complexity, obtained through a sequence of meshes defined over the underlying spatial domain. This approach leads to a natural pruning strategy, whose purpose is to inform the model with the geometrical knowledge coming from the domain. As shown through several examples in [43], MINNs offer reduced training times and better generalization capabilities, making them a competitive alternative to other operator learning approaches, such as DeepONets [44] and Fourier Neural Operators [45]. Our purpose is to employ MINNs to enable the design of sparse architectures aimed at feature extraction from the space-varying parameters that define the problem's microstructure. In this paper we propose a new strategy to tackle parametrized problems with microstructure, combining a POD-NN method, to build a reduced order model in a nonintrusive way, with MINNs, to build a closure model capable of integrating into the resulting approximation the information coming from mesh-based functional data, thereby avoiding the need for a very large number of POD modes in the presence of complex microstructures. Such a problem arises, e.g., when describing oxygen transfer in the microcirculation, including blood flow and hematocrit transport coupled with the interstitial flow, and oxygen transport in the blood and the tissue, coupled through vascular-tissue exchange. The presence of microvasculature, described in terms of a (varying) graph-like structure within the domain, requires the use of an NN-based strategy to reduce the dimensionality of such data, thus calling into play MINNs and yielding a strategy we refer to as the POD-MINN method. To account for the scales neglected at the POD level, and to correct the POD-MINN approximation so as to enhance its accuracy without further increasing its dimension, we equip the POD-MINN method with a closure model, ultimately yielding a strategy we refer to as the POD-MINN+ approximation. The structure of the paper is as follows. In Sect. 2 we formulate the class of problems with microstructure we focus on in this paper, introducing their high-fidelity approximation by the finite element method, recalling how classical projection-based ROMs are formulated, and showing their main limitations.
In Sect. 3 we describe how to take advantage of mesh-informed neural networks to tackle problems with microstructure, while in Sect. 4 we present the POD-MINN and POD-MINN+ methods, showing results obtained on a series of simple numerical test cases. Finally, in Sect. 5 we consider the application of the proposed strategy to an oxygen transfer problem taking place in the microcirculation, and we draw some conclusions in Sect. 6.

## 2 Formulation and approximation of PDEs with embedded microstructure

### Problem setup

We start by describing, in general terms, the essential properties of a problem governed by a parametrized PDE affected by a microstructure. With the term _microstructure_ we refer to features, primarily of the forcing terms of the equations, that induce the coexistence of multiple characteristic scales in the solution. Although this is a particular case of a _multiscale_ problem, developing ROMs for multiscale problems in the spirit of upscaling and/or numerical homogenization is not the scope of this work - this aspect has been addressed, at least under some simplifying assumptions, by several authors in the framework of reduced basis methods; see, e.g., [46; 47; 48]. Conversely, here we aim to develop a ROM that fully captures all the scales of the solution. We restrict ourselves to steady problems governed by second order elliptic equations, and we assume that the microstructure may influence the parameters of the operator, the forcing terms, and the boundary conditions, whereas the domain is fixed and does not depend on parameters. Under these assumptions, in this paper we address parametrized PDEs of the form \[\begin{cases}L_{\mathbf{\mu}}u_{\mathbf{\mu}}=f_{\mathbf{\mu}}&\text{in }\Omega,\\ B_{\mathbf{\mu}}u_{\mathbf{\mu}}=g_{\mathbf{\mu}}&\text{on }\partial\Omega,\end{cases} \tag{1}\] where the solution \(u_{\mathbf{\mu}}\) depends on the parameter vector \(\mathbf{\mu}=(\mathbf{\mu}_{M},\mathbf{\mu}_{m})\in\mathcal{P}=\mathcal{P}_{M}\times\mathcal{P}_{m}\), with \(\dim(\mathcal{P}_{M})=n_{M}\) and \(\dim(\mathcal{P}_{m})=n_{m}\). In other words, \(u_{\mathbf{\mu}}=u(\mathbf{\mu})\); both notations will be used in what follows, with a preference for the more compact one when the context allows it. By the distinction between _macroscale parameters_ - denoted by \(\mathbf{\mu}_{M}\) - and _microscale parameters_ - denoted by \(\mathbf{\mu}_{m}\) - we highlight that the problem parameters satisfy a _scale separation property_. For example, in the biophysical application discussed at the end of this work, the physical parameters of the operator are affected by the macroscale parameter \(\mathbf{\mu}_{M}\), namely \(L_{\mathbf{\mu}}=L_{\mathbf{\mu}_{M}}\), while the geometry of the microstructure determines the forcing terms of the problem, that is, \(f_{\mathbf{\mu}}=f_{\mathbf{\mu}_{m}}\) and \(g_{\mathbf{\mu}}=g_{\mathbf{\mu}_{m}}\). As a consequence of these assumptions, we focus on the particular case of the abstract problem (1) that can be rewritten as follows: \[\begin{cases}L_{\mathbf{\mu}_{M}}u_{\mathbf{\mu}}=f_{\mathbf{\mu}_{m}}&\text{in }\Omega,\\ B_{\mathbf{\mu}_{m}}u_{\mathbf{\mu}}=g_{\mathbf{\mu}_{m}}&\text{on }\partial\Omega.\end{cases} \tag{2}\] In this particular context, we implicitly assume that the physical parameters belong to a low-dimensional space, whereas the geometry of the microstructure features high dimensionality - that is, \(n_{m}\gg n_{M}\).
Moreover, for the sake of presentation, in this section we assume that the operator \(L_{\mathbf{\mu}_{M}}\) appearing in problem (2) is linear, although our methodology will also be applied to a nonlinear oxygen transfer problem, whose formulation is briefly described in Sect. 2.2.

### An example of a problem with microstructure inspired by microcirculation

We also address in this paper a mathematical model for oxygen transfer in the microcirculation, based on the comprehensive model described in [3; 49], which includes blood flow and hematocrit transport coupled with the interstitial flow, and oxygen transport in the blood and the tissue, coupled through vascular-tissue exchange. Our proposed methodology can be applied to any field described by the general microcirculation model, such as the fluid pressure, velocity, or oxygen concentration (or partial pressure), although in this paper we focus only on oxygen transport. The non-intrusive character of the POD-MINN (and POD-MINN+) method indeed allows us to approximate a single field, despite the coupled nature of the general model. Note that if we relied on a classical projection-based ROM, we would have to approximate all the variables simultaneously, thus facing a much higher degree of complexity when constructing the ROM. The general model describes the flow in two different domains: the tissue domain (\(\Omega\subset\mathbb{R}^{3}\) with \(\dim(\Omega)=3\)), where the unknowns are the fluid pressure \(p_{t}\), the fluid velocity \(\mathbf{v}_{t}\), and the oxygen concentration \(C_{t}\); and the vascular domain (\(\Lambda\subset\mathbb{R}^{3}\) with \(\dim(\Lambda)=1\)), a metric graph describing a network of connected one-dimensional channels, where the unknowns are the blood pressure \(p_{v}\), the blood velocity \(\mathbf{v}_{v}\), and the vascular oxygen concentration \(C_{v}\). The model for the oxygen transport employs the velocity fields \(\mathbf{v}_{v}\) and \(\mathbf{v}_{t}\) to describe the blood flow in the vascular network and the plasma flow in the tissue. The governing equations of the oxygen transfer model read as follows: \[\begin{cases}\begin{aligned} \nabla\cdot(-D_{t}\nabla C_{t}+\mathbf{v}_{t}\ C_{t})+V_{max}\ \frac{C_{t}}{C_{t}+\alpha_{t}\ p_{m_{50}}}=\phi_{O_{2}}\,\delta_{\Lambda}&\text{on }\Omega\\ \pi R^{2}\frac{\partial}{\partial s}\left(-D_{v}\frac{\partial C_{v}}{\partial s}+v_{v}\ C_{v}+v_{v}\ k_{1}\ H\ \frac{C_{v}^{\gamma}}{C_{v}^{\gamma}+k_{2}}\right)=-\phi_{O_{2}}&\text{on }\Lambda\\ \phi_{O_{2}}=2\pi R\ P_{O_{2}}(C_{v}-C_{t})+(1-\sigma_{O_{2}})\ \left(\frac{C_{v}+C_{t}}{2}\right)\ \phi_{v}&\text{on }\Lambda\\ \phi_{v}=2\pi RL_{p}\big{(}(p_{v}-p_{t})-\sigma(\pi_{v}-\pi_{t})\big{)}&\\ C_{v}=C_{in}&\text{on }\partial\Lambda_{\text{in}}\\ -D_{v}\frac{\partial C_{v}}{\partial s}=0&\text{on }\partial\Lambda_{\text{out}}\\ -D_{t}\nabla C_{t}\cdot\mathbf{n}=\beta_{O_{2}}\ (C_{t}-C_{0,t})&\text{on }\partial\Omega.\end{aligned}\end{cases} \tag{3}\] In particular, the first equation governs the oxygen in the tissue, the second describes how the oxygen is transported by the blood stream, and the third defines the oxygen transfer between the two domains \(\Omega\) and \(\Lambda\). The model for the flux \(\phi_{O_{2}}\) is obtained by assuming that the vascular wall acts as a semipermeable membrane.
This model is complemented by the set of boundary conditions reported in the last three equations: at the vascular inlets \(\partial\Lambda_{\text{in}}\) we prescribe the oxygen concentration; at the vascular endpoints \(\partial\Lambda_{\text{out}}\) a null diffusive flux is enforced; and on the boundary of the tissue domain \(\partial\Omega\) we simulate the presence of an adjacent tissue domain with a boundary conductivity \(\beta_{O_{2}}\) and a far-field concentration \(C_{0,t}\). The symbols \(D_{t},D_{v},V_{max},\alpha_{t},p_{m_{50}},k_{1},k_{2},C_{v}^{\gamma},P_{O_{2}},\sigma_{O_{2}},L_{p},\sigma,\pi_{v},\pi_{t}\) are constants independent of the solution of the model; for a detailed description of their physical meaning see, e.g., [49]. Comparing model (2) with (3), we observe that the operator \(L_{\mathbf{\mu}_{M}}\) consists of the left-hand side of the first equation, \(\nabla\cdot(-D_{t}\nabla C_{t}+\mathbf{v}_{t}\ C_{t})+V_{max}C_{t}/(C_{t}+\alpha_{t}\ p_{m_{50}})\), where the macroscale parameters are the physical parameters of the operator, such as, for example, \(V_{max}\), which modulates the oxygen consumption by the cells in the tissue domain. Because of the last term, this operator is nonlinear. The solution \(u_{\mathbf{\mu}}\) corresponds to the concentration \(C_{t}\), and the flux \(\phi_{O_{2}}\delta_{\Lambda}\) plays the role of the forcing term \(f_{\mathbf{\mu}_{m}}\). As \(\delta_{\Lambda}\) denotes a Dirac \(\delta\)-function distributed on \(\Lambda\), the term \(\phi_{O_{2}}\delta_{\Lambda}\) represents an incoming mass flux for the oxygen concentration \(C_{t}\), supported on the vascular network \(\Lambda\); in other words, it is a concentrated source defined in \(\Omega\), acting as a microscale forcing term. According to the third equation, the intensity of the oxygen flux depends on both concentrations, namely \(\phi_{O_{2}}=\phi_{O_{2}}(C_{v},C_{t})\). In turn, the vascular oxygen concentration \(C_{v}\) is governed by the second equation and by the inflow boundary condition \(C_{v}=C_{in}\) on \(\partial\Lambda_{\text{in}}\). This condition plays the role of \(B_{\mathbf{\mu}_{m}}u_{\mathbf{\mu}}=g_{\mathbf{\mu}_{m}}\) on \(\partial\Omega\) in problem (2), and as it depends on the inlet points of the vascular network on the domain boundary, it is classified as a microscale model. In conclusion, we can establish the following connections between problems (2) and (3): \[u_{\mathbf{\mu}} \approx C_{t},\] \[L_{\mathbf{\mu}_{M}}u_{\mathbf{\mu}} \approx\nabla\cdot(-D_{t}\nabla C_{t}+\mathbf{v}_{t}\ C_{t})+V_{max}\ \frac{C_{t}}{C_{t}+\alpha_{t}\ p_{m_{50}}},\] \[f_{\mathbf{\mu}_{m}} \approx\phi_{O_{2}}(C_{v},C_{t})\delta_{\Lambda},\] \[B_{\mathbf{\mu}_{m}}u_{\mathbf{\mu}} \approx C_{v}|_{\partial\Lambda_{\text{in}}},\] \[g_{\mathbf{\mu}_{m}} \approx C_{in}|_{\partial\Lambda_{\text{in}}}.\] We remark that problem (3) is just one example among many relevant problems characterized by a microstructure, such as flows through perforated or fractured domains, or the mechanics of fiber-reinforced materials, to mention a few. Our proposed methodology does not depend on the specific problem and can be applied, with suitable adaptations, to all these problems thanks to its non-intrusive nature.

### Numerical approximation by the finite element method

We consider the finite element method (FEM) as the high-fidelity FOM for the problem described in the previous section, as well as for the other test cases we propose.
The central hypothesis underlying this work is that a FEM approximation of the microscale problem resolving all the scales of the microstructure is computationally too expensive for real-life applications. For the sake of simplicity, we present our methodology with reference to the general, abstract problem (1). Before introducing its finite element discretization, we address the variational formulation of (1), which, given a suitable Hilbert space \((V,\|\cdot\|)\) depending on the boundary conditions imposed on the problem at hand, reads as follows: for any \(\mathbf{\mu}\in\mathcal{P}\subset\mathbb{R}^{n_{p}}\), find \(u_{\mathbf{\mu}}\in V\) such that \[a_{\mathbf{\mu}}(u_{\mathbf{\mu}},v)=F_{\mathbf{\mu}}(v),\quad\forall v\in V, \tag{4}\] with \(a_{\mathbf{\mu}}:V\times V\mapsto\mathbb{R}\) and \(F_{\mathbf{\mu}}:V\mapsto\mathbb{R}\) two parameter-dependent operators. Depending on the formulation of choice, problems (1) and (4) naturally define a _parameter-to-solution map_, that is, a map that assigns to each input parameter \(\mathbf{\mu}\in\mathcal{P}\) the corresponding solution \(u_{\mathbf{\mu}}=u(\mathbf{\mu})\), _i.e._, \[u_{\mathbf{\mu}}:\mathcal{P}\,\mapsto\,V;\quad\mathbf{\mu}\,\mapsto\,u(\mathbf{\mu}).\] This also allows us to define the _solution manifold_ \(\mathcal{S}=\{u_{\mathbf{\mu}}=u(\mathbf{\mu}):\mathbf{\mu}\in\mathcal{P}\}\). The high-fidelity FOM consists of the Galerkin projection of problem (4) onto a finite element (FE) space \(V_{h}\) of dimension \(N_{h}=\dim(V_{h})\), suitably chosen depending on the characteristics of the problem at hand. Assuming for simplicity a fully conforming approximation, given \(V_{h}\subset V\), we seek \(u_{\mathbf{\mu},h}\in V_{h}\) such that \[a_{\mathbf{\mu}}(u_{\mathbf{\mu},h},v_{h})=F_{\mathbf{\mu}}(v_{h})\qquad\forall v_{h}\in V_{h}. \tag{5}\] As pointed out for the continuous case, problem (5) defines a mapping \(\mathcal{P}\mapsto V_{h}\) that identifies the discrete solution manifold \(\mathcal{S}_{h}=\{u_{\mathbf{\mu},h}:\mathbf{\mu}\in\mathcal{P}\}\). From the discrete standpoint, problem (5) is equivalent to a (large) system of algebraic equations of the form \[A_{\mathbf{\mu},h}\mathbf{u}_{\mathbf{\mu},h}=\mathbf{F}_{\mathbf{\mu},h},\] where \(\mathbf{u}_{\mathbf{\mu},h}\in\mathbb{R}^{N_{h}}\) is the vector of degrees of freedom of the FE approximation. The need to, e.g., adequately sample the statistical variability of the microstructure and assess its impact on the problem solution would ultimately require repeated queries of such a FOM, whence the need for a rapid and reliable ROM.
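As an illustration of such a FOM, the sketch below assembles and solves a Poisson-type instance of (5) with piecewise linear elements on the mesh used in the test cases of this section; legacy FEniCS is assumed purely for illustration, as the paper does not specify a software stack, and the forcing term is a placeholder.

```python
from fenics import *

# 50x50 triangular mesh of the square (-1, 1)^2, matching the test case below
mesh = RectangleMesh(Point(-1.0, -1.0), Point(1.0, 1.0), 50, 50)
V = FunctionSpace(mesh, "P", 1)  # piecewise linear Lagrangian elements

u, v = TrialFunction(V), TestFunction(V)
f = Expression("sin(pi*x[0])*cos(pi*x[1])", degree=4)  # placeholder forcing term

a = inner(grad(u), grad(v)) * dx      # bilinear form a(u, v)
L = f * v * dx                        # linear functional F(v)
bc = DirichletBC(V, Constant(0.0), "on_boundary")

u_h = Function(V)
solve(a == L, u_h, bc)                # solves the algebraic system A_h u_h = F_h
snapshot = u_h.vector().get_local()   # degrees of freedom: one column of the data matrix
```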
\tag{6}\] Equivalently, by expanding the RB solution over the reduced basis, the latter can be expressed as \[u_{\mathbf{\mu},h}^{rb}=\sum_{i=1}^{n_{rb}}\mathbf{u}_{\mathbf{\mu},rb}^{(i)}\psi_{i}(x) \tag{7}\] where \(\mathbf{u}_{\mathbf{\mu},rb}=[\mathbf{u}_{\mathbf{\mu},rb}^{(1)},\ldots,\mathbf{u}_{\mathbf{\mu},rb}^{(n_{rb})}]^{T}\) are the _reduced coefficients_, which, in practice, are obtained by solving the algebraic counterpart of (6), namely \[A_{\mathbf{\mu},rb}\mathbf{u}_{\mathbf{\mu},rb}=\mathbf{F}_{\mathbf{\mu},rb}. \tag{8}\] Overall, equations (6), (7) and (8) illustrate different ways to represent the ROM approximation of the FOM solution. Precisely, \(\mathbf{u}_{\mathbf{\mu},rb}\in\mathbb{R}^{n_{rb}}\) are the reduced coefficients that identify the ROM solution, while \(\mathbf{u}_{\mathbf{\mu},h}^{rb}\in\mathbb{R}^{N_{h}}\) is the representation of the same solution in the high-fidelity (FEM) space. We note that, although they have the same dimension, \(\mathbf{u}_{\mathbf{\mu},h}\) and \(\mathbf{u}_{\mathbf{\mu},h}^{rb}\) do not coincide. We stress that this projection determines an approximation of the FOM solution \(u_{\mathbf{\mu},h}\) onto a _linear subspace_ of the discrete solution manifold \(\mathcal{S}_{h}\). As a consequence, the corresponding error follows, at least theoretically, the decay of the Kolmogorov \(n_{rb}\)-width [12]. Indeed, a fast-decaying Kolmogorov \(n_{rb}\)-width reflects the approximability of the solution manifold by finite-dimensional linear spaces.

From an algebraic standpoint, the reduced basis is encoded in the matrix \(\mathbb{V}=[\mathbf{\psi}_{1}\mid\ldots\mid\mathbf{\psi}_{n_{rb}}]\in\mathbb{R}^{N_{h}\times n_{rb}}\), where \(\mathbf{\psi}_{i}\in\mathbb{R}^{N_{h}}\) are the FOM degrees of freedom of the \(i\)-th basis function \(\psi_{i}\), so that (7) can be equivalently rewritten as \(\mathbf{u}_{\mathbf{\mu},h}^{rb}=\mathbb{V}\mathbf{u}_{\mathbf{\mu},rb}\) and the RB problem can be assembled by projecting the FOM matrix and right-hand side onto the reduced space, \[A_{\mathbf{\mu},rb}=\mathbb{V}^{T}A_{\mathbf{\mu},h}\mathbb{V},\quad\mathbf{F}_{\mathbf{\mu},rb}=\mathbb{V}^{T}\mathbf{F}_{\mathbf{\mu},h}. \tag{9}\] For a special category of problems, characterized by an affine parameter dependence, the procedure used to build and solve (8) can be split into a parameter-independent _offline_ stage and a parameter-dependent _online_ stage [12]. However, if the affine parameter dependence does not hold, suitable hyper-reduction techniques must be called into play to restore it, at least in an approximate way; nevertheless, this is not a focus of this work. Regarding the construction of the subspace \(V_{rb}\), a classical choice is to rely on POD, starting from a collection of FOM solutions, named _snapshots_, computed for suitably chosen parameters \(\boldsymbol{\mu}^{(i)}\), such that the span of these functions, \(\mathrm{span}\{u_{h}(\boldsymbol{\mu}^{(i)})\}_{i=1}^{N}\), is a reasonable approximation of the discrete solution manifold \(\mathcal{S}_{h}\).
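To fix ideas, the algebraic steps in (8)-(9) amount to a few matrix operations. The following minimal NumPy sketch uses illustrative names, assuming the FOM matrix `A_mu`, the right-hand side `F_mu`, and the basis matrix `V` are already available; it is not tied to any specific FE library:

```python
import numpy as np

def solve_rb(A_mu, F_mu, V):
    """Galerkin projection of the FOM onto the reduced basis, cf. (8)-(9).

    A_mu : (N_h, N_h) FOM matrix for the current parameter value
    F_mu : (N_h,)     FOM right-hand side
    V    : (N_h, n_rb) matrix collecting the reduced basis vectors
    """
    A_rb = V.T @ A_mu @ V                 # reduced matrix, cf. (9)
    F_rb = V.T @ F_mu                     # reduced right-hand side, cf. (9)
    u_rb = np.linalg.solve(A_rb, F_rb)    # reduced coefficients, cf. (8)
    return V @ u_rb                       # lift back to the FE space, cf. (7)
```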
By collecting the FOM degrees of freedom of the snapshots as the columns of a _data matrix_ \[\mathbb{S}=[\mathbf{u}_{h}(\boldsymbol{\mu}^{(1)})|\ldots|\mathbf{u}_{h}(\boldsymbol{\mu}^{(i)})|\ldots|\mathbf{u}_{h}(\boldsymbol{\mu}^{(N)})],\] the singular value decomposition of \(\mathbb{S}\), \[\mathbb{S}=\mathbb{W}\left[\begin{array}{cc}\widetilde{\mathbb{D}}&\mathbf{0}\\ \mathbf{0}&\mathbf{0}\end{array}\right]\mathbb{Z}^{T},\] ensures the existence of the matrices \(\mathbb{W}\in\mathbb{R}^{N_{h}\times N_{h}}\), \(\widetilde{\mathbb{D}}=\mathrm{diag}[\sigma_{1},\ldots,\sigma_{R}]\) and \(\mathbb{Z}\in\mathbb{R}^{N\times N}\), where \(\sigma_{1}\geq\ldots\geq\sigma_{i}\geq\ldots\geq\sigma_{R}\) are the singular values of \(\mathbb{S}\) and \(R\) is its rank. The columns of \(\mathbb{W}\) and \(\mathbb{Z}\) are named the left and right singular vectors, respectively. The POD basis is then defined as the set of the first \(n_{rb}\) left singular vectors of \(\mathbb{S}\); their collection provides the matrix \(\widehat{\mathbb{W}}(n_{rb})\) that represents the _best_ \(n_{rb}\)-rank approximation of \(\mathbb{S}\), in the sense that it minimizes, over all matrices \(\mathbb{M}\in\mathbb{R}^{N_{h}\times n_{rb}}\) with orthonormal columns, the following error [12], \[E(\mathbb{M};\mathbb{S})=\sum_{j=1}^{N}\|\mathbb{S}(:,j)-\mathbb{M}\mathbb{M}^{T}\mathbb{S}(:,j)\|_{\mathbb{R}^{N_{h}}}^{2}.\] In conclusion, a _POD-Galerkin_ ROM consists of solving (8) using the matrix \(\mathbb{V}=\widehat{\mathbb{W}}(n_{rb})\) to define the reduced space.

### Limitations of linear methods and nonlinear model order reduction

Despite being mathematically rigorous, POD-Galerkin ROMs might feature some limitations when applied to problems with microstructure, as shown by the simple numerical test cases that we discuss in this section. As a prototype problem, we consider (1) with an operator independent of the parameters, _i.e._ \(L_{\boldsymbol{\mu}}=-\Delta\), with homogeneous Dirichlet boundary conditions, and with the action of the microstructure entirely carried by the right-hand side \(f_{\boldsymbol{\mu}}=f_{\boldsymbol{\mu}_{m}}\). We consider the square domain \(\Omega=(-1,1)^{2}\) partitioned using a 50\(\times\)50 computational mesh \(\mathcal{T}_{h}\) of uniform triangular elements. For the sake of simplicity, we discretize the problem using piecewise linear, continuous, Lagrangian finite elements \(V_{h}=X_{h}^{1}(\Omega)\), and consider two representative instances of the following characteristics of the microstructure: _(i)_ a problem with continuously variable scales, and _(ii)_ a problem with scale separation.

_Problem with continuously variable scales._ To model this feature of the microstructure, we consider \[f_{\boldsymbol{\mu}_{m}}(x,y)=p(x)p(y),\ \ \text{with}\ \ p(z)=\sum_{k=1}^{n}(\alpha_{k}^{z}\sin(k\pi z)+\beta_{k}^{z}\cos(k\pi z))\] and \(\boldsymbol{\mu}_{m}=\{\alpha_{k}^{z},\beta_{k}^{z}\}_{k=1}^{n}\) with \(n=6\) and \(z=x,y\), for a total of 24 parameters modulating the linear combination of 144 trigonometric functions. For the application of the POD-Galerkin method to this test case, we sample the parameter space assuming a uniform probability distribution \(\mathcal{U}[-1,1]\) for all parameters, selecting a total of \(N=1000\) snapshots to build the data matrix \(\mathbb{S}\). Equivalent results are obtained for a larger data matrix \(\mathbb{S}\) made of 2000 snapshots; therefore, we keep \(N=1000\) as a reference value for the forthcoming tests.
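In practice, the truncated POD basis is extracted from the snapshot matrix with a single call to the SVD; a minimal sketch, assuming the snapshots are stored column-wise in a NumPy array `S` of shape `(N_h, N)`:

```python
import numpy as np

def pod_basis(S, n_rb):
    """First n_rb left singular vectors of S, i.e. V = W_hat(n_rb)."""
    W, sigma, _ = np.linalg.svd(S, full_matrices=False)
    return W[:, :n_rb], sigma

# usage: V, sigma = pod_basis(S, n_rb=20); the decay of `sigma`
# indicates how well a linear reduced space can capture the manifold
```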
Since we are interested in assessing the capability of the POD basis to approximate the solution manifold, in this case we do not compute the actual solution of (8); rather, we consider the projection error obtained when projecting the FOM solution onto the POD basis for an increasing dimension \(n_{rb}\) of the reduced basis, that is, \[E_{POD}(n_{rb};\mathbf{u}_{\boldsymbol{\mu},h})=\frac{\|(\mathbb{I}-\mathbb{V}(n_{rb})\mathbb{V}^{T}(n_{rb}))\mathbf{u}_{\boldsymbol{\mu},h}\|_{2,N_{h}}}{\|\mathbf{u}_{\boldsymbol{\mu},h}\|_{2,N_{h}}} \tag{10}\] where \(\mathbb{V}(n_{rb})\) denotes the matrix encoding the linear approximation space spanned by \(n_{rb}\) POD basis functions and \(\|\mathbf{v}\|_{p,N}=\big(\sum_{i=1}^{N}|v_{i}|^{p}\big)^{1/p}\) is the \(p\)-norm in the generic vector space \(\mathbb{R}^{N}\). We note that the solution of this problem depends linearly on the parameters; as a result, if we applied the POD-Galerkin method with at least 144 basis functions, we would represent the discrete solution manifold exactly, in other words \(E_{POD}(n_{rb}=144;\mathbf{u}_{\boldsymbol{\mu},h})=0\) for any \(\mathbf{u}_{\boldsymbol{\mu},h}\in\mathbb{R}^{N_{h}}\). Actually, the desired behavior of the ROM would be to achieve a satisfactory approximation with far fewer than 144 basis functions. The decay of the singular values of \(\mathbb{S}\) and the convergence rate of the projection error with respect to \(n_{rb}\), measured for a parameter value and corresponding finite element function not included in the snapshot matrix, are reported in Figure 1. These results immediately show that, due to the slow decay of the singular values, the performance of the POD-Galerkin method cannot be satisfactory in this case.

_Problem with scale separation._ In this second test case, we discuss the behavior of the POD-Galerkin method for a problem where the forcing term is made of the superimposition of trigonometric functions with different frequencies, with a gap in the frequency spectrum. In the same computational setting described before, we consider the function \[f(x,y)=f_{\boldsymbol{\mu}_{M}}(x,y)+f_{\boldsymbol{\mu}_{m}}(x,y),\ \ \text{with}\ \ f_{\boldsymbol{\mu}_{M}}(x,y)=p_{L}(x)p_{L}(y),\,f_{\boldsymbol{\mu}_{m}}(x,y)=p_{H}(x)p_{H}(y)\] and \[p_{L}(z)=\sum_{k=1}^{2}(\alpha_{k}^{z}\sin(k\pi z)+\beta_{k}^{z}\cos(k\pi z)),\quad p_{H}(z)=\sum_{k=5}^{8}(\widetilde{\alpha}_{k}^{z}\sin(k\pi z)+\widetilde{\beta}_{k}^{z}\cos(k\pi z)),\] for a total of 8 coefficients encoding the low frequency scales and 16 parameters for the high frequency scales, all sampled using a uniform probability distribution \(\alpha_{k}^{z},\beta_{k}^{z},\widetilde{\alpha}_{k}^{z},\widetilde{\beta}_{k}^{z}\sim\mathcal{U}[-1,1]\). These parameters modulate the forcing term made by the linear combination of 80 trigonometric modes. The purpose of this test case is to investigate whether a POD basis with a number of functions comparable to the dimension of the low frequency source terms can approximate well the whole solution space. Figure 2 shows that the decay of the singular values reflects the gap in the frequency spectrum. However, after a reasonably good performance of the POD method in approximating the effect of the low frequencies on the solution, a large stagnation region of the approximation properties emerges.
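The projection-error curves discussed here (cf. Figures 1 and 2) can be reproduced by sweeping over the basis dimension; a hedged sketch, assuming `S` is the snapshot matrix and `u_test` a FOM solution not included in it:

```python
import numpy as np

def pod_projection_errors(S, u_test, dims):
    """Relative projection error (10) for increasing basis dimension n_rb."""
    W, _, _ = np.linalg.svd(S, full_matrices=False)
    errors = []
    for n_rb in dims:
        V = W[:, :n_rb]
        residual = u_test - V @ (V.T @ u_test)   # (I - V V^T) u
        errors.append(np.linalg.norm(residual) / np.linalg.norm(u_test))
    return errors
```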
These experiments show that it is mandatory to include the effect of the microscale in the ROM approximation space, with a consequent increase of the number of POD modes required to achieve a satisfactory approximation of the FOM.

Figure 1: Decay of the singular values of \(\mathbb{S}\) and convergence rate of the projection error with respect to \(n_{rb}\) for the problem with continuously variable scales. The red line in the left panel denotes the threshold of \(n=144\) POD modes.

#### 2.5.1 Non-intrusive, nonlinear reduced order modeling

The previous examples show that a Kolmogorov barrier arises due to the presence of the microstructure, which limits the decay of the error achievable with projection-based ROMs that seek approximations in linear spaces. The main objective of this work is to overcome these limitations by resorting to nonlinear model order reduction, implemented by combining a linear ROM approximation with nonlinear maps built through deep feedforward NNs, thus introducing a nonlinear, and non-intrusive, ROM.

Nonlinear ROMs encompass a wide class of strategies aiming at replacing the linear approximation hypothesis encoded in (7) with more general, nonlinear maps. Rather than replacing the linear combination of POD modes with fully nonlinear approximations, as done, e.g., through deep learning-based ROMs [40; 38; 39], we express our approximation following the approach proposed in [51], as \[\widetilde{u}^{rb}_{\boldsymbol{\mu},h}=\sum_{i=1}^{\widetilde{n}_{rb}}\widetilde{u}^{(i)}_{\boldsymbol{\mu},rb}\widetilde{\psi}_{i}(x,\widetilde{\alpha}(\boldsymbol{\mu})), \tag{11}\] where the main difference with respect to (7) is that the basis functions \(\widetilde{\psi}_{i}\) now depend on the parameters of the problem through some _features_, named \(\widetilde{\alpha}(\boldsymbol{\mu})\). Stated differently, the nonlinear approximation (11) is a linear combination of functions \(\widetilde{\psi}_{i}(x,\widetilde{\alpha}(\boldsymbol{\mu}))\), \(i=1,\ldots,\widetilde{n}_{rb}\), that depend through \(\widetilde{\alpha}(\boldsymbol{\mu})\) on the element \(u_{\boldsymbol{\mu},h}\) being approximated; this is indeed different from the linear approximation (7), where the basis functions are fixed and independent of which element \(u_{\boldsymbol{\mu},h}\) of the (discrete) solution manifold is approximated. As we will see in the following, proceeding in this way can be extremely helpful in order to break the Kolmogorov barrier, leading to lower errors for the same number of degrees of freedom.

For the particular case of problems with microstructure, we put this general nonlinear ROM framework into action by using the nonlinear features as an additive correction, named _closure_, to the linear approximation obtained through (7). This consists of seeking a reduced approximation of the FOM based on the following representation, \[\widetilde{u}^{rb}_{\boldsymbol{\mu},h}=\sum_{i=1}^{n_{rb}}\mathrm{u}^{(i)}_{\boldsymbol{\mu},rb}\psi_{i}(x)+\widetilde{\mathrm{u}}_{\boldsymbol{\mu},rb}\widetilde{\psi}(x,\widetilde{\alpha}(\boldsymbol{\mu})), \tag{12}\] where the function \(\widetilde{\psi}(x,\widetilde{\alpha}(\boldsymbol{\mu}))\) will be learned through artificial NNs (introduced in detail in the next section), mapping the parameter space into the full order FOM space, namely \(\mathcal{N}:\mathcal{P}\rightarrow\mathbb{R}^{N_{h}}\equiv V_{h}\). In other terms, the linear approximation space is replaced by \(\widetilde{V}_{rb}=V_{rb}+\mathcal{N}(\mathcal{P})\).
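In algebraic terms, (12) adds a parameter-dependent correction to the lifted reduced solution. A minimal sketch follows, with the two maps left as placeholder callables in anticipation of the NN architectures introduced in the next sections:

```python
import numpy as np

def nonlinear_rom(mu, V, coeff_map, closure_map):
    """Closure-corrected ROM approximation, cf. (12).

    V           : (N_h, n_rb) POD basis
    coeff_map   : mu -> (n_rb,) reduced coefficients (later a NN)
    closure_map : mu -> (N_h,)  nodal correction (later a mesh-informed NN)
    """
    return V @ coeff_map(mu) + closure_map(mu)
```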
Figure 2: Decay of the singular values of \(\mathbb{S}\) and convergence rate of the projection error with respect to \(n_{rb}\) for the problem with scale separation. The vertical lines in the left panel denote the number of parameters of the low frequency function (\(n=16\)) and the total number of parameters (\(n=80\)).

The presence of the microstructure, however, poses an additional challenge beyond the Kolmogorov barrier discussed so far. Indeed, space-varying parameters deserve a dimensionality reduction similar to the one applied to the state solution, given that their dimension is comparable to the number of high-fidelity FOM degrees of freedom. Moreover, the potentially involved nature of space-varying parameters makes classical hyper-reduction techniques (which would employ, e.g., a linear POD basis to approximate those data in order to comply with the ROM requirements) not feasible. For this reason, we resort to NN-based approximations, exploiting a particular class of networks, the so-called _mesh-informed neural networks_ (MINNs), previously proposed by the authors [43], to handle mesh-based functional data in a very efficient way; the resulting strategy, involving a POD-based dimensionality reduction for the state solution and a MINN-based approximation of space-varying fields, will be referred to as the POD-MINN method. A mesh-informed architecture is also employed to approximate the ROM coordinates, similarly to the POD-NN approach [21], but extended here to the case of MINNs to handle the high-dimensional input data. The final approximation, obtained when the closure model is applied to the POD-MINN approximation, will be referred to as POD-MINN+. In the next two sections we describe the formulation, the implementation details, and the performance of these methodologies applied to problems with microstructure. Then, we will apply them to a real-life model that, despite some simplifying assumptions, is capable of representing the effect of microcirculation on the tissue microenvironment, involving multiple spatial scales and reaching very fine levels of detail.

## 3 Mesh-informed neural networks

_Artificial neural networks_ (ANNs) are parametrized functions between two (typically high-dimensional) vector spaces that have recently been employed also for the approximation of PDEs in several contexts. In this section, we discuss how they can be designed and exploited in the particular case of problems with microstructure, to obtain non-intrusive and accurate ROMs. Before addressing mesh-informed neural networks, which represent a key tool of our methodology, we briefly review feedforward neural networks for the sake of notation.

### The architecture and the training of feedforward neural networks

ANNs are computational models obtained by stacking compositions of nonlinear transformations, acting on a collection of nodes called _neurons_, organized into building blocks called _layers_, linked together through weighted connections. Given two vector spaces \(\mathbb{R}^{m}\) and \(\mathbb{R}^{n}\), a layer \(\mathcal{L}\) that takes a vector in \(\mathbb{R}^{m}\) as input and uses the _activation function_ \(\rho\) is a map of the form \[\mathcal{L}:\mathbb{R}^{m}\rightarrow\mathbb{R}^{n}\text{ such that }\mathcal{L}(\mathbf{v})=\rho(\mathbb{W}\mathbf{v}+\mathbf{b}),\] where \(\mathbb{W}\in\mathbb{R}^{n\times m}\) and \(\mathbf{b}\in\mathbb{R}^{n}\) are the _weight matrix_ and the bias of the layer, respectively.
The activation function \(\rho\), to be selected by the user, is applied componentwise to the linear combination for each neuron of the layer and provides the nonlinearity of the approximation. The topology of the connections between the neurons determines the architecture of the ANN. The most common and well-known example is the _feedforward neural network_, defined as the composition of multiple layers \(\mathcal{L}_{i}:\mathbb{R}^{n_{i}}\rightarrow\mathbb{R}^{n_{i+1}}\), with \(i=1,\ldots,l\), where \(\mathcal{L}_{l+1}\) is the output layer and the remaining ones are called _hidden layers_. A feedforward neural network produces a map of the form \(\mathcal{N}:=\mathcal{L}_{l+1}\circ\mathcal{L}_{l}\ldots\circ\mathcal{L}_{1}\), so that, denoting with \(\mathbb{W}_{i}\) the weight matrices between the \(i\)-th and the \(i+1\)-th layer, \(\mathbf{b}_{i}\) the corresponding biases and \(\rho_{i}\) the activation functions (generally the same for each layer): \[\mathcal{N}(\mathbf{v};\mathbb{W}_{1},\ldots,\mathbb{W}_{l+1},\mathbf{b}_{1},\ldots,\mathbf{b}_{l+1})=\rho_{l+1}(\mathbb{W}_{l+1}\rho_{l}(\ldots\mathbb{W}_{2}\rho_{1}(\mathbb{W}_{1}\mathbf{v}+\mathbf{b}_{1})+\mathbf{b}_{2}\ldots)+\mathbf{b}_{l+1}).\] For the sake of clarity, we distinguish between the notation for an NN seen as an operator (that is, a parametrized function, denoted with \(\mathcal{N}\), depending on the input vector \(\mathbf{v}\in\mathbb{R}^{n_{1}}\) and on the weights and biases \(\mathbb{W}_{i},\mathbf{b}_{i}\) for \(i=1,\ldots,l+1\)) and the output of the same neural network, that is, a vector \(\mathbf{w}\in\mathbb{R}^{n_{l+1}}\). To lighten the notation, we denote the trainable parameters of the neural network, namely the collection of weights and biases, with the general symbol \(\boldsymbol{\theta}\).

The choice of the activation function is left to the user and is problem-dependent. Furthermore, it can be omitted in the output layer [52; 53]. After determining the architecture of the neural network, the hyperparameters of the training process are set and tuned. Depending on the choice of the activation function of each layer, the initial state is provided: weights and biases are set to zero or randomly initialized so as to retain the variance of the activations across every layer, avoiding an exponential decrease or increase of the magnitude of the input signals. In the context of _supervised learning_, the training of a neural network consists of tuning its weights and biases, collectively denoted by \(\boldsymbol{\theta}\), by minimizing a suitable _loss_ function \(\mathcal{E}\) that measures the discrepancy between a given dataset and the predictions of the neural network. The minimization is typically performed by means of a gradient-based algorithm [54], such as the L-BFGS method [55] (a quasi-Newton optimizer) or ADAM [56]. The computational cost of the training is heavily linked to the _complexity_ of the neural network, which depends on its depth (the number of layers), on the number of neurons within each layer, and on possible constraints on the weights and biases. Regarding the latter, the architecture is _dense_ if no constraints are imposed. In the following subsection, we discuss a different approach that introduces sparsity in the weight matrix to efficiently approximate FE functions, exploiting their mesh-based information.
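As a concrete counterpart of this notation, a feedforward network with hyperbolic tangent activations in the hidden layers and a linear output layer can be written in PyTorch as follows (a generic sketch, not the specific architectures adopted later):

```python
import torch
import torch.nn as nn

class FeedforwardNN(nn.Module):
    """N = L_{l+1} o ... o L_1, with tanh activations in the hidden layers."""
    def __init__(self, sizes):
        # sizes = [n_1, ..., n_{l+1}]: number of neurons in each layer
        super().__init__()
        self.layers = nn.ModuleList(
            nn.Linear(m, n) for m, n in zip(sizes[:-1], sizes[1:])
        )

    def forward(self, v):
        for layer in self.layers[:-1]:
            v = torch.tanh(layer(v))   # rho(W v + b), applied componentwise
        return self.layers[-1](v)      # no activation in the output layer

# e.g., a map from 3 inputs to 10 outputs with one hidden layer of 15 neurons:
# net = FeedforwardNN([3, 15, 10])
```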
### Mesh-informed neural networks for the approximation of finite element functions

Despite their remarkable expressivity, plain NN architectures based on dense layers can incur major limitations, especially when dealing with high-dimensional data. In fact, when the dimensions at play become fairly large, dense architectures quickly become intractable, as they result in models that are harder to optimize and often prone to overfitting (thus also requiring more data for their training). Unfortunately, within our context, these issues arise quite naturally. In fact, the geometrical features of the microstructure are encoded at a high-fidelity level: consequently, any architecture working with the microstructure information must be powerful enough to handle an input of dimension \(O(N_{h})\), with \(N_{h}\gg 1\).

To overcome this difficulty, we rely on a particular class of sparse architectures called Mesh-Informed Neural Networks (MINNs), originally proposed in [43] as a way to handle high-dimensional data generated by FE discretizations. MINNs are obtained through an a priori pruning strategy that promotes local operations rather than global ones. We can summarize the general idea underlying MINNs as follows. Let \(\mathbb{R}^{N_{h}}\cong V_{h}\) and \(\mathbb{R}^{M_{h^{\prime}}}\cong V_{h^{\prime}}\) be two vector spaces, each associated to a given FE discretization. The two may refer to different discretizations in terms of mesh step size or polynomial degree, but they must be defined over a common spatial domain \(\Omega\). Let \[\{\mathbf{x}_{j}\}_{j=1}^{N_{h}}\subset\Omega\quad\text{and}\quad\{\mathbf{x}_{i}^{\prime}\}_{i=1}^{M_{h^{\prime}}}\subset\Omega,\] be the coordinates of the nodes of the two FE spaces, respectively. Then, following the definition in [43], a mesh-informed layer with support \(r>0\) is a DNN layer \(\mathcal{L}:\mathbb{R}^{N_{h}}\to\mathbb{R}^{M_{h^{\prime}}}\) whose weight matrix \(\mathbb{W}\) satisfies the sparsity constraint below \[\mathbb{W}_{i,j}=0\quad\forall i,j,\text{ such that }\ d(\mathbf{x}_{j},\mathbf{x}_{i}^{\prime})>r,\] where \(d:\Omega\times\Omega\to[0,+\infty)\) is a suitable distance function defined over the spatial domain, such as the Euclidean distance \(d(\mathbf{x},\mathbf{x}^{\prime}):=|\mathbf{x}-\mathbf{x}^{\prime}|\) (this choice, however, is not restrictive: a geodesic distance, for instance, could also be used). Here, the intuition is that each component \(j\) of the input (resp. output) is associated with a node \(\mathbf{x}_{j}\) representing a degree of freedom of the input (resp. output) FE space. This interpretation allows one to construct a sparse DNN model whose weight matrix ultimately acts as a local operator in terms of the corresponding FE spaces. In general, for simplicial meshes in \(\Omega\subset\mathbb{R}^{d}\), it can be shown that the weight matrix of a mesh-informed layer with support \(r>0\) has at most \(O(r^{d}N_{h}M_{h^{\prime}})\) nonzero entries [43]. The advantage of such a construction is that it reduces the number of parameters to be optimized during the training stage without affecting the overall expressivity of the model. In particular, training a MINN model can be far less demanding than doing the same with its dense counterpart; furthermore, the sparsity constraints naturally introduced in a MINN can have a beneficial impact on the generalization capabilities of the architecture; see, e.g., [43] for further details.
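For illustration, the sparsity constraint above can be enforced with a fixed boolean mask on the weight matrix of an otherwise standard layer. The following PyTorch sketch is our own simplified rendition of the idea, not the reference implementation of [43], and stores the mask densely for clarity:

```python
import torch
import torch.nn as nn

class MeshInformedLayer(nn.Module):
    """Layer with W[i, j] = 0 whenever dist(x_in[j], x_out[i]) > r."""
    def __init__(self, x_in, x_out, r, activation=torch.tanh):
        # x_in: (N_h, d) input node coordinates, x_out: (M_h', d) output ones
        super().__init__()
        dist = torch.cdist(x_out, x_in)            # pairwise Euclidean distances
        self.register_buffer("mask", (dist <= r).float())
        self.weight = nn.Parameter(0.01 * torch.randn(x_out.shape[0], x_in.shape[0]))
        self.bias = nn.Parameter(torch.zeros(x_out.shape[0]))
        self.activation = activation

    def forward(self, v):
        # masking keeps the pruned entries at zero throughout training
        return self.activation(v @ (self.weight * self.mask).T + self.bias)
```

A production implementation would exploit sparse storage to realize the \(O(r^{d}N_{h}M_{h^{\prime}})\) memory footprint, instead of the dense mask used here for readability.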
In light of these considerations, for our purposes we shall exploit both mesh-informed and dense layers: the choice will depend, case by case, on the nature of the data at hand. To distinguish between MINNs and general dense neural networks, we denote by \(\mathcal{M}\) those architectures that feature the use of mesh-informed layers.

## 4 Non-intrusive and nonlinear ROMs: POD-MINN and POD-MINN+ strategies

In this section we illustrate a strategy to use mesh-informed neural networks for the construction of non-intrusive ROMs for problems with microstructure, detailing the approximation introduced in equation (12). We remark that we will exploit MINNs in two distinct steps:

1. in the first step, directly inspired by [21], MINNs are used to approximate the map from the parameters of the problem to the reduced coefficients using the network \[\mathcal{M}_{rb}:\quad\mathcal{P}\rightarrow\mathbb{R}^{n_{rb}},\quad\mathbf{\mu}\mapsto\widehat{\mathbf{u}}_{\mathbf{\mu},rb};\]
2. in the second step, MINNs are exploited for the _closure_ model, namely a nonlinear correction to the previous ROM (based on a linear approximation space). For this purpose, we introduce the MINN \(\mathcal{M}_{c}\) such that \[\mathcal{M}_{c}:\quad\mathcal{P}\rightarrow\mathbb{R}^{N_{h}},\quad\mathbf{\mu}\mapsto\widetilde{\mathbf{u}}_{\mathbf{\mu},c}.\]

The overall framework is summarized in Figure 3. We recall that the parameters affecting the differential problem feature a _scale separation property_: the macroscale parameters are low-dimensional while the microscale parameters are high-dimensional, that is, \(n_{m}\gg n_{M}\). Approximating quantities depending on the former can thus be done through plain feedforward neural networks, while we choose to encode the latter through a mesh-informed neural network, in order to approximate spatially-dependent parametric fields related to the microscale. We shall detail how in the upcoming subsection.

Figure 3: A sketch of the POD-MINN+ method. The macroscale parameters, \(\mathbf{\mu}_{M}\), and the microscale ones, \(\mathbf{\mu}_{m}\), are fed to two separate architectures, \(\mathcal{N}_{rb,M}\) and \(\mathcal{M}_{rb,m}\), whose outputs are later combined to approximate the RB coefficients, cf. Eq. (15). The coefficients are then expanded over the POD basis, \(\mathbb{V}\), and the ROM solution is further corrected with a closure term computed by a third network, \(\mathcal{M}_{c}\), that accounts for the small scales.

### Approximation of the POD coefficients using mesh-informed neural networks

We observe that the best approximation in the POD space of the FOM solution \(u_{\mathbf{\mu},h}\), whose degrees of freedom are collected in the vector \(\mathbf{u}_{\mathbf{\mu},h}\in\mathbb{R}^{N_{h}}\), is given by its projection \[\mathbb{V}\mathbb{V}^{T}\mathbf{u}_{\mathbf{\mu},h}=\sum_{i=1}^{n_{rb}}\big{(}\mathbb{V}^{T}\mathbf{u}_{\mathbf{\mu},h}\big{)}^{(i)}\psi_{i}(x).\] The expression above defines a mapping \[\pi:\quad\mathcal{P}\subset\mathbb{R}^{p}\to\mathbb{R}^{n_{rb}},\quad\mathbf{\mu}\mapsto\mathbb{V}^{T}\mathbf{u}_{\mathbf{\mu},h}, \tag{13}\] whose approximation provides a way to generate, in the POD space, the solution of the problem for unseen values of the input parameters. As shown in [21], if the dimension of the reduced space is small enough, it is possible to effectively approximate the map \(\pi\) by means of a neural network.
Since, in the case of our interest, the parameter space \(\mathcal{P}\) includes the high-dimensional data necessary to encode the microscale parameters \(\mathbf{\mu}_{m}\), we use a MINN instead of a generic feedforward neural network. We remark that this method bypasses the projection step of the FOM onto the reduced space, (9), and the solution of the reduced problem (8). As a consequence, it turns out to be a completely non-intrusive reduced order modeling strategy, particularly advantageous for problems that do not enjoy an affine decomposition of the FOM operators, or in which one is interested in approximating only a subset of the variables involved in the formulation.

The dataset to train the network \(\mathcal{M}_{rb}\) is the representation in the reduced space of the FOM solutions collected into the data matrix \(\mathbb{S}\). In other words, the training set consists of the input-output pairs \(\{\mathbf{\mu}^{(i)},\mathbb{V}^{T}\mathbb{S}(:,i)\}_{i=1}^{N}\), where \(\mathbf{\mu}^{(i)}\) are the parameter values used to calculate the FOM solution vector \(\mathbb{S}(:,i)=\mathbf{u}_{\mathbf{\mu}^{(i)},h}=\mathbf{u}_{h}(\mathbf{\mu}^{(i)})\). This dataset is used to define the loss function, \[\mathcal{E}(\mathcal{M}_{rb};\mathbf{\theta}_{rb})=\frac{1}{N}\sum_{i=1}^{N}\|\mathbb{V}^{T}\mathbb{S}(:,i)-\mathcal{M}_{rb}(\mathbf{\mu}^{(i)};\mathbf{\theta}_{rb})\|_{2,n_{rb}},\] and the (perfectly) trained neural network is the one characterized by the parameters \(\mathbf{\theta}_{rb}^{*}\) that minimize the loss, precisely \(\mathbf{\theta}_{rb}^{*}=\text{argmin}_{\mathbf{\theta}_{rb}}\mathcal{E}(\mathcal{M}_{rb};\mathbf{\theta}_{rb})\). Once the network has been trained, the model can be used in an online phase similarly to the baseline reduced basis approach: given a new set of parameters \(\mathbf{\mu}\), the mapping through the network, \(\widehat{\mathbf{u}}_{\mathbf{\mu},rb}=\mathcal{M}_{rb}(\mathbf{\mu};\mathbf{\theta}_{rb}^{*})\), and a subsequent linear transformation, \(\mathbb{V}\widehat{\mathbf{u}}_{\mathbf{\mu},rb}\), enable the approximation of the FOM solution \(\mathbf{u}_{\mathbf{\mu},h}\). The approximation provided by \(\mathcal{M}_{rb}\) can be represented by the following function, \[\widehat{u}_{\mathbf{\mu},h}^{rb}(x)=\sum_{i=1}^{n_{rb}}\widehat{\mathbf{u}}_{\mathbf{\mu},rb}^{(i)}\psi_{i}(x)=\sum_{j=1}^{N_{h}}\widehat{\mathbf{u}}_{\mathbf{\mu},h}^{rb,(j)}\phi_{j}(x), \tag{14}\] where \(\widehat{\mathbf{u}}_{\mathbf{\mu},h}^{rb}=\mathbb{V}\widehat{\mathbf{u}}_{\mathbf{\mu},rb}\in\mathbb{R}^{N_{h}}\) is the vector collecting its degrees of freedom in the FEM space. The POD-MINN method is based on specific architectures designed to treat the parameters related to the _micro_ and _macro_ scales differently, as they belong to spaces of different dimensions.
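In code, the offline training of \(\mathcal{M}_{rb}\) reduces to a standard regression on the projected snapshots; a hedged sketch with illustrative names (`model` stands for \(\mathcal{M}_{rb}\), while `V`, `S`, and `mu` are assumed to be available as PyTorch tensors):

```python
import torch

targets = (V.T @ S).T       # (N, n_rb): reduced coefficients of the snapshots
max_epochs = 200            # illustrative cap; training uses early stopping

optimizer = torch.optim.LBFGS(model.parameters(), lr=1.0)

def lbfgs_closure():
    optimizer.zero_grad()
    # mean of the 2-norms of the residuals, cf. E(M_rb; theta_rb)
    loss = (model(mu) - targets).norm(dim=1).mean()
    loss.backward()
    return loss

for epoch in range(max_epochs):
    optimizer.step(lbfgs_closure)

# online phase for a new parameter value mu_new:
# u_hat = V @ model(mu_new)   # cf. (14)
```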
In the first step, \(\mathcal{M}_{rb}\) combines a MINN that manages the high-dimensional information about \(\mathbf{\mu}_{m}\) with a DNN that deals with \(\mathbf{\mu}_{M}\), leading to the following formulation of an input-output map that approximates the reduced coefficients: \[\mathcal{M}_{rb}(\mathbf{\mu};\mathbf{\theta}_{rb})=\mathcal{M}_{rb,m}(\mathbf{\mu}_{m};\mathbf{\theta}_{m})\mathcal{N}_{rb,M}(\mathbf{\mu}_{M};\mathbf{\theta}_{M}), \tag{15}\] where:

* The output of \(\mathcal{N}_{rb,M}:\mathbb{R}^{n_{M}}\to\mathbb{R}^{k}\) is a vector \(\mathbf{y}_{\mathbf{\mu}_{M}}=\mathcal{N}_{rb,M}(\mathbf{\mu}_{M})\in\mathbb{R}^{k}\);
* The network \(\mathcal{M}_{rb,m}:\mathbb{R}^{n_{m}}\rightarrow\mathbb{R}^{n_{rb}\times k}\) returns a matrix \(\mathbb{X}_{\mathbf{\mu}_{m}}\in\mathbb{R}^{n_{rb}\times k}\).

We note that the network \(\mathcal{M}_{rb}\) approximates the mapping \(\pi\) defined in (13). The final output is then represented by \[\widehat{\mathbf{u}}_{\mathbf{\mu},rb}=\mathbb{X}_{\mathbf{\mu}_{m}}\cdot\mathbf{y}_{\mathbf{\mu}_{M}},\] where \(\cdot\) emphasizes the presence of a matrix-vector multiplication. The training of the network defined in (15) is done all at once using the loss function \(\mathcal{E}(\mathcal{M}_{rb};\mathbf{\theta}_{rb})\). The respective weights and biases, collected in the vectors \(\mathbf{\theta}_{m}\) and \(\mathbf{\theta}_{M}\), are drawn from the normalized initialization proposed in [57].

Although the approach proposed here provides an efficient way to recover the main features of the FOM solution, it is characterized by two main error sources. In fact, regardless of the training of the network, by orthogonality of \(\mathbb{V}\): \[\|\mathbf{u}_{\mathbf{\mu},h}-\mathbb{V}\widehat{\mathbf{u}}_{\mathbf{\mu},rb}\|_{2,N_{h}}^{2}=\|\mathbf{u}_{\mathbf{\mu},h}-\mathbb{V}\mathbb{V}^{T}\mathbf{u}_{\mathbf{\mu},h}\|_{2,N_{h}}^{2}+\|\mathbb{V}^{T}\mathbf{u}_{\mathbf{\mu},h}-\widehat{\mathbf{u}}_{\mathbf{\mu},rb}\|_{2,n_{rb}}^{2}.\] This emphasizes that the neural network only provides an estimate of the map \(\pi\), meaning that \(\widehat{\mathbf{u}}_{\mathbf{\mu},rb}\approx\mathbb{V}^{T}\mathbf{u}_{\mathbf{\mu},h}\); on top of this, the representation of the FOM solution in the reduced space is affected by a projection error \(\|\mathbf{u}_{\mathbf{\mu},h}-\mathbb{V}\mathbb{V}^{T}\mathbf{u}_{\mathbf{\mu},h}\|_{2,N_{h}}\), identifying a lower bound for the approximation of the FOM solution that cannot be overcome by the POD-MINN approach. For problems with microstructure, keeping the projection error, and consequently the reconstruction error of the solution, sufficiently small would entail choosing a large number of reduced basis functions. We tackle these issues by means of a closure model, as shown in the following section.

### Mesh-informed neural networks for the closure model

Designing a closure model for problems with microstructure is particularly challenging for the following reasons: _(i)_ the input of the network \(\mathcal{M}_{c}\) must necessarily include the microscale parameters \(\mathbf{\mu}_{m}\), which are represented in the high-dimensional FE space of the FOM; _(ii)_ the output of the closure model is also isomorphic to the FE space of the FOM; _(iii)_ as a consequence of _(ii)_, the training data of \(\mathcal{M}_{c}\) are also high-dimensional. For these reasons, MINNs play a crucial role for the successful application of the closure model.
Let us denote by \(\widetilde{\mathbf{u}}_{\mathbf{\mu},c}\) the output of the closure model, namely \(\widetilde{\mathbf{u}}_{\mathbf{\mu},c}=\mathcal{M}_{c}(\mathbf{\mu})\in\mathbb{R}^{N_{h}}\). Then, following (12), the closure model is easily added to the reduced basis approximation, \[\widetilde{u}_{\mathbf{\mu},h}^{rb}(x)=\sum_{j=1}^{N_{h}}\widetilde{\mathbf{u}}_{\mathbf{\mu},c}^{(j)}\phi_{j}(x)+\sum_{i=1}^{n_{rb}}\widehat{\mathbf{u}}_{\mathbf{\mu},rb}^{(i)}\psi_{i}(x), \tag{16}\] where \(\psi_{i}(x)\) are the reduced basis functions while \(\phi_{j}(x)\) are the standard FE basis functions used in the FOM. In vector form, the previous equation is equivalent to \[\widetilde{\mathbf{u}}_{\mathbf{\mu},h}^{rb}=\widetilde{\mathbf{u}}_{\mathbf{\mu},c}+\mathbb{V}\widehat{\mathbf{u}}_{\mathbf{\mu},rb}.\] The dataset for training the closure model consists of the FOM snapshots collected in the data matrix \(\mathbb{S}\), more precisely the pairs \(\{\mathbf{\mu}^{(i)},\mathbb{S}(:,i)\}_{i=1}^{N}\). Moreover, we make the choice to feed the closure model only with the microscale input parameters. Precisely, we have \[\mathcal{M}_{c}(\mathbf{\mu};\mathbf{\theta}_{c})=\mathcal{M}_{c}(\mathbf{\mu}_{m};\mathbf{\theta}_{c}). \tag{17}\] As a result, we look for a network \(\mathcal{M}_{c}(\mathbf{\mu}_{m};\mathbf{\theta}_{c}):\mathbb{R}^{n_{m}}\rightarrow\mathbb{R}^{N_{h}}\) that minimizes the following loss function, \[\mathcal{E}(\mathcal{M}_{c};\mathbf{\theta}_{c})=\frac{1}{N}\sum_{i=1}^{N}\|\big{(}\mathbb{S}(:,i)-\mathbb{V}\mathcal{M}_{rb}(\mathbf{\mu}^{(i)};\mathbf{\theta}_{rb}^{*})\big{)}-\mathcal{M}_{c}(\mathbf{\mu}_{m}^{(i)};\mathbf{\theta}_{c})\|_{2,N_{h}},\] where \(\mathbb{S}(:,i)-\mathbb{V}\mathcal{M}_{rb}(\mathbf{\mu}^{(i)};\mathbf{\theta}_{rb}^{*})\) represents the approximation error generated by the POD-MINN method on the \(i\)-th snapshot, which \(\mathcal{M}_{c}\) is trained to correct. The weights and biases collected in \(\mathbf{\theta}_{c}\) are initialized to zero, since the optimization process starts from an initial state determined by the POD-MINN reconstruction of the solution. As a result, the POD-MINN+ approximation of the FOM solution is given by the following expression: \[\widetilde{\mathbf{u}}_{\mathbf{\mu},h}^{rb}=\mathbb{V}\widehat{\mathbf{u}}_{\mathbf{\mu},rb}+\widetilde{\mathbf{u}}_{\mathbf{\mu},c}=\mathbb{V}\mathbb{X}_{\mathbf{\mu}_{m}}\mathbf{y}_{\mathbf{\mu}_{M}}+\mathcal{M}_{c}(\mathbf{\mu}_{m})\approx\mathbf{u}_{\mathbf{\mu},h}.\] In conclusion, augmenting the trained POD-MINN map with the closure model makes it possible to retrieve the variability of local features of the solution manifold that POD-MINN could recover only by retaining a large number of reduced basis functions, including the low-energy POD modes. The method with closure is called POD-MINN+.

### Numerical results obtained with the POD-MINN and POD-MINN+ methods

In this section we analyze the performance of the POD-MINN+ method in the approximation of the benchmark problems proposed in Section 2.5. To assess the performance of the closure model, we compare the POD-MINN+ reconstruction error with the POD-MINN one and with the projection error generated by projecting the FOM solution onto the POD space, defined as in (10).
The former two are defined as follows: \[E_{POD-MINN}(n_{rb},\mathbf{u}_{\mathbf{\mu},h})=\frac{\|\mathbf{u}_{\mathbf{\mu},h}-\mathbb{V}(n_{rb})\widehat{\mathbf{u}}_{\mathbf{\mu},rb}\|_{2,N_{h}}}{\|\mathbf{u}_{\mathbf{\mu},h}\|_{2,N_{h}}}, \tag{18}\] \[E_{POD-MINN+}(n_{rb},\mathbf{u}_{\mathbf{\mu},h})=\frac{\|\mathbf{u}_{\mathbf{\mu},h}-\mathbb{V}(n_{rb})\widehat{\mathbf{u}}_{\mathbf{\mu},rb}-\widetilde{\mathbf{u}}_{\mathbf{\mu},c}\|_{2,N_{h}}}{\|\mathbf{u}_{\mathbf{\mu},h}\|_{2,N_{h}}}. \tag{19}\]

For both benchmark problems addressed in Sect. 2.5, we consider the same architectures for the POD-MINN method and the closure model, hinging upon the use of MINNs. The microscale input parameters \(\mathbf{\mu}_{m}\) are the representations of the respective forcing terms \(f(x,y)\) in the finite element space of the solution \(V_{h}\), with dimension \(N_{h}\). Let \(\mathcal{T}_{h}\) denote the computational mesh (the collection of all geometric elements) related to the space \(V_{h}\): we recall that the square domain \(\Omega=(-1,1)^{2}\) is partitioned into uniform triangles corresponding to a 50\(\times\)50 grid.

As a first step, related to the POD-MINN approach, we introduce the map approximating the reduced coefficients by defining the architecture \(\mathcal{M}_{rb}(\mathbf{\mu}_{m};\mathbf{\theta}_{rb}):\mathbb{R}^{N_{h}}\to\mathbb{R}^{n_{rb}}\), exploiting a mesh-informed layer \(L_{1}^{rb}\) of support \(r=0.6\) and a dense layer \(L_{2}^{rb}\): \[L_{1}^{rb}:\ V_{h}\xrightarrow{r=0.6}V_{2h},\qquad L_{2}^{rb}:V_{2h}\xrightarrow{}\mathbb{R}^{n_{rb}},\] where \(V_{2h}=X_{2h}^{1}(\Omega)\) is the finite element space defined over the coarser computational mesh \(\mathcal{T}_{2h}\) of the same domain \(\Omega\) (stepsize \(2h\), 25\(\times\)25 grid, uniform triangles). For the closure model, let \(\mathcal{T}_{h^{\prime}}\) be a coarser mesh on \(\Omega\), with \(h^{\prime}>h\). We use a MINN architecture \(\mathcal{M}_{c}(\mathbf{\mu}_{m};\mathbf{\theta}_{c}):\mathbb{R}^{N_{h}}\to\mathbb{R}^{N_{h}}\), composed of two mesh-informed layers \(L_{i}^{c}\) of support \(r=0.6\): \[L_{1}^{c}:\ V_{h}\xrightarrow{r=0.6}V_{h^{\prime}},\qquad L_{2}^{c}:V_{h^{\prime}}\xrightarrow{r=0.6}V_{h},\] where \(V_{h^{\prime}}=X_{h^{\prime}}^{1}(\Omega)\) is the finite element space defined from a 35\(\times\)35 computational mesh \(\mathcal{T}_{h^{\prime}}\) of uniform triangles. As a result, the MINNs \(\mathcal{M}_{rb}(\mathbf{\mu}_{m};\mathbf{\theta}_{rb})\) and \(\mathcal{M}_{c}(\mathbf{\mu}_{m};\mathbf{\theta}_{c})\) are defined as follows: \[\mathcal{M}_{rb}:V_{h}\xrightarrow{r=0.6}V_{2h}\xrightarrow{}\mathbb{R}^{n_{rb}},\qquad\mathcal{M}_{c}:\ V_{h}\xrightarrow{r=0.6}V_{h^{\prime}}\xrightarrow{r=0.6}V_{h}.\]

For what concerns the training of the neural networks, the \(N=1000\) snapshot functions taken into account to build the data matrix \(\mathbb{S}\) are partitioned to provide:

* a _training_ set of \(N_{train}=750\) samples;
* a _validation_ set consisting of \(N_{valid}=50\) snapshots;
* a _test_ set of \(N_{test}=200\) snapshots.

The optimization of the loss function is performed through the L-BFGS optimizer with learning rate equal to 1, without batching. The networks are trained for at most 250 epochs, using an early stopping criterion based on the validation error, applied when the following conditions are met: the training error _decreases_ but the validation error _increases_ for at least two consecutive epochs.
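The criterion just stated can be implemented with minimal bookkeeping; the following sketch is our paraphrase of it, with illustrative names and `patience=2` consecutive epochs:

```python
def should_stop(train_errors, valid_errors, patience=2):
    """Stop when the training error decreases while the validation
    error increases for `patience` consecutive epochs."""
    if len(train_errors) < patience + 1:
        return False
    t = train_errors[-(patience + 1):]
    v = valid_errors[-(patience + 1):]
    training_decreases = all(a > b for a, b in zip(t, t[1:]))
    validation_increases = all(a < b for a, b in zip(v, v[1:]))
    return training_decreases and validation_increases
```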
The training times for the POD-MINN+ method in the two benchmark problems vary from 98 to 130 seconds for the test case with continuously variable scales and from 89 to 188 seconds for the test case with scale separation. As a general trend, we note that an increase of the number of POD basis functions entails a decrease of the training time of the closure model.

The comparison between the POD projection error, namely \(E_{POD}\), and \(E_{POD-MINN+}\) when varying the number of basis functions \(n_{rb}\) is reported in Figure 4 for the benchmark problems with continuously variable scales and with scale separation. These results confirm that the POD-MINN approach alone does not perform well, because it cannot provide a better approximation than the POD projection error. The POD-MINN error is indeed very close to the POD projection error, but this approximation is qualitatively not satisfactory in both the \(L^{2}\) and \(L^{\infty}\) norms, unless a large number of basis functions is used. Conversely, the POD-MINN+ method is very effective, especially in the regime with a low number of basis functions. In particular, for the case with continuously variable scales the POD-MINN+ method reduces the POD error by two orders of magnitude in the region \(n_{rb}<20\), when measured in the 2-norm. For the benchmark problem with scale separation the gain increases to more than three orders of magnitude, see Figure 4 (bottom-left panel). This effect is particularly evident when we approximate this benchmark problem with basis functions only able to capture the low frequency modes, namely the case \(n_{rb}\leq 16\). In this regime the closure model is entirely responsible for the approximation of the high frequency modes. The POD-MINN+ method effectively performs this task with a relative error of \(0.1\%=10^{-3}\).

We note that the closure model does not improve the rate of convergence with respect to \(n_{rb}\); it only decreases the magnitude of the error. In general, the error of the POD-Galerkin method enjoys an optimal exponential decay with respect to \(n_{rb}\) that cannot be improved by resorting to the closure model. This observation confirms that the POD-MINN+ method is meaningful for those problems where the Kolmogorov \(n\)-width, and consequently the POD projection error, decays slowly with respect to \(n_{rb}\).

Figure 4: The performance of the POD-MINN+ method applied to the benchmark problems with continuous scales (top row) and separate scales (bottom row). We compare the errors \(E_{POD-MINN}\) and \(E_{POD-MINN+}\), defined in (18) and (19), to the one of the POD projection, defined as \(E_{POD}\) in (10), measured in the norms \(p=2,\infty\). The case with \(p=2\) is reported on the left, the one with \(p=\infty\) on the right.

## 5 Application to oxygen transfer in the microcirculation

In this section we present an application of the POD-MINN+ method to model oxygen transfer at the level of the microcirculation, described by (3). After introducing the FOM and its discretization, we discuss its parametrization and finally we present the numerical results that illustrate the advantages of the proposed reduced order model.

### Numerical approximation of oxygen transfer in the microcirculation: the full order model

For the spatial approximation of the problem we use the finite element method. We consider a 3D domain \(\Omega\), identified by a slab of edge \(1\,mm\) and thickness \(0.15\,mm\), discretized through a \(20\times 20\times 3\) structured computational mesh of tetrahedra.
For the interstitial domain we introduce the space of piecewise linear, continuous, Lagrangian finite elements \(V_{t,h}=X_{h}^{1}(\Omega)\), with dimension \(N_{h}=1764\). On the other hand, we assume that the 1D vascular network is immersed in the 3D slab and we discretize it by partitioning each vascular branch \(\Lambda_{i}\) into a sufficiently large number of linear segments. A piecewise linear, continuous, Lagrangian finite element space \(V_{v,h}^{i}=X_{h}^{1}(\Lambda_{i})\) is employed for each branch \(\Lambda_{i}\), where the index \(i\) spans the vascular branches \(i=1,\ldots,N_{b}\). Hence, we approximate the equations in the whole microvascular network through the finite element space \(V_{v,h}=\Big{(}\bigcup_{i=1}^{N_{b}}V_{v,h}^{i}\Big{)}\bigcap C^{0}(\Lambda)\). It is important to clarify that the finite element approximation of the oxygen transfer model is performed downstream of the full, comprehensive problem for the vascular microenvironment, described in [3; 49]. As shown in Figure 5, we first compute, via the finite element method, the velocity and the pressure in the tissue and in the vascular network, together with the discharge hematocrit in the vascular network. We remark that the proposed ROM based on the POD-MINN+ method could equivalently be applied to any variable defined in the domain \(\Omega\), such as the pressure or the velocity fields.

Figure 5: General layout of the full order model for the whole vascular microenvironment.

### The parametrization of the FOM: inputs and outputs

Since the model presented in Section 2 is driven by parametrized PDEs, a crucial aspect of the problem is the choice of which parameters to include as inputs of the model. We consider the parameter space \(\mathcal{P}=\mathcal{P}_{M}\cup\mathcal{P}_{m}\), where the macro- and micro-scale subspaces are selected according to the results of an earlier sensitivity analysis [58]. We then select three physical parameters (see Table 1) among those appearing in the model (3), and two geometrical, mesh-based inputs encoding the information related to the 1D vascular networks. The range of variation of each input parameter is determined consistently with its physiological bounds. In particular, the vascular architectures are obtained through a biomimetic algorithm that replicates the essential traits of angiogenesis, starting from parameters that characterize the density and the distribution of small vessels in a vascularized tissue. These quantities are the vascular surface per unit volume of tissue, named \(S/V\), and an indicator that governs the distribution of the point seeds used to initialize the angiogenesis algorithm. The definitions and the ranges of these hyper-parameters of the vascular model are reported in Table 1. By sampling these hyper-parameters and applying the angiogenesis algorithm, we obtain a population of admissible vascular networks with sufficient variability. Figure 6 shows some examples of 1D vascular networks with different spatial distributions, spanning from regular structures to less ordered ones. For more details regarding the generation of the embedded microstructures, we refer to [58].

From the computational standpoint, the vascular networks are represented as metric graphs, the vertices of which are points in 3D space. This description is not practical for the purpose of generating a reduced order model. It is more convenient to transform the vascular graph into two continuous functions.
The main one is the distance function from the nearest point of the vascular network, defined on \(\Omega_{t}\) and named \(\mathbf{d}\). The second one is used to identify the intersections of the vascular network with the boundary and is named \(\boldsymbol{\eta}\). These are the input functions of the ROM and are presented below.

#### 5.2.1 Extravascular distance and inlet function

As a way to retrieve equivalent information on the 1D vascular graph, we introduce the _extravascular distance_, i.e., the function mapping each point in the tissue slab to its distance from the closest point of the vascular network. We represent this function as a piecewise linear, continuous finite element function in \(V_{t,h}\). The vector of degrees of freedom of such a discrete function, corresponding to its nodal values, is \(\mathbf{d}\in\mathbb{R}^{N_{h}}\). The vector \(\mathbf{d}\) plays the role of the forcing term \(f_{\boldsymbol{\mu}_{m}}\) in the model (2). Furthermore, it is useful to provide the ROM with the information on the inlet points of the vascular network on the boundary. For this purpose, we introduce an _inlet characteristic function_ that assigns a unit value to the nodes of the tissue slab grid which are close to the inlets. Again, this function is defined at the discrete level using the space \(V_{t,h}\). In practice, a vertex of the finite element mesh is marked as having non-zero value if and only if one of the surface elements it belongs to is intersected by an inlet vessel. The degrees of freedom of this function are denoted by \(\boldsymbol{\eta}\in\mathbb{R}^{N_{h}}\). Figure 8 shows the mesh-based data \(\mathbf{d}\) and \(\boldsymbol{\eta}\) for a particular instance of the vascular graph. We point out, without loss of generality, that we can extend this approach by encoding the geometry of the vascular network with high-dimensional data defined over coarser meshes (_super-resolution_) or refined ones (_sub-resolution_) [59].

Figure 6: Examples of artificial vascular networks with extravascular distance \(\mathbf{d}\) progressively increasing from left to right.

#### 5.2.2 Output of the full order model

The output of the FOM is the tissue oxygenation map \(C_{t}\), one of the state variables of the oxygen transfer model, measured in \(mL_{O_{2}}/mL_{B}\). In equations (2) it represents the high-fidelity solution \(u_{\mathbf{\mu}}\). An example of the oxygen map \(C_{t}\) is shown in Figure 7, where we notice that the embedded vascular microstructure has a significant influence on the tissue oxygenation. We also note that, for the development of the ROM, the variable \(C_{t}\) is suitably rescaled by a factor 300. We remark that we can exploit the 3D-1D full order model of the vascular microenvironment to extend the proposed approach to the approximation of other state variables, such as the interstitial pressure in the tissue domain \(\Omega_{t}\). Conversely, the methodology would have to be revised if the aim were to reconstruct outputs defined over the vascular network.

### Implementation of the POD-MINN and POD-MINN+ methods

Once the macroscale and the microscale parameters are defined, namely \(\mathbf{\mu}_{M}=[V_{max},C_{v,in},P_{O_{2}}]\) and \(\mathbf{\mu}_{m}=(\mathbf{d},\mathbf{\eta})\), we apply a Monte Carlo sampling of the parameter space and collect the corresponding input-output pairs provided by the FOM, named \(\{(\mathbf{\mu}_{M}^{(i)},\mathbf{d}^{(i)},\mathbf{\eta}^{(i)});\mathbf{u}_{\mathbf{\mu},h}^{(i)}\}_{i=1}^{N}\).
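Each sample of the microscale inputs can be assembled with standard tools; for instance, the extravascular distance \(\mathbf{d}\) can be evaluated with a KD-tree query against a dense sampling of the network, as in the following sketch (our illustration; the actual preprocessing pipeline of the FOM may differ):

```python
import numpy as np
from scipy.spatial import cKDTree

def extravascular_distance(mesh_nodes, network_points):
    """Nodal distance from the nearest point of the 1D vascular network.

    mesh_nodes     : (N_h, 3) coordinates of the tissue mesh vertices
    network_points : (M, 3)   dense sampling of the vascular segments
    returns        : (N_h,)   degrees of freedom of d in V_{t,h}
    """
    tree = cKDTree(network_points)
    d, _ = tree.query(mesh_nodes)     # nearest-neighbor distances
    return d
```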
We organize these data into the data matrix \(\mathbb{S}\). In particular, for what concerns the microscale parameter space, the sampling of the vascular networks is performed with respect to the hyper-parameters introduced in Table 1. Thanks to the chosen settings, the extravascular distance \(\mathbf{d}\) and the inlet function \(\mathbf{\eta}\) share the same dimension and the same ordering as the snapshots in the matrix \(\mathbb{S}\), so that the matching between inputs and outputs can be easily established. The entries of the physical parameters \(\mathbf{\mu}_{M}\) are normalized between 0 and 1 with respect to their ranges of variation reported in Table 1.

\begin{table} \begin{tabular}{||c c c c||} \hline \hline Symbol & Parameter & Unit & Range of variation \\ \hline \(P_{O_{2}}\) & \(O_{2}\) wall permeability & \(m/s\) & \(3.5\times 10^{-5}-3.0\times 10^{-4}\) \\ \(V_{max}\) & \(O_{2}\) consumption rate & \(mL_{O_{2}}/cm^{3}/s\) & \(4.0\times 10^{-5}-2.4\times 10^{-4}\) \\ \(C_{v,in}\) & \(O_{2}\) concentration at the inlets & \(mL_{O_{2}}/mL_{B}\) & \(2.25\times 10^{-3}-3.75\times 10^{-3}\) \\ \(\%\frac{SEEDS_{(-)}}{SEEDS_{(+)}}\) & Seeds for angiogenesis & [\%] & \(0-75\) \\ \(S/V\) & Vascular surface per unit volume & \([m^{-1}]\) & \(5\cdot 10^{3}-7\cdot 10^{3}\) \\ \hline \hline \end{tabular} \end{table} Table 1: The first three rows illustrate the biophysical parameters of the ROM and their ranges of variation. The last two rows report the hyper-parameters used to initialize the algorithm that generates the vascular network.

Figure 7: The FOM solution representing the tissue oxygenation map (\(mL_{O_{2}}/mL_{B}\)) is reported on the left panel. On the right we show the 1D embedded vascular microstructure that visibly influences the oxygen map, over which the vessel oxygen concentration \(C_{v}\) is represented.

The available dataset of \(N=1600\) FOM snapshots and parameter samples is partitioned as follows:

* \(N_{train}=1220\) training data;
We recall the formulation of the input-output function \(\mathcal{M}_{rb}\) in (15): \[\mathcal{M}_{rb}(\boldsymbol{\mu};\boldsymbol{\theta}_{rb})=\mathcal{M}_{rb, m}(\boldsymbol{\mu}_{m};\boldsymbol{\theta}_{m})\mathcal{N}_{rb,M}(\boldsymbol{ \mu}_{M};\boldsymbol{\theta}_{M}), \tag{20}\] Figure 8: The nodal distance from the nearest vessel of the vascular network is shown on the left. On the right we plot the indicator function of the vessel inlets on the boundary. Figure 9: (_Left_) The decay of the singular values distribution for the microcirculation problem (_Right_) The POD projection error of the FOM solution on the reduced basis space varying the number of basis functions. where, in this case, the MINN that takes as input the geometrical parametrization is defined as \[\mathcal{M}_{rb,m}(\boldsymbol{\mu}_{m};\boldsymbol{\theta}_{m})=\mathcal{M}_{rb,\eta}(\boldsymbol{\eta};\boldsymbol{\theta}_{\eta})\odot\mathcal{M}_{rb,d}( \mathbf{d};\boldsymbol{\theta}_{d}),\] splitting the contributions of the single geometrical input parameters, with the inlets function \(\boldsymbol{\eta}\) weighting the effect of the extravascular distance \(\mathbf{d}\) through the MINN \(\mathcal{M}_{rb,\eta}\). Here, we denote with \(\odot\) the Hadamard product between two matrices, \[(A\odot B)_{i,j}=(A)_{i,j}(B)_{i,j}\qquad\forall A,B\in\mathbb{R}^{m\times n}, \;m,n\geq 1.\] The neural network \(\mathcal{M}_{rb,d}(\mathbf{d};\boldsymbol{\theta}_{d}):\mathbb{R}^{N_{h}} \rightarrow\mathbb{R}^{n_{rb}\times k}\) is assembled relying on two mesh-informed layers \(L_{1}^{rb,d}\) and \(L_{2}^{rb,d}\) of support \(r=0.3\) and hyperbolic tangent activation function, combined with a dense layer \(L_{3}^{rb,d}\) complemented with a suitable reshape of the output, which results in the following structure: * \(L_{1}^{rb,d}\): \(V_{t,h}\xrightarrow{r=0.3}V_{t,H}\), * \(L_{2}^{rb,d}\): \(V_{t,H}\xrightarrow{r=0.3}V_{t,H}\), * \(L_{3}^{rb,d}\): \(V_{t,H}\xrightarrow{}\mathbb{R}^{n_{rb}k}\xrightarrow{\text{reshape}} \mathbb{R}^{n_{rb}\times k}\), where \(V_{t,H}=X_{H}^{1}(\Omega)\) is the finite-element space determined from the computational mesh \(\mathcal{T}_{H}\) of tetrahedrons, with stepsize \(H>h\) (\(8{\times}8{\times}3\) grid). The network \(\mathcal{M}_{rb,d}(\mathbf{d};\boldsymbol{\theta}_{d})\) generates a matrix in \(\mathbb{R}^{n_{rb}\times k}\). Each column of such matrix is a vector in the space of the reduced coefficients. These \(k=10\) vectors are linearly combined through some hyper-parameters (described below) to obtain the reduced coefficients that best fit the loss function. The same considerations holds true for the neural network taking the inlet function \(\boldsymbol{\eta}\) as input, with the only difference that in the dense layer \(L_{3}^{rb,d}\) no activation is applied, while the dense layer \(L_{3}^{rb,\eta}\) of the MINN \(\mathcal{M}_{rb,\eta}\) is composed with a hyperbolic tangent function. In conclusion: \[\mathcal{M}_{rb,d},\mathcal{M}_{rb,\eta}:V_{t,h}\xrightarrow{r=0.3}V_{t,H} \xrightarrow{r=0.3}V_{t,H}\xrightarrow{}\mathbb{R}^{n_{rb}\times 10},\] giving back two matrices \(\mathbb{X}_{\mathbf{d}}\in\mathbb{R}^{n_{rb}\times 10}\) and \(\mathbb{X}_{\boldsymbol{\eta}}\in\mathbb{R}^{n_{rb}\times 10}\), respectively. 
For what concerns the physical parameters \(\boldsymbol{\mu}_{M}\in\mathbb{R}^{3}\), a shallow neural network \(\mathcal{N}_{rb,M}:\mathbb{R}^{3}\rightarrow\mathbb{R}^{10}\) with two dense layers \(L_{1}^{rb,m}\) and \(L_{2}^{rb,m}\) is introduced:

\[L_{1}^{rb,m}:\mathbb{R}^{3}\xrightarrow{}\mathbb{R}^{15},\qquad L_{2}^{rb,m}:\mathbb{R}^{15}\xrightarrow{}\mathbb{R}^{10},\]

where the input layer is composed with a hyperbolic tangent function and the output one has no activation. This leads to the architecture

\[\mathcal{N}_{rb,M}:\mathbb{R}^{3}\xrightarrow{}\mathbb{R}^{15}\xrightarrow{}\mathbb{R}^{10},\]

that provides as output a vector \(\mathbf{y}_{\boldsymbol{\mu}_{M}}=\mathcal{N}_{rb,M}(\boldsymbol{\mu}_{M})\in\mathbb{R}^{10}\). In order to obtain the approximation of the reduced coefficients, we rely on the aforementioned data structures and perform the following linear combination:

\[\widehat{\mathbf{u}}_{\boldsymbol{\mu},rb}=(\mathbb{X}_{\boldsymbol{\eta}}\odot\mathbb{X}_{\mathbf{d}})\mathbf{y}_{\boldsymbol{\mu}_{M}}=\mathbb{X}_{\boldsymbol{\mu}_{m}}\mathbf{y}_{\boldsymbol{\mu}_{M}}.\]

The network \(\mathcal{M}_{rb}\) of equation (20) is trained for at most 200 epochs, minimizing the loss function

\[\mathcal{E}(\mathcal{M}_{rb};\boldsymbol{\theta}_{rb})=\frac{1}{N_{train}}\sum_{i=1}^{N_{train}}\|\mathbb{V}^{T}\mathbb{S}(:,i)-\mathcal{M}_{rb}(\boldsymbol{\mu}^{(i)};\boldsymbol{\theta}_{rb})\|_{2,n_{rb}},\]

relying on the L-BFGS optimizer (learning rate equal to 1, no batching) and applying the same early stopping criterion presented in Section 4.3. We initialize the corresponding weights and biases with the normalized initialization proposed in [57].

#### 5.3.2 Architectures and training of the closure model (POD-MINN+)

The augmented POD-MINN approach is carried out exploiting a closure model \(\mathcal{M}_{c}(\boldsymbol{\mu}_{m};\boldsymbol{\theta}_{c}):\mathbb{R}^{N_{h}}\rightarrow\mathbb{R}^{N_{h}}\) that is fed only with the geometrical parameters \(\mathbf{d}\) and \(\boldsymbol{\eta}\), encoding the microscale features of the microcirculation problem. As in the previous case, we introduce two distinct MINNs that handle each of the two inputs, i.e. \(\mathcal{M}_{c,d}\) and \(\mathcal{M}_{c,\eta}\), separating their individual effects on the correction of the POD-MINN model:

\[\mathcal{M}_{c}(\boldsymbol{\mu}_{m};\boldsymbol{\theta}_{c})=\mathcal{M}_{c,\eta}(\boldsymbol{\eta};\boldsymbol{\theta}_{c,\boldsymbol{\eta}})\odot\mathcal{M}_{c,d}(\mathbf{d};\boldsymbol{\theta}_{c,d}).\]

Each of these MINNs is built with a single mesh-informed layer:

* \(L_{1}^{c,d}\): \(V_{t,h}\xrightarrow{r=0.5}V_{t,h}\), activation function \(\rho_{d}(x)=0.4\;\tanh(x)\);
* \(L_{1}^{c,\eta}\): \(V_{t,h}\xrightarrow{r=0.5}V_{t,h}\), activation function \(\rho_{\eta}(x)=0.05\;\tanh(x)\).
The outputs are the vectors \(\mathbf{z_{d}}=\mathcal{M}_{c,d}(\mathbf{d})\in\mathbb{R}^{N_{h}}\) and \(\mathbf{z_{\boldsymbol{\eta}}}=\mathcal{M}_{c,\eta}(\boldsymbol{\eta})\in\mathbb{R}^{N_{h}}\), providing the following POD-MINN+ approximation:

\[\widetilde{\mathbf{u}}_{\boldsymbol{\mu},h}^{rb}=\mathbb{V}\widetilde{\mathbf{u}}_{\boldsymbol{\mu},rb}+\widetilde{\mathbf{u}}_{\boldsymbol{\mu},c}=\mathbb{V}\mathbb{X}_{\boldsymbol{\mu}_{m}}\mathbf{y}_{\boldsymbol{\mu}_{M}}+(\mathbf{z_{\boldsymbol{\eta}}}\odot\mathbf{z_{d}})\approx\mathbf{u}_{\boldsymbol{\mu},h}.\]

To train the closure model, the following loss function is minimized:

\[\mathcal{E}(\mathcal{M}_{c};\boldsymbol{\theta}_{c})=\frac{1}{N_{train}}\sum_{i=1}^{N_{train}}\Bigg{(}\|\big{(}\mathbb{S}(:,i)-\mathbb{V}\mathcal{M}_{rb}(\boldsymbol{\mu}^{(i)};\boldsymbol{\theta}_{rb})\big{)}-\mathcal{M}_{c}(\boldsymbol{\mu}_{m}^{(i)};\boldsymbol{\theta}_{c})\|_{2,N_{h}}+\|\big{(}\mathbb{S}(:,i)-\mathbb{V}\mathcal{M}_{rb}(\boldsymbol{\mu}^{(i)};\boldsymbol{\theta}_{rb})\big{)}-\mathcal{M}_{c}(\boldsymbol{\mu}_{m}^{(i)};\boldsymbol{\theta}_{c})\|_{\infty,N_{h}}\Bigg{)},\]

where we have introduced a regularization term that controls the reconstruction error of the solution in the \(L_{\infty}\) norm. The training consists of at most 200 epochs with the same early stopping criterion previously mentioned. Also in this case we rely on the L-BFGS optimizer (learning rate equal to 1, no batching). Since the closure model builds upon a POD-MINN method that is already trained, the weights and biases collected in the vector \(\boldsymbol{\theta}_{c}\) are initialized to zero. It is crucial to stress that, for this real-life problem with complex input data, an approach based on directly training a parameter-to-solution neural network (based on MINNs) is not feasible. The combination of the linear projection achieved by the POD-MINN followed by the closure model is the key feature for the success of the method: we first need to perform a POD projection onto a reduced basis space, so that the closure model acts as a correction of an already trained model.

#### 5.3.3 Computational performance

The speed-up provided by the POD-MINN method (with or without closure) is significant, unlocking approaches that would be unfeasible for the full order model and allowing us to employ the POD-MINN+ model as a surrogate for the numerical solver of the microcirculation problem. The trained parameter-to-solution POD-MINN+ model is able to compute a solution in approximately 0.001 seconds, while a single run of the FOM requires a much higher wall-clock time, varying from 15 to 35 minutes depending on the density of the embedded 1D microstructure. The gain with respect to the FOM is relevant even if we include the training times. On average, the training time of the POD-MINN method oscillates between 50 and 100 seconds, depending on the number of basis functions (although the trend is not monotone). The training time for the closure model is higher, even though the optimization process consists of a correction of a trained model, and it is almost independent of the number of POD basis functions, varying from 244 to 276 seconds.

### Numerical results: comparing linear and nonlinear model order reduction

In order to measure the approximation properties of the POD-MINN+ method, we analyze the POD projection error \(E_{POD}\), defined as in (10), and the reconstruction errors \(E_{PODMINN}\) and \(E_{PODMINN+}\).
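Before turning to the error comparison, the following sketch summarizes how the two-step construction just described fits together. All shapes are illustrative, and random tensors stand in for the trained networks and the FOM snapshots; this is a schematic of the POD-MINN+ assembly and closure loss, not the authors' implementation.

```python
import torch

N_h, n_rb, batch = 4851, 10, 32                         # illustrative sizes
Q, _ = torch.linalg.qr(torch.randn(N_h, n_rb))          # stand-in orthonormal POD basis V
u_rb_hat = torch.randn(batch, n_rb)                     # X_{mu_m} y_{mu_M} from the POD-MINN
z_d, z_eta = torch.randn(batch, N_h), torch.randn(batch, N_h)  # closure MINN outputs

u_c = z_eta * z_d                                       # closure correction (Hadamard product)
u_pod_minn = u_rb_hat @ Q.T                             # lifted POD-MINN reconstruction
u_pod_minn_plus = u_pod_minn + u_c                      # POD-MINN+ approximation

u_h = torch.randn(batch, N_h)                           # placeholder FOM snapshots
residual = (u_h - u_pod_minn) - u_c                     # what the closure is trained to fit
loss = residual.norm(dim=1).mean() + residual.abs().max(dim=1).values.mean()
print(float(loss))                                      # L2 term plus the L-infinity penalty
```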
We compare the behaviour of each error, varying the number of POD basis functions \(n_{rb}\), in Figure 10. From a first analysis, we notice the good performance of both methods: they feature good approximation results in the \(L^{2}\) norm of the error with few POD basis functions. Indeed, oxygen transfer is diffusion dominated and consequently well suited for an approximation method based on POD. This is confirmed by the fast decay of the singular values of \(\mathbb{S}\) described before. On the other hand, a significant difference is detected between the \(L^{2}\) and the \(L^{\infty}\) norm, as the POD-MINN+ shows overall higher accuracy with respect to POD and POD-MINN. The latter approach fails to retrieve the local effects due to the microscale data. Instead, the closure model is able to exploit the information about the microscale geometry to improve the approximation at the smallest scales, provided that they are resolved by the FOM. The efficacy of the POD-MINN+ method is more significant with a low number of POD basis functions, as the closure model approximates the high-frequency modes that are neglected at the POD level: see Table 2 for more comprehensive quantitative insights about each proposed method.

\begin{table} \begin{tabular}{||l|c c|c c|c c||} \cline{2-7} \multicolumn{1}{c|}{} & \multicolumn{6}{c|}{Test Errors} \\ \cline{2-7} \multicolumn{1}{c|}{} & \multicolumn{2}{c|}{5 **POD Modes**} & \multicolumn{2}{c|}{10 **POD Modes**} & \multicolumn{2}{c|}{20 **POD Modes**} \\ \hline Method & \(\|\cdot\|_{2,N_{h}}\) & \(\|\cdot\|_{\infty,N_{h}}\) & \(\|\cdot\|_{2,N_{h}}\) & \(\|\cdot\|_{\infty,N_{h}}\) & \(\|\cdot\|_{2,N_{h}}\) & \(\|\cdot\|_{\infty,N_{h}}\) \\ \hline **POD** & 5.33\% & 16.00\% & 3.72\% & 13.45\% & 2.59\% & 10.80\% \\ **POD-MINN** & 6.19\% & 17.21\% & 4.52\% & 14.48\% & 3.75\% & 12.41\% \\ **POD-MINN+** & 4.85\% & 12.03\% & 3.71\% & 9.91\% & 3.28\% & 8.87\% \\ \hline \end{tabular} \end{table} Table 2: Comparison of POD, POD-MINN, and POD-MINN+ with respect to different choices of the number of reduced basis functions. We report the errors \(\|\mathbf{u}_{\boldsymbol{\mu},h}-\widehat{\mathbf{u}}_{\boldsymbol{\mu},h}^{rb}\|_{p,N_{h}}\) and \(\|\mathbf{u}_{\boldsymbol{\mu},h}-\widetilde{\mathbf{u}}_{\boldsymbol{\mu},h}^{rb}\|_{p,N_{h}}\), for the POD-MINN and POD-MINN+ respectively, in the Euclidean (\(p=2\)) and maximum (\(p=\infty\)) norms.

Figure 10: These plots compare the errors \(E_{POD-MINN}\) and \(E_{POD-MINN+}\), defined in (18), to the POD projection error \(E_{POD}\) defined in (10), measured in the norms \(p=2,\infty\) and computed for the oxygen transfer problem (3). The case with \(p=2\) is reported on the left, the one with \(p=\infty\) on the right. The plots are shown in logarithmic scale.

This analysis becomes more evident when we compare in Figure 11 the visualization of a FOM solution with the three reconstructions, obtained respectively using POD, POD-MINN and POD-MINN+. It is clear that the POD and POD-MINN methods are over-diffusive, failing to represent the details of the solution at the smallest scale, which are filtered out by the projection on the reduced basis space of only \(n_{rb}=10\) basis functions. On the other hand, the POD-MINN+ ensures that the microstructures are correctly captured, and yields a much more appreciable description of the true behaviour of the solution over the entire domain.

## 6 Conclusions

Reduced order modeling plays a crucial role in approximating the behaviour of continuous media with microstructure.
These complex systems exhibit intricate geometries and complex physics, making their direct numerical simulation computationally expensive and time-consuming.

Figure 11: The high-fidelity FOM solution in a) is compared with the corresponding POD projection in b) and with the reconstructions obtained with the POD-MINN and POD-MINN+ approaches in c) and d), respectively. With the former method, given a small number of POD basis functions (\(n_{rb}=10\)), it is possible to retrieve only the global features of the solution at the macroscale, while the local effects associated with the microscale are captured thanks to the closure model.

Reduced order modeling offers an efficient approach to capture the essential features of the system while significantly reducing computational costs. However, well-established techniques such as proper orthogonal decomposition (POD) and reduced basis methods, which exploit the projection of functions belonging to high-dimensional spaces onto a low-dimensional subspace, fail to preserve the typical high-frequency modes of the true solution in the presence of a microstructure. On the other hand, neural networks possess a remarkable ability to approximate functions in high dimensions. This flexibility allows neural networks to capture intricate patterns and relationships even in high-dimensional spaces. Leveraging these properties, we have developed a correction of the POD-Galerkin approximation that restores the fine-scale details in the reduced solution. We perform this task in two steps: first, we use MINNs to approximate the map from the parameters of the problem to the reduced POD coefficients, yielding the POD-MINN method; second, we enhance the approach by including an additive closure model, that is, a correction term, ultimately providing the POD-MINN+ method. The whole procedure can be defined and successfully trained thanks to a new family of neural network architectures, called mesh-informed neural networks (MINNs), which take advantage of the mesh structure by considering connectivity information between neighboring points or elements in the mesh. This information can be used to design specialized layers that exploit the spatial relationships within the mesh. By incorporating the mesh structure into the network architecture, the model can learn more effectively from the data and capture spatial dependencies that would be difficult to capture using traditional neural network architectures. The resulting ROM is accurate, efficient, and non-intrusive, thus applicable to many scenarios where a FOM capable of capturing the effect of microstructures is available but extremely expensive to query. The POD-MINN+ strategy not only accelerates simulations, but is also potentially able to enhance parametric studies such as sensitivity analysis and uncertainty quantification, making it a valuable tool for understanding the role of microstructure in many physical systems.

## Acknowledgments

The present research is part of the activities of the project Dipartimento di Eccellenza 2023-2027, funded by MUR, and of the project FAIR (Future Artificial Intelligence Research), funded by the NextGenerationEU program within the PNRR-PE-AI scheme (M4C2, Investment 1.3, Line on Artificial Intelligence). The authors are members of the Gruppo Nazionale per il Calcolo Scientifico (GNCS) of the Istituto Nazionale di Alta Matematica (INdAM).
2309.16918
ACGAN-GNNExplainer: Auxiliary Conditional Generative Explainer for Graph Neural Networks
Graph neural networks (GNNs) have proven their efficacy in a variety of real-world applications, but their underlying mechanisms remain a mystery. To address this challenge and enable reliable decision-making, many GNN explainers have been proposed in recent years. However, these methods often encounter limitations, including their dependence on specific instances, lack of generalizability to unseen graphs, producing potentially invalid explanations, and yielding inadequate fidelity. To overcome these limitations, we, in this paper, introduce the Auxiliary Classifier Generative Adversarial Network (ACGAN) into the field of GNN explanation and propose a new GNN explainer dubbed~\emph{ACGAN-GNNExplainer}. Our approach leverages a generator to produce explanations for the original input graphs while incorporating a discriminator to oversee the generation process, ensuring explanation fidelity and improving accuracy. Experimental evaluations conducted on both synthetic and real-world graph datasets demonstrate the superiority of our proposed method compared to other existing GNN explainers.
Yiqiao Li, Jianlong Zhou, Yifei Dong, Niusha Shafiabady, Fang Chen
2023-09-29T01:20:28Z
http://arxiv.org/abs/2309.16918v2
# ACGAN-GNNExplainer: Auxiliary Conditional Generative Explainer for Graph Neural Networks

###### Abstract.

Graph neural networks (GNNs) have proven their efficacy in a variety of real-world applications, but their underlying mechanisms remain a mystery. To address this challenge and enable reliable decision-making, many GNN explainers have been proposed in recent years. However, these methods often encounter limitations, including their dependence on specific instances, lack of generalizability to unseen graphs, producing potentially invalid explanations, and yielding inadequate fidelity. To overcome these limitations, we, in this paper, introduce the Auxiliary Classifier Generative Adversarial Network (ACGAN) into the field of GNN explanation and propose a new GNN explainer dubbed _ACGAN-GNNExplainer_. Our approach leverages a generator to produce explanations for the original input graphs while incorporating a discriminator to oversee the generation process, ensuring explanation fidelity and improving accuracy. Experimental evaluations conducted on both synthetic and real-world graph datasets demonstrate the superiority of our proposed method compared to other existing GNN explainers.

_Keywords:_ graph neural networks; explanations; graph neural network explainer; conditional generative adversarial network

## 1. Introduction

In addition, a discriminator is adopted to distinguish whether the generated explanations are "real" or "fake" and to designate a prediction label to each explanation. In this way, the discriminator could provide "feedback" to the generator and further monitor the entire generation process. Through iterations of this interplay between the generator and the discriminator, the generator is ultimately able to produce explanations akin to those deemed "real"; consequently, the quality of the final explanation is enhanced, and the overall explanation accuracy is significantly increased. Although ACGAN has been widely used in various domains (e.g., image processing (Krizhevsky et al., 2014), data augmentation (Krizhevsky et al., 2014), medical image analysis (Krizhevsky et al., 2014), etc.), to the best of our knowledge, this is the first time that ACGAN has been used to explain GNN models. Our method _ACGAN-GNNExplainer_ has the following merits: 1) it learns the underlying pattern of graphs, thus naturally providing explanations on a global scale; 2) after learning the underlying pattern, it can produce explanations for unseen graphs without retraining; 3) it is more likely to generate valid important subgraphs under the consistent monitoring of the discriminator; 4) it is capable of performing well on different tasks, including node classification and graph classification. The main contributions of this paper can be summarized as follows:

* We present a novel explainer, dubbed _ACGAN-GNNExplainer_, for GNN models, which employs a generator to generate explanations and a discriminator to consistently monitor the generation process;
* We empirically evaluate and demonstrate the superiority of our method _ACGAN-GNNExplainer_ over other existing methods on various graph datasets, including synthetic and real-world graph datasets, and tasks, including node classification and graph classification.

## 2. Related Work

### Generative Adversarial Networks

Generative Adversarial Networks (GANs) (Goodfellow et al., 2016) are composed of two neural networks: a generator and a discriminator, trained in a game-like manner. The generator takes random noise as input and generates samples intended to resemble the training data distribution. The discriminator, in contrast, takes both real and generated samples as input and distinguishes between them. The generator tries to fool the discriminator by generating realistic samples, while the discriminator learns to accurately distinguish between real and fake samples. GANs have demonstrated successful applications across a wide range of tasks, including image generation, style transfer, text-to-image synthesis, and video generation.
Furthermore, the increasing utilization of GANs has led to the proposal of various variations, reflecting ongoing innovation and refinement within the field. These novel approaches introduce new architectural designs, optimization techniques, and training strategies to improve the stability, convergence, and overall quality of GAN models. Specifically, one strategy for extending GANs involves incorporating side information. For instance, CGAN (Gan et al., 2017) proposes providing both the generator and discriminator with class labels to produce class-conditional samples. Researchers in (Krizhevsky et al., 2014) demonstrate that class-conditional synthesis significantly improves the quality of generated samples. Another avenue for extending GANs involves tasking the discriminator with reconstructing side information. This is achieved by modifying the discriminator to include an auxiliary decoder network that outputs the class label of the training data or a subset of the latent variables used for sample generation. For example, Chen et al. (Chen et al., 2018) propose InfoGAN, a GAN-based model that maximizes the mutual information between a subset of latent variables and the observations. It is known that incorporating additional tasks can enhance performance on the original task. In the paper (Krizhevsky et al., 2014), the auxiliary decoder leverages pre-trained discriminators, such as image classifiers, to further improve the quality of synthesized images. Motivated by the aforementioned variations, Odena et al. (Odena et al., 2018) introduce the Auxiliary Classifier Generative Adversarial Network (ACGAN), a model combining both strategies to leverage side information. Specifically, the proposed model is class-conditional and incorporates an auxiliary decoder tasked with reconstructing class labels. ACGAN is an extension of CGANs: ACGANs are designed not only to generate samples that are similar to the training data and conditioned on the input information, but also to classify the generated samples into different categories. In ACGANs, both the generator and the discriminator are conditioned on auxiliary information, such as class labels. The generator takes random noise as input and generates samples conditioned on the input information and a set of labels, while the discriminator not only distinguishes between real and fake samples but also classifies them into different categories based on the input information. ACGANs provide a way to generate diverse samples that are conditioned on the input information and classified into different categories, making them useful tools in many applications, such as image processing, data augmentation, and data balancing. In particular, the authors of (Krizhevsky et al., 2014) propose a semi-supervised image classifier based on ACGAN. Waheed et al. (Waheed et al., 2014) apply ACGAN in medical image analysis. Furthermore, in (Krizhevsky et al., 2014), the authors augment the data by applying ACGAN in an electrocardiogram classification system. Ding et al. (Ding et al., 2017) propose a tabular data sampling method that integrates the K-nearest neighbour method and a tabular ACGAN to balance normal and attack samples.

### Graph Neural Network Explainers

Explaining the decision-making process of graph neural networks (GNNs) is a challenging and important research topic, as it could greatly benefit users by improving safety and promoting trust in these models.
To achieve this goal, several popular approaches have emerged in recent years that aim to explain GNN models by leveraging the unique properties of graph features and structures. In this regard, we briefly review several representative GNN explainers below. GNNExplainer (Zhou et al., 2017) is a seminal method in the field of explaining GNN models. By identifying the most relevant features and subgraphs that are essential to the predictions made by the GNN model, GNNExplainer is able to provide local-scale explanations for GNN models. _PGExplainer_ (Krizhevsky et al., 2014) generates explanations for GNN models by using a probabilistic graph. Compared to GNNExplainer, it provides model-level explanations for each instance and has strong generalizability. The more recent Gem (Gem, 2017) offers local and global explanations and operates in an inductive configuration, allowing it to explain GNN models without retraining. Lin et al. (Lin et al., 2017) later propose OrphicX, which uses latent causal factors to generate causal explanations for GNN models. However, Gem (Gem, 2017) and OrphicX (Lin et al., 2017) face difficulties in achieving consistently accurate explanations on real-world datasets. Therefore, in this paper, we attempt to develop a GNN explainer that is capable of generating explanations with high fidelity and precision for both synthetic and real-world datasets. Furthermore, reinforcement learning is another prevalent technique employed for explicating GNN models. For example, Yuan et al. (Yuan et al., 2017) propose XGNN, a model-level explainer that trains a graph generator to generate graph patterns that maximize a specific prediction of the model. Wang et al. (Wang et al., 2018) introduce RC-Explainer, which generates causal explanations for GNNs by combining the causal screening process with a Markov Decision Process in reinforcement learning. Furthermore, Shan et al. (Shan et al., 2018) propose RG-Explainer, a reinforcement-learning-enhanced explainer that can be applied in the inductive setting, demonstrating its better generalization ability. In addition to the works mentioned above, another line of work focuses on generating counterfactual explanations. For example, CF-GNNExplainer (Lin et al., 2017) generates counterfactual explanations for the majority of instances of GNN explanations. Furthermore, Bajaj et al. (Bajaj et al., 2017) propose RCExplainer, which generates robust counterfactual explanations, and Wang et al. (Wang et al., 2018) propose ReFine, which pursues multi-grained explainability by pre-training and fine-tuning.

## 3. Method

### Problem Formulation

The notions of "interpretation" and "explanation" are crucial in unravelling the underlying working mechanisms of GNNs. Interpretation entails understanding the decision-making process of the model, prioritizing transparency and the ability to trace the trajectory of decisions. In contrast, an explanation furnishes a rationale or justification for the predictions of GNNs, striving to present a coherent and succinct reasoning for the outcomes. In this paper, we attempt to identify the subgraphs that significantly impact the predictions of GNNs.
We represent a graph as \(\mathbf{G}=(\mathbf{V},\mathbf{A},\mathbf{X})\), where \(\mathbf{V}\) is the set of nodes, \(\mathbf{A}\in\{0,1\}^{|\mathbf{V}|\times|\mathbf{V}|}\) denotes the adjacency matrix, where \(\mathbf{A}_{ij}=1\) if there is an edge between node \(i\) and node \(j\) and \(\mathbf{A}_{ij}=0\) otherwise, and \(\mathbf{X}\) indicates the feature matrix of the graph \(\mathbf{G}\). We also have a GNN model \(f\), and \(\hat{l}\) denotes its prediction, \(f(\mathbf{G})\rightarrow\hat{l}\). We further define \(E(f(\mathbf{G}),\mathbf{G})\rightarrow\mathbf{G}^{s}\) as the explanation of a GNN explainer. Ideally, when feeding the explanation into the GNN model \(f\), it would produce the exact same prediction \(\hat{l}\), which means that \(f(\mathbf{G})\) equals \(f(E(f(\mathbf{G}),\mathbf{G}))\). We also expect that the explanation \(E(f(\mathbf{G}),\mathbf{G})\rightarrow\mathbf{G}^{s}\) should be a subgraph of the original input graph \(\mathbf{G}\), which means that \(\mathbf{G}^{s}\subseteq\mathbf{G}\), so that the explained graph is a valid subgraph.

### Obtaining Causal Real Explanations

Our objective in this paper is to elucidate the reasoning behind the predictions made by the target GNN model \(f\). To achieve this, we regard the target GNN model \(f\) as a black box and refrain from investigating its internal mechanisms. Instead, we attempt to identify the subgraphs that significantly affect the predictions of the target GNN model \(f\). In particular, we employ a generative model to autonomously generate these subgraphs/explanations.

Figure 1. The framework of ACGAN-GNNExplainer. The \(\odot\) means element-wise multiplication. This figure includes two phases: the training phase and the test phase. During the Training Phase, the objective is to train the generator and discriminator of the ACGAN-GNNExplainer model. After successful training, the Test Phase then utilizes the trained generator to generate explanations for the testing data.

In order for the generative model to generate faithful explanations, it must first be trained under the supervision of "real" explanations (ground truth). However, these ground truths are typically unavailable in real-world applications. In this paper, we employ Granger causality (Golovolov et al., 2015), which is commonly used to test whether a specific variable has a causal effect on another variable, to circumvent this difficulty. Specifically, in our experiments, we mask an edge and then observe its effect on the prediction of the target GNN model \(f\). We then calculate the difference between the prediction probability of the original graph and that of the masked graph and set this difference as an edge weight to indicate its effect on the prediction of the target GNN model \(f\). After that, we sort all edges of the graph according to the weight values we have obtained and save the resulting weighted graph. The edges with the highest weights therefore correspond to the actual explanations (important subgraphs). However, it should also be noted that using Granger causality (Golov et al., 2015) directly to explain a target GNN model \(f\) is computationally intensive and has limited generalizability. Our method, on the other hand, naturally overcomes this challenge, as our parameterized explainer captures the fundamental patterns shared by the same group and is adaptable and transferable across different graphs once the shared patterns have been comprehensively learned.
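For concreteness, the edge-occlusion procedure just described can be sketched as follows. This is an illustrative reading of the pre-processing step, not the paper's code: `gnn` is a hypothetical callable mapping a dense adjacency (with fixed node features) to class probabilities.

```python
import torch

def edge_importance(gnn, adj: torch.Tensor, feats: torch.Tensor, label: int) -> torch.Tensor:
    """Weight each existing edge by the prediction-probability drop it causes."""
    with torch.no_grad():
        p_full = gnn(adj, feats)[label]          # probability on the intact graph
        weights = torch.zeros_like(adj)
        for i, j in adj.nonzero().tolist():
            masked = adj.clone()
            masked[i, j] = 0.0                   # occlude one edge
            masked[j, i] = 0.0                   # (for undirected graphs) symmetric entry
            weights[i, j] = p_full - gnn(masked, feats)[label]  # big drop => important edge
    return weights

# Sorting the edges by weight and keeping the top-K (or top-R fraction) yields the
# "real" explanation subgraph used to supervise the generator.
```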
### ACGAN-GNNExplainer

Leveraging the generative capacity of ACGAN, in this paper we propose an ACGAN-based explanation method for GNN models, termed ACGAN-GNNExplainer. It consists of a generator (\(\mathcal{G}\)) and a discriminator (\(\mathcal{D}\)). The generator \(\mathcal{G}\) is used to generate the explanations, while the discriminator \(\mathcal{D}\) is used to monitor the generation process. The detailed framework of our method ACGAN-GNNExplainer is depicted in Figure 1. In contrast to the conventional strategy of training an ACGAN, in which random noise \(\mathbf{z}\) is fed into the generator \(\mathcal{G}\), our model feeds the generator \(\mathcal{G}\) with the original graph \(\mathbf{G}\), which is the graph we want to explain, and the label \(l\), which is predicted by the target GNN model \(f\). Employing this strategy, we ensure that the explanation produced by the generator \(\mathcal{G}\), which plays a crucial role in determining the predictions of the GNN model \(f\), corresponds to the original input graph \(\mathbf{G}\). In addition, the generator \(\mathcal{G}\) trained under this mechanism can be easily generalized to unseen graphs without significant retraining, thus saving computational costs. For the generator \(\mathcal{G}\), we employ an encoder-decoder network, where the encoder projects the original input graph \(\mathbf{G}\) into a compact hidden representation and the decoder reconstructs the explanation from this compact hidden representation. In our case, the reconstructed explanation is a mask matrix that indicates the significance of each edge. Conceptually, the generator \(\mathcal{G}\) is capable of generating any explanation (valid or invalid) if it is sufficiently complex, which contradicts the objective of _explaining_ a GNN. Inspired by ACGAN, we adopt a discriminator \(\mathcal{D}\) to monitor the generating process of \(\mathcal{G}\). Specifically, our discriminator \(\mathcal{D}\) is a graph classifier with five convolutional layers. It is fed with the real explanation and the explanation generated by our generator \(\mathcal{G}\). It attempts to identify whether the explanation is "real" or "fake" and, at the same time, to classify the explanation, which serves as "feedback" to the generator \(\mathcal{G}\) and further encourages the generator \(\mathcal{G}\) to produce faithful explanations. In addition, to train our generator \(\mathcal{G}\) and discriminator \(\mathcal{D}\), we need to obtain the "real" explanations first. To achieve this goal, we incorporate a pre-processing step in our framework (Fig. 1), which uses Granger causality (Golov et al., 2015) to acquire the "real" explanations; the details can be found in Section 3.2. Once the input graph \(\mathbf{G}\), its corresponding real subgraphs (ground truth), and the labels have been acquired, we can train our ACGAN-GNNExplainer to produce a weighted mask that effectively highlights the edges and nodes in the original input graph \(\mathbf{G}\) that significantly contribute to the decision-making process of the given GNN model \(f\). Then, by multiplying the mask with the original adjacency matrix of the input graph, we obtain the corresponding explanations/important subgraphs. These explanations are particularly useful for comprehending the reasoning behind the complex GNN model.
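A minimal sketch of such a label-conditioned mask generator is given below. Layer sizes, the flattened-adjacency encoding, and the one-hot conditioning scheme are our own illustrative assumptions; the paper's actual encoder-decoder architecture may differ.

```python
import torch
import torch.nn as nn

class MaskGenerator(nn.Module):
    """Encoder-decoder that maps (graph, predicted label) to an edge mask."""
    def __init__(self, n_nodes: int, n_classes: int, hidden: int = 64):
        super().__init__()
        in_dim = n_nodes * n_nodes + n_classes
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(hidden, n_nodes * n_nodes), nn.Sigmoid())
        self.n_nodes, self.n_classes = n_nodes, n_classes

    def forward(self, adj, label):                    # adj: (B, n, n), label: (B,)
        onehot = torch.nn.functional.one_hot(label, self.n_classes).float()
        z = self.encoder(torch.cat([adj.flatten(1), onehot], dim=1))
        mask = self.decoder(z).view(-1, self.n_nodes, self.n_nodes)
        return adj * mask                             # element-wise product keeps only existing edges

gen = MaskGenerator(n_nodes=25, n_classes=2)
adj = torch.bernoulli(torch.full((4, 25, 25), 0.2))   # toy batch of random graphs
expl = gen(adj, torch.randint(0, 2, (4,)))            # candidate explanation subgraphs
print(expl.shape)                                     # torch.Size([4, 25, 25])
```

Multiplying the mask by the original adjacency guarantees that the output never contains edges absent from the input graph, which is what makes the explanation a valid subgraph.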
### Improved Loss Function

The generator \(\mathcal{G}\) generates explanations/subgraphs \(\mathbf{G}^{s}\subseteq\mathbf{G}\) based on two essential inputs: the original graph \(\mathbf{G}\) and its associated label \(l\), as expressed by \(\mathbf{G}^{s}\leftarrow\mathcal{G}(\mathbf{G},l)\). Concurrently, the discriminator \(\mathcal{D}\) assesses the probabilities of origins (\(S\in\{\text{real},\text{fake}\}\)), denoted as \(P(S\mid\mathbf{G})\), and the probabilities of class classification (class labels \(L=\{l_{1},\cdots,l_{n}\}\)), denoted as \(P(L\mid\mathbf{G})\). Consequently, the loss function of the discriminator comprises two components: the log-likelihood of the correct source, \(\mathcal{L}_{S}\), defined in Equation 1, and the log-likelihood of the correct class, \(\mathcal{L}_{L}\), defined in Equation 2.

\[\begin{split}\mathcal{L}_{S}=&\mathbb{E}\left[\log P\left(S=\text{real}\mid\mathbf{G}\right)\right]+\\ &\mathbb{E}\left[\log P\left(S=\text{fake}\mid\mathbf{G}^{s}\right)\right]\end{split} \tag{1}\]

\[\begin{split}\mathcal{L}_{L}=&\mathbb{E}\left[\log P\left(L=l\mid\mathbf{G}\right)\right]+\\ &\mathbb{E}\left[\log P\left(L=l\mid\mathbf{G}^{s}\right)\right]\end{split} \tag{2}\]

The discriminator \(\mathcal{D}\) and the generator \(\mathcal{G}\) play a minimax game, engaging in competitive interactions. The primary objective of the discriminator \(\mathcal{D}\) is to maximize the probability of accurately classifying real and fake graphs (\(\mathcal{L}_{S}\)), as well as correctly predicting the class label (\(\mathcal{L}_{L}\)) for all graphs, resulting in the combined objective of maximizing \(\mathcal{L}_{S}+\mathcal{L}_{L}\). Conversely, the generator \(\mathcal{G}\) aims to minimize the discriminator's capacity to identify real and fake graphs while simultaneously maximizing the discriminator's ability to classify them, as indicated by the combined objective of maximizing \(-\mathcal{L}_{S}+\mathcal{L}_{L}\). Thus, based on Equation 1 and Equation 2, the objective functions of \(\mathcal{D}\) and \(\mathcal{G}\) are formulated in Equation 3 and Equation 4, respectively.

\[\begin{split}\mathcal{L}_{(\mathcal{D})}=&-\mathbb{E}_{\mathbf{G}^{gt}\sim P(\mathbf{G}^{gt})}\log\mathcal{D}(\mathbf{G}^{gt})\\&-\mathbb{E}_{\mathbf{G}\sim P(\mathbf{G})}\log\big{(}1-\mathcal{D}(\mathcal{G}(\mathbf{G},l))\big{)}\\&-\mathbb{E}_{\mathbf{G}^{gt}\sim P(\mathbf{G}^{gt})}\log P(L=l\mid\mathbf{G}^{gt})\\&-\mathbb{E}_{\mathbf{G}\sim P(\mathbf{G})}\log P(L=l\mid\mathcal{G}(\mathbf{G},l))\end{split} \tag{3}\]

\[\begin{split}\mathcal{L}_{(\mathcal{G})}=&-\mathbb{E}_{\mathbf{G}\sim P(\mathbf{G})}\log\mathcal{D}(\mathcal{G}(\mathbf{G},l))\\&-\mathbb{E}_{\mathbf{G}\sim P(\mathbf{G})}\log P(L=l\mid\mathcal{G}(\mathbf{G},l))\end{split} \tag{4}\]

Here, \(\mathbf{G}\) represents the original graph that requires explanation, while \(\mathbf{G}^{gt}\) signifies its corresponding real explanation (e.g., the real important subgraph). Using the objective functions detailed in Equation 3 and Equation 4 to train the discriminator \(\mathcal{D}\) and generator \(\mathcal{G}\), it is observed that the fidelity of the generated explanation is not satisfactory. This could be attributed to the fact that \(\mathcal{L}_{(\mathcal{G})}\) as in Equation 4 does not explicitly incorporate the fidelity information from a target GNN model \(f\).
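As an illustration, the source and class terms of Equations 1-4 translate into a training step roughly as follows. This is a hedged sketch, not the paper's implementation: `D` is assumed to return a pair (source logit, class logits), and the optional `fid_term` anticipates the \(\lambda\mathcal{L}_{Fid}\) correction introduced in Equation 5 below.

```python
import torch
import torch.nn.functional as F

def discriminator_loss(D, g_real, g_fake, labels):
    s_real, c_real = D(g_real)                   # source logit and class logits
    s_fake, c_fake = D(g_fake.detach())
    l_s = F.binary_cross_entropy_with_logits(s_real, torch.ones_like(s_real)) \
        + F.binary_cross_entropy_with_logits(s_fake, torch.zeros_like(s_fake))
    l_c = F.cross_entropy(c_real, labels) + F.cross_entropy(c_fake, labels)
    return l_s + l_c                             # D maximizes L_S + L_L, i.e. minimizes this negative log-likelihood

def generator_loss(D, g_fake, labels, fid_term=0.0, lam=0.0):
    s_fake, c_fake = D(g_fake)
    l_s = F.binary_cross_entropy_with_logits(s_fake, torch.ones_like(s_fake))
    l_c = F.cross_entropy(c_fake, labels)
    return l_s + l_c + lam * fid_term            # lam * fid_term realizes the +lambda*L_Fid of Eq. 5
```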
To overcome this limitation and enhance both the fidelity and accuracy of the explanation, we intentionally integrate the fidelity of the explanation into our objective function. Finally, we derive an enhanced generator (\(\mathcal{G}\)) loss function, as defined in Equation 5.

\[\begin{split}\mathcal{L}_{(\mathcal{G})}=&-\mathbb{E}_{\mathbf{G}\sim P(\mathbf{G})}\log\mathcal{D}(\mathcal{G}(\mathbf{G},l))\\&-\mathbb{E}_{\mathbf{G}\sim P(\mathbf{G})}\log P(L=l\mid\mathcal{G}(\mathbf{G},l))\\&+\lambda\mathcal{L}_{Fid}\end{split} \tag{5}\]

\[\mathcal{L}_{Fid}=\frac{1}{N}\sum_{i=1}^{N}||f(\mathbf{G})-f(\mathcal{G}(\mathbf{G}))||^{2} \tag{6}\]

In this context, \(\mathcal{L}_{Fid}\) represents the loss function component associated with fidelity, \(f\) symbolizes the pre-trained target GNN model, \(N\) signifies the cardinality of the node set of \(\mathbf{G}\), and \(\mathbf{G}\) represents the original graph intended for explanation. Correspondingly, \(\mathbf{G}^{gt}\) stands for the explanation ground truth associated with it (e.g., the real important subgraph). Within this framework, \(\lambda\) is a trade-off hyperparameter responsible for adjusting the relative significance of the ACGAN model and the explanation accuracy obtained from the pre-trained target GNN \(f\). Setting \(\lambda\) to zero makes Equation 5 exactly equivalent to Equation 4. Notably, for our experiments, we selected \(\lambda=2.0\) for the synthetic graph datasets, \(\lambda=4.0\) for the Mutagenicity dataset, and \(\lambda=4.5\) for the NCI1 dataset.

### Pseudocode of ACGAN-GNNExplainer

In Sections 3.3 and 3.4, we have described the framework and loss functions of our method in detail. To further elucidate our method, we provide its pseudocode in Algorithm 1.

```
Input: Graph data \(G=\{g_{1},\cdots,g_{n}\}\) with labels \(L=\{l_{1},\cdots,l_{n}\}\), real explanations for graph data \(G^{gt}=\{g_{1}^{gt},\cdots,g_{n}^{gt}\}\) (obtained in the pre-processing phase), a pre-trained GNN model \(f\)
Output: A well-trained generator \(\mathcal{G}\), a well-trained discriminator \(\mathcal{D}\)
1 Initialize the generator \(\mathcal{G}\) and the discriminator \(\mathcal{D}\) with random weights
2 for epoch in epochs do
3    Sample a minibatch of \(m\) real data samples \(\{g^{(1)},\cdots,g^{(m)}\}\) with real labels \(\{l^{(1)},\cdots,l^{(m)}\}\)
4    Generate fake data samples \(\{g^{s(1)},\cdots,g^{s(m)}\}\leftarrow\mathcal{G}(g^{(1)},\cdots,g^{(m)})\) and obtain their labels \(\{l^{s(1)},\cdots,l^{s(m)}\}\)
5    Update \(\mathcal{D}\) with the gradient: \(\nabla_{\theta_{d}}\frac{1}{m}\sum_{i=1}^{m}\big{[}\mathcal{L}_{(\mathcal{D})}\big{]}\)
6    Update \(\mathcal{G}\) with the gradient: \(\nabla_{\theta_{g}}\frac{1}{m}\sum_{i=1}^{m}\big{[}\mathcal{L}_{(\mathcal{G})}\big{]}\)
7 end for
```
**Algorithm 1** Training an ACGAN-GNNExplainer

## 4. Experiments

In this section, we undertake a comprehensive evaluation of the performance of our proposed method, ACGAN-GNNExplainer. We first introduce the datasets used in our experiments, as well as the implementation details, in Section 4.1. After that, we show the quantitative evaluation of our method in comparison with other representative GNN explainers on synthetic datasets (see Section 4.2) and real-world datasets (see Section 4.3). Finally, we also provide a qualitative analysis and visualize several explanation samples generated by our method, as well as by other representative GNN explainers, in Section 4.4.

### Implementation Details

_Datasets_.
We focus on two widely used synthetic node classification datasets, BA-Shapes and Tree-Cycles (Srivastava et al., 2017), and two real-world graph classification datasets, Mutagenicity (Kalal et al., 2017) and NCI1 (Kal et al., 2018). Details of these datasets are shown in Table 1.

\begin{table} \begin{tabular}{l c c c c} \hline \hline & \multicolumn{2}{c}{Node Classification} & \multicolumn{2}{c}{Graph Classification} \\ \cline{2-5} & BA-Shapes & Tree-Cycles & Mutagenicity & NCI1 \\ \hline \# of Graphs & 1 & 1 & 4,337 & 4,110 \\ \# of Edges & 4,110 & 1,950 & 266,894 & 132,753 \\ \# of Nodes & 700 & 871 & 131,488 & 122,747 \\ \# of Labels & 4 & 2 & 2 & 2 \\ \hline \hline \end{tabular} \end{table} Table 1. Details of synthetic and real-world datasets.

_Baseline Approaches_. Due to the growing prevalence of GNNs in a variety of real-world applications, an increasing number of research studies seek to explain GNNs, thereby enhancing their credibility and nurturing trust. Among them, we identify three representative GNN explainers as our competitors: _GNNExplainer_ (Srivastava et al., 2017), _OrphicX_ (Kalal et al., 2017) and _Gem_ (Gem, 2018). For these competitors, we adopt their respective official implementations.

_Different Top Edges (\(K\) or \(R\))_. After obtaining the weight (importance) of each edge of the input graph \(\mathbf{G}\), it is also important to select the right number of edges to serve as the explanation, as selecting too few edges may lead to an incomplete explanation/subgraph, while selecting too many edges may introduce noisy information into the explanation. To overcome this uncertainty, we specifically define a top \(K\) (for synthetic datasets) and a top \(R\) (for real-world datasets) to indicate the number of edges we would like to select. We test different \(K\) and \(R\) to show the stability of our method. To be specific, we set \(K=\{5,6,7,8,9\}\)
In addition, fidelity is a measure of how faithfully the explanations capture the important subgraphs of the input original graph. In our experiments, we employ the \(Fidelity^{+}\) and \(Fidelity^{-}\)[28] to evaluate the fidelity of the explanations. _Fidelity\({}^{+}\)_ quantifies the variation in the predicted accuracy between the original predictions and the new predictions generated by excluding the important input features. On the contrary, \(Fidelity^{-}\) denotes the changes in prediction accuracy when significant input features are retained while non-essential structures are removed. Evaluation of both \(Fidelity^{+}\) and \(Fidelity^{-}\) provides a comprehensive insight into the precision of the explanations to capture the behaviour of the model and the importance of different input features. \(Fidelity^{+}\) and \(Fidelity^{-}\) are mathematically described in Equation 8 and Equation 9, respectively. \[Fid^{+}=\frac{1}{N}\sum_{i=1}^{N}(f(\mathbf{G}_{i})_{l_{i}}-f(\mathbf{G}_{i}^ {1-s})_{l_{i}}) \tag{8}\] \[Fid^{-}=\frac{1}{N}\sum_{i=1}^{N}(f(\mathbf{G}_{i})_{l_{i}}-f(\mathbf{G}_{i}^ {s})_{l_{i}}) \tag{9}\] In these equations, \(N\) denotes the total number of samples, and \(l_{i}\) represents the class label for instance \(i\). \(f(\mathbf{G}_{i})_{l_{i}}\) and \(f(\mathbf{G}_{i}^{1-s})_{l_{i}}\) correspond to the prediction probabilities for class \(l_{i}\) using the original graph \(\mathbf{G}_{i}\) and the occluded graph \(\mathbf{G}_{i}^{1-s}\), respectively. The occluded graph is derived by removing the significant features identified by the explainers from the original graph. A higher value of \(Fidelity^{+}\) is preferable, indicating a more essential explanation. In contrast, \(f(\mathbf{G}_{i}^{s})_{l_{i}}\) represents the prediction probability for class \(l_{i}\) using the explanation graph \(\mathbf{G}_{i}^{s}\), which encompasses the crucial structures identified by explainers. A lower \(Fidelity^{-}\) value is desirable, signifying a more sufficient explanation. In general, the accuracy of the explanation (\(ACC_{exp}\)) assesses the accuracy of the explanations, while \(Fidelity^{+}\) and \(Fidelity^{-}\) assess their necessity and sufficiency, respectively. A higher \(Fidelity^{+}\) suggests a more essential explanation, while a lower \(Fidelity^{-}\) implies a more sufficient one. Through comparison of accuracy and fidelity across different explainers, we can derive valuable insights into the performance and suitability of each approach. 
\begin{table} \begin{tabular}{c|c c c|c c c|c c c|c c c|c c c} \hline \hline K & \multicolumn{3}{c|}{5} & \multicolumn{3}{c|}{6} & \multicolumn{3}{c|}{7} & \multicolumn{3}{c|}{8} & \multicolumn{3}{c}{9} \\ \cline{2-16} (top edges) & \(Fid^{+}\) & \(Fid^{-}\) & \(ACC_{exp}\) & \(Fid^{+}\) & \(Fid^{-}\) & \(ACC_{exp}\) & \(Fid^{+}\) & \(Fid^{-}\) & \(ACC_{exp}\) & \(Fid^{+}\) & \(Fid^{-}\) & \(ACC_{exp}\) & \(Fid^{+}\) & \(Fid^{-}\) & \(ACC_{exp}\) \\ \hline GNNExplainer & 0.7059 & 0.1471 & 0.7941 & 0.6765 & 0.0588 & 0.8824 & 0.7059 & 0.0294 & 0.9118 & 0.7353 & 0.0000 & 0.9412 & 0.7353 & 0.0294 & 0.9118 \\ Gem & 0.5588 & **0.0000** & **0.9412** & 0.5588 & **-0.0294** & **0.9706** & 0.5882 & **-0.0294** & **0.9706** & 0.5882 & **-0.0294** & **0.9706** & 0.5882 & -0.0294 & 0.9706 \\ OrphicX & **0.7941** & 0.2059 & 0.7353 & **0.7941** & 0.2059 & 0.7353 & **0.7941** & 0.0882 & 0.8529 & **0.7941** & 0.0588 & 0.8824 & **0.7941** & 0.0588 & 0.8824 \\ Our Method & 0.6471 & 0.1471 & 0.7941 & 0.5882 & 0.0882 & 0.8529 & 0.6176 & **-0.0294** & **0.9706** & 0.6471 & **-0.0294** & **0.9706** & 0.6471 & **-0.0588** & **1.0000** \\ \hline \hline \end{tabular} \end{table} Table 2. The fidelity and accuracy of explanations on the BA-Shapes dataset: \(Fid^{+}(\uparrow)\), \(Fid^{-}(\downarrow)\), \(ACC_{exp}(\uparrow)\).

\begin{table} \begin{tabular}{c|c c c|c c c|c c c|c c c|c c c} \hline \hline K & \multicolumn{3}{c|}{6} & \multicolumn{3}{c|}{7} & \multicolumn{3}{c|}{8} & \multicolumn{3}{c|}{9} & \multicolumn{3}{c}{10} \\ \cline{2-16} (top edges) & \(Fid^{+}\) & \(Fid^{-}\) & \(ACC_{exp}\) & \(Fid^{+}\) & \(Fid^{-}\) & \(ACC_{exp}\) & \(Fid^{+}\) & \(Fid^{-}\) & \(ACC_{exp}\) & \(Fid^{+}\) & \(Fid^{-}\) & \(ACC_{exp}\) & \(Fid^{+}\) & \(Fid^{-}\) & \(ACC_{exp}\) \\ \hline GNNExplainer & 0.9143 & 0.8000 & 0.1714 & 0.9429 & 0.4571 & 0.5143 & **0.9714** & 0.1714 & 0.8000 & **0.9714** & 0.0571 & 0.9143 & **0.9714** & 0.0571 & 0.9143 \\ Gem & **0.9714** & 0.2571 & 0.7143 & **0.9714** & 0.1429 & 0.8286 & **0.9714** & 0.2571 & 0.7143 & **0.9714** & 0.1143 & 0.8571 & **0.9714** & 0.0857 & 0.8857 \\ OrphicX & 0.9429 & **0.0000** & **0.9714** & 0.9429 & 0.0000 & 0.9714 & 0.9429 & **-0.0286** & **1.0000** & 0.9429 & **-0.0286** & **1.0000** & 0.9429 & **-0.0286** & **1.0000** \\ Our Method & **0.9714** & **0.0000** & **0.9714** & **0.9714** & **-0.0286** & **1.0000** & **0.9714** & 0.0286 & 0.9429 & **0.9714** & 0.0571 & 0.9143 & **0.9714** & 0.0000 & 0.9714 \\ \hline \hline \end{tabular} \end{table} Table 3. The fidelity and accuracy of explanations on the Tree-Cycles dataset: \(Fid^{+}(\uparrow)\), \(Fid^{-}(\downarrow)\), \(ACC_{exp}(\uparrow)\).

### Experiments on Synthetic Datasets

We first conduct experiments on two common synthetic datasets, BA-Shapes and Tree-Cycles (Srivastava et al., 2017); their details can be found in Section 4.1. We assess the fidelity and accuracy of the explanations generated by GNNExplainer, Gem, OrphicX, and our proposed ACGAN-GNNExplainer. Table 2 and Table 3 present the fidelity and accuracy of explanations for the BA-Shapes and Tree-Cycles datasets across different \(K\), respectively. When examining the results for the BA-Shapes dataset, as shown in Table 2, it is evident that no single model consistently surpasses the others across all metrics. However, as the value of \(K\) increases, ACGAN-GNNExplainer progressively achieves competitive explanation accuracy \(ACC_{exp}\) and better \(Fidelity^{-}\) performance. On the contrary, OrphicX consistently exhibits higher \(Fidelity^{+}\) values for various \(K\), highlighting its proficiency in capturing essential subgraphs.
However, its performance in terms of explanation accuracy \(ACC_{exp}\) and \(Fidelity^{-}\) lags behind, indicating that it struggles to provide comprehensive and precise explanations. Upon analyzing the results presented in Table 3, it is evident that all methods demonstrate commendable performance on the Tree-Cycles dataset across different \(K\) values. However, no single method consistently outperforms the others in all evaluation metrics, which mirrors the trend observed for BA-Shapes (see Table 2). Notably, within the range of \(K=\{6,7\}\), ACGAN-GNNExplainer emerges as the superior choice among all the alternatives. It maintains the highest fidelity compared to the other methods across all \(K\) values. Although outperformed by OrphicX in terms of \(Fidelity^{-}\) and accuracy \(ACC_{exp}\) when \(K\) is in the range of \(\{8,9,10\}\), ACGAN-GNNExplainer still shows competitive performance.

In summary, all GNN explainers manifest robust performance on the synthetic datasets, largely attributable to their intrinsic simplicity in contrast to real-world datasets. Notably, ACGAN-GNNExplainer consistently outperforms alternative methods in several scenarios. Moreover, even in situations where ACGAN-GNNExplainer does not outshine its counterparts, it maintains competitive levels of performance. To offer a comprehensive evaluation of ACGAN-GNNExplainer, we extend our exploration to real-world datasets in the forthcoming Section 4.3, facilitating a thorough analysis.

### Experiments on Real-world Datasets

Here we further experiment with our method on two popular real-world datasets, Mutagenicity (Krishnan et al., 2017) and NCI1 (Krishnan et al., 2018). The experimental results for Mutagenicity and NCI1 are shown in Table 4 and Table 5, respectively. From Table 4, it can be seen that ACGAN-GNNExplainer demonstrates superior performance in both fidelity (\(Fidelity^{+}\), \(Fidelity^{-}\)) and accuracy \(ACC_{exp}\) in most settings where \(R\) ranges from 0.5 to 0.8. While OrphicX marginally outperforms ACGAN-GNNExplainer in terms of explanation accuracy \(ACC_{exp}\) when \(R=0.9\), its fidelity lags behind. However, maintaining high fidelity without sacrificing accuracy is crucial when explaining GNNs in practice. From this perspective, our method shows an obvious advantage over the others. Similarly, from Table 5, one can observe that ACGAN-GNNExplainer consistently outperforms the other competitors in terms of fidelity and accuracy across different \(R\). Our method consistently yields higher \(Fidelity^{+}\) scores, suggesting that our generated explanations successfully cover the important subgraphs. On the other hand, our method achieves lower \(Fidelity^{-}\) scores compared to the other methods. This highlights the sufficiency of our explanations, as they effectively convey the necessary information for accurate predictions while mitigating inconsequential noise. Furthermore, in terms of accuracy, our method consistently yields higher explanation accuracy compared with the other methods, underscoring its proficiency in effectively capturing the underlying rationale of the GNN model.
\begin{table}
\begin{tabular}{c|c c c|c c c|c c c|c c c|c c c}
\hline \hline
R & \multicolumn{3}{c|}{0.5} & \multicolumn{3}{c|}{0.6} & \multicolumn{3}{c|}{0.7} & \multicolumn{3}{c|}{0.8} & \multicolumn{3}{c}{0.9} \\
\cline{2-16}
(edge ratio) & \(Fid^{+}\) & \(Fid^{-}\) & \(ACC_{exp}\) & \(Fid^{+}\) & \(Fid^{-}\) & \(ACC_{exp}\) & \(Fid^{+}\) & \(Fid^{-}\) & \(ACC_{exp}\) & \(Fid^{+}\) & \(Fid^{-}\) & \(ACC_{exp}\) & \(Fid^{+}\) & \(Fid^{-}\) & \(ACC_{exp}\) \\
\hline
GNNExplainer & 0.3618 & **0.2535** & **0.6175** & 0.3825 & 0.2742 & 0.5968 & 0.3963 & 0.2396 & 0.6313 & 0.3641 & 0.1774 & 0.6935 & 0.3641 & 0.0899 & 0.7811 \\
Gem & 0.3018 & 0.2972 & 0.5737 & 0.3295 & 0.2696 & 0.6014 & 0.2857 & 0.2120 & 0.6590 & 0.2581 & 0.1475 & 0.7235 & 0.2120 & 0.0806 & 0.7903 \\
OrphicX & 0.2419 & 0.4171 & 0.4539 & 0.2949 & 0.3111 & 0.5599 & 0.2995 & 0.2465 & 0.6244 & 0.3157 & 0.1613 & 0.7097 & 0.2949 & **0.0599** & **0.8111** \\
Our Method & **0.3963** & **0.2535** & **0.6175** & **0.3828** & **0.2673** & **0.6037** & **0.3986** & **0.1636** & **0.7074** & **0.3602** & **0.1037** & **0.7673** & **0.3871** & 0.0806 & 0.7903 \\
\hline \hline
\end{tabular}
\end{table}
Table 4. The fidelity and accuracy of explanations on the Mutagenicity dataset: \(Fid^{+}(\uparrow)\), \(Fid^{-}(\downarrow)\), \(ACC_{exp}(\uparrow)\).

\begin{table}
\begin{tabular}{c|c c c|c c c|c c c|c c c|c c c}
\hline \hline
R & \multicolumn{3}{c|}{0.5} & \multicolumn{3}{c|}{0.6} & \multicolumn{3}{c|}{0.7} & \multicolumn{3}{c|}{0.8} & \multicolumn{3}{c}{0.9} \\
\cline{2-16}
(edge ratio) & \(Fid^{+}\) & \(Fid^{-}\) & \(ACC_{exp}\) & \(Fid^{+}\) & \(Fid^{-}\) & \(ACC_{exp}\) & \(Fid^{+}\) & \(Fid^{-}\) & \(ACC_{exp}\) & \(Fid^{+}\) & \(Fid^{-}\) & \(ACC_{exp}\) & \(Fid^{+}\) & \(Fid^{-}\) & \(ACC_{exp}\) \\
\hline
GNNExplainer & 0.3358 & 0.2749 & 0.5961 & 0.3625 & 0.2603 & 0.6107 & 0.3844 & 0.1922 & 0.6788 & 0.3747 & 0.1095 & 0.7616 & 0.3236 & 0.0584 & 0.8127 \\
Gem & 0.3796 & 0.3066 & 0.5645 & 0.4307 & 0.2628 & 0.6083 & 0.4282 & 0.1873 & 0.6837 & 0.4404 & 0.1192 & 0.7518 & 0.3212 & 0.0389 & 0.8321 \\
OrphicX & 0.3114 & 0.3090 & 0.5620 & 0.3431 & 0.3236 & 0.5474 & 0.3382 & 0.2628 & 0.6083 & 0.3698 & 0.1630 & 0.7080 & 0.3139 & 0.0608 & 0.8102 \\
Our Method & **0.4015** & **0.2141** & **0.6569** & **0.4523** & **0.2214** & **0.6496** & **0.4453** & **0.1849** & **0.6861** & **0.4672** & **0.0779** & **0.7932** & **0.3942** & **0.0254** & **0.8446** \\
\hline \hline
\end{tabular}
\end{table}
Table 5. The fidelity and accuracy of explanations on the NCI1 dataset: \(Fid^{+}(\uparrow)\), \(Fid^{-}(\downarrow)\), \(ACC_{exp}(\uparrow)\).

In general, these results highlight the effectiveness of our proposed method in producing faithful explanations.

### Qualitative Analysis

Qualitative evaluation is another effective way to compare explanations generated by different explainers. Here, we visualize two examples of explanations on NCI1 with \(R=0.5\): one graph that the target GNN model \(f\) classifies correctly and one that it misclassifies. We investigate the factors that affect the predictions of the target GNN model \(f\), leading either to a correct prediction or to a wrong one. Specifically, when the target GNN model \(f\) yields a correct prediction (e.g., the first-row visualization example in Figure 2), our objective is to provide an explanation that highlights the key elements leading to the correct prediction.
Conversely, when the target GNN model \(f\) produces an incorrect prediction (e.g., the second-row visualization example in Figure 2), we hope to offer an explanation that elucidates the factors contributing to the incorrect prediction. Therefore, our goal is to ensure that the explanation generated by our proposed method aligns well with the prediction made by the target GNN model \(f\). In particular, when the target GNN model \(f\) accurately predicts the label for a given graph, we expect our explanation to yield the same prediction. As illustrated in the first row of Figure 2, we observe that GNNExplainer, OrphicX, and ACGAN-GNNExplainer provide correct explanations for the graph that the GNN correctly predicts. However, it is worth noting that the explanation subgraph generated by ACGAN-GNNExplainer exhibits the closest resemblance to the real explanation subgraph extracted in the preprocessing phase. Furthermore, when examining another graph for which the target GNN model \(f\) makes an incorrect prediction, we find that only ACGAN-GNNExplainer is capable of producing a correct explanation. Notably, ACGAN-GNNExplainer tends to select molecules other than the Cl circles as part of the explanation subgraph, whereas the other methods we compared tend to include the Cl molecule circle in the explanation subgraph. Visually, our method demonstrates a higher degree of similarity to the actual explanation in comparison to the other competing methods. This observation provides additional evidence supporting the efficacy of our method in producing faithful explanations.

## 5. Conclusion

Uncovering the intrinsic operational mechanisms of a GNN is of paramount importance in bolstering trust in model predictions, ensuring the reliability of real-world applications, and advancing the establishment of trustworthy GNNs. In pursuit of these objectives, many methods have emerged in recent years. Although they demonstrate commendable functionality in certain aspects, most of them struggle to obtain good performance on real-world datasets.

Figure 2. The explanation visualization on NCI1 when \(R=0.5\). \(f(\cdot)\rightarrow\{0,1\}\) denotes predictions made by the target GNN model \(f\). The \(1^{st}\) column contains the initial graph. The \(2^{nd}\) column showcases the real explanation that we obtained during the preprocessing stage. The \(3^{rd}\) to \(6^{th}\) columns are the explanations produced by GNNExplainer, Gem, OrphicX, and ACGAN-GNNExplainer, respectively. On analyzing the first row, we observe that GNNExplainer, OrphicX, and ACGAN-GNNExplainer successfully obtain explanations that are correctly classified by the target GNN model \(f\). However, upon examining the visualization of the explanation subgraph, it is obvious that the explanation produced by ACGAN-GNNExplainer exhibits the closest resemblance to the real explanations. Moving on to the second row, we find that ACGAN-GNNExplainer tends to select molecules other than the Cl circle as part of the explanation subgraph. In contrast, other competitors have a tendency to include the Cl molecule circle as part of the explanation subgraph.

To address this limitation, in this paper we propose an ACGAN-based explainer, dubbed ACGAN-GNNExplainer, for graph neural networks.
This framework comprises a generator and a discriminator: the generator produces the corresponding explanations for the original input graphs, while the discriminator monitors the generation process and provides feedback to the generator to ensure the fidelity and reliability of the generated explanations. To assess the effectiveness of our proposed method, we conducted comprehensive experiments on synthetic and real-world graph datasets, performing fidelity and accuracy comparisons with other representative GNN explainers. The experimental findings establish the superior performance of our proposed ACGAN-GNNExplainer in terms of its ability to generate explanations with high fidelity and accuracy for GNN models, especially on real-world datasets.
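Although the paper's generator and discriminator operate on graphs, the adversarial-plus-auxiliary-classifier objective at the heart of any ACGAN can be illustrated compactly. The toy sketch below uses vector data and PyTorch; all architecture sizes and names are our own illustrative assumptions, not the paper's implementation.

```
import torch
import torch.nn as nn

# Toy ACGAN: the generator conditions on a class label; the discriminator
# outputs both a real/fake score and a class prediction (auxiliary classifier).
n_classes, z_dim, x_dim = 2, 16, 32

class G(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(n_classes, z_dim)
        self.net = nn.Sequential(nn.Linear(z_dim, 64), nn.ReLU(), nn.Linear(64, x_dim))
    def forward(self, z, y):
        return self.net(z + self.emb(y))

class D(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(x_dim, 64), nn.ReLU())
        self.adv = nn.Linear(64, 1)            # real/fake head
        self.cls = nn.Linear(64, n_classes)    # auxiliary classification head
    def forward(self, x):
        h = self.body(x)
        return self.adv(h), self.cls(h)

g, d = G(), D()
opt_g = torch.optim.Adam(g.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(d.parameters(), lr=1e-3)
bce, ce = nn.BCEWithLogitsLoss(), nn.CrossEntropyLoss()

for step in range(100):
    y = torch.randint(0, n_classes, (8,))
    real = torch.randn(8, x_dim) + y[:, None].float()   # toy "real" data
    fake = g(torch.randn(8, z_dim), y)

    # Discriminator: adversarial loss plus auxiliary classification loss.
    adv_r, cls_r = d(real)
    adv_f, cls_f = d(fake.detach())
    loss_d = (bce(adv_r, torch.ones_like(adv_r)) + bce(adv_f, torch.zeros_like(adv_f))
              + ce(cls_r, y) + ce(cls_f, y))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: fool the discriminator while matching the conditioned class.
    adv_f, cls_f = d(fake)
    loss_g = bce(adv_f, torch.ones_like(adv_f)) + ce(cls_f, y)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

The design point mirrored here is that the discriminator has two heads, an adversarial real/fake score and an auxiliary class prediction, and both terms back-propagate into the generator, which is what lets the discriminator "monitor" the generation process.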
2309.10164
Asynchronous Perception-Action-Communication with Graph Neural Networks
Collaboration in large robot swarms to achieve a common global objective is a challenging problem in large environments due to limited sensing and communication capabilities. The robots must execute a Perception-Action-Communication (PAC) loop -- they perceive their local environment, communicate with other robots, and take actions in real time. A fundamental challenge in decentralized PAC systems is to decide what information to communicate with the neighboring robots and how to take actions while utilizing the information shared by the neighbors. Recently, this has been addressed using Graph Neural Networks (GNNs) for applications such as flocking and coverage control. Although conceptually, GNN policies are fully decentralized, the evaluation and deployment of such policies have primarily remained centralized or restrictively decentralized. Furthermore, existing frameworks assume sequential execution of perception and action inference, which is very restrictive in real-world applications. This paper proposes a framework for asynchronous PAC in robot swarms, where decentralized GNNs are used to compute navigation actions and generate messages for communication. In particular, we use aggregated GNNs, which enable the exchange of hidden layer information between robots for computational efficiency and decentralized inference of actions. Furthermore, the modules in the framework are asynchronous, allowing robots to perform sensing, extracting information, communication, action inference, and control execution at different frequencies. We demonstrate the effectiveness of GNNs executed in the proposed framework in navigating large robot swarms for collaborative coverage of large environments.
Saurav Agarwal, Alejandro Ribeiro, Vijay Kumar
2023-09-18T21:20:50Z
http://arxiv.org/abs/2309.10164v1
# Asynchronous Perception-Action-Communication with Graph Neural Networks ###### Abstract Collaboration in large robot swarms to achieve a common global objective is a challenging problem in large environments due to limited sensing and communication capabilities. The robots must execute a Perception-Action-Communication (PAC) loop--they perceive their local environment, communicate with other robots, and take actions in real time. A fundamental challenge in decentralized PAC systems is to decide _what_ information to communicate with the neighboring robots and _how_ to take actions while utilizing the information shared by the neighbors. Recently, this has been addressed using Graph Neural Networks (GNNs) for applications such as flocking and coverage control. Although conceptually, GNN policies are fully decentralized, the evaluation and deployment of such policies have primarily remained centralized or restrictively decentralized. Furthermore, existing frameworks assume sequential execution of perception and action inference, which is very restrictive in real-world applications. This paper proposes a framework for asynchronous PAC in robot swarms, where decentralized GNNs are used to compute navigation actions and generate messages for communication. In particular, we use aggregated GNNs, which enable the exchange of hidden layer information between robots for computational efficiency and decentralized inference of actions. Furthermore, the modules in the framework are asynchronous, allowing robots to perform sensing, extracting information, communication, action inference, and control execution at different frequencies. We demonstrate the effectiveness of GNNs executed in the proposed framework in navigating large robot swarms for collaborative coverage of large environments. Graph Neural Networks, Decentralized Control, Multi-Robot Systems, Robot Swarms ## I Introduction Decentralized collaboration for navigation of robot swarms through an environment requires high-fidelity algorithms that can efficiently and reliably handle Perception-Action-Communication (PAC) in a feedback loop (Figure 1). The primary challenge in such systems is to decide _what_ a robot should communicate to its neighbors and _how_ to use the communicated information to take appropriate actions. Graph Neural Networks (GNNs) [1] are particularly suitable for this task as they can operate on a communication graph and can learn to aggregate information from neighboring robots to take decisions [2, 3]. They have been shown to be an effective learning-based approach for several multi-robot applications, such as flocking [2, 4], coverage control [5], path planning [6], and target tracking [7]. Furthermore, GNNs exhibit several desirable properties for decentralized systems [8]: (1) _transferability_ to new graph topologies not seen in the training set, (2) _scalability_ to large teams of robots, and (3) _stability_ to graph deformations due to positioning errors. Neural networks on graphs have been developed in the past decade for a variety of applications such as citation networks [1], classification of protein structures [9], and predicting power outages [10]. A fundamental difference between these applications and the collaboration of robot swarms is that the graph is generally static, whereas the robots are continuously moving, resulting in a dynamic and sparse communication graph. 
Moreover, control policies are executed on robots in real time with limited computational capabilities, which makes it imperative to provide an efficient framework for decentralized inference. Some of these challenges have been addressed in recent works on GNNs for multi-robot systems. Tolstaya _et al._[2] proposed an aggregated GNN model that uses aggregated features with hidden outputs of the internal network layers for decentralized GNNs. Gama _et al._[4] proposed a framework for learning GNN models using imitation learning for decentralized controllers. Despite these recent advances and the conceptually decentralized nature of GNNs, evaluation and deployment of GNN policies have largely remained centralized or assume fully connected communication graphs [5, 11]. A primary reason for this is that training of GNNs is usually performed in a centralized setting for efficiency, and there is a lack of asynchronous PAC frameworks for evaluation and deployment of these policies in decentralized settings.

Fig. 1: Perception-Action-Communication (PAC) in robots: The perception module utilizes the sensor data to perform tasks such as SLAM, semantic mapping, and object detection. The communication module is responsible for the exchange of information between robots via message buffers, thereby enabling the coordination and collaboration of robots. Limited communication capabilities restrict robots to exchanging messages only with neighboring robots. The planning and control module takes actions from the action module, plans a trajectory, and sends control commands to the actuators. In this paper, we use Graph Neural Networks (GNNs) to compute the messages to be exchanged, the aggregation of received messages, and action inferencing.

Recently, Blumenkamp _et al._[12] proposed a framework for running decentralized GNN-based policies on multi-robot systems, along with a taxonomy of network configurations. While our framework focuses more on the architecture design for PAC systems and less on networking protocols, a key difference is that we enable the asynchronous execution of different modules in the framework. In real-world applications with robots running PAC loops, a robot may need to perform perception tasks such as sensing, post-processing of sensor data, constructing semantic maps, and SLAM, which results in a computationally expensive perception module. Most prior work on GNNs for multi-robot systems [2, 4] does not consider the entire PAC loop and focuses on action inference. This results in a sequential computation--first, the system evolves, providing a new state, then the GNN module is executed to generate the message and action, and the process is repeated. In robotics, it is desirable to perform computations asynchronously, which is generally already done for low-level sensing, communication, and control. This motivates the need for a fully asynchronous PAC framework, where the GNN module can be executed at a higher frequency than the perception module. An asynchronous GNN module enables multiple rounds of GNN message aggregation, thereby diffusing information over a larger portion of the communication graph. Furthermore, perception tasks, execution of the current action, and computation of the next action can all be performed asynchronously and concurrently. This has the potential to significantly improve the performance of the system, especially on multi-core processors.
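As a minimal illustration of this multi-rate design, a slow perception loop and a fast GNN loop can share the latest features through a lock-protected buffer. The sketch below uses Python threads as stand-ins for the framework's C++/ROS2 executors; the rates, names, and step counts are our own arbitrary choices.

```
import threading
import time

class SharedState:
    """Latest perception feature, shared between asynchronous loops."""
    def __init__(self):
        self.lock = threading.Lock()
        self.feature = None

def perception_loop(state, hz=2.0, steps=6):
    for t in range(steps):
        time.sleep(1.0 / hz)            # stand-in for expensive sensing/CNN work
        with state.lock:
            state.feature = f"feature@{t}"

def gnn_loop(state, hz=10.0, steps=30):
    for _ in range(steps):
        time.sleep(1.0 / hz)
        with state.lock:
            feat = state.feature        # always the most recent feature
        # aggregate neighbor messages / infer action using `feat` here
        print("GNN step using", feat)

state = SharedState()
threads = [threading.Thread(target=perception_loop, args=(state,)),
           threading.Thread(target=gnn_loop, args=(state,))]
for th in threads:
    th.start()
for th in threads:
    th.join()
```

The GNN loop always consumes the most recent feature available instead of blocking on perception, which is precisely what permits several rounds of message aggregation per perception update.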
The primary _contribution_ of this paper is a learnable PAC framework composed of four asynchronous modules: perception, inter-robot communication, decentralized GNN message aggregation and inference, and a low-level controller. The two salient features of the framework are: **(1) Decentralized GNN:** We leverage the aggregated GNN model [2] for decentralized message aggregation and inferencing. The model uses aggregated features comprising hidden outputs of the internal network layers. As the hidden layer outputs from neighboring robots are encoded in the messages, robots need not recompute graph convolutions performed by other robots, thereby distributing the computation over the robots. **(2) Asynchronous modules:** The framework is designed to execute perception and GNN computations asynchronously. This allows the message aggregation and inferencing to be performed at a much higher frequency than the computationally intensive perception tasks. We also provide a ROS2-compatible [13] open-source implementation of the framework, written primarily in C++, for PAC in robot swarms using GNNs.

## II Navigation Control Problem

The navigation control problem considered in this paper concerns a homogeneous team of \(N\) mobile robots that needs to navigate through a \(d\)-dimensional environment \(\mathcal{W}\subseteq\mathbb{R}^{d}\) to minimize the expected value of an application-defined cost function. We denote the set of robots by \(\mathcal{V}=\{1,\ldots,N\}\), with the position of the \(i\)-th robot denoted by \(\mathbf{p}_{i}(t)\in\mathcal{W}\) at time \(t\). The state of the \(i\)-th robot at time \(t\) is denoted by \(\mathbf{x}_{i}(t)\in\mathbb{R}^{m}\), which in addition to robot positions may comprise velocities and other sensor measurements. The states of the multi-robot system and the control actions are collectively denoted by: \[\mathbf{X}(t)=\begin{bmatrix}\mathbf{x}_{1}(t)\\ \mathbf{x}_{2}(t)\\ \vdots\\ \mathbf{x}_{N}(t)\end{bmatrix}\in\mathbb{R}^{N\times m},\quad\mathbf{U}(t)=\begin{bmatrix}\mathbf{u}_{1}(t)\\ \mathbf{u}_{2}(t)\\ \vdots\\ \mathbf{u}_{N}(t)\end{bmatrix}\in\mathbb{R}^{N\times d}.\] We consider the multi-robot system to evolve as per a Markov model \(\mathbb{P}\), i.e., the state of the system at time \(t+\Delta t\) depends only on the state of the system and the control actions at time \(t\), where \(\Delta t\) is the time required for a single step. \[\mathbf{X}(t+\Delta t)=\mathbb{P}(\mathbf{X}\mid\mathbf{X}(t),\mathbf{U}(t)) \tag{1}\] The control problem can then be posed as an optimization problem, where the system incurs a cost given by a cost function \(c(\mathbf{X}(t),\mathbf{U}(t))\) for state \(\mathbf{X}(t)\) and control action \(\mathbf{U}(t)\) taken at time \(t\), and the goal is to find a policy \(\Pi_{c}^{*}\) that minimizes the expected cost [4]: \[\Pi_{c}^{*}=\operatorname*{argmin}_{\Pi_{c}}\mathbb{E}\left[\sum_{t=0}^{\infty}\gamma^{t}c(\mathbf{X}(t),\mathbf{U}(t))\right].\] Here, the control actions are drawn from a conditional distribution \(\mathbf{U}(t)=\Pi_{c}(\mathbf{U}\mid\mathbf{X}(t))\). Note that the policy \(\Pi_{c}^{*}\) is centralized, as the control actions, in the above formulation, require complete knowledge of the state of the system.

### _Decentralized Navigation Control_

In the decentralized navigation control problem, the robots take control actions based on their local information and the information communicated by their neighbors.
Thus, we consider that the robots are equipped with a communication device for exchanging information with other robots that are within a given communication radius \(r_{c}\). The problem can then be formulated on a _communication graph_ \(\mathcal{G}=(\mathcal{V},\mathcal{E})\), where \(\mathcal{V}\) represents the set of robots and \(\mathcal{E}\) represents the communication topology of the robots. A robot \(i\) can communicate with another robot \(j\) if and only if their relative distance is less than the communication radius \(r_{c}\), i.e., an edge \(e=(i,j)\in\mathcal{E}\) exists if and only if \(\|\mathbf{p}_{i}-\mathbf{p}_{j}\|_{2}\leq r_{c}\). We assume bidirectional communication and, therefore, the communication graph is assumed to be undirected.

Fig. 2: Learning Perception-Action-Communication (PAC) loops using an architecture composed of three Neural Networks (NN): The perception NN, usually a Convolution NN, computes features using the sensor data obtained through perception. The core of the architecture is a collaborative NN, a Graph NN in our case, which computes the messages to be exchanged between robots and aggregates received messages. The GNN also computes features for the action NN, often a shallow Multi-Layer Perceptron (MLP), the output of which is sent to a planning and control module.

A robot \(i\) can communicate with its neighbors \(\mathcal{N}(i)\), defined as \(\mathcal{N}(i)=\{j\in\mathcal{V}\mid(j,i)\in\mathcal{E}\}\). Information can also be propagated through multi-hop communication. The set of \(k\)-hop neighbors that robot \(i\) can reach within at most \(k\) communication hops is defined recursively as: \[\mathcal{N}_{k}(i)=\mathcal{N}_{k-1}(i)\cup\bigcup_{j\in\mathcal{N}_{k-1}(i)}\mathcal{N}(j),\quad\text{with }\mathcal{N}_{0}(i)=\{i\}.\] Let \(\frac{1}{\Delta t_{c}}\) be the frequency at which the robots communicate with each other, i.e., \(\Delta t_{c}\) is the time required for a single communication to take place. Then the total information acquired by robot \(i\) is given by: \[\mathcal{X}_{i}(t)=\bigcup_{k=0}^{\lfloor t/\Delta t_{c}\rfloor}\{\mathbf{x}_{j}(t-k\Delta t_{c})\mid j\in\mathcal{N}_{k}(i)\}.\] Now, the control actions can be defined in terms of a decentralized control policy \(\pi_{i}\): \[\mathbf{u}_{i}(t)=\pi_{i}(\mathbf{u}_{i}\mid\mathcal{X}_{i}(t)).\] Denoting by \(\mathbf{U}(t)=[\mathbf{u}_{1}(t),\dots,\mathbf{u}_{N}(t)]^{\top}\) and \(\mathcal{X}(t)=[\mathcal{X}_{1}(t),\dots,\mathcal{X}_{N}(t)]^{\top}\), the decentralized control problem can be formulated as: \[\Pi^{*}=\operatorname*{argmin}_{\Pi}\mathbb{E}\left[\sum_{t=0}^{\infty}\gamma^{t}c(\mathcal{X}(t),\mathbf{U}(t))\right].\] Computing the optimal decentralized policy is much more challenging than the centralized control, as it involves the trajectory histories, unlike the Markov centralized controller [4]. The problem is computationally intractable for complex systems [14], which motivates the use of learning-based approaches. GNNs, in particular, are well-suited for decentralized control of multi-robot systems, as they operate on the communication graph topology and can be used to learn a decentralized control policy from training data generated using a centralized controller. **Remark 1**.: **Asynchronous Communication and Inference:** The formulation for the centralized and decentralized control problems follows the structure given by Gama _et al._[4].
However, similar to several other works on decentralized control, the formulation in [4] assumes that a single step of message aggregation and diffusion is performed at the same time step as the evolution of the system and perception tasks. In contrast, our formulation separates these tasks and executes them asynchronously at a higher frequency by explicitly parametrizing two different time steps. Asynchronous and decentralized execution of modules results in higher fidelity of the overall system.

## III Decentralized Graph Neural Networks

Graph Neural Networks (GNNs) [1] are layered information processing architectures that operate on a graph structure and make inferences by diffusing information through the graph. In the context of multi-robot systems, the graph is imposed by the communication graph, i.e., the graph nodes \(\mathcal{V}\) represent the robots, and the edges represent the communication links \(\mathcal{E}\), as discussed in Section II-A. In this paper, we focus on Graph Convolutional Neural Networks (GCNNs), a layered composition of convolution graph filters with point-wise nonlinearities (\(\sigma\)). The architecture is defined by \(L\) layers, \(K\) hops of message diffusion, and a _graph shift operator_ \(\mathbf{S}\in\mathbb{R}^{N\times N},\,N=|\mathcal{V}|\), that is based on the communication graph. The elements \([\mathbf{S}]_{ij}\) can be non-zero only if \((i,j)\in\mathcal{E}\). The input to the GCNN is a collection of features \(\mathbf{X}_{0}\in\mathbb{R}^{N\times d_{0}}\), where each element \(\mathbf{x}_{i}\) is a feature vector for robot \(i\), \(\forall i\in\mathcal{V}\). The weight parameters for learning on GNNs are given by \(\mathbf{H}_{lk}\in\mathbb{R}^{d_{(l-1)}\times d_{l}},\,\forall l\in\{1,\cdots,L\},\,\forall k\in\{1,\cdots,K\}\), where \(d_{l}\) is the dimension of the hidden layer \(l\), with \(d_{0}\) as the dimension of the input feature vectors. The convolution graph filters are polynomials of the graph shift operator \(\mathbf{S}\) with coefficients defined by the input and the weight parameters \(\mathbf{H}_{lk}\). The output \(\mathbf{Z}_{l}\) of the filter is processed by a point-wise non-linearity \(\sigma\) to obtain the output of layer \(l\), i.e., \(\mathbf{X}_{l}=\sigma(\mathbf{Z}_{l})\). The final output of the GCNN is given by \(\mathbf{X}_{L}\), and the entire network is denoted by \(\Phi(\mathbf{X};\mathbf{S},\mathcal{H})\). Figure 3 shows an architecture with two layers. To see that GNNs are suitable for decentralized robot swarms, consider the computation \(\mathbf{Y}_{kl}=(\mathbf{S})^{k}\mathbf{X}_{(l-1)}\) for some layer \(l\). These computations can be done recursively: \(\mathbf{Y}_{kl}=\mathbf{S}(\mathbf{S})^{(k-1)}\mathbf{X}_{(l-1)}=\mathbf{S}\mathbf{Y}_{(k-1)l}\). For a robot \(i\), the vector \((\mathbf{y}_{i})_{kl}=[\mathbf{Y}_{kl}]_{i}\) can be computed as: \[(\mathbf{y}_{i})_{kl}=\sum_{j\in\mathcal{N}(i)}[\mathbf{S}]_{ij}(\mathbf{y}_{j})_{(k-1)l} \tag{2}\]

Fig. 3: A Graph Convolution Neural Network (GCNN) with two layers. Each layer is made up of a convolution graph filter followed by a point-wise non-linearity. GNNs are particularly suitable for decentralized robot swarms as the computations respect the locality of the communication graph.

Here, \(\mathcal{N}(i)\) is the set of neighbors of robot \(i\), and \([\mathbf{S}]_{ij}\) is the \((i,j)\)-th element of the graph shift operator \(\mathbf{S}\).
Since the value of \([\mathbf{S}]_{ij}\) is non-zero only if \((i,j)\in\mathcal{E}\), the computation of \((\mathbf{y}_{i})_{kl}\) only involves the features of the neighbors of robot \(i\), i.e., the computation respects the locality of the communication graph. Thus, robot \(i\) needs only to receive \((\mathbf{y}_{j})_{(k-1)l}\) from its neighbors to compute \((\mathbf{y}_{i})_{kl}\), which makes the overall system decentralized. The collection of hidden feature vectors \((\mathbf{y}_{i})_{kl}\) forms the _aggregated message_ [2] \(\mathbf{Y}_{i}\) for robot \(i\), which is precisely the information robot \(i\) needs to communicate to its neighboring robots. **Definition 1**.: _Aggregated Message \(\mathbf{Y}_{i}\):_ The aggregated message for a robot \(i\) is defined as: \[\mathbf{Y}_{i}=\begin{bmatrix}\left(\mathbf{y}_{i}\right)_{01}=\left(\mathbf{x}_{i}\right)_{0}&\left(\mathbf{y}_{i}\right)_{11}&\cdots&\left(\mathbf{y}_{i}\right)_{(K-1)1}\\ \vdots&\vdots&\ddots&\vdots\\ \left(\mathbf{y}_{i}\right)_{0l}=\left(\mathbf{x}_{i}\right)_{l-1}&\left(\mathbf{y}_{i}\right)_{1l}&\cdots&\left(\mathbf{y}_{i}\right)_{(K-1)l}\\ \vdots&\vdots&\ddots&\vdots\\ \left(\mathbf{y}_{i}\right)_{0L}=\left(\mathbf{x}_{i}\right)_{L-1}&\left(\mathbf{y}_{i}\right)_{1L}&\cdots&\left(\mathbf{y}_{i}\right)_{(K-1)L}\end{bmatrix}\] where \(\left(\mathbf{x}_{i}\right)_{0}\) is the input feature for robot \(i\), \(\left(\mathbf{x}_{i}\right)_{l}\) is the output of layer \(l\) of the GNN, and \[\begin{split}\left(\mathbf{y}_{i}\right)_{kl}=\sum_{j\in\mathcal{N}(i)}\left[\mathbf{S}\right]_{ij}(\mathbf{y}_{j})_{(k-1)l},\\ \forall k\in\{1,\cdots,K-1\},\,\forall l\in\{1,\cdots,L\}\end{split} \tag{3}\] Note that the dimension of each vector in a row is the same, but the dimensions across rows may differ, i.e., \(\mathbf{Y}_{i}\) is a collection of matrices and not a proper tensor. The overall algorithm for message aggregation and inference using a GNN is given in Algorithm 1. The output of the GNN is the output of the last layer \(\mathbf{X}_{L}\). To completely diffuse a signal \(\mathbf{x}_{i}\) across the network, each robot needs to exchange messages and execute Algorithm 1 \(KL\) times. This would require the GNN message aggregation to be executed at a frequency that is \(KL\) times higher than the frequency of perception. Depending on the application and the number of layers of the network, this may not always be feasible. However, due to the stability properties of GNNs, they perform well even when the message aggregation is executed at a lower frequency.
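Before the pseudocode of Algorithm 1 below, a minimal NumPy sketch of the per-robot update in Eqs. (2)-(3) may make the data flow concrete. The weight indexing \(\mathbf{H}[l][k]\) for hops \(k=0,\ldots,K-1\) and all variable names are our own illustrative conventions, not the framework's code.

```
import numpy as np

def gnn_aggregate_and_infer(x_i, neighbor_msgs, S_row, H, b):
    """One local step for robot i: build the aggregated message Y_i and
    infer the layer outputs, using only the neighbors' messages Y_j.

    x_i           : (d0,) input feature of robot i
    neighbor_msgs : dict j -> Y_j, with Y_j[l][k] the k-hop vector of layer l
    S_row         : dict j -> [S]_ij for each neighbor j
    H             : H[l][k] weight matrices, l = 0..L-1, k = 0..K-1
    b             : b[l] bias vectors
    """
    L, K = len(H), len(H[0])
    Y_i = [[None] * K for _ in range(L)]
    x = x_i
    for l in range(L):
        Y_i[l][0] = x                          # zero-hop entry: the layer input
        z = Y_i[l][0] @ H[l][0]
        for k in range(1, K):
            y = np.zeros_like(x)
            for j, Y_j in neighbor_msgs.items():
                y += S_row[j] * Y_j[l][k - 1]  # Eq. (3): one-hop exchange only
            Y_i[l][k] = y
            z = z + y @ H[l][k]
        x = np.tanh(z + b[l])                  # point-wise non-linearity
    return x, Y_i                              # inference output and new message
```

Note that each hop-\(k\) entry is obtained from the neighbors' hop-\((k-1)\) entries, so no robot ever recomputes a graph convolution already performed elsewhere.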
```
Input  : Messages \(\{\mathbf{Y}_{j}\mid j\in\mathcal{N}(i)\}\), model parameters \(\mathcal{H}\)
Output : Inference output \((\mathbf{x}_{i})_{L}\), messages \(\mathbf{Y}_{i}\)
\((\mathbf{x}_{i})_{0}\leftarrow\mathbf{x}_{i}\)                          // Input feature for robot \(i\)
for \(l=1\) to \(L\) do
    \((\mathbf{y}_{i})_{0l}\leftarrow(\mathbf{x}_{i})_{l-1}\)
    \(\mathbf{z}_{l}\leftarrow(\mathbf{y}_{i})_{0l}\,\mathbf{H}_{l0}\)
    for \(k=1\) to \(K-1\) do
        \((\mathbf{y}_{i})_{kl}\leftarrow\mathbf{0}\)
        for \(j\in\mathcal{N}(i)\) do
            \((\mathbf{y}_{i})_{kl}\leftarrow(\mathbf{y}_{i})_{kl}+[\mathbf{S}]_{ij}\,(\mathbf{y}_{j})_{(k-1)l}\)
        end for
        \(\mathbf{z}_{l}\leftarrow\mathbf{z}_{l}+(\mathbf{y}_{i})_{kl}\,\mathbf{H}_{lk}\)
    end for
    \(\mathbf{z}_{l}\leftarrow\mathbf{z}_{l}+\mathbf{b}_{l}\)              // If bias is used
    \((\mathbf{x}_{i})_{l}\leftarrow\sigma(\mathbf{z}_{l})\)              // Point-wise non-linearity
end for
\(\mathbf{Y}_{i}\leftarrow[\mathbf{x}_{i},(\mathbf{y}_{i})_{kl}]\)        // Def. 1
```
**Algorithm 1** GNN Aggregation and Inference

**Remark 2**.: The computation of each individual element of \((\mathbf{y}_{i})_{kl}\) is a single matrix multiplication, as the computation of \((\mathbf{y}_{j})_{(k-1)l}\) is already performed by the neighboring robot \(j\). Thus, aggregated GNNs naturally distribute computation across robots. Furthermore, the size of the aggregated message \(\mathbf{Y}_{i}\) is defined by the architecture of the GNN, the dimension of the input feature vector, and the dimension of the output. It is independent of the number of robots in the system, making it scalable to large robot swarms. These properties make aggregated GNNs suitable for decentralized robot swarms, and they are therefore used in the proposed framework for asynchronous PAC.

## IV Asynchronous PAC Framework

The framework is designed to efficiently perform decentralized and asynchronous Perception-Action-Communication (PAC) loops in robot swarms. It comprises four primary modules for different PAC subtasks that are executed asynchronously: (1) Perception, (2) Inter-Robot communication, (3) GNN message aggregation and inference, and (4) Low-level actuator controller. The asynchronous design of the framework is motivated by two main advantages: **Concurrent execution:** Asynchronous modules allow unrelated subtasks to be executed concurrently, especially the computationally expensive perception module and the relatively faster GNN module. As a result, the GNN module can perform several steps of message aggregation, thereby diffusing information over a larger number of nodes in the communication graph, while the perception module is still processing the sensor data. Furthermore, concurrent execution significantly reduces the computation time when executed on multi-core processors, thereby enabling better real-time performance. **Variable frequency:** Asynchronous modules allow different subtasks to be executed at different frequencies. In particular, the GNN module can be executed at a higher frequency than the perception module, which has not been possible in prior work. Similarly, as is often the case, the communication and the low-level actuator controller modules can run at a higher frequency than the GNN message aggregation module. Generally, a GNN policy computes a high-level control action, which is then executed by a low-level controller for several time steps.
Even in the case where the perception task is minimal, asynchronous execution allows the GNN module to perform several steps of message aggregation while the low-level controller is executing the previous control action. In prior work, the perception and GNN computations were considered to be executed synchronously, even if the communication and low-level controller are asynchronous, affecting the computation time and the number of message aggregations that can be performed by the GNN module. This is mitigated in our proposed framework by allowing asynchronous execution. We now describe the different modules of the framework.

### _Perception_

The perception module is responsible for acquiring the sensor data and processing it to obtain the input features for the GNN. The module may also contain application-specific tasks such as semantic mapping, object detection, and localization, which makes the module computationally expensive. The entire perception module is typically executed at a low frequency in applications that require significant computation. In our specific implementation for the coverage control problem in Section V, we use a CNN to generate a low-dimensional feature vector for the GNN module.

### _Inter-Robot Communication_

Robots _broadcast_ their message \(\mathbf{Y}_{i}\), for robot \(i\), to other robots within their communication range \(r_{c}\), i.e., the neighboring robots \(\mathcal{N}(i)\). They also receive messages from their neighbors: \(\mathbf{Y}_{j},\forall j\in\mathcal{N}(i)\). Generally, communication hardware may allow either receiving a message or transmitting a message at a given time. Thus, the communication module may need to be executed at twice the frequency of the GNN message aggregation module. We use two buffers to maintain messages: a transmitter buffer \(\mathrm{T_{x}}\), which stores the message to be transmitted, and a receiver buffer \(\mathrm{R_{x}}\), which stores the most recent message received from each neighbor. The module is composed of three submodules: a message buffer manager, a transmitter, and a receiver. _Message Buffer Manager:_ The message buffer manager handles the transmitter \(\mathrm{T_{x}}\) and receiver \(\mathrm{R_{x}}\) buffers. When a new message is generated by the GNN module, the message buffer manager performs five sequential actions: (1) momentarily locks the transmitter and receiver to avoid race conditions in writing and reading the buffers, (2) loads the new message \(\mathbf{Y}_{i}\), received from the GNN module, onto the transmitter buffer, (3) sends the contents of the receiver buffer to the GNN module, (4) clears the receiver buffer, and (5) releases the lock on the buffers. Since holding a lock on the communication buffers is not desirable, our implementation makes efficient use of _smart memory pointers_ in C++ to exchange contents between the GNN module and the buffers, i.e., the actual contents are not loaded into the buffers, but the pointers to the memory locations are exchanged. Clearing the receiver buffer is critical to ensure that old messages, which would have been used in the previous GNN message aggregation, are not considered further. _Transmitter:_ The transmitter submodule broadcasts the message \(\mathbf{Y}_{i}\), using the \(\mathrm{T_{x}}\) message buffer, to neighboring robots \(\mathcal{N}(i)\). Additionally, an identifier is attached to the message so that the receiving robot can replace old messages in the buffer with the most recent message from the same robot.
_Receiver:_ The receiver submodule receives the messages broadcast by the neighboring robots \(\mathcal{N}(i)\). If a message is received from a robot that already has a message in the receiver buffer, the message is overwritten with the most recent one, i.e., only the most recent message from a neighboring robot is stored in the receiver buffer. The size of the buffer needs to be dynamic, as it depends on the number of neighbors, which may change over time.

### _GNN Message Aggregation and Inference_

The GNN message aggregation module has two tasks: (1) generate messages to be communicated to neighboring robots in the next time step, and (2) perform inference for control actions. Our framework uses the aggregated GNN model, described in Section III. The system is fully decentralized, i.e., each robot has its own GNN model, and the inference is performed locally. An important attribute of the aggregated GNN model is that the size of the messages depends on the number of layers in the network and is independent of the number of robots. Thus, the system is highly scalable to large robot swarms. The module is executed at a higher frequency than the perception module.

### _Low-Level Controller_

The low-level controller interfaces the framework with the robot actuators. It receives the control action at the end of the computation of the GNN module and executes it for a pre-defined time interval. The controller is executed at a very high frequency to ensure that the control actions are reliably executed in real time while correcting the control commands using a feedback loop. Since the implementation of the framework is designed to work with ROS2, any existing package for low-level control can be used with the framework.

## V Evaluation on the Coverage Control Problem

We evaluate our approach for asynchronous Perception-Action-Communication (PAC) on the coverage control problem [15] for a swarm of 32 robots in simulation. Coverage control requires the collaboration of a robot swarm to provide sensor coverage to monitor a phenomenon or features of interest in an environment. An _importance density function_ (IDF) [5] is used to model the probability distribution of features of interest. The problem is widely studied in robotics and has applications in various domains, including mobile networking [16], surveillance [17], and target tracking [18]. We consider the decentralized setup where the robots have limited sensing and communication capabilities. Furthermore, the environment is not known a priori, and the robots use their sensors to make localized observations of the environment. Coverage control is posed as an optimization problem using _Voronoi partitions_ \(\mathcal{P}_{i}\) to assign each robot a distinct subregion of the environment [19, 20, 5]. The objective of the coverage problem is to minimize the cost of covering the environment, weighted by the IDF \(\Phi(\cdot)\); see [20] for details. \[\mathcal{J}(\mathbf{p}_{1},\ldots,\mathbf{p}_{|\mathcal{V}|})=\sum_{i=1}^{|\mathcal{V}|}\int_{\mathcal{P}_{i}}\|\mathbf{p}_{i}-\mathbf{q}\|^{2}\Phi(\mathbf{q})\,\mathrm{d}\mathbf{q} \tag{4}\] We model a 1024 m\(\times\)1024 m rectangular environment and robots that make 64 m\(\times\)64 m localized sensor observations. Each robot maintains a local map of size 256 m\(\times\)256 m with cumulatively added observations and an obstacle map of the same size for the boundaries of the environment.
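For reference, the cost in Eq. (4) can be approximated on a discretized environment, with each grid cell assigned to its nearest robot (a discrete Voronoi partition). The sketch below, with grid size and IDF chosen arbitrarily for illustration, is our own rendering and not the framework's implementation.

```
import numpy as np

def coverage_cost(robot_pos, idf, cell=1.0):
    """Grid-based evaluation of Eq. (4): each cell is assigned to its nearest
    robot and contributes the squared distance weighted by the IDF value."""
    h, w = idf.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel()], axis=1) * cell
    d2 = ((pts[:, None, :] - robot_pos[None, :, :]) ** 2).sum(-1)  # (cells, robots)
    return float((d2.min(axis=1) * idf.ravel()).sum() * cell ** 2)

# Toy usage: 3 robots in a 64 x 64 environment with a Gaussian IDF bump.
rng = np.random.default_rng(0)
robots = rng.uniform(0, 64, size=(3, 2))
yy, xx = np.mgrid[0:64, 0:64]
idf = np.exp(-((xx - 40) ** 2 + (yy - 20) ** 2) / (2 * 8 ** 2))
print(coverage_cost(robots, idf))
```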
Based on our contemporary work on coverage control, the local and obstacle maps are processed by a CNN. Additionally, the relative positions of neighboring robots within a communication radius of 128 m, obtained either through sensing or communication, are mapped onto a two-channel (\(x\) and \(y\)) normalized heatmap. These four maps are concatenated to form the input to the CNN, which is composed of three layers of latent size 32 each and generates a 32-dimensional feature vector for each robot. The sensing and processing of maps by the CNN constitute the perception module of the PAC framework. The output of the CNN is augmented with the normalized position of the robot to form a 34-dimensional feature vector, which is used as the input to the GNN with two layers of latent size 256 and \(K=3\) hops of communication. The output of the GNN is processed by a multi-layer perceptron (MLP), i.e., an action NN, with one hidden layer of latent size 32 to compute the control velocities for the robot. These CNN and GNN architectures are part of an ongoing work on coverage control1. Footnote 1: Appropriate references will be added in the final publication. The entire model is trained using imitation learning with the ground truth generated using a centralized clairvoyant Lloyd's algorithm that has complete knowledge of the IDF and the positions of the robots. We compare our approach with a decentralized and a centralized Lloyd's algorithm [19]. The decentralized algorithm is only aware of its own maps and the relative positions of neighboring robots, whereas the centralized algorithm has the combined knowledge of all the robots and their global positions. We executed these algorithms in our asynchronous PAC framework for four different operating frequencies (1.25 Hz to 5 Hz) of the perception module. Our approach performs significantly better than the decentralized and centralized algorithms; see Figure 4. We also evaluated our approach with noisy position information; see Figure 5. These results show the efficacy of asynchronous PAC systems for decentralized control of robot swarms.

## VI Conclusion

We presented a framework for asynchronous Perception-Action-Communication with Graph Neural Networks (GNNs). Perception comprises application-specific tasks such as sensing and mapping, while the action module is responsible for executing controls on the robot. The GNN bridges these two modules and provides learned messages for collaboration with other agents. The decentralized nature of the GNN enables scaling to large robot swarms.

Fig. 4: Evaluation of coverage control algorithms, within our decentralized PAC framework, for four different operating frequencies (1.25 Hz to 5 Hz) of system evolution, which includes all perception tasks. The decentralized GNN, with a CNN for perception, is trained using imitation learning with the ground truth generated using a clairvoyant Lloyd's algorithm. The plots show the coverage cost, averaged over 30 environments, for 600 time steps, normalized by the cost at the starting configuration of the system. Our approach significantly outperforms the decentralized and centralized Lloyd's algorithms.

Fig. 5: Evaluation of coverage algorithms with a Gaussian noise \(\epsilon\) added to the position of each robot, i.e., the sensed position \(\bar{\mathbf{p}}_{i}=\mathbf{p}_{i}+\epsilon\). The perception module is executed at 2.5 Hz for all four cases.
It is interesting that the performance of Lloyd's algorithm improves slightly with noise, as the noise leads to discovering more of the IDF; however, the algorithms then converge to a stable configuration only after a larger number of steps. The decentralized GNN approach outperforms the two Lloyd's algorithms in all cases, demonstrating the robustness of our approach to noisy information.

The asynchronous capability of our system allows the execution of GNN-based message aggregation and inferencing at a frequency in between those of the perception and action modules. We demonstrated the effectiveness of using learnable PAC policies in a decentralized manner for the coverage control problem. The framework will allow evaluation and deployment of asynchronous PAC systems with GNNs on large robot swarms in real-world applications. Future work entails validating the system on real robots--an essential yet challenging task due to the difficulties of operating a large number of robots. We also plan to evaluate our framework on other applications, such as flocking and target tracking, and to analyze the compatibility of our framework with other GNN architectures.
2309.10616
Enhancing quantum state tomography via resource-efficient attention-based neural networks
Resource-efficient quantum state tomography is one of the key ingredients of future quantum technologies. In this work, we propose a new tomography protocol combining standard quantum state reconstruction methods with an attention-based neural network architecture. We show how the proposed protocol is able to improve the averaged fidelity reconstruction over linear inversion and maximum-likelihood estimation in the finite-statistics regime, reducing at least by an order of magnitude the amount of necessary training data. We demonstrate the potential use of our protocol in physically relevant scenarios, in particular, to certify metrological resources in the form of many-body entanglement generated during the spin squeezing protocols. This could be implemented with the current quantum simulator platforms, such as trapped ions, and ultra-cold atoms in optical lattices.
Adriano Macarone Palmieri, Guillem Müller-Rigat, Anubhav Kumar Srivastava, Maciej Lewenstein, Grzegorz Rajchel-Mieldzioć, Marcin Płodzień
2023-09-19T13:46:21Z
http://arxiv.org/abs/2309.10616v2
# Enhancing quantum state tomography via resource-efficient attention-based neural networks

###### Abstract

Resource-efficient quantum state tomography is one of the key ingredients of future quantum technologies. In this work, we propose a new tomography protocol combining standard quantum state reconstruction methods with an attention-based neural network architecture. We show how the proposed protocol is able to improve the averaged fidelity reconstruction over linear inversion and maximum-likelihood estimation in the finite-statistics regime, reducing at least by an order of magnitude the amount of necessary training data. We demonstrate the potential use of our protocol in physically relevant scenarios, in particular, to certify metrological resources in the form of many-body entanglement generated during the spin squeezing protocols. This could be implemented with the current quantum simulator platforms, such as trapped ions, and ultra-cold atoms in optical lattices.

+ Footnote †: These authors contributed equally.

## I Introduction

Modern quantum technologies are fuelled by resources such as coherence, quantum entanglement, or Bell nonlocality. Thus, a necessary step to assess the advantage they may provide is the certification of the above features [1; 2; 3; 4; 5; 6; 7; 8; 9; 10]. The resource content of a preparation is revealed from the statistics (e.g. correlations) the device is able to generate. Within the quantum formalism, such data is encoded in the density matrix, which can only be reconstructed based on finite information from experimentally available probes - a process known as quantum state tomography (QST) [11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21]. Hence, QST is a prerequisite to any verification task. On the other hand, the second quantum revolution brought new experimental techniques to generate and control massively correlated states in many-body systems [22; 23; 24; 25; 26], challenging established QST protocols. Both reasons elicited an extremely active field of research that throughout the years offered a plethora of algorithmic techniques [27; 28; 29; 30; 31; 32; 33; 34; 35; 36]. Over recent years, machine learning (ML), artificial neural networks (NNs), and deep learning (DL) entered the field of quantum technologies [37], offering many new solutions to quantum problems, also in the context of QST [38; 39; 40; 41; 42; 43]. Supervised DL approaches have already been successfully tested on experimental data [44; 45; 46; 47], while a study of neural network quantum states beyond the shot noise appeared only recently [48]. Nevertheless, neural-network-based methods are not exempt from limitations: NNs can produce non-physical outcomes [44; 46; 49] and are limited in their ability to learn from a finite number of samples [50]. In this work, we address the abovementioned drawbacks by offering a computationally fast, general protocol to back up any finite-statistics QST algorithm within a DL-supervised approach, greatly reducing the necessary number of measurements performed on the unknown quantum state. We treat the density matrix reconstructed via finite-statistics QST as a noisy version of the target density matrix, and we employ a deep neural network as a denoising filter, which allows us to reconstruct the target density matrix. Furthermore, we connect the measured error loss function with the quantum fidelity between the target and the reconstructed state.
We show that the proposed protocol greatly reduces the required number of prepared measurements, allowing for high-fidelity reconstruction (\(\sim 98\%\)) of both mixed and pure states. We also provide an illustration of the power of our method in a physically relevant many-body spin-squeezing experimental protocol. The paper is organized as follows: in Sec. II we introduce the main concepts behind QST; in Sec. III we introduce the data generation protocol and neural network architecture, as well as define QST as a denoising task. Sec. IV is devoted to benchmarking our method against known approaches, and we test it on quantum states of physical interest. In Sec. V, we provide practical instructions to implement our algorithm in an experimental setting. We conclude in Sec. VI with several future research directions.

## II Preliminaries

Let us consider a \(d\)-dimensional Hilbert space. A set of informationally complete (IC) measurement operators \(\hat{\mathbf{\pi}}=\{\hat{\pi}_{i}\}\), \(i=1,\ldots,d^{2}\), in principle allows one to unequivocally reconstruct the underlying target quantum state \(\hat{\tau}\in\mathbb{C}^{d\times d}\) in the limit of an infinite number of ideal measurements [51; 52]. After infinitely many measurements, one can infer the mean values: \[p_{i}=\mathrm{Tr}[\hat{\tau}\hat{\pi}_{i}], \tag{1}\] and construct a valid vector of probabilities \(\mathbf{p}=\{p_{i}\}\) for any proper state \(\hat{\tau}\in\mathcal{S}\), where by \(\mathcal{S}\) we denote the set of \(d\)-dimensional quantum states, i.e. the set containing all unit-trace, positive semi-definite (PSD) \(d\times d\) Hermitian matrices. Moreover, \(\hat{\mathbf{\pi}}\) can be considered as a set of operators spanning the space of Hermitian matrices. In such a case, \(\mathbf{p}\) can be evaluated from multiple measurement settings (e.g., the Pauli basis) and is generally no longer a probability distribution. In any case, there exists a one-to-one mapping \(Q\) from the mean values \(\mathbf{p}\) to the target density matrix \(\hat{\tau}\): \[Q: \mathcal{F}_{\mathcal{S}}\longrightarrow\mathcal{S} \tag{2}\] \[\mathbf{p}\longmapsto Q[\mathbf{p}]=\hat{\tau},\] where \(\mathcal{F}_{\mathcal{S}}\) is the space of accessible probability vectors. In particular, by inverting Born's rule, Eq. (1), elementary linear algebra allows us to describe the map \(Q\) as \[Q[\mathbf{p}]=\mathbf{p}^{T}\mathrm{Tr}[\hat{\mathbf{\pi}}\hat{\mathbf{\pi}}^{T}]^{-1}\hat{\mathbf{\pi}}. \tag{3}\] The inference of the mean values \(\mathbf{p}\) is only perfect in the limit of an infinite number of measurement shots, \(N\rightarrow\infty\). In a realistic scenario, with a finite number of experimental runs \(N\), we have access to frequencies of relative occurrence \(\mathbf{f}=\{f_{i}:=n_{i}/N\}\), where \(n_{i}\) is the number of times the outcome \(i\) is observed. Such counts allow us to estimate \(\mathbf{p}\) within an unavoidable error dictated by the shot noise, with an amplitude typically scaling as \(1/\sqrt{N}\) [53]. With only the frequencies \(\mathbf{f}\) available, we can use the mapping \(Q\) for an estimate \(\hat{\rho}\) of the target density matrix \(\hat{\tau}\), i.e., \[\hat{\rho}=Q[\mathbf{f}]. \tag{4}\] In the limit of an infinite number of trials, \(N\rightarrow\infty\), \(f_{i}=p_{i}\) and \(\hat{\rho}=\hat{\tau}\). Yet, in the finite-statistics regime, as considered in this work, the application of the mapping as defined in Eq. (3) to the frequency vector \(\mathbf{f}\) will generally lead to nonphysical results
(i.e., \(\hat{\rho}\) not PSD). In such a case, as examples of proper mappings \(Q\), we can consider different methods for standard tomography tasks, such as Linear Inversion (LI) or Maximum Likelihood Estimation (MLE); see Appendix A. As operators \(\hat{\mathbf{\pi}}\), we consider positive operator-valued measures (POVMs) and the more experimentally appealing Pauli basis (see Appendix B).

## III Methods

This section describes our density matrix reconstruction protocol, data generation, neural network (NN) training, and the inference procedure. In Fig. 1, we show how these elements interact within the data flow. In the following paragraphs, we elaborate on the proposed protocol in detail. The first step in our density matrix reconstruction protocol, called _pre-processing_, is a reconstruction of the density matrix \(\hat{\rho}\) via finite-statistics QST with frequencies \(\mathbf{f}\) obtained from measurements performed on the target state \(\hat{\tau}\). Next, we feed the reconstructed density matrix \(\hat{\rho}\) forward through our neural network acting as a noise filter - we call this stage _post-processing_. In order to enforce the positivity of the neural network output, we employ the so-called Cholesky decomposition of the density matrices, i.e. \(\hat{\rho}=C_{\rho}C_{\rho}^{\dagger}\), and \(\hat{\tau}=C_{\tau}C_{\tau}^{\dagger}\), where \(C_{\rho,\tau}\) are lower-triangular matrices. Such a decomposition is unique, provided that \(\hat{\rho}\) and \(\hat{\tau}\) are positive [127]. We treat the Cholesky matrix \(C_{\rho}\), obtained from the finite-statistics QST protocol, as a _noisy_ version of the target Cholesky matrix \(C_{\tau}\) computed from \(\hat{\tau}\). The aim of the proposed neural network architecture is to learn a _denoising filter_ allowing reconstruction of the target \(C_{\tau}\) from the _noisy_ matrix \(C_{\rho}\) obtained via finite-statistics QST. Hence, we cast the neural network training process as a supervised denoising task.

Figure 1: Schematic representation of the data pipeline of our QST hybrid protocol. Panel (a) shows data acquisition from a generic experimental set-up, during which the frequencies \(\mathbf{f}\) are collected. Next, panel (b) presents standard density matrix reconstruction; in our work, we test the computationally cheap LI method together with the expensive MLE, in order to better analyse the network reconstruction behaviour and ability. Panel (c) depicts the matrix-to-matrix deep-learning strategy for Cholesky matrix reconstruction. The architecture herein considered is a combination of convolutional layers for input and output and a transformer model in between. Finally, we compare the reconstructed state \(\hat{\rho}\) with the target \(\hat{\tau}\).

### Training data generation

To construct the training data set, we start by generating \(N_{\text{train}}\) Haar-random \(d\)-dimensional target density matrices, \(\{\hat{\tau}_{m}\}\), where \(m=1,\ldots,N_{\text{train}}\). Next, we simulate experimental measurement outcomes \(\mathbf{f}_{m}\), for each \(\hat{\tau}_{m}\), in one of two ways: 1. _Directly_: When the measurement operators \(\mathbf{\hat{\pi}}\) form an IC-POVM, we can take the noise into account by simply simulating the experiment and extracting the corresponding frequency vector \(\mathbf{f}_{m}=\{n_{i}/N\}_{m}\), where \(N\) is the total number of shots (i.i.d. trials) and the counts \(\{n_{i}\}_{m}\) are sampled from the multinomial distribution. 2. _Indirectly_: As introduced in the preliminaries (Sec.
II), if a generic basis \(\mathbf{\hat{\pi}}\) is used, \(\mathbf{p}_{m}\) is no longer necessarily a probability distribution. This is the case with the Pauli basis (as defined in Appendix B), which we exploit in our examples. Then, we can add a similar amount of noise, obtaining \(\mathbf{f}_{m}=\mathbf{p}_{m}+\delta\mathbf{p}_{m}\), where \(\delta\mathbf{p}_{m}\) is sampled from the multi-normal distribution \(\mathcal{N}(\mathbf{0},\sim 1/(2\sqrt{N}))\) of mean zero and isotropic variance, saturating the shot-noise limit. Having prepared the frequency vectors \(\{\mathbf{f}_{m}\}\), we apply QST via the mapping \(Q\), Eq. (4), obtaining a set of reconstructed density matrices \(\{\hat{\rho}_{m}\}\). We employ the most rudimentary and scalable method, i.e. linear-inversion QST; however, other QST methods can be utilized as well. Finally, we construct the training dataset as \(N_{\text{train}}\) pairs \(\big\{\vec{C}_{\rho},\vec{C}_{\tau}\big\}\), where we use \(\vec{C}\) to indicate the vectorization (flattening) of the Cholesky matrix \(C\) (see Appendix C for the definition).

### Neural network training

The considered neural network, working as a denoising filter, is a non-linear function \(h_{\mathbf{\theta}}\) performing a matrix-to-matrix mapping in its vectorized form, \(h_{\mathbf{\theta}}:\vec{C}_{\rho}\to\vec{C}_{\mathbf{\theta}}\), where \(\mathbf{\theta}\) incorporates all the variational parameters, such as weights and biases, to be optimized. The neural network training process relies on minimizing the cost function defined as the mean-squared error (MSE) of the network output with respect to the (vectorization of the) target density matrix \(\hat{\tau}\), i.e. \[\mathcal{L}^{\text{MSE}}(\mathbf{\theta})=\|\vec{C}_{\tau}-\vec{C}_{\mathbf{\theta}}\|^{2}, \tag{5}\] via the presentation of \(K\) training samples \(\{\hat{\rho}_{l}\}_{l=1}^{K}\). The equivalence between the MSE and the Hilbert-Schmidt (HS) distance is discussed in detail in Appendix C, where we also demonstrate that the mean-squared error used in the cost function, Eq. (5), is a natural upper bound on the quantum fidelity. Hence, the cost function of Eq. (5) is not only the standard cost function for a neural network, but it also approximates the target state in a proper quantum metric. To make the model's optimization more efficient and avoid overfitting, we add a regularizing term, resulting in the total cost function \(\mathcal{L}^{\text{MSE}}(\mathbf{\theta})+\text{Tr}[C_{\mathbf{\theta}}C_{\mathbf{\theta}}^{\dagger}]\) (chapter 7 of Ref. [54]). The training process results in an optimal set of parameters of the neural network, \(\mathbf{\bar{\theta}}=\text{arg min}_{\mathbf{\theta}}\mathcal{L}\), and a trained neural network \(h_{\mathbf{\bar{\theta}}}\), which allows for the reconstruction of the target density matrix \(\hat{\tau}\) via the Cholesky matrix \(C_{\bar{\rho}}\) [128], i.e. \[\hat{\bar{\rho}}=\frac{C_{\bar{\rho}}C_{\bar{\rho}}^{\dagger}}{\text{Tr}\Big{[}C_{\bar{\rho}}C_{\bar{\rho}}^{\dagger}\Big{]}}\simeq\hat{\tau}, \tag{6}\] where \(C_{\bar{\rho}}\) is reshaped from \(\vec{C}_{\bar{\rho}}=h_{\mathbf{\bar{\theta}}}(\vec{C}_{\rho})\).

### Neural network architecture

Our proposed architecture draws inspiration from other recent models [55; 56], combining convolutional layers with a transformer layer implementing a self-attention mechanism [57; 58]. A convolutional neural network extracts important features from the input data, while a transformer block distils correlations between features via the self-attention mechanism.
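A minimal PyTorch rendering of such a conv-transformer-conv pipeline, Eq. (7), together with the Cholesky reconstruction of Eq. (6), could look as follows. The layer sizes, the number of kernels, and the real parametrization of \(\vec{C}\) (real diagonal plus real and imaginary parts of the strictly lower triangle, \(d^{2}\) numbers in total) are our own illustrative assumptions, not the paper's exact architecture.

```
import torch
import torch.nn as nn

d = 9        # Hilbert-space dimension; the input vector has d*d real entries
K = 32       # number of convolutional kernels / feature channels (assumption)

class Denoiser(nn.Module):
    """h_theta: vectorized noisy Cholesky factor -> vectorized denoised factor."""
    def __init__(self):
        super().__init__()
        self.conv_in = nn.Conv1d(1, K, kernel_size=3, padding=1)
        self.gelu = nn.GELU()
        self.transformer = nn.TransformerEncoderLayer(
            d_model=K, nhead=4, dim_feedforward=128,
            activation="gelu", batch_first=True)
        self.conv_out = nn.Conv1d(K, 1, kernel_size=3, padding=1)

    def forward(self, v):                                 # v: (batch, d*d)
        h = self.gelu(self.conv_in(v.unsqueeze(1)))       # (batch, K, d*d)
        h = self.transformer(h.transpose(1, 2))           # attend over positions
        h = self.conv_out(h.transpose(1, 2))              # (batch, 1, d*d)
        return torch.tanh(h.squeeze(1))

def vector_to_density(v):
    """Eq. (6): map d*d real parameters to a valid (unit-trace, PSD) state."""
    C = torch.zeros(d, d, dtype=torch.complex64)
    diag = torch.arange(d)
    idx = torch.tril_indices(d, d, offset=-1)
    n_off = idx.shape[1]
    C[diag, diag] = v[:d].to(torch.complex64)             # real diagonal
    C[idx[0], idx[1]] = v[d:d + n_off] + 1j * v[d + n_off:]  # lower triangle
    rho = C @ C.conj().T
    return rho / torch.trace(rho).real

rho = vector_to_density(Denoiser()(torch.randn(1, d * d))[0])
print(torch.trace(rho).real)   # unit trace by construction
```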
### Neural network architecture Our proposed architecture draws inspiration from other recent models [55; 56], combining convolutional layers with a transformer layer implementing a self-attention mechanism [57; 58]. A convolutional neural network extracts important features from the input data, while a transformer block distills correlations between features via the self-attention mechanism. The self-attention mechanism utilizes the representation of the input data as nodes inside a graph [59] and aggregates relations between the nodes. The architecture of the considered neural network \(h_{\mathbf{\theta}}\) contains two convolutional layers \(h_{\text{cnn}}\) with a transformer layer \(h_{\text{tr}}\) in between, i.e.: \[h_{\mathbf{\theta}}(\vec{C}_{\rho})=\tanh[h_{\text{cnn}}\circ h_{\text{tr}}\circ \gamma(h_{\text{cnn}})](\vec{C}_{\rho}), \tag{7}\] where \(\gamma(y)=\frac{y}{2}\left(1+\text{Erf}(y/\sqrt{2})\right)\), \(y\in\mathbb{R}\), is the Gaussian Error Linear Unit (GELU) activation function [60] widely used in modern transformer architectures, and \(\tanh(y)\) is the hyperbolic tangent, acting element-wise on the NN nodes. The first layer \(h_{\text{cnn}}\) applies a set of \(K\) fixed-size trainable one-dimensional convolutional kernels to \(\vec{C}_{\rho}\), followed by the non-linear activation function, i.e. \(\gamma(h_{\text{cnn}}(\vec{C}_{\rho}))\to\{\mathbf{F}_{\text{cnn}}^{1},\ldots,\mathbf{F }_{\text{cnn}}^{K}\}\). During the training process, the convolutional kernels learn distinct features of the dataset, which are then fed forward to the transformer block \(h_{\text{tr}}\). The transformer block \(h_{\text{tr}}\) distills the correlations between the features extracted by the kernels via the self-attention mechanism, providing a new set of vectors, i.e. \(h_{\text{tr}}(\mathbf{F}_{\text{cnn}}^{1},\ldots,\mathbf{F}_{\text{cnn}}^{K})\to\{\mathbf{F}_ {\text{tr}}^{1},\ldots,\mathbf{F}_{\text{tr}}^{K}\}\). The last convolutional layer \(h_{\text{cnn}}\) provides the output \(\vec{C}_{\mathbf{\theta}}\), \(\tanh(h_{\text{cnn}}(\mathbf{F}_{\text{tr}}^{1},\ldots,\mathbf{F}_{\text{tr}}^{K}))\to \vec{C}_{\mathbf{\theta}}\), where all filter outputs from the last layer are added. Finally, we reshape the output into the lower-triangular form and reconstruct the density matrix via Eq. (6). The training data and the considered architecture allow the interpretation of the trained NN as a conditional debiaser (for details see Appendix D). The proposed protocol cannot improve the predictions of unbiased estimators; however, any estimator that outputs valid quantum states (e.g., LI, MLE) must be biased due to boundary effects. In the given framework, the task of the NN is to learn such skewness and drift the distribution towards the true mean.
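A minimal PyTorch sketch of the composition in Eq. (7) is given below; the number of kernels, kernel width, and the single attention head are placeholder values, not the hyperparameters used in this work.

```python
import torch
import torch.nn as nn

class DenoiserSketch(nn.Module):
    """Conv -> transformer -> conv filter following Eq. (7); all
    hyperparameters here are illustrative placeholders."""

    def __init__(self, vec_len, n_kernels=16, width=5):
        super().__init__()
        self.conv_in = nn.Conv1d(1, n_kernels, width, padding=width // 2)
        self.gelu = nn.GELU()
        # self-attention across the n_kernels feature maps; nhead must
        # divide d_model (= vec_len), so a single head is used here
        self.transformer = nn.TransformerEncoderLayer(
            d_model=vec_len, nhead=1, batch_first=True)
        self.conv_out = nn.Conv1d(n_kernels, 1, width, padding=width // 2)

    def forward(self, c_vec):                            # (batch, vec_len)
        x = self.gelu(self.conv_in(c_vec.unsqueeze(1)))  # gamma(h_cnn)
        x = self.transformer(x)                          # h_tr
        # out_channels=1 sums the filter outputs, then tanh as in Eq. (7)
        return torch.tanh(self.conv_out(x)).squeeze(1)   # (batch, vec_len)
```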
## IV Results and discussion Having introduced the features of our NN-based QST enhancer in the previous section, here, we demonstrate its advantages in scenarios of physical interest. To this aim, we consider two examples. As the first example, we consider an idealized random quantum emitter (see e.g. Refs. [61; 62] for recent experimental proposals) that samples high-dimensional mixed states from the Hilbert-Schmidt distribution. After probing the system via a single-setting square-root POVM, we are able to show the usefulness of our NN upon improving LI and MLE preprocessed states. We evaluate the generic performance of our solution and compare it with recent proposals of NN-based QST, Ref. [46]. In the second example, we focus on a specific class of multi-qubit pure states of special physical relevance, i.e. with metrological potential as quantified by the quantum Fisher information (QFI). Such states are generated via the famous one-axis twisting dynamics [63; 64]. Here, the system is measured using operators \(\mathbf{\hat{\pi}}\) via local symmetric, informationally complete, positive operator-valued measures (SIC-POVMs) or the experimentally relevant Pauli operators. In the following, we discuss the two aforementioned scenarios. ### Reconstructing high-dimensional random quantum states **Scenario**.- As a first illustration, we consider a set of \(N_{\rm test}\) random target states \(\{\hat{\tau}_{j}\}\), \(j=1,\ldots,N_{\rm test}\), with Hilbert space dimension \(d=9\), sampled from the HS distribution (see Appendix E). The first task consists of assessing the average reconstruction quality over such an ensemble to benchmark the generic performance of our NN. We prepare measurements on each target state \(\hat{\tau}_{j}\) via informationally complete (IC) square-root POVMs as defined in Eq. (14), and obtain the state reconstruction \(\hat{\rho}_{j}\) via two standard QST protocols, i.e. the bare LI and MLE algorithms, as well as via our neural-network-enhanced protocols, denoted as LI-NN and MLE-NN, see Fig. 1. Finally, we evaluate the quality of the reconstruction as the average of the square of the Hilbert-Schmidt distance \[D_{\rm HS}^{2}(\hat{\rho}_{j},\hat{\tau}_{j})={\rm Tr}[(\hat{\rho}_{j}-\hat{ \tau}_{j})^{2}]. \tag{8}\] **Benchmarking**.- Fig. 2(a) presents the averaged squared HS distance \(\overline{D^{2}}_{\rm HS}\) as a function of the number of measurement trials \(N_{\rm trial}\), obtained with the bare LI and MLE algorithms, the neural-network-enhanced LI (LI-NN), and the neural-network-enhanced MLE (MLE-NN). The training dataset contains \(N_{\rm train}=1250\) HS-distributed mixed states. Our neural-network-enhanced protocol improves over LI and MLE, i.e. \(\overline{D^{2}}_{\rm HS}\) is lower for LI-NN and MLE-NN compared to the LI and MLE algorithms for the same \(N_{\rm trial}\). For fewer trials, \(N_{\rm trial}<10^{3}\), the post-processing marginally improves over the states reconstructed only from MLE. For a larger number of trials, the lowest \(\overline{D^{2}}_{\rm HS}\) is obtained for the MLE-NN results, performing better than the other considered tomographic methods. To enhance MLE in the regime of a few samples, we propose an alternative method by incorporating the statistical noise as a depolarization channel (see Appendix F). Next, Fig. 2(b) presents a comparison between our protocol and the state-of-the-art density-matrix reconstruction via neural networks of Ref. [46], whose authors report better performance than the MLE and LI algorithms with \(N_{\rm train}=8\cdot 10^{5}\) training samples, for states belonging to a Hilbert space of dimension \(d\geq 3\). Fig. 2(b) presents \(\overline{D^{2}}_{\rm HS}\) as a function of \(N_{\rm trial}/N_{\rm train}\), where \(N_{\rm train}=1250\) for MLE-NN and \(N_{\rm train}=5000\) for LI-NN. The advantage of our protocol lies in the fact that the amount of necessary training data is significantly smaller than the training-data size in Ref. [46] for a similar reconstruction quality, which is visible as a shift of the lines towards higher values of \(N_{\rm trial}/N_{\rm train}\). Figure 2: Evaluation of the QST reconstruction measured by the mean value of the squared Hilbert-Schmidt distance, \(\overline{D^{2}}_{\rm HS}\), for different QST protocols, averaged over \(N_{\rm test}=1000\) target states. Panel (a) shows the mean \(D_{\rm HS}^{2}\) for four QST protocols: LI (green dots), MLE (red squares), NN-enhanced LI (blue diamonds), and NN-enhanced MLE (orange crosses). Panel (b) shows \(\overline{D^{2}}_{\rm HS}\) as a function of the ratio \(N_{\rm trial}/N_{\rm train}\) for our LI-NN model (blue squares), MLE-NN model (orange crosses), and for the network model proposed in Ref. [46] (violet triangles). A lower number \(N_{\rm train}\) shifts lines to the left, indicating resource efficiency. Our proposed protocol achieves competitive averaged HS reconstruction for a training-data size an order of magnitude smaller than that of the method proposed in Ref. [46] (note that the \(x\)-axis values increase to the left). During the models’ training, we set \(N_{\rm train}=1250\) for the MLE-NN protocol and \(N_{\rm train}=5000\) for the LI-NN. Lines are to guide the eye; shaded areas represent one standard deviation.
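The figure of merit of Eq. (8) translates directly into code; a one-function NumPy sketch:

```python
import numpy as np

def hs_distance_sq(rho, tau):
    """Squared Hilbert-Schmidt distance of Eq. (8): Tr[(rho - tau)^2];
    for Hermitian matrices the trace is real."""
    diff = rho - tau
    return float(np.real(np.trace(diff @ diff)))
```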
### Certifying metrologically useful entanglement depth in many-body states **Scenario.**- In the last experiment, we reconstruct a class of physically relevant multi-qubit pure states. Specifically, we consider a chain of \(L=4\) spins-\(1/2\) (Hilbert space of dimension \(d=16\)). The target quantum states are dynamically generated during the one-axis twisting (OAT) protocol [63; 64] \[\ket{\Psi(t)}=e^{-it\hat{J}_{z}^{2}}\ket{+}^{\otimes L}\, \tag{9}\] where \(\hat{J}_{z}\) is the collective spin operator along the \(z\)-axis and \(\ket{+}^{\otimes L}=[(|\uparrow\rangle+|\downarrow\rangle)/\sqrt{2}]^{\otimes L}\) is the initial state prepared in a coherent spin state along the \(x\)-axis (orthogonal to \(z\)). The OAT protocol generates spin-squeezed states useful for high-precision metrology, allowing one to overcome the shot-noise limit [65; 66; 67; 68], as well as many-body entangled and many-body Bell-correlated states [69; 70; 71; 72; 73; 74; 75; 76; 77; 78; 79; 80; 81; 82]. OAT states have been extensively studied theoretically [83; 84; 85; 86; 87; 88; 89; 90; 91; 92], and can be realized with a variety of ultra-cold systems, utilizing atom-atom collisions [93; 94; 95; 96] and atom-light interactions [97; 98]. Recent theoretical proposals for the OAT simulation with ultra-cold atoms in optical lattices effectively simulate Hubbard and Heisenberg models [99; 100; 101; 102; 103; 104; 105; 106; 107].
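To illustrate Eq. (9), a small NumPy/SciPy sketch generating the OAT states for \(L=4\) is shown below; exact diagonalization is unproblematic at \(d=16\), and the construction of \(\hat{J}_{z}\) is the standard textbook one rather than code taken from this work.

```python
import numpy as np
from scipy.linalg import expm

L = 4
sz = np.diag([0.5, -0.5])   # single spin-1/2 z operator
eye = np.eye(2)

def embed(op, site):
    """Place a single-site operator at `site` in an L-qubit register."""
    ops = [op if k == site else eye for k in range(L)]
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

Jz = sum(embed(sz, k) for k in range(L))   # collective spin J_z
plus = np.ones(2**L) / np.sqrt(2**L)       # |+>^{otimes L}

def oat_state(t):
    """One-axis-twisted state of Eq. (9): exp(-i t Jz^2) |+>^{otimes L}."""
    return expm(-1j * t * (Jz @ Jz)) @ plus
```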
**Preliminary results.**- For the first task, we generate our data for testing and training with SIC-POVM operators. For the test set, we select 100 OAT states at evenly spaced times \(t\in(0,\pi)\) and assess the average reconstruction achieved by our NN trained on a dataset of \(N_{\rm train}=10{,}500\) Haar-random pure states. We compare the values with the average score obtained for generic Haar-distributed states, which is the set the model was trained on (see Appendix E). The reconstruction qualities are shown in Table 1. First, we verify that the NN is able to substantially improve the OAT states, even though no examples of such a class of states were given in the training phase, which relied only on Haar-random states. Moreover, the OAT-averaged reconstruction values exceed the Haar reconstruction ones. We conjecture that this stems from the bosonic symmetry exhibited by the OAT states. This symmetry introduces redundancies in the density matrix which might help the NN to detect errors produced by the statistical noise. Finally, let us highlight that the network also displays good robustness to noise. Indeed, when we feed the same network with states prepared from \(N_{\rm trial}=10^{3}\) trials, we increase the reconstruction fidelity from 67% to 87%. **Inferring the quantum Fisher information (QFI).**- Finally, we evaluate the metrological usefulness of the reconstructed states as measured by the quantum Fisher information, \(F_{Q}[\hat{\rho},\hat{G}]\). The QFI is a nonlinear function of the state and quantifies the sensitivity to rotations generated by \(\hat{G}\). For more details, we refer the reader to Appendix G. Here, we use the collective spin component \(\hat{J}_{\bf v}\) as the generator, \(\hat{G}=\hat{J}_{\bf v}\), where the orientation \(\mathbf{v}\in\mathbb{S}^{2}\) is chosen so that it delivers the maximal sensitivity. The QFI with respect to collective rotations can also be used to certify quantum entanglement [68], in particular the entanglement depth \(k\), which is the minimal number of genuinely entangled particles that are necessary to describe the many-body state. If \(F_{Q}[\hat{\rho},\hat{J}_{\bf v}]>kL\), then the quantum state \(\hat{\rho}\) has an entanglement depth of at least \(k+1\) [108; 109]. In particular, for states with depth \(k=1\) (i.e., separable states), due to the lack of entanglement, the metrological power is at most the shot-noise limit [110]. This bound is saturated by coherent spin states, like our initial (\(t=0\)) state for the OAT evolution, \(\ket{+}^{\otimes L}\). In Fig. 3, we present the evolution of the QFI (normalized by the coherent limit, \(L=4\)) for the OAT target states (top solid blue lines). The top row corresponds to SIC-POVM measurement operators \(\mathbf{\hat{\pi}}\), while the bottom row corresponds to projective measurements with Pauli operators. In all experiments, we use the same neural network previously trained on random Haar states with frequencies \(\{\mathbf{f}_{m}\}\) obtained from \(N_{\rm trial}=10^{4}\) measurement trials. The LI algorithm by itself fails to reveal entanglement (QFI \(>L\)) at any time \(t\). By enhancing the LI algorithm via our NN protocol, we surpass the three-body bound (QFI/\(L=3\)), thus revealing genuine 4-body entanglement, which is the highest depth possible in this system (as it is of size \(L=4\)). For instance, let us note that at time \(t=\pi/2\) the OAT dynamics generate the cat state, \(\ket{\Psi(t=\pi/2)}=(e^{-i\pi/4}|+\rangle^{\otimes 4}+e^{+i\pi/4}|-\rangle^{ \otimes 4})/\sqrt{2}\), which is genuinely \(L\)-body entangled, and so it is certified by our protocol. \begin{table} \begin{tabular}{|c||l|l|l|} \hline & \(10^{5}\) trials & \(10^{4}\) trials & \(10^{3}\) trials \\ \hline LI-OAT & \(94.3\pm 0.4\%\) & \(86.5\pm 0.1\%\) & \(67.0\pm 2.0\%\) \\ \hline NN-OAT & \(98.6\pm 0.2\%\) & \(97.8\pm 0.5\%\) & \(87.6\pm 4.5\%\) \\ \hline NN-Haar & \(96.9\pm 3.0\%\) & \(94.2\pm 3.3\%\) & \(81.1\pm 4.1\%\) \\ \hline \end{tabular} \end{table} Table 1: Comparison of the average fidelity and its standard deviation between the reconstructed and the target states of size \(d=16\) for various QST methods (rows), with a varying number of measurement trials, \(N_{\rm trial}=10^{5},10^{4},10^{3}\), as indicated by the consecutive columns. The first row presents the average fidelity reconstruction for linear inversion QST, averaged over OAT states evenly sampled from \(t=0\) to \(t=\pi\). Employing our neural network provides an enhancement over bare LI, as shown in the second row for the same target set. Finally, the third row also shows data for NN-enhanced LI, but averaged over general Haar-random states. All the initial Born values are calculated with noiseless SIC-POVM measurements.
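The QFI entering this analysis can be computed from the eigendecomposition of \(\hat{\rho}\) via the standard formula \(F_{Q}=2\sum_{k,l}(\lambda_{k}-\lambda_{l})^{2}/(\lambda_{k}+\lambda_{l})\,|\langle k|\hat{G}|l\rangle|^{2}\) (the definitions used in this work are collected in Appendix G); a direct NumPy sketch:

```python
import numpy as np

def qfi(rho, G, tol=1e-12):
    """Quantum Fisher information F_Q[rho, G] from the standard
    eigendecomposition formula; terms with lam_k + lam_l ~ 0 are dropped."""
    lam, vecs = np.linalg.eigh(rho)
    Gkl = vecs.conj().T @ G @ vecs   # generator in the eigenbasis of rho
    fq = 0.0
    for k in range(len(lam)):
        for l in range(len(lam)):
            s = lam[k] + lam[l]
            if s > tol:
                fq += 2.0 * (lam[k] - lam[l]) ** 2 / s * abs(Gkl[k, l]) ** 2
    return fq
```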
### Discussion Currently existing algorithms for QST tend to suffer from one of two main problems: unphysicality of results or bias. We tackle the problem of increasing the quality of state reconstruction by enhancing any biased QST estimator with an attention-based neural network working as a denoising filter, see Fig. 1. Such an approach corrects both unphysicality and bias, at least partially. Our NN architecture is based on convolutional layers extracting crucial features from the input data, and attention-based layers distilling correlations between the extracted features. This choice of architecture is motivated by its improved generalization ability, which enables a significant reduction of the necessary amount of training data compared to NN-based approaches that do not utilize an attention mechanism. From the examples provided in this section, we infer that our NN-enhanced protocol outperforms other QST methods in the regime of a small number of measurement samples, see Fig. 2. This is especially important for applications in realistic scenarios, where the trade-off between accuracy and resource cost is crucial. The results presented in this contribution show that, with a small number of samples, neural-network enhancement outperforms the pure linear inversion protocol, as can be deduced from the reconstruction fidelities in Table 1. Although the NN was trained on Haar-random pure states, it achieves even better performance on a measure-zero subset of them, namely one-axis-twisted states. We conjecture that this is due to their underlying symmetries, which allow the network to efficiently learn and correct the noise pattern. Furthermore, the metrological usefulness of our method is visible through its certification of the quantum Fisher information and entanglement depth, see Fig. 3. The bare QST setup, without our NN post-processing, is not able to show entanglement (QFI \(>L\)) at any finite time, nor does it certify full genuine 4-body entanglement. Both of these problems are remedied by the NN enhancement. Figure 3: Time evolution of the normalized QFI during the OAT protocol for an \(L=4\) qubit system. Solid blue lines represent the QFI calculated for the target quantum states. The mean values of the QFI calculated from tomographically reconstructed density matrices are denoted by green dashed lines (reconstruction via LI) and red dotted lines (reconstruction via neural-network post-processed LI outputs). Shaded areas mark one standard deviation after averaging over 10 reconstructions. Panels (a) and (b) correspond to the LI protocol with SIC-POVM data, whereas (c) and (d) denote LI reconstruction inferred from Pauli measurements. In the upper row, the left (right) column corresponds to \(N_{\mathrm{trial}}=10^{3}\) (\(10^{4}\)) trials; in the lower row, the left (right) column reproduces an LI initial fidelity reconstruction of \(\sim 74\%\) (\(\sim 86\%\)). The red lines represent the whole setup with NN post-processing of the data from the corresponding green lines, indicating improvement over the bare LI method. The NN advantage over the bare LI method can be characterized by entanglement-depth certification, as shown by the horizontal lines denoting the entanglement-depth bounds, ranging from the separable limit (bottom line, bold) to the genuine \(L\)-body limit (top line). In particular, the presence of entanglement is witnessed by QFI \(>L\), as shown by the violation of the separable bound (bold horizontal line). ## V Concrete experimental implementation for quantum state verification To recapitulate this contribution, as a complement to Fig. 1 and our repository provided in Ref. [111], we summarize the practical implementation of the protocol introduced in this work. 1. _Scenario_: We consider a finite-dimensional quantum system prepared in a target state \(\hat{\tau}\), whose preparation we aim to verify via QST. To this end, we set a particular measurement basis \(\hat{\mathbf{\pi}}\) to probe the system. 2. _Experiment_: After a finite number of experimental runs, from the counts, we construct the frequency vector \(\mathbf{f}\). 3. _Preprocessed quantum state tomography_: From the frequency vector \(\mathbf{f}\) and the basis \(\hat{\mathbf{\pi}}\), we infer the first approximation of the state, \(\hat{\rho}\), via the desired QST protocol (e.g., one of those introduced in Appendix A). 4. _Assessing pre-reconstruction_: We evaluate the quality of the reconstruction by computing, e.g., \(D_{\mathrm{HS}}^{2}(\hat{\tau},\hat{\rho})\), the quantum fidelity, or any other meaningful quantum metric. To improve such a score, we resort to our NN solution to complete a denoising task. As with any deep-learning method, training is required.
5. _Training the neural network_: Different training strategies can be implemented: 1. Train over uniform ensembles (e.g., Haar, HS, Bures, etc.) if \(\hat{\tau}\) is a typical state or we do not have information about it. If we know certain properties of the target state, we can take advantage of them (see the next items). 2. Train over a subspace of states of interest. For example, if we reconstruct OAT states (Section IVb), we may train only on the permutation-invariant sector. 3. Train with experimental data. For example, if we have a quantum random source to characterize (Section IVa), experimental data can be used in the training set [44]. In such a case, our demonstrated reduction of the training set size translates also to a reduction of the experimental effort. 6. _Feeding the neural network_: We feed the preprocessed state \(\hat{\rho}\) forward through our trained matrix-to-matrix NN to recover the enhanced quantum state \(\hat{\bar{\rho}}\). 7. _Assessing the neural network_: We compute the updated reconstruction metric on the post-processed state, \(D_{\text{HS}}^{2}(\hat{\tau},\hat{\bar{\rho}})\). Finally, we assess the usefulness of the NN by checking how much smaller this value is than the pre-processed score \(D_{\text{HS}}^{2}(\hat{\tau},\hat{\rho})\). The strength of our proposed protocol lies in its broad applicability, as the choice of the basis \(\hat{\mathbf{\pi}}\) and of the QST pre-processing method is arbitrary; a minimal sketch of the full pipeline is given below.
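As a rough illustration only, the seven steps can be strung together as follows; `measure`, `linear_inversion`, and `model` are hypothetical stand-ins for the experiment, the chosen pre-processing QST protocol, and the trained network, while `hs_distance_sq`, `cholesky_vec`, and `density_from_cholesky` are the helper functions sketched in the previous sections.

```python
def verify_state(tau, basis, n_trials, measure, linear_inversion, model):
    """Hypothetical end-to-end verification loop following steps 1-7."""
    f = measure(tau, basis, n_trials)          # steps 1-2: frequency vector
    rho = linear_inversion(f, basis)           # step 3: pre-processed QST
    d_pre = hs_distance_sq(tau, rho)           # step 4: pre-reconstruction score
    c_out = model(cholesky_vec(rho))           # step 6: NN denoising
    rho_nn = density_from_cholesky(c_out, rho.shape[0])   # rebuild via Eq. (6)
    d_post = hs_distance_sq(tau, rho_nn)       # step 7: post-processing score
    return rho_nn, d_pre, d_post
```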
## VI Conclusions We proposed a novel deep-learning protocol improving standard finite-statistics quantum state tomography methods, such as Linear Inversion and Maximum Likelihood Estimation. Our network, based on the attention mechanism and convolutional layers, greatly reduces the number of required measurements, serving as a denoising filter for the standard tomography output. The versatility of our approach comes from the fact that the measurement basis and reconstruction method have only an implicit impact, as our central algorithm works directly with the density matrix. The proposed method reduces the number of necessary measurements on the target density matrix by at least an order of magnitude compared to other DL-QST protocols and finite-statistics methods. We verified that our proposed method is able to improve over LI and MLE preprocessed states. Moreover, the inference stage was performed on out-of-distribution data, i.e., we tested our model on density matrices forming an infinitesimally small fraction of the training distribution, indicating the robustness of the proposed method. In particular, we tested our model on 4-qubit spin-squeezed and many-body Bell-correlated states, generated during the one-axis twisting protocol, with an average fidelity of \(\sim 98\%\). We demonstrated that our NN improves the reconstruction of a class of physically relevant multi-qubit states, paving the way to use such novel methods in current quantum computers and quantum simulators based on spin arrays. Our protocol can greatly advance other QST methods, both for arbitrary states and for special classes that scale reasonably with the number of particles, such as symmetric states [112, 113]. **Data and code availability**.- Data and code are available at Ref. [111]. ###### Acknowledgements. We thank Leo Zambrano, Federico Bianchi, and Emilia Witkowska for the fruitful discussions. We acknowledge support from: ERC AdG NOQIA; Ministerio de Ciencia e Innovación, Agencia Estatal de Investigación (PGC2018-097027-B-I00/10.13039/501100011033, CEX2019-000910-S/10.13039/501100011033, Plan Nacional FIDEUA PID2019-106901GB-I00, FPI, QUANTERA MAQS PCI2019-111828-2, QUANTERA DYNAMITE PCI2022-132919, Proyectos de I+D+I "Retos Colaboración" QUSPIN RTC2019-007196-7); MICIIN with funding from European Union NextGenerationEU (PRTR-C17.I1) and by Generalitat de Catalunya; Fundació Cellex; Fundació Mir-Puig; Generalitat de Catalunya (European Social Fund FEDER and CERCA program, AGAUR Grant No. 2021 SGR 01452, QuantumCAT U16-011424, co-funded by ERDF Operational Program of Catalonia 2014-2020); Barcelona Supercomputing Center MareNostrum (FI-2022-1-0042); EU (PASQuanS2.1, 101113690); EU Horizon 2020 FET-OPEN OPTOlogic (Grant No. 899794); EU Horizon Europe Program (Grant Agreement 101080086 -- NeQST); National Science Centre, Poland (Symfonia Grant No. 2016/20/W/ST4/00314); ICFO Internal "QuantumGaudi" project; European Union's Horizon 2020 research and innovation program under the Marie Skłodowska-Curie grant agreement No. 101029393 (STREDCH). AKS acknowledges support from the European Union's Horizon 2020 Research and Innovation Programme under the Marie Skłodowska-Curie Grant Agreement No. 847517. M.P. acknowledges the support of the Polish National Agency for Academic Exchange, the Bekker programme no. PPN/BEK/2020/1/00317. Views and opinions expressed are, however, those of the author(s) only and do not necessarily reflect those of the European Union, European Commission, European Climate, Infrastructure and Environment Executive Agency (CINEA), or any other granting authority. Neither the European Union nor any granting authority can be held responsible for them.
2305.19487
SPGNN-API: A Transferable Graph Neural Network for Attack Paths Identification and Autonomous Mitigation
Attack paths are the potential chain of malicious activities an attacker performs to compromise network assets and acquire privileges through exploiting network vulnerabilities. Attack path analysis helps organizations to identify new/unknown chains of attack vectors that reach critical assets within the network, as opposed to individual attack vectors in signature-based attack analysis. Timely identification of attack paths enables proactive mitigation of threats. Nevertheless, manual analysis of complex network configurations, vulnerabilities, and security events to identify attack paths is rarely feasible. This work proposes a novel transferable graph neural network-based model for shortest path identification. The proposed shortest path detection approach, integrated with a novel holistic and comprehensive model for identifying potential network vulnerabilities interactions, is then utilized to detect network attack paths. Our framework automates the risk assessment of attack paths indicating the propensity of the paths to enable the compromise of highly-critical assets (e.g., databases) given the network configuration, assets' criticality, and the severity of the vulnerabilities in-path to the asset. The proposed framework, named SPGNN-API, incorporates automated threat mitigation through a proactive timely tuning of the network firewall rules and zero-trust policies to break critical attack paths and bolster cyber defenses. Our evaluation process is twofold; evaluating the performance of the shortest path identification and assessing the attack path detection accuracy. Our results show that SPGNN-API largely outperforms the baseline model for shortest path identification with an average accuracy >= 95% and successfully detects 100% of the potentially compromised assets, outperforming the attack graph baseline by 47%.
Houssem Jmal, Firas Ben Hmida, Nardine Basta, Muhammad Ikram, Mohamed Ali Kaafar, Andy Walker
2023-05-31T01:48:12Z
http://arxiv.org/abs/2305.19487v2
SPGNN-API: A Transferable Graph Neural Network for Attack Paths Identification and Autonomous Mitigation ###### Abstract Attack paths are the potential chain of malicious activities an attacker performs to compromise network assets and acquire privileges through exploiting network vulnerabilities. Attack path analysis helps organizations to identify new/unknown chains of attack vectors that reach critical assets within the network, as opposed to individual attack vectors in signature-based attack analysis. Timely identification of attack paths enables proactive mitigation of threats. Nevertheless, manual analysis of complex network configurations, vulnerabilities, and security events to identify attack paths is rarely feasible. This work proposes a novel transferable graph neural network-based model for shortest path identification. The proposed shortest path detection approach, integrated with a novel holistic and comprehensive model for identifying potential network vulnerabilities interactions, is then utilized to detect network attack paths. Our framework automates the risk assessment of attack paths indicating the propensity of the paths to enable the compromise of highly-critical assets (e.g., databases) given the network configuration, assets' criticality, and the severity of the vulnerabilities in-path to the asset. The proposed framework, named SPGNN-API, incorporates automated threat mitigation through a proactive timely tuning of the network firewall rules and zero-trust policies to break critical attack paths and bolster cyber defenses. Our evaluation process is twofold; evaluating the performance of the shortest path identification and assessing the attack path detection accuracy. Our results show that SPGNN-API largely outperforms the baseline model for shortest path identification with an average accuracy \(\geq\) 95% and successfully detects 100% of the potentially compromised assets, outperforming the attack graph baseline by 47%. Graph Neural Network, Automated risk identification, zero trust, autonomous mitigation, risk assessment. ## I Introduction Cyber attacks have become not only more numerous and diverse but also more damaging and disruptive. New attack vectors and increasingly sophisticated threats are emerging every day. Attack paths, in general, are the potential chains of malicious activities an attacker performs to compromise assets and acquire network privileges through exploiting network vulnerabilities. Attack path analysis helps organizations identify previously unknown or unfamiliar chains of attack vectors that could potentially compromise critical network assets. This approach contrasts with signature-based attack analysis approaches such as vulnerability scanning, which typically focus on detecting individual attack vectors. Timely identification of attack paths enables proactive mitigation of threats before damage takes place. Nevertheless, manual processes cannot always provide the proactivity, fast response, or real-time mitigation required to deal with modern threats and threat actors, or with a constantly growing and dynamic network structure. An automated and efficient threat identification, characterization, and mitigation process is critical to every organization's cybersecurity infrastructure. The existing literature proposes various approaches based on attack graphs and attack trees that assess the interdependencies between vulnerabilities and the potential impact of exploitation [1, 2, 3, 4].
While these techniques provide a systematic perspective on potential threat scenarios in networks, their effectiveness is constrained by their inability to dynamically adapt to changes in the network structure, thus requiring the re-evaluation of the entire process. Several approaches based on deep learning (DL) have been proposed in the literature [5, 6, 7] to address this issue. For such models, network structure information is not learned, unlike in Graph Neural Networks (GNNs), but rather provided as input to the DL models. Consequently, the structure-based input must be re-generated every time there is a change in the network structure. This can potentially necessitate retraining the entire DL model, causing additional computational overhead. Another limitation of existing approaches is that they are either restricted to a set of predefined attacks [6] or rely on a set of predefined rules to define the potential interplay between vulnerabilities [8]. Given the rising complexity of cyber-attacks, a comprehensive approach is vital to ensure the security of network assets and sensitive data. **Challenges.** There are three major challenges for attack path detection: (1) **Adaptiveness**: How to develop an automated and adaptive identification of attack paths given the dynamic nature of the network structure driven by trends such as remote users, bring-your-own devices, and cloud assets? (2) **Agility**: With attackers constantly finding new ways to exploit vulnerabilities, how to comprehensively identify the potential interplay between vulnerabilities without being bound to a pre-defined set of rules or attack scenarios? (3) **Efficiency**: How to efficiently characterize and rank the risks of attack paths, and autonomously triage the ones requiring prompt response, without disrupting the network functionalities? **Our Work.** Considering these challenges, we devise the "Shortest Path Graph Neural Network-API" (SPGNN-API), a framework offering an autonomous identification of potential attack paths and the associated risks of compromising critical assets. It further incorporates proactive mitigation of high-risk paths. (1) To address the adaptiveness challenge, we develop a novel GNN approach for attack path identification. The inductive property of GNNs enables them to leverage feature information of graph elements to efficiently generate node embeddings for previously unseen data. Additionally, GNNs incorporate network structural information as learnable features. This renders GNN-based approaches self-adaptive to dynamic changes in the network structure. (2) To tackle the agility challenge, we assume that an attacker who has compromised an asset can exploit all the underlying vulnerabilities. We rely on the efficiency of GNN graph representation learning to learn all potential vulnerability interactions that could compromise critical assets based on the CVSS base score metrics [9]. (3) To address the efficiency challenge, we automate the risk analysis of attack paths to determine their likelihood of compromising critical assets, based on factors such as the network configuration, assets' criticality, and the severity of the vulnerabilities [10] in-path to the asset. We then develop autonomous mitigation of high-risk attack paths by automatically tuning the network zero-trust policies (see Section III-A) to disrupt the paths without impacting the network functionalities.
In this work, we address a key limitation of existing GNNs that fail to capture the positional information of the nodes within the broader context of the graph structure [11, 12]. For instance, when two nodes share the same local neighborhood patterns but exist in different regions of the graph, their GNN representations will be identical. To address this, we introduce the SPGNN-API, which extends the Positional Graph Neural Network model [13] to achieve a transferable model for computing shortest paths to a predefined set of nodes representing highly-critical network assets. **Evaluation.** We conduct a three-fold evaluation process: Firstly, we evaluate the performance of the SPGNN shortest path calculation in a semi-supervised setting. Secondly, we assess the performance in a transfer-learning setting. Thirdly, we evaluate the accuracy of identifying critical attack paths. To carry out our evaluation, we use two synthetic network datasets, two real-world datasets obtained from middle-sized networks, and two widely used citation network datasets: Cora [14] and Citeseer [15]. We compare the GNN path identification performance with the state-of-the-art GNN path identification model "SPAGAN" [11]. Additionally, we compare the performance of the SPGNN-API with a state-of-the-art approach for attack path generation, "MulVAL" [8]. **Contributions.** In summary, our research contributions are: * We develop a novel transferable GNN for shortest path calculation that relies exclusively on nodes' positional embeddings, regardless of other features. The presented approach is able to transfer previous learning to new tasks, hence alleviating the problem of scarce labeled data. * We propose a novel GNN-based approach for network vulnerability assessment and potential attack path identification that leverages the inductive ability of GNNs to accommodate the dynamic nature of enterprise networks without requiring continuous retraining. * We demonstrate that, unlike for traditional GNNs, the performance of positional GNN models is enhanced by removing self-loops, achieving an average improvement of \(\approx\) 5% on our six datasets, with a maximum of 9%. * We develop a novel comprehensive model for learning the propensity of vulnerabilities to contribute to attacks compromising critical assets based on the CVSS base metrics, without being bound to specific attack signatures or a pre-defined set of rules for vulnerability interactions. * We formulate an autonomous risk characterization of the detected attack paths based on the network connectivity structure, asset configurations, criticality, and the underlying vulnerabilities' CVSS base score metrics. * We automate the mitigation of high-risk attack paths that could potentially compromise critical assets by tuning the network's zero-trust policies to break the paths without disrupting the network functionalities. * We evaluate our proposed approach, the SPGNN-API, against two baseline models: SPAGAN [11] for GNN-based shortest paths detection and MulVAL [8] for attack paths identification. Our results show that SPGNN-API outperforms the baseline models, achieving an average accuracy of over 95% for GNN shortest path identification. Moreover, our approach successfully identifies 47% more potentially compromised assets than the baseline model, MulVAL. The rest of the paper is organized as follows: In Section II, we survey the literature.
In Section III, we overview the zero-trust network architecture on which we base the attack paths' risk assessment and mitigation. We further review different GNN architectures and their limitations. Section IV details the design of our SPGNN-API framework. We evaluate our model and present our results in Section V. Finally, Section VI concludes our paper. ## II Related Work This work has two major contributions: a novel GNN approach for shortest path identification and an autonomous framework for detecting and mitigating attack paths in dynamic and complex networks. To highlight the novelty of our work, in this section, we survey the literature and differentiate our contributions from previous studies related to network vulnerability assessment and attack graph generation (Sec. II-A) and GNN-based distance encoding and shortest path identification (Sec. II-B). ### _Network Attack Graph and Vulnerability Assessment_ We classify the existing approaches for vulnerability assessment into three main categories: traditional attack graphs/trees, ML/DL-based frameworks, and GNN-based approaches. **Traditional attack graph/tree vulnerability assessment frameworks.** This class of models examines the interplay between the network vulnerabilities and the extent to which attackers can exploit them, offering a structured representation of the sequence of events that can potentially lead to the compromise of network assets [1, 2, 3, 4]. However, a major limitation of these models is their inability to adapt to dynamic changes in the network structure. Any modification to the network structure requires the regeneration of the attack graph. **Deep learning vulnerability assessment frameworks.** Previous studies have explored the use of deep-learning-based (DL) approaches for vulnerability assessment and attack path detection [5, 6, 7]. To identify potential attack paths in a network, information about the network structure and configurations is essential. However, in DL-based approaches, the network structure information is not learned, unlike in GNNs, and is instead provided as input to the DL model. Therefore, the structure-based input needs to be re-generated every time there is a change in the network structure, which may also require retraining the entire DL model. **Graph neural network vulnerability assessment frameworks.** Recently, several approaches based on GNNs have been proposed for cyber security tasks such as vulnerability detection [16, 17], anomaly detection [18], malware detection [19], and intrusion detection [20]. However, these approaches, in particular the vulnerability detection models, do not include any risk evaluation process that can help prioritize the detected threats for proactive mitigation. ### _GNN Shortest Path Identification_ The goal of graph representation learning is to create representation vectors of graphs that can precisely capture their structure and features. This is particularly important because the expressive power and accuracy of the learned embedding vectors impact the performance of downstream tasks such as node classification and link prediction. However, the existing GNN architectures have limited capability for capturing the position/location of a given node relative to other nodes in the graph [21] (see Sec. III-E). GNNs iteratively update the representation of each node by aggregating the representations of its neighbors.
Many nodes may share a similar neighborhood structure, and thus the GNN produces the same representation for them, although the nodes may be located in different parts of the graph. Several recent works have addressed this limitation of GNNs. Although some of these approaches have been successful, we present the first GNN-based method that is transferable and can accurately calculate shortest paths using only distance information, without relying on other node or edge features. For instance, in [12], the authors propose a general class of structure-related features called distance encoding, which captures the distance between the node set whose representation is to be learned and each node in the graph. These features are either used as extra node attributes or as controllers of message aggregation in GNNs. The Positional Graph Neural Network (P-GNN) [13] approach randomly samples sets of anchor nodes. It then learns a non-linear distance-weighted aggregation scheme over the anchor sets that represents the distance between a given node and each of the anchor sets. Another approach, SPAGAN [11], conducts paths-based attention in node-level aggregation to compute the shortest path between a center node and its higher-order neighbors. SPAGAN, therefore, allows more effective aggregation of information from distant neighbors into the center node. ## III Background In this section, we overview the Zero-Trust architecture and the related policies' governance and compliance, on which we base the risk assessment, triage, and mitigation of the detected attack paths (Sec. III-A, III-B). As the proposed framework relies on shortest paths calculation to identify attack paths, we briefly explain the shortest path identification problem (Sec. III-C) and discuss the processing of graph data with GNNs (Sec. III-D). We highlight the limitations of existing GNN architectures (Sec. III-E) that have motivated our novel GNN-based model for shortest path identification. ### _Zero-Trust Architecture_ Zero-trust (ZT) is a comprehensive approach to securing corporate or enterprise resources and data, including identity, credentials, access management, hosting environments, and interconnecting infrastructure. ZT architecture (ZTA) can be enacted in various ways for workflows. For instance, micro-segmentation [22] enforces ZTA by creating secure zones in cloud and data-center environments, isolating and securing different application segments independently. It further generates dynamic network-layer access control policies that limit network and application flows between micro-segments based on the characteristics and risk appetite of the underlying network's assets. Micro-segmentation is implemented via a distributed virtual firewall that regulates access based on network-layer security policies for each micro-segment. By limiting access to only what is necessary, micro-segmentation helps to prevent the spread of attacks within a network. The ZT micro-segmentation policies are defined as: **Definition 1**.: _ZT policies refer to network-layer policies that the micro-segmentation distributed firewalls enforce to control the internal communications of the network. These policies follow the format: < Source Micro-Segment IP Range > < Destination Micro-Segment IP Range > < Protocol > < Port Range >._ ### _Governance and Compliance_ The visibility of the network micro-segments' underlying assets' characteristics and criticality is crucial for the optimal management of network communication policies.
To achieve this purpose, a semantic-aware tier, called "governance", is used with the ZT policies to ensure their compliance with the best practices for communication between the network assets [23]. The governance tier uses semantic tags (e.g. Database, Web Server, etc.) to perform a risk-aware classification of the micro-segments and underlying assets based on the criticality of the data stored, transported, or processed by the micro-segment assets and their accessibility [24]. In this work, we consider eight criticality levels for classifying the network micro-segments, as detailed in Table I. \begin{table} \begin{tabular}{|l|l|} \hline **Level** & **Description** \\ \hline 0 & Untagged/unknown \\ 1 & Untrusted and external/public, e.g. internet 0.0.0.0/0 \\ 2 & Trusted external, e.g. vendor \\ 3 & Internet facing \\ 4 & Untrusted and internal, e.g. users \\ 5 & Internal and connecting to untrusted internal, e.g. web servers \\ 6 & Internal and connecting to data or non-critical data \\ 7 & Critical data \\ \hline \end{tabular} \end{table} TABLE I: Assets criticality levels and associated descriptions. This table is generated following the study in [24] in conjunction with guidance from the security team administrators of the two enterprises contributing to this study. It is worth mentioning that the governance rules are generated following the best network communication practices. They are tuned per organization based on the network structure and business processes. A governance rule is defined as follows: **Definition 2**.: _A governance rule represents the best practice of who/what communicates with the different network assets. It relies on the micro-segments' assigned tags to assess the communications enabled through the network ZT policies. A governance rule has the following format: \(<\) Source Tag \(>\)\(<\) Destination Tag \(>\)\(<\) Service Tag \(>\)._ The Governance module assesses the compliance of each ZT policy with the respective governance rule. Consider \(P\) to be the set of governance rules. Governance-compliant connections, denoted by \(CC\), are defined as follows: **Definition 3**.: _Compliant connections are communications allowed by the ZT policies that comply with the defined governance rules. Let \(CC\) denote the set of compliant edges (connections enabled by the ZT policies), where \(CC\subseteq\left\{(x,y,s)\in E\mid\left(\mathit{tag}(x),\mathit{tag}(y),s\right)\in P\right\}\), and let \(\mathit{tag}(v)\) be a function that identifies the governance tag assigned to vertex \(v\in V\)._ For instance, the ZT policy \(<\) Human-Resources Web Server IP Address \(>\)\(<\) Human-Resources Application Server IP Address \(>\)\(<\) TCP \(>\)\(<\) 443 \(>\) is compliant with the governance rule \(<\) Web Server \(>\)\(<\) Application Server \(>\)\(<\) Secure Web \(>\). Hence, all communications enabled through the above ZT policy are marked safe. Similarly, we denote by \(NC\) the set of non-compliant edges. In a network setting, _compliant_ connections are usually considered trusted as per the governance policies. The criticality of a non-compliant connection amongst the assets is a function of the trust rating of its incident vertices, i.e., assets. In this work, we are mostly concerned with attack paths potentially compromising highly-critical assets, in particular the ones incorporating non-compliant connections, which imply a relatively higher risk of being exploited. In this context, we define highly-critical assets as follows: **Definition 4**.: _Highly-critical assets are network resources that are considered valuable due to the sensitivity of the data they host (e.g. databases). Let \(V_{critical}\) denote the set of nodes with maximum criticality. Formally, \(V_{critical}=\left\{v\mid v\in V\ \wedge\ c_{v}=7\right\}\), where \(c_{v}\) is the criticality rating of node \(v\) implied by the assigned governance tag._
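To make Definitions 1-3 concrete, a minimal Python sketch of the compliance check is given below; the data layout (tag map, rule triples) and the example IP ranges are our own illustrative choices, and services are assumed to be already normalized to service tags.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ZTPolicy:
    src_segment: str   # source micro-segment IP range
    dst_segment: str   # destination micro-segment IP range
    service: str       # service tag, e.g. "Secure Web"

def is_compliant(policy, seg_tags, governance_rules):
    """Definition 3: a policy is compliant when the (source tag,
    destination tag, service) triple belongs to the rule set P."""
    triple = (seg_tags[policy.src_segment],
              seg_tags[policy.dst_segment],
              policy.service)
    return triple in governance_rules

# Example mirroring the text: HR web server -> HR application server
rules = {("Web Server", "Application Server", "Secure Web")}
tags = {"10.0.1.0/24": "Web Server", "10.0.2.0/24": "Application Server"}
print(is_compliant(ZTPolicy("10.0.1.0/24", "10.0.2.0/24", "Secure Web"),
                   tags, rules))   # True
```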
### _Shortest Path Identification_ Shortest path (SP) algorithms (e.g. Bellman-Ford, Dijkstra's) are designed to find a path between two given vertices in a graph such that the total sum of the weights of the edges is minimum. Our proposed framework relies on shortest paths calculation to identify the eminent worst-case scenario for potential cyber-attacks compromising highly-critical assets. In this context, we define a critical attack path as follows [25]: **Definition 5**.: _An attack path is a succinct representation of the sequence of connections (enabled by ZT policies) through vulnerable assets that an attacker needs to exploit to eventually compromise a highly-critical asset._ The time complexity of shortest path (SP) algorithms on a directed graph can be bounded, as a function of the number of edges and vertices, by \(\mathit{O}\left(VE\right)\)[26]. However, the complexity of SP algorithms can be improved by using GNNs to approximate the distance between nodes in a graph. After training a neural network, the time complexity of finding the distance between nodes during the inference phase is constant, i.e. \(O\left(1\right)\). ### _Processing Graph Data with GNNs_ The goal of graph representation learning is to generate graph representation vectors that capture the structure and features of graphs accurately. Classical approaches to learning low-dimensional graph representations [27, 28] are inherently transductive. They make predictions on nodes in a single, fixed graph (e.g. using matrix-factorization-based objectives) and do not naturally generalize to unseen graph elements. Graph Neural Networks (GNNs) [29, 30] are categories of artificial neural networks for processing data represented as graphs. Instead of training individual embeddings for each node, GNNs _learn_ a function that generates embeddings by sampling and aggregating features from a node's local neighborhood, which allows them to efficiently produce embeddings for previously unseen data. This inductive approach to generating node embeddings is essential for evolving graphs and networks constantly encountering unseen nodes. GNNs broadly follow a recursive neighborhood aggregation (or message passing) scheme, where each round of neighborhood aggregation is a hidden layer \(l\) in the GNN. Let \(\mathit{G=\left(V,E\right)}\) denote a directed graph with nodes \(V\) and edges \(E\), and let \(\mathcal{N}(v)\) be the neighborhood of a node \(v\), where \(\mathcal{N}(v)=\left\{u\in V\mid(v,u)\in E\right\}\). For each layer, or each message passing iteration, a node \(v\) aggregates information from its sampled neighbors \(\mathcal{N}\left(v\right)\) as described in Equation 1. \[h_{v}^{l}=\sigma\left(M^{l}\cdot\Lambda\left(\{h_{v}^{l-1}\}\cup\left\{w_{e} h_{u}^{l-1},\forall u\in\mathcal{N}(v)\right\}\right)\right) \tag{1}\] The aggregated information is computed using a differentiable function \(\Lambda\) and a non-linear activation function \(\sigma\). \(w_{e}\) is the edge feature vector from node \(v\) to node \(u\). The set of weight matrices \(M^{l},\forall l\in\left\{1,\ldots,L\right\}\), is used to propagate information between layers. After undergoing \(k\) rounds of aggregation, a node is represented by its transformed feature vector, which encapsulates the structural information of the node's \(k\)-hop neighborhood, as described in [31].
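A minimal PyTorch sketch of the aggregation in Equation 1, using a mean aggregator over weighted neighbor messages, is shown below; the adjacency-list and edge-weight containers are illustrative rather than the data structures used in our implementation.

```python
import torch
import torch.nn as nn

class MeanAggLayer(nn.Module):
    """One round of the neighborhood aggregation of Equation 1:
    sigma(M . Lambda({h_v} U {w_e h_u})) with Lambda = mean, sigma = ReLU."""

    def __init__(self, dim_in, dim_out):
        super().__init__()
        self.M = nn.Linear(dim_in, dim_out)   # weight matrix M^l

    def forward(self, h, adj, edge_w):
        # h: (|V|, dim_in); adj[v] lists v's neighbors; edge_w[(v, u)]
        # is the scalar edge weight from v to u
        out = []
        for v, neigh in enumerate(adj):
            msgs = [h[v]] + [edge_w[(v, u)] * h[u] for u in neigh]
            out.append(torch.stack(msgs).mean(dim=0))   # Lambda = mean
        return torch.relu(self.M(torch.stack(out)))     # sigma = ReLU
```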
### _GNNs Expressive Power_ The success of neural networks is based on their strong expressive power that allows them to approximate complex non-linear mappings from features to predictions. GNNs learn to represent nodes' structure-aware embeddings in a graph by aggregating information from their \(k\)-hop neighboring nodes. However, GNNs have limitations in representing a node's location or position within the broader graph structure [12]. For instance, two nodes that have topologically identical or isomorphic local neighborhood structures and share attributes, but are in different parts of the graph, will have identical embeddings. The bounds of the expressive power of GNNs are defined by the 1-Weisfeiler-Lehman (WL) isomorphism test [21]. In other words, GNNs have limited expressive power as they yield identical vector representations for subgraph structures that the 1-WL test cannot distinguish, which may be very different [12, 13]. ## IV Proposed Framework SPGNN-API In this section, we present our proposed framework that aims to achieve end-to-end autonomous identification, risk assessment, and proactive mitigation of potential network attack paths. As depicted in Figure 1, the SPGNN-API consists of five modules: (a) network micro-segmentation, (b) governance and compliance, (c) network data pre-processing, (d) GNN-based calculation of shortest paths to critical assets, and (e) risk triage and proactive mitigation. We elaborate on these modules in the following subsections. ### _Micro-Segmentation_ First, we represent a given network as a directed connectivity graph. Let \(C(V,E,S)\) be a labeled, directed graph that represents the network's connectivity, where \(V\) is the set of graph vertices representing the network assets (servers and cloud resources). The set of graph-directed edges \(E\) indicates the connected vertices' communication using the service identified through the edge label \(s\in S\). Here \(S\) denotes the set of network services that are defined by a protocol and port range, and \(E\subseteq\{(v,u,s)\mid(v,u)\in V^{2}\wedge v\neq u\wedge s\in S\}\). We derive the set of feature vectors characterizing the graph vertices (network assets) and edges (incident assets' communication) from layer-3 and layer-4 network flow packet headers. This includes features such as frequently used ports and protocols, frequent destinations, and flow volume. Our approach assumes that assets within the same micro-segment exhibit similar communication patterns. To automatically identify the network micro-segments, we use attentional embedded graph clustering [32], a deep embedded clustering method based on a graph attentional auto-encoder. The clustering algorithm aims at partitioning the connectivity graph \(C=(V,E,S)\) into \(k\) sub-graphs representing the network micro-segments. It learns the hidden representation of each network asset, by attending to its neighbors, to combine the features' values with the graph structure in the latent representation. We stack two graph attention layers to encode both the structure and the node attributes into a hidden representation.
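As an illustration of the two stacked graph attention layers, a minimal encoder sketch using PyTorch Geometric's GATConv is shown below; the layer sizes are placeholders, and the attentional decoder and clustering objective of [32] are omitted.

```python
import torch
import torch.nn as nn
from torch_geometric.nn import GATConv

class GATEncoder(nn.Module):
    """Two stacked graph-attention layers encoding node attributes and
    graph structure into a latent representation for clustering."""

    def __init__(self, n_features, hidden=64, latent=16):
        super().__init__()
        self.gat1 = GATConv(n_features, hidden)
        self.gat2 = GATConv(hidden, latent)

    def forward(self, x, edge_index):
        z = torch.relu(self.gat1(x, edge_index))
        return self.gat2(z, edge_index)   # latent embeddings to cluster
```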
We stack two graph attention layers to encode both the structure and the node attributes into a hidden representation. ### _Governance and Compliance_ Each micro-segment is assigned a "governance" tag implying its underlying assets' criticality and risk appetite. For instance, a _web server_ asset criticality is lower than a _database_. To automate the assignment of tags, we further assess the network flows in terms of communication patterns and frequently used ports and protocols to identify the dominating service(s) used by each micro-segment's underlying assets. For instance, a micro-segment mostly using TCP 80 for communication is most likely a web server while a micro-segment substantially using TCP 3306 is presumably a database. The detailed process of application profile assignment and the handling of dynamic ports is beyond the scope of this paper. We then automate the generation of the ZT policies to govern the communication between the micro-segments at the network layer. We first identify all attempted communications in the network and automatically generate ZT policies to enable all these communications. We compare the generated Fig. 1: SPGNN-API framework architecture where Sub-figure a) illustrates the micro-segmentation process through attentional embedded graph clustering of the network based on layer 2 and 3 flow packets header analysis and the network connectivity graph. This process is followed by a GNN-based model for generating the ZT policies governing the communication between the micro-segments as detailed in Sub-figure b). Sub-figure c) describes the network data pre-processing stage to illuminate the edges that cannot be part of an attack path. The updated graph is then used to identify the shortest paths to highly-critical assets as illustrated in sub-figure d). Finally, edges are classified as either safe, compliant critical, or non-compliant critical. The ZT policies are then tuned to block the latter class of edges. policies with the governance rules and highlight the non-compliant policies. We further assess the risk imposed by the non-compliant connections based on the criticality of the incident edges and the network topology. We then formulate a GNN model for tuning the ZT policies to reduce the risks without disrupting the network functionalities. The details of this process are beyond the scope of this paper. ### _Network Data Pre-processing_ SPGNN-API relies on shortest paths calculation to predict imminent attack paths. We aim to pre-process the network connectivity graph by identifying edges that can potentially contribute to attack paths and filter out the edges that cannot be exploited by an attacker. This pre-processing stage ensures that all calculated shortest paths do represent attack paths. An essential step toward the identification of an attack path is locating network vulnerabilities and assessing their severity which directly impacts the risk imposed by potential attacks exploiting these vulnerabilities. To locate the network vulnerabilities, we utilize a port scanner (e.g. Nessus). We then rely on the NIST Common Vulnerability Scoring System (CVSS) base metrics [10] to identify the features and severity of the detected vulnerabilities. We identify edges potentially contributing to critical attack paths following an exclusion methodology. We filter out edges that cannot be exploited by attackers based on a pre-defined set of criteria. 
### _GNN Model for Shortest Paths Identification_ We formulate and develop a transferable GNN model for shortest path identification. Our approach involves identifying the shortest paths to a predefined set of nodes representing highly-critical assets in a network. By identifying the shortest path, representing the minimum set of exploits an attacker would need to compromise such highly-critical assets, we account for the worst-case scenario for potential attacks. We base our framework on the Position Graph Neural Network (P-GNN) model. The P-GNN approach randomly samples sets of anchor nodes. It then learns a non-linear distance-weighted aggregation scheme over the anchor sets that represents the distance between a given node and each of the anchor sets [13]. We enhance the P-GNN architecture in four ways. Firstly, we recover the actual shortest path distance from the node embeddings through a transferable GNN model. Secondly, we identify the shortest path length to a predefined set of nodes representing high-criticality assets rather than to a randomly distributed set of anchors. Thirdly, we update the message function to only consider the position information for calculating the absolute distances, independent of node features. Lastly, since we aim to identify high-risk network connections, we embed the shortest path distance as an edge feature. **Anchor Sets.** We formulate a strategy for selecting anchors and assigning critical assets to anchor sets. Let \(n\) be the number of highly-critical assets in the network. We first mark anchors around the nodes representing highly-critical assets, where each anchor set holds only one critical asset. As per the original P-GNN model, to guarantee a low-distortion embedding, at least \(k\) anchors are sampled, where \(k=c\log^{2}|V|\) and \(c\) is a constant. If the number of critical assets \(|V_{critical}|<k\), the remaining anchors are sampled randomly, where each node in \(V\setminus V_{critical}\) is sampled independently. The anchor sets' sizes are distributed exponentially and calculated as follows: \[|Anchor_{i}|=\lfloor\frac{|V|}{2^{i+1}}\rfloor,\quad i\in\{0,\ldots,k\} \tag{2}\]
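A sketch of this anchor-selection strategy is given below; one critical asset seeds each of the first anchor sets, at least \(k=c\log^{2}|V|\) sets are drawn in total, and set sizes decay exponentially per Equation 2 (a base-2 logarithm is assumed here).

```python
import math
import random

def sample_anchor_sets(nodes, critical_assets, c=1.0):
    """Select anchor sets: critical assets first, random filler nodes from
    V \\ V_critical, sizes |Anchor_i| = floor(|V| / 2^(i+1))."""
    k = max(len(critical_assets), int(c * math.log2(len(nodes)) ** 2))
    rest = [v for v in nodes if v not in critical_assets]
    anchor_sets = []
    for i in range(k):
        size = max(1, len(nodes) // 2 ** (i + 1))
        seed = [critical_assets[i]] if i < len(critical_assets) else []
        filler = random.sample(rest, min(max(0, size - len(seed)), len(rest)))
        anchor_sets.append(seed + filler)
    return anchor_sets
```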
### _GNN Model for Shortest Paths Identification_

We formulate and develop a transferable GNN model for shortest path identification. Our approach involves identifying the shortest paths to a predefined set of nodes representing highly-critical assets in a network. By identifying the shortest path representing the minimum set of exploits an attacker would need to compromise such highly-critical assets, we account for the worst-case scenario for potential attacks. We base our framework on the Position Graph Neural Network (P-GNN) model. The P-GNN approach randomly samples sets of anchor nodes. It then learns a non-linear distance-weighted aggregation scheme over the anchor sets that represents the distance between a given node and each of the anchor sets [13]. We enhance the P-GNN architecture as follows. Firstly, we recover the actual shortest path distance from the node embeddings through a transferable GNN model. Secondly, we identify the shortest path length to a predefined set of nodes representing high-criticality assets rather than a randomly distributed set of anchors. Thirdly, we update the message function to only consider the position information for calculating the absolute distances, independent of node features. Lastly, since we aim to identify high-risk network connections, we embed the shortest path distance as an edge feature.

**Anchor Sets.** We formulate a strategy for selecting anchors and assigning critical assets to anchor sets. Let \(n=|V_{critical}|\) be the number of highly-critical assets in the network. We first mark anchors around the nodes representing highly-critical assets, where each anchor set holds only one critical asset. As per the original P-GNN model, to guarantee a low-distortion embedding, at least \(k\) anchors are sampled, where \(k=c\log^{2}|V|\) and \(c\) is a constant. If the number of critical assets \(|V_{critical}|<k\), the remaining anchors are sampled randomly, where each node in \(V\setminus V_{critical}\) is sampled independently. The anchor set sizes are distributed exponentially and are calculated as follows:

\[|Anchor_{i}|=\left\lfloor\frac{|V|}{2^{i+1}}\right\rfloor,\quad i\in\{0..k\} \tag{2}\]

**Objective Function.** The goal of the SPGNN is to learn a mapping \(V\times V_{critical}^{k}\mapsto\mathbb{R}^{+}\) to predict the actual minimum shortest path distances from each \(u\in V\) to \(V_{critical}\), where \(k=|V_{critical}|\). Hence, unlike the original P-GNN objective function, defined for the downstream learning tasks using the learned positional embeddings (e.g. membership to the same community), our objective is formulated for learning the actual shortest path length as follows:

\[\min_{\phi}\sum_{\forall u\in V}\mathcal{L}\left(\min_{i\in\{1..k\}}\hat{d}_{\phi}\left(u,v_{i}\right)-\min_{i\in\{1..k\}}d_{y}\left(u,v_{i}\right)\right)\]
\[\min_{\phi}\sum_{\forall u\in V}\mathcal{L}\left(\min\left(\hat{d}_{\phi}\left(u,V_{critical}\right)\right)-\min\left(d_{y}(u,V_{critical})\right)\right) \tag{3}\]

where \(\mathcal{L}\) is the mean squared error (MSE) loss function to be minimized, \(\hat{d}_{\phi}\left(u,V_{critical}\right)\) is the vector of learned approximations of the shortest path distance from a node \(u\) to every critical asset \(v\in V_{critical}\), and \(d_{y}\) is the observed shortest path distance. As the model aims to identify the risk imposed by critical paths, we account for the worst-case scenario by considering the minimum shortest path length from the (vulnerable) node to a highly-critical asset. Therefore, the loss is computed only on the minimum of the distance vector.

**Message Passing.** The message-passing function, in our approach, exclusively relies on the position information to calculate the absolute distances to the anchor sets and disregards node features. To calculate position-based embeddings, we follow the original P-GNN \(q\)-hop approach, where the 1-hop distance \(d_{sp}^{1}\) can be directly inferred from the adjacency matrix. During the training process, the shortest path distances \(d_{sp}^{q}(u,v)\) between a node \(u\) and an anchor node \(v\) are calculated as follows [13]:

\[d_{sp}^{q}(u,v)\mapsto\begin{cases}d_{sp}(u,v),&\text{if }d_{sp}(u,v)<q\\ \infty&\text{otherwise,}\end{cases} \tag{4}\]

where \(d_{sp}(u,v)\) is the shortest path distance between a pair of nodes. Since the P-GNN aims to map nodes that are close (in position) in the network to similar embeddings, the distance is further mapped to a range in \((0,1)\) as follows [13]:

\[s(u,v)=\frac{1}{d_{sp}^{q}(u,v)+1} \tag{5}\]

Accordingly, the message-passing process is defined as:

\[h_{u}=\phi\left(x_{u}\oplus_{v\in A}\psi(u,v)\right) \tag{6}\]

where \(A\) is the set of anchor nodes, \(h_{u}\) represents the node embedding of the vertex \(u\), and \(x_{u}\) is the input feature vector of the node \(u\) inferred based on the adjacency matrix. \(\oplus\) is the aggregation function; in our approach, we found that the mean aggregation function provides the best performance. \(\psi\) is the message function and is computed as described in Equation 5. Finally, \(\phi\) is the update function to obtain the final representation of node \(u\).
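A minimal sketch of the anchor sampling and the \(q\)-hop distance mapping above (Equations 2, 4, and 5); the sampling details are our own simplification of the original P-GNN scheme:

```python
import math
import random
import networkx as nx

def sample_anchor_sets(g, critical_nodes, c=1):
    """One singleton anchor set per critical asset, padded with random sets
    up to k = c * log^2 |V|, with exponentially decaying sizes (Equation 2)."""
    nodes = list(g.nodes)
    k = max(len(critical_nodes), int(c * math.log2(len(nodes)) ** 2))
    anchor_sets = [{v} for v in critical_nodes]
    non_critical = [v for v in nodes if v not in set(critical_nodes)]
    for i in range(len(anchor_sets), k):
        size = max(1, len(nodes) // 2 ** (i + 1))
        anchor_sets.append(set(random.sample(non_critical,
                                             min(size, len(non_critical)))))
    return anchor_sets

def q_hop_message(g, u, v, q=2):
    """Equations 4 and 5: map the q-truncated shortest path distance to (0, 1];
    unreachable or beyond-q distances (infinity) map to 0."""
    try:
        d = nx.shortest_path_length(g, u, v)
    except nx.NetworkXNoPath:
        return 0.0
    return 1.0 / (d + 1) if d < q else 0.0  # s(u, v)
```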
**Recovery of true paths length.** We aim to learn the true shortest path length by pulling the value of the node embedding closer to the labels during the learning process. To this end, we rely on the MSE loss function to minimize the deviation between the predicted and observed shortest path distances. To recover the true path length from the learned positional embedding, we introduce four steps to the P-GNN learning process after learning the node embeddings through message passing: Firstly, for every node \(u\in V\), we calculate the absolute distance (AD) of the learned node embeddings between \(u\) and each critical asset \(v\in V_{critical}\). Secondly, we assign the minimum value of the calculated AD to the node \(u\). Thirdly, as the calculated AD is not necessarily an integer value, we approximate the assigned AD to an integer value to represent the predicted shortest path distance. Lastly, we attribute the approximated shortest path value to the incident edge features.

_(1) Absolute Distance (AD) of node embedding._ We particularly use the AD function since it is less impacted by outliers and hence more robust. This is particularly significant since complex network structures are characterized by a high variance in the criticality of the assets and the path-length distributions. For every node \(u\in V\), we calculate a vector of absolute distances \(T_{u}\) between the learned embedding of \(u\), denoted as \(h_{u}\), and the embedding of every critical asset \(v_{i}\in V_{critical}\), denoted as \(h_{v_{i}}\). \(h_{u}\) and \(h_{v_{i}}\) are calculated as described in Equation 6. The AD vector is calculated as follows, where \(k\) is the embedding space dimension:

\[AD(u,v)=\sum_{n=1}^{k}|h_{u}^{n}-h_{v}^{n}| \tag{7}\]
\[T_{u}=\left[AD(u,v_{i})\right]_{v_{i}\in V_{critical}}\]

\(T_{u}\) is then used in Equation 3 to calculate the loss, where \(\hat{d}\left(u,V_{critical}\right)=T_{u}\).

_(2) Minimum absolute distance to a critical asset._ The downstream task is concerned with identifying the risk imposed by potential attack paths. If a node \(u\in V\) has (shortest) paths to multiple critical assets, we account for the worst-case scenario by identifying the minimum length of the shortest paths \(z_{u}\) and assigning its value as a feature for node \(u\). It is calculated as follows:

\[z_{u}=\min_{i\in\{1...k\}}T_{u}^{i} \tag{8}\]

where \(k=|V_{critical}|\).

_(3) Approximation of path length._ We identify two approaches for approximating the learned minimum shortest path length \(z_{u}\) of a certain node \(u\). The first approach, denoted as \(SPGNN_{R}\), relies on simple rounding of the shortest path length. This naive approach is rather intuitive and is fully transferable, as discussed in Section V. The predicted distance \(SP_{R}(u)\) is then calculated as follows:

\[SP_{R}:V\mapsto\mathbb{N},\qquad SP_{R}(u)\mapsto Round(z_{u}) \tag{9}\]
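A minimal sketch of steps (1)-(3) for the rounding variant \(SPGNN_{R}\), assuming the positional embeddings have already been learned (tensor and function names are ours):

```python
import torch

def predict_min_shortest_path(h: torch.Tensor, critical_idx: list) -> torch.Tensor:
    """h: [num_nodes, dim] learned positional embeddings.

    Returns, for every node, the rounded minimum absolute (L1) distance
    to any critical asset's embedding -- the SPGNN_R estimate of the
    minimum shortest path length (Equations 7-9).
    """
    h_crit = h[critical_idx]          # [num_critical, dim]
    # T_u: L1 distances between every node and every critical asset (Eq. 7).
    T = torch.cdist(h, h_crit, p=1)   # [num_nodes, num_critical]
    z = T.min(dim=1).values           # z_u, Equation 8
    return torch.round(z)             # SP_R(u), Equation 9
```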
The second approach, \(SPGNN_{DNN}\), relies on a deep neural network (DNN) to learn a mapping between the learned shortest path length and its integer representation. To overcome the inaccuracies induced by rounding the AD, we aim to increase the separation between the labels representing the observed path lengths. Since the downstream task is concerned with assessing the risks imposed by the attack paths, we restrict the detection of paths to a certain range of length values that are anticipated to induce high risks. Accordingly, we transform the path identification into a classification task where the learned embeddings are mapped to a class representing a path length within the range of interest. The goal of the DNN is to learn a mapping to predict the integer representation of the minimum shortest path distance \(z_{u}\) described in Equation 8 from each \(u\in V\) to \(V_{critical}\), where \(k=|V_{critical}|\). Accordingly, the objective function is:

\[\min_{\theta}\sum_{\forall u\in V}\mathcal{L}_{c}(g_{\theta}(\lambda_{u}),l) \tag{10}\]

where \(g_{\theta}:\mathbb{R}^{a}\mapsto\mathbb{R}^{b}\) is a function that maps the node features \(\lambda_{u}\) (which include \(z_{u}\)), where \(|\lambda_{u}|=a\), to a label \(l\) in the set of the encoded labels \(L=\{1,...,b\}\), where \(b\) is the threshold of path lengths considered. \(\theta\) denotes the parameters of \(g_{\theta}\) and \(\mathcal{L}_{c}\) is the categorical cross-entropy loss function. In addition to the minimum shortest path distance \(z_{u}\), we enrich the classifier input with additional heuristics of the original P-GNN positional embeddings \(h_{u}\) described in Equation 6. We rely on the intuition that the learned P-GNN embeddings of nodes that share the same shortest path distance are most likely to have similar statistical features. We define the DNN classifier input feature vector \(\lambda_{u}\ \forall u\in V\) as follows:

\[\begin{split}\lambda_{u}=&\big(\max_{v\in V_{critical}}|cos_{sim}(u,v)|,\ \max_{v\in V_{critical}}cross_{entropy}(u,v),\\ &\min(h_{u}),\max(h_{u}),mean(h_{u}),var(h_{u}),norm_{2}(h_{u}),\\ &std(h_{u}),median(h_{u}),z_{u}\big).\end{split} \tag{11}\]

The output of the DNN model is the set of classes representing the different shortest path lengths. We rely on one-hot encoding to represent the output. The predicted distance, denoted as \(SP_{DNN}(u)\), is then calculated as follows:

\[SP_{DNN}:V\mapsto\mathbb{N},\qquad SP_{DNN}(u)\mapsto g_{\theta}(\lambda_{u}) \tag{12}\]

The stacking of a DNN classifier significantly enhances the accuracy of the SPGNN when trained and tested on the same network data. However, it does not perform equally well in a transfer-learning setting, as discussed later in Section V. This can be attributed to the fact that the input to the DNN classifier depends on the learned positional embeddings \(h_{u}\) and is highly impacted by the size and distribution of the anchor sets.

_(4) Shortest path as edge feature._ When it comes to graph representation learning, relying on node features is often more efficient than edge features due to the amount of information contained within the nodes, and the relatively smaller number of nodes as compared to edges. As a result, we begin by predicting the shortest paths as additional node features. Then, we attribute the calculated distance to all _incident edges of the node_, as shown in Figure 2. Let \(v\) be a node in the network, \(SP(v)\) be the learned integer representation of the minimum shortest path for node \(v\), and \(y_{e}\) be the feature vector for edge \(e\). Accordingly, the node features are assigned to their incident edges as follows:

\[\{\forall u\in V\ \wedge\ \exists\,e_{u,v}\in E:\ y_{e_{u,v}}=SP(v)\} \tag{13}\]

Fig. 2: The shortest path length is assigned to the path source node and all its incident edges.
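A minimal sketch of the feature vector of Equation 11 and a small classifier head \(g_{\theta}\). The layer sizes and the reading of the embedding cross-entropy heuristic (interpreted here as a softmax cross-entropy between embeddings) are our assumptions, as the paper does not pin them down:

```python
import torch
import torch.nn.functional as F

def lambda_features(h_u, h_crit, z_u):
    """Build the classifier input of Equation 11 for one node.

    h_u: [dim] node embedding; h_crit: [num_critical, dim];
    z_u: scalar tensor (minimum AD, Equation 8).
    """
    cos = F.cosine_similarity(h_u.expand_as(h_crit), h_crit, dim=1)
    # Assumed interpretation of cross_entropy(u, v) between embeddings.
    ce = -(F.softmax(h_crit, dim=1) * F.log_softmax(h_u, dim=0)).sum(dim=1)
    return torch.stack([
        cos.abs().max(), ce.max(),
        h_u.min(), h_u.max(), h_u.mean(), h_u.var(),
        h_u.norm(p=2), h_u.std(), h_u.median(), z_u,
    ])

# Illustrative classifier g_theta: 10 features -> b path-length classes.
classifier = torch.nn.Sequential(
    torch.nn.Linear(10, 64), torch.nn.ReLU(), torch.nn.Linear(64, 6)
)
# Trained with categorical cross-entropy (Equation 10):
# loss = F.cross_entropy(classifier(batch_of_lambdas), labels)
```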
**Labels.** Manually generated labels are expensive and hard to acquire. Therefore, we rely on a regular shortest path algorithm (e.g. Dijkstra), denoted by \(d_{sp}(u,v)\), to generate the labels for training the SPGNN. We calculate the observed shortest path \(d_{y}\) from a node \(u\) to a critical asset \(v\) as per Equation 14. The calculated shortest path represents the label of the node \(u\).

\[d_{y}(u,v)\mapsto\begin{cases}0&\text{if }v\notin V_{critical}\ \lor\ d_{sp}(u,v)=\emptyset\\ d_{sp}(u,v)&\text{otherwise}\end{cases} \tag{14}\]

### _Risk Triage and Mitigation_

We develop a module to automate the assessment of risks imposed by potential exploitation of the detected attack paths in terms of the propensity and impact of compromising highly-critical assets. We first identify critical attack paths that require immediate intervention based on a pre-defined set of criteria. We then autonomously locate connections (edges) playing a key role in enabling the critical attack paths. Accordingly, we proactively implement the proper mitigation actions. To assess attack path criticality, we introduce a new metric, namely _Application Criticality (AC)_. The asset criticality metric assesses the risk based on the asset's workload (e.g. database, application server, etc.) and the data processed. The AC metric, however, assesses the risk based on the application the asset belongs to. For instance, a human-resources application database with personally identifiable information is assigned a higher AC rating than an inventory application database.

**Application criticality:** Applications can be classified based on the scope of expected damages, if the application fails, as either mission-critical, business-critical, or non-critical (operational and administrative) [33]. Organizations rely on mission-critical systems and devices for immediate operations. Even brief downtime of a mission-critical application can cause disruption and lead to negative immediate and long-term impacts. A business-critical application is needed for long-term operations and does not always cause an immediate disaster. Finally, organizations can continue normal operations for long periods without the non-critical application. Two different companies might use the same application, but it might only be critical to one. Hence, we rely on the security teams of the enterprises contributing to this study to assign the AC. Attack paths are considered critical if they meet the following criteria (a sketch of this filter follows below): (1) The start of the path is an asset with criticality level \(\leq\) 4, implying the ease of accessibility of the asset. (2) The destination highly-critical assets belong to a mission-critical application. (3) The shortest path is of length at most five.

After filtering out non-critical paths, we aim to locate and characterize connections playing a key role in enabling critical attack paths. Accordingly, we model a DNN edge classifier to assess the edges of attack paths. Three output classes are defined, based on which mitigation actions are planned: (1) non-compliant critical, (2) compliant critical, and (3) safe. Non-compliant edges are inherently un-trusted as they do not comply with the organization's communication best practices. Accordingly, non-compliant critical edges are immediately blocked by automatically tuning the ZT policies enabling the connection. Compliant connections represent legitimate organizational communication, hence blocking them might disrupt the network functionalities. Therefore, these connections are highlighted and the associated ZT policies are located. A system warning is generated requesting the network administrators to further assess the highlighted ZT policies. Finally, no actions are required for safe connections.
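As referenced above, a minimal sketch of the path-criticality filter; the attribute names, the numeric criticality scale, and the path representation are illustrative assumptions:

```python
def is_critical_path(path, assets):
    """path: ordered list of node ids from entry point to target.
    assets: per-node metadata dicts with assumed keys `criticality`
    (numeric scale, lower = easier to reach) and `application`
    ("mission", "business", or "non-critical")."""
    src, dst = path[0], path[-1]
    return (
        assets[src]["criticality"] <= 4              # (1) easily accessible entry
        and assets[dst]["application"] == "mission"  # (2) mission-critical target
        and len(path) - 1 <= 5                       # (3) at most five hops
    )
```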
We assess the criticality of attack paths' edges based on the following criteria, representing the input of the DNN classifier:

* \(\text{Feature}_{1}\): The trust level of the edge destination asset.
* \(\text{Feature}_{2}\): The AC rating of the edge destination asset.
* \(\text{Feature}_{3}\): The exploited vulnerability base score of the source asset.
* \(\text{Feature}_{4}\): The shortest path distance from the edge to the highly-critical asset.
* \(\text{Feature}_{5}\): The compliance of the edge.

Let \(f_{\psi}:E\mapsto Y\) be a function that maps the set of edges \(E\) to the set of labels \(Y\) representing the three edge classes, where \(\psi\) denotes the parameters of \(f_{\psi}\). Let \(feat_{e}\) be the input feature vector of the edge \(e\) to be assessed. To optimize the edge classification task, we express the objective function as the minimization of the cross-entropy loss function \(\mathcal{L}_{d}\). We represent this objective function as follows:

\[\min_{\psi}\sum_{\forall e\in E}\mathcal{L}_{d}\left(f_{\psi}(feat_{e}),y\right) \tag{15}\]

## V Results and Evaluation

The evaluation process is threefold: (1) evaluating the performance of the \(SPGNN\) shortest path calculation in a semi-supervised setting (Sec. V-D), (2) assessing the performance in a transfer-learning setting (Sec. V-E), and (3) evaluating the accuracy of identifying critical attack paths and locating key path edges (Sec. V-F).

### _Experimental Settings_

We test the performance of SPGNN in three settings:

**Experiment 1 - evaluating the performance of shortest paths identification.** The focus of this experiment is to evaluate the ability of \(SPGNN_{R}\) and \(SPGNN_{DNN}\) to identify the shortest paths in a semi-supervised setting. We use the same dataset for training and testing. We compare the prediction accuracy with the baseline model \(SPAGAN\). To identify the minimum ratio of labeled data required to achieve satisfactory performance, we use train and test split masks with distribution shifts for all datasets described in Section V-B.

**Experiment 2 - assessing and validating the learning transferability.** This experiment setting is particularly concerned with assessing the learning transferability of the proposed \(SPGNN_{R}\) shortest path identification. We test the transferability by training the model on one dataset and testing it on a different unlabeled dataset.

**Experiment 3 - assessing and validating the attack paths identification.** This experiment aims to assess the end-to-end performance of the SPGNN-API in identifying critical attack paths and highlighting key connections enabling the paths. We test the performance of this task by comparing the model accuracy to labeled synthetic network datasets and real-world datasets of enterprises contributing to this research.

### _Dataset_

Two classes of datasets are used for the proposed model evaluation: (1) enterprise network datasets (two synthetic datasets, \(STD_{1}\) and \(STD_{2}\), and two real-world datasets, \(RTD_{1}\) and \(RTD_{2}\)), and (2) two widely used citation network datasets, Cora [14] and Citeseer [15]. We generate the two synthetic datasets (\(STD_{1}\) and \(STD_{2}\)) to imitate a mid-sized enterprise network setting.
We defined the node configurations and network connections to cover all possible combinations of values for the five features used for assessing the criticality of the attack path's edges. We collect the real-world datasets, denoted by \(RTD_{1}\) and \(RTD_{2}\), from two mid-sized enterprises: a law firm and a university, respectively. We rely on the Nessus scan output to identify the configurations and properties of the network assets as well as the underlying vulnerabilities. We use enterprise-provided firewall rules, ZT policies, and governance rules to define and characterize the assets' communications. Table II lists the details of the datasets used in assessing the performance of our proposed model. In the proposed approach, we identify the path length to a set of anchor nodes representing highly-critical assets. For the citation datasets, we randomly sample nodes to represent highly-critical assets. Since the citation datasets do not represent a real network setting, we limit the evaluation of the attack path identification to the (real-world and synthetic) enterprise network datasets.

### _Baseline Models_

We compare the performance of our proposed model architectures \(SPGNN_{R}\) and \(SPGNN_{DNN}\) with the state-of-the-art baseline \(SPAGAN\) [11] w.r.t. the shortest path identification. The SPAGAN conducts path-based attention that explicitly accounts for the influence of a sequence of nodes yielding the minimum cost, or shortest path, between the center node and its higher-order neighbors. To validate the performance of the SPGNN-API for attack path identification, we generate the network attack graph using the MulVAL tool [8] by combining the output of the vulnerability scanner Nessus [34] and the enterprise network perimeter and zero-trust firewall policies.

### _Evaluation of Shortest Path Detection_

In this section, we assess the performance of the two proposed architectures, \(SPGNN_{R}\) and \(SPGNN_{DNN}\), using all six datasets. We report the mean accuracy of 100 runs with 80%-20% train-test masks and 20 epochs.

\begin{table} \begin{tabular}{l||r|r|r|r|r} \hline \hline **Dataset** & **Nodes** & **Edges** & **Critical** & **Compliant** & **Non-compliant** \\ \hline \(STD_{1}\) & 864 & 5,018 & 284 & 2,002 & 3,016 \\ \(STD_{2}\) & 865 & 5,023 & 284 & 2,002 & 3,021 \\ \(RTD_{1}\) & 221 & 1,914 & 21 & 882 & 1,032 \\ \(RTD_{2}\) & 370 & 21,802 & 70 & 10,901 & 10,901 \\ \(CORA\) & 2,708 & 10,556 & 180 & N/A & N/A \\ \(CITESEER\) & 3,327 & 9,464 & 523 & N/A & N/A \\ \hline \hline \end{tabular} \end{table} TABLE II: Dataset features and statistics.

**Accuracy evaluation:** Table III summarizes the performance of \(SPGNN_{R}\) and \(SPGNN_{DNN}\). While both models can discriminate symmetric nodes by their different distances to anchor sets, we observe that \(SPGNN_{DNN}\) significantly outperforms \(SPGNN_{R}\) across all datasets. This can be attributed to the power of the DNN in capturing the skewed relationships between the generated positional embedding and the defined set of path-length classes. Furthermore, transforming the prediction of path lengths into a classification task, where one-hot encoding is used to represent the output, enables the model to capture the ordinal relationships between the different lengths, hence the gain in performance. Both architectures exhibit performance degradation when tested with the real-world dataset \(RTD_{1}\). Due to the relatively small size of this dataset,
the model could not capture the complex relationships between the network entities during training.

**Self-loops:** In general, adding self-loops allows the GNN to aggregate the source node's features along with those of its neighbors [35]. Nevertheless, since our model relies only on positional embeddings, independent of node features, removing the self-loops enhances the accuracy of SPGNN, as detailed in Table III, since the iterative accumulation of the node positional embedding confuses the learned relative distance to the anchor sets. Accordingly, we introduce a data pre-processing stage to remove self-loops in the real-world network datasets and the citation datasets.

**SPGNN convergence:** We illustrate in Figure 3 the progression of the Mean Squared Error (MSE) loss during the training process of \(SPGNN_{R}\). We particularly assess \(SPGNN_{R}\) since, unlike \(SPGNN_{DNN}\), its output directly reflects the GNN performance without further learning tasks. We observe that the gradient is sufficiently large and proceeds in the direction of steepest descent, which indeed minimizes the objective. The stability and efficacy of the learning process constantly enhance the accuracy of the model irrespective of the dataset characteristics. The objective function is sufficiently smooth, indicating that the model is not under-fitting.

**Analysis of the \(SPGNN_{R}\) generated shortest path distance embedding.** We conducted an in-depth analysis of 20 random samples from the test sets of the six datasets. For each sample, we plot the predicted \(\hat{d}\) vs. rounded \(SP_{pred}\) vs. observed \(d_{y}\) shortest path distances in blue, yellow, and red, respectively, as illustrated in Figure 4. We observe the proximity of the predicted and observed distances, where the predicted values are mostly within +/- 1 hop of the observed values. Hence, we prove the strength of the proposed GNN approach in approximating the shortest path distance. We further notice that the rounded values vastly overlap with the observed values, which further proves the robustness of the simple, yet intuitive, rounding approach.

**Baseline comparison:** We compare the performance of the proposed model with the baseline \(SPAGAN\). We observe that the proposed architectures, in particular \(SPGNN_{DNN}\), strictly outperform \(SPAGAN\) and can capture the skewed relationships in the datasets, as shown in Table III. This can be attributed to the fact that \(SPAGAN\) uses a spatial attention mechanism that only considers the neighboring nodes within a predefined radius around each target node during the learning phase and does not incorporate features of nodes beyond the predefined distance, which impacts the model performance. Furthermore, \(SPAGAN\) (and most state-of-the-art approaches) relies on the graph elements' features to calculate the shortest path distance information. This justifies the performance degradation of \(SPAGAN\) in this case, since only the graph structure and positional embedding are considered. This further proves the strength of the proposed approach, which can identify, with high accuracy, the shortest path distances independent of graph element features.

### _Evaluation of Transfer-Learning_

In this setting, the pre-training and testing processes are executed on distinct datasets.
The goal of the pre-training is to transfer knowledge learned from labeled datasets to facilitate the downstream tasks with the unlabeled datasets. We only consider \(SPGNN_{R}\) for testing in this setting. The stacked DNN of the \(SPGNN_{DNN}\) approach is characterized by a fixed input size and hence is not expandable to accommodate different network structures. To assess the robustness of the model transferability, we pre-train the model using different synthetic and real-world datasets. We observe that, in general, the size and sophistication of the dataset used for pre-training highly impact the performance of the model transferability. In general, training with real data yields better performance. We believe that the significant improvements can be attributed to the ability of \(SPGNN\) to utilize the perturbation in real-world data to consider more complicated interactions between the data samples, which optimizes the model's ability to extend label information to unlabeled datasets. In contrast, pre-training the model with synthetic data and testing on a real dataset slightly hurts the accuracy. The non-perturbed structure of the synthetic data gives limited performance gain and yields negative transfer on the downstream classification task. In general, the results show convincing evidence that the inductive capabilities of the proposed \(SPGNN\) generalize to unseen datasets, as detailed in Table IV.

Fig. 3: \(SPGNN_{R}\) MSE loss convergence for the six datasets.

### _Evaluation of Attack Paths and Critical Edges Detection_

The SPGNN-API does not record the end-to-end sequence of attack steps, as there might be an infinite number of alternatives, as discussed in Section IV-C. It rather identifies the propensity of an edge being part of an attack, i.e. there exists a (shortest) path from that edge to a highly-critical asset going through vulnerable nodes. Accordingly, to evaluate the performance of the attack path detection, we do not rely on an end-to-end assessment of attack paths. We rather assess the propensity of single edges being part of an attack. We evaluate the accuracy of the edge classification (Sec. IV-E) in two different settings: semi-supervised and transfer-learning. We compare the model performance against a baseline (MulVAL). We base our assessment on the four enterprise network datasets, as the citation datasets do not incorporate vulnerability information. We rely on the security teams of the enterprises contributing to this study to manually label connections they would potentially block or patch given the network structure, reported vulnerabilities, and network visualization tools.

**Accuracy assessment:** We assess the performance of the edge classifier in categorizing the attack path edges as either critical compliant, critical non-compliant, or safe. Comparing the output of the classifier to the manually labeled data, we note the performance results in Table V. Since the set of safe edges comprises the attack path edges classified as safe as well as the connectivity graph edges that were not part of any attack path, the recorded model accuracy proves the efficacy of the presented approach in detecting attack paths in general and identifying key critical edges in particular.
In addition to the raw accuracy rates, we report the receiver operating characteristic (ROC) curve and the area under the curve (AUC). We assess the ability of the classifier to discriminate critical and safe edges in general. Accordingly, we combine the critical compliant and critical non-compliant classes. The true positive samples are (compliant/non-compliant) critical samples that have been classified as critical. The false positive samples are safe samples that have been classified as critical. The ROC curve in Figure 5 illustrates outstanding discrimination of the two classes with an AUC score of 0.998.

\begin{table} \begin{tabular}{l|c c c c c c|c c c c c c} \hline \hline & \multicolumn{6}{c|}{**Before deleting self-loops**} & \multicolumn{6}{c}{**After deleting self-loops**} \\ \cline{2-13} **Metrics** & \(STD_{1}\) & \(STD_{2}\) & \(RTD_{1}\) & \(RTD_{2}\) & \(CORA\) & \(CITESEER\) & \(STD_{1}\) & \(STD_{2}\) & \(RTD_{1}\) & \(RTD_{2}\) & \(CORA\) & \(CITESEER\) \\ \hline \(SPGNN_{R}\) \(\mathcal{L}\) & 0.07 & 0.14 & 0.36 & 0.02 & 0.33 & 0.53 & 0.02 & 0.14 & 0.22 & 0.01 & 0.29 & 0.38 \\ \hline \hline Accuracy \(SP_{pred}\) & 90.00\% & 84.04\% & 71.00\% & 94.02\% & 65.70\% & 68.53\% & 98.88\% & 84.47\% & 72.41\% & 97.05\% & 65.34\% & 72.53\% \\ \hline \(Accuracy\pm_{1hop}\) & 100\% & 98.42\% & 91.61\% & 100\% & 96.41\% & 92.65\% & 100\% & 98.50\% & 93.85\% & 100\% & 97.11\% & 94.54\% \\ \hline \hline \(SPGNN_{DNN}\) \(\mathcal{L}_{c}\) & 0.03 & 0.08 & 0.24 & 0.01 & 0.26 & 0.41 & 0.01 & 0.10 & 0.19 & 0.01 & 0.23 & 0.26 \\ \hline Accuracy \(SP_{DNN}\) & 95.63\% & 80.14\% & 53.05\% & 96.10\% & 81.36\% & 79.36\% & 98.45\% & 84.47\% & 78.65\% & 98.25\% & 75.82\% & 81.20\% \\ \hline \(Accuracy\pm_{1hop}\) & 86.45\% & 85.29\% & 86.15\% & 98.65\% & 92.70\% & 84\% & 93.10\% & 91.93\% & 89.23\% & 100\% & 92.94\% & 87.32\% \\ \hline \hline MSE(SPAGAN) & 0.54 & 0.62 & 0.91 & 0.48 & 0.85 & 0.95 & 0.52 & 0.59 & 0.72 & 0.35 & 0.69 & 0.82 \\ \hline Accuracy \(SP_{pred}\) & 52.36\% & 50.14\% & 57.50\% & 82.35\% & 62.12\% & 53.36\% & 54.23\% & 52.36\% & 56.23\% & 85.65\% & 63.26\% & 55.68\% \\ \hline \(Accuracy\pm_{1hop}\) & 86.45\% & 85.29\% & 86.15\% & 98.65\% & 92.70\% & 84\% & 88.20\% & 85.60\% & 84.42\% & 96.75\% & 93.98\% & 83.62\% \\ \hline \hline \end{tabular} \end{table} TABLE III: Overview of shortest paths identification accuracy of \(SPGNN_{R}\) and \(SPGNN_{DNN}\) as compared to the \(SPAGAN\) across the six datasets before and after deleting self-loops.

Fig. 4: Shortest path distance distribution of 20 random samples from each of the six datasets. The blue and yellow points are the \(SPGNN\) predicted distances _before_ and _after_ the application of the rounding process, respectively. The red points are the observed distances. The figures illustrate the accuracy of the predicted distances being within the range [-1,1] of the observed values. We further observe that the majority of the _rounded distances_ are either overlapping with or closer to the _observed distances_. This shows the efficiency of the rounding approach to enhance the shortest path distance prediction accuracy.

**Transfer-learning:** To assess the end-to-end transferability of the presented approach, we train the edge classifier using one dataset and test it using different datasets. The recorded classification accuracy in Table VI proves the inductive capabilities of \(SPGNN\) and its ability to efficiently characterize previously unseen data.
In line with our expectations, training the model using a real dataset performs better on all datasets. The model's capacity to extend the label information to previously unseen datasets is enhanced by the perturbations in real-world datasets that enable the classifier to consider more complex interactions between the data samples. To plot the ROC curve, we combine the critical compliant and critical non-compliant classes and assess the model's ability to discriminate the critical and safe edges. The ROC curve in Figure 6 illustrates outstanding discrimination of the two classes with an AUC score between 0.93 and 0.98.

**Baseline comparison:** We compare the SPGNN-API with the MulVAL attack graph generator. The MulVAL-generated attack graph nodes can be of three types: configuration nodes, privilege nodes (exploits), and attack step nodes (conditions). The privilege nodes represent compromised assets. The root nodes of the attack graph represent network configurations/vulnerabilities contributing to attack possibilities. The set of paths of the attack graph comprises all directed attack paths starting at the root configuration nodes and ending at the privilege nodes (attack goals). We configure the attack path generation to have all highly-critical assets as attack goals. We assess the attack step nodes and note the ZT policies that have been a step in achieving the attack privilege. We then compare the noted rules to the set of rules that have been flagged as critical by the SPGNN-API. We perform the analysis relying on \(RTD_{2}\), since no Nessus scan is available for the synthetic datasets and we had limited access to the \(RTD_{1}\) Nessus output for privacy reasons. The dataset has 370 nodes, of which 70 are highly-critical assets. Nessus identified 44 vulnerable assets, of which six are highly critical. All six assets have been identified as potentially compromised by both MulVAL and \(SPGNN-API\). The \(SPGNN\), however, outperformed MulVAL by detecting more potentially compromised non-critical assets, as detailed in Table VII. This proves the significance of the presented holistic approach to vulnerability interaction. Further assessing the generated attack paths, we observe that SPGNN-API labeled 713 edges (and underlying ZT policies) as critical, while only 171 appeared as a MulVAL attack step. This can be attributed to the fact that MulVAL restricts the detection of potential attacks to a predefined set of vulnerability interactions, while the SPGNN-API assumes that any vulnerability can potentially be exploited by the attacker irrespective of any prerequisites. Of the 171 edges detected by MulVAL, our approach was able to detect 166. The five edges we missed connect level 7 highly-critical assets to level 1 assets. Since we aim to protect highly-critical assets, these edges are not considered critical as per our features.

## VI Conclusion

This work presents the first attempt at GNN-based identification of attack paths in dynamic and complex network structures.
Our work fills the gaps and extends the current literature with a novel GNN-based approach to automated vulnerability analysis, attack path identification, and risk assessment of the underlying network connections that enable critical attack paths. We further present a framework for automated mitigation through a proactive, non-person-based, timely tuning of the network firewall rules and ZT policies to bolster cyber defenses before potential damage takes place. We model a novel GNN architecture for calculating shortest path lengths exclusively relying on nodes' positional information, independent of graph elements' features. We prove that removing self-loops enhances the accuracy of shortest path distance identification, as self-loops render the nodes' positional embedding misleading. Furthermore, our in-depth analysis of attack path identification proves the efficiency of the presented approach in locating, with high accuracy, key connections potentially contributing to attacks compromising network highly-critical assets. A key strength of the presented approach is not limiting the attacks' detection to a predefined set of possible vulnerability interactions. Hence, it is capable of effectively and proactively mitigating cyber risks in complex and dynamic networks where new attack vectors and increasingly sophisticated threats are emerging every day.

\begin{table} \begin{tabular}{c c c c c} \hline \hline & \multicolumn{4}{c}{**Dataset**} \\ \cline{2-5} **Metrics** & \(STD_{1}\) & \(STD_{2}\) & \(RTD_{1}\) & \(RTD_{2}\) \\ \hline Cross\_Entropy Loss & \(0.095\) & \(0.0061\) & \(0.01\) & \(0.007\) \\ \hline Accuracy & \(99.5\%\) & \(100\%\) & \(99.11\%\) & \(100\%\) \\ \hline \hline \end{tabular} \end{table} TABLE V: Performance overview of the \(SPGNN\) edge criticality classification in the semi-supervised setting.

\begin{table} \begin{tabular}{c c c c c c c} \hline \hline & \multicolumn{3}{c}{**Model trained by** \(RTD_{1}\)} & \multicolumn{3}{c}{**Model trained by** \(STD_{1}\)} \\ \cline{2-7} **Metrics** & \(STD_{1}\) & \(STD_{2}\) & \(RTD_{2}\) & \(STD_{2}\) & \(RTD_{1}\) & \(RTD_{2}\) \\ \hline Cross\_Entropy Loss & \(0.009\) & \(0.0037\) & \(1.20\) & \(0.002\) & \(0.79\) & \(0.18\) \\ \hline Accuracy & \(100.00\%\) & \(98.17\%\) & \(92.75\%\) & \(99.87\%\) & \(92.42\%\) & \(97.44\%\) \\ \hline \hline \end{tabular} \end{table} TABLE VI: Performance overview of the \(SPGNN\) edge criticality classification in the transfer-learning setting.

Fig. 5: ROC curves of the \(SPGNN\) edge classification in the semi-supervised setting.

Fig. 6: ROC curves of the \(SPGNN\) edge classification in the transfer-learning setting.
2306.17844
The Clock and the Pizza: Two Stories in Mechanistic Explanation of Neural Networks
Do neural networks, trained on well-understood algorithmic tasks, reliably rediscover known algorithms for solving those tasks? Several recent studies, on tasks ranging from group arithmetic to in-context linear regression, have suggested that the answer is yes. Using modular addition as a prototypical problem, we show that algorithm discovery in neural networks is sometimes more complex. Small changes to model hyperparameters and initializations can induce the discovery of qualitatively different algorithms from a fixed training set, and even parallel implementations of multiple such algorithms. Some networks trained to perform modular addition implement a familiar Clock algorithm; others implement a previously undescribed, less intuitive, but comprehensible procedure which we term the Pizza algorithm, or a variety of even more complex procedures. Our results show that even simple learning problems can admit a surprising diversity of solutions, motivating the development of new tools for characterizing the behavior of neural networks across their algorithmic phase space.
Ziqian Zhong, Ziming Liu, Max Tegmark, Jacob Andreas
2023-06-30T17:59:13Z
http://arxiv.org/abs/2306.17844v2
# The Clock and the Pizza: Two Stories in Mechanistic Explanation of Neural Networks ###### Abstract Do neural networks, trained on well-understood algorithmic tasks, reliably re-discover known algorithms for solving those tasks? Several recent studies, on tasks ranging from group arithmetic to in-context linear regression, have suggested that the answer is yes. Using modular addition as a prototypical problem, we show that algorithm discovery in neural networks is sometimes more complex. Small changes to model hyperparameters and initializations can induce discovery of qualitatively different algorithms from a fixed training set, and even parallel implementations of multiple such algorithms. Some networks trained to perform modular addition implement a familiar _Clock_ algorithm [1]; others implement a previously undescribed, less intuitive, but comprehensible procedure we term the _Pizza_ algorithm, or a variety of even more complex procedures. Our results show that even simple learning problems can admit a surprising diversity of solutions, motivating the development of new tools for characterizing the behavior of neural networks across their algorithmic phase space. ## 1 Introduction Mechanistically understanding deep network models--reverse-engineering their learned algorithms and representation schemes--remains a major challenge across problem domains. Several recent studies [2; 3; 4; 5; 1] have exhibited specific examples of models apparently re-discovering interpretable (and in some cases familiar) solutions to tasks like curve detection, sequence copying and modular arithmetic. Are these models the exception or the rule? Under what conditions do neural network models discover familiar algorithmic solutions to algorithmic tasks? In this paper, we focus specifically on the problem of learning modular addition, training networks to compute sums like \(8+6=2\ (\mathrm{mod}\ 12)\). Modular arithmetic can be implemented with a simple geometric solution, familiar to anyone who has learned to read a clock: every integer is represented as an angle, input angles are added together, and the resulting angle evaluated to obtain a modular sum (Figure 1, left). Nanda et al. [1] show that specific neural network architectures, when trained to perform modular addition, implement this _Clock_ algorithm. In this work, we show that the _Clock_ algorithm is only one part of a more complicated picture of algorithm learning in deep networks. In particular, networks very similar to the ones trained by Nanda et al. preferentially implement a qualitatively different approach to modular arithmetic, which we term the _Pizza_ algorithm (Figure 1, right), and sometimes even more complex solutions. Models exhibit sharp _algorithmic phase transitions_ [6] between the _Clock_ and _Pizza_ algorithms as their width and attention strength vary, and often implement multiple, imperfect copies of the _Pizza_ algorithm in parallel. Our results highlight the complexity of mechanistic description even in models trained to perform simple tasks. They point to characterization of algorithmic phase spaces, not just single algorithmic solutions, as an important goal in algorithm-level interpretability. **Organization** In Section 2, we review the _Clock_ algorithm [1] and show empirical evidence of deviation from it in models trained to perform modular addition. In Section 3, we show that these deviations can be explained by an alternative _Pizza_ algorithm.
In Section 4, we define additional metrics to distinguish between these algorithms, and detect phase transitions between these algorithms (and other _non-circular_ algorithms) when architectures and hyperparameters are varied. We discuss the relationship between these findings and other work on model interpretation in Section 5, and conclude in Section 6. ## 2 Modular Arithmetic and the _Clock_ Algorithm **Setup** We train neural networks to perform modular addition \(a+b=c\pmod{p}\), where \(a,b,c=0,1,\cdots,p-1\). We use \(p=59\) throughout the paper. In these networks, every integer \(t\) has an associated embedding vector \(\mathbf{E}_{t}\in\mathbb{R}^{d}\). Networks take as input embeddings \([\mathbf{E}_{a},\mathbf{E}_{b}]\in\mathbb{R}^{2d}\) and predict a categorical output \(c\). Both embeddings and network parameters are learned. In preliminary experiments, we train two different network architectures on the modular arithmetic task, which we refer to as Model A and Model B. **Model A** is a one-layer ReLU transformer [7] with constant attention, while **Model B** is a standard one-layer ReLU transformer (see Appendix F.1 for details). As attention is not involved in Model A, it can also be understood as a ReLU MLP (Appendix G). ### 2.1 Review of the _Clock_ Algorithm As in past work, we find that after training both Model A and Model B, embeddings (\(\mathbf{E}_{a},\mathbf{E}_{b}\) in Figure 1) usually describe a circle [8] in the plane spanned by the first two principal components of the embedding matrix. Formally, \(\mathbf{E}_{a}\approx[\cos(w_{k}a),\sin(w_{k}a)]\), where \(w_{k}=2\pi k/p\) and \(k\) is an integer in \([1,p-1]\). Nanda et al. [1] discovered a circuit that uses these circular embeddings to implement an interpretable algorithm for modular arithmetic, which we call the _Clock_ algorithm. \begin{table} \begin{tabular}{c c c c} \hline Algorithm & Learned Embeddings & Gradient Symmetry & Required Non-linearity \\ \hline Clock & Circle & No & Multiplication \\ Pizza & Circle & Yes & Absolute value \\ Non-circular & Line, Lissajous-like curves, etc. & Yes & N/A \\ \hline \end{tabular} \end{table} Table 1: Different neural algorithms for modular addition Figure 1: Illustration of the _Clock_ and the _Pizza_ Algorithm. "If a meeting starts at 10, and lasts for 3 hours, then it will end at 1." This familiar fact is a description of a modular sum, \(10+3=1\,(\mathrm{mod}\ 12)\), and the movement of a clock describes a simple algorithm for modular arithmetic: the numbers 1 through 12 are arranged on a circle in \(360^{\circ}/12=30^{\circ}\) increments, angles of \(10\times 30^{\circ}\) and \(3\times 30^{\circ}\) are added together, then this angle is evaluated to determine that it corresponds to \(1\times 30^{\circ}\). Remarkably, Nanda et al. [1] find that neural networks like our Model B implement this _Clock_ algorithm, visualized in Figure 1 (left): they represent tokens \(a\) and \(b\) as 2D vectors, and add their polar angles using trigonometric identities. Concretely, the _Clock_ algorithm consists of three steps: In step 1, tokens \(a\) and \(b\) are embedded as \(\mathbf{E}_{a}=[\cos(w_{k}a),\sin(w_{k}a)]\) and \(\mathbf{E}_{b}=[\cos(w_{k}b),\sin(w_{k}b)]\), respectively, where \(w_{k}=2\pi k/p\) (a real clock in everyday life has \(p=12\) and \(k=1\)). Then the polar angles of \(\mathbf{E}_{a}\) and \(\mathbf{E}_{b}\) are added (in step 2) and extracted (in step 3) via trigonometric identities.
For each candidate output \(c\), we denote the logit \(Q_{abc}\), and finally the predicted output is \(c^{*}=\mathrm{argmax}_{c}\,Q_{abc}\). Crucial to this algorithm is the fact that the attention mechanism can be leveraged to perform multiplication. What happens in model variants where the attention mechanism is absent, as in Model A? We find two pieces of evidence of deviation from the _Clock_ algorithm in Model A. ### 2.2 First Evidence for _Clock_ Violation: Gradient Symmetricity Since the _Clock_ algorithm has logits: \[Q_{abc}^{\mathrm{Clock}}=(\mathbf{E}_{a,x}\mathbf{E}_{b,x}-\mathbf{E}_{a,y}\mathbf{E}_{b,y})\mathbf{E}_{c,x}+(\mathbf{E}_{a,x}\mathbf{E}_{b,y}+\mathbf{E}_{a,y}\mathbf{E}_{b,x})\mathbf{E}_{c,y}, \tag{1}\] (see Figure 1) the gradients of \(Q_{abc}\) generically lack permutation symmetry in argument order: \(\nabla_{\mathbf{E}_{a}}Q_{abc}\neq\nabla_{\mathbf{E}_{b}}Q_{abc}\). Thus, if learned models exhibit permutation symmetry (\(\nabla_{\mathbf{E}_{a}}Q_{abc}=\nabla_{\mathbf{E}_{b}}Q_{abc}\)), they must be implementing some other algorithm. We compute the 6 largest principal components of the input embedding vectors. We then compute the gradients of the output logits (unnormalized log-probabilities from the model) with respect to the input embeddings, and project these gradients onto the 6 principal components (since the angles relevant to the _Clock_ and _Pizza_ algorithms are encoded in the first few principal components). These projections are shown in Figure 2. While Model B demonstrates asymmetry in general, Model A exhibits gradient symmetry. Figure 2: Gradients on first six principal components of input embeddings. \((a,b,c)\) in the title stands for taking gradients on the output logit \(c\) for input \((a,b)\). x and y axes represent the gradients for embeddings of the first and the second token. The dashed line \(y=x\) signals a symmetric gradient. ### 2.3 Second Evidence for _Clock_ Violation: Logit Patterns Inspecting models' outputs, in addition to inputs, reveals further differences. For each input pair \((a,b)\), we compute the output logit (un-normalized log probability) assigned to the correct label \(a+b\). We visualize these _correct logits_ from Models A and B in Figure 3. Notice that the rows are indexed by \(a-b\) and the columns by \(a+b\). From Figure 3, we can see that the correct logits of Model A have a clear dependency on \(a-b\), in that within each row the correct logits are roughly the same, while this pattern is not observed in Model B. This suggests that Models A and B are implementing different algorithms. ## 3 An Alternative Solution: the _Pizza_ Algorithm How does Model A perform modular arithmetic? Whatever solution it implements must exhibit the gradient symmetricity in Figure 2 and the output patterns in Figure 3. In this section, we describe a new algorithm for modular arithmetic, which we call the _Pizza_ algorithm, and then provide evidence that this is the procedure implemented by Model A. ### 3.1 The _Pizza_ Algorithm Unlike the _Clock_ algorithm, the _Pizza_ algorithm operates _inside_ the circle formed by embeddings (just like pepperoni are spread all over a pizza), instead of operating on the circumference of the circle.
The basic idea is illustrated in Figure 1: given a fixed label \(c\), for _all_ \((a,b)\) with \(a+b=c\pmod{p}\), the points \(\mathbf{E}_{ab}=(\mathbf{E}_{a}+\mathbf{E}_{b})/2\) lie on a line through the origin of a 2D plane, and the points closer to this line than to the lines corresponding to any other \(c\) form two out of \(2p\) mirrored "pizza slices", as shown at the right of the figure. Thus, to perform modular arithmetic, a network can determine which slice pair the average of the two embedding vectors lies in. Concretely, the _Pizza_ algorithm also consists of three steps. Step 1 is the same as in the _Clock_ algorithm: the tokens \(a\) and \(b\) are embedded as \(\mathbf{E}_{a}=(\cos(w_{k}a),\sin(w_{k}a))\) and \(\mathbf{E}_{b}=(\cos(w_{k}b),\sin(w_{k}b))\), respectively. Step 2 and Step 3 are different from the _Clock_ algorithm. In Step 2.1, \(\mathbf{E}_{a}\) and \(\mathbf{E}_{b}\) are averaged to get \(\mathbf{E}_{ab}\). In Step 2.2 and Step 3, the polar angle of \(\mathbf{E}_{ab}\) is (implicitly) computed by computing the logit \(Q_{abc}\) for every possible output \(c\). While one possibility of doing so is to take the absolute value of the dot product of \(\mathbf{E}_{ab}\) with \((\cos(w_{k}c/2),\sin(w_{k}c/2))\), it is not commonly observed in neural networks (and would result in a different logit pattern). Instead, Step 2.2 transforms the sum into \(\mathbf{H}_{ab}\), whose dot product is then taken with the output embedding \(U_{c}=(\cos(w_{k}c),\sin(w_{k}c))\). Finally, the prediction is \(c^{*}=\operatorname*{argmax}_{c}Q_{abc}\). See Appendix A and Appendix J for an example of such a transform (Step 2.2), its mathematical derivation, and the analysis of a circuit found in the wild. The key difference between the two algorithms lies in what non-linear operations are required: _Clock_ requires multiplication of inputs in Step 2, while _Pizza_ requires only absolute value computation, which is easily implemented by the ReLU layers. If neural networks lack inductive biases toward implementing multiplication, they may be more likely to implement _Pizza_ rather than _Clock_, as we will verify in Section 4. ### 3.2 First Evidence for _Pizza_: Logit Patterns Both the _Clock_ and _Pizza_ algorithms compute logits \(Q_{abc}\) in Step 3, but they have different forms, shown in Figure 1. Specifically, \(Q_{abc}(Pizza)\) has an extra multiplicative factor \(|\cos(w_{k}(a-b)/2)|\) compared to \(Q_{abc}(\mathit{Clock})\). As a result, given \(c=a+b\), \(Q_{abc}(Pizza)\) is dependent on \(a-b\), but \(Q_{abc}(\mathit{Clock})\) is not. The intuition for the dependence is that a sample is more likely to be classified correctly if \(\mathbf{E}_{ab}\) is longer. The norm of this vector depends on \(a-b\). As we observe in Figure 3, the logits in Model A indeed exhibit a strong dependence on \(a-b\). Figure 3: Correct Logits of Model A & Model B. The correct logits of Model A (left) have a clear dependence on \(a-b\), while those of Model B (right) do not. ### 3.3 Second Evidence for _Pizza_: Clearer Logit Patterns via Circle Isolation To better understand the behavior of this algorithm, we replace the embedding matrix \(\mathbf{E}\) with a series of rank-2 approximations: using only the first and second principal components, or only the third and fourth, etc. For each such matrix, embeddings lie in a two-dimensional subspace. For both Model A and Model B, we find that embeddings form a circle in this subspace (Figure 4 and Figure 5, bottom). We call this procedure _circle isolation_.
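A minimal sketch of circle isolation via a rank-2 PCA approximation, assuming a trained embedding matrix `E` of shape [p, d] (variable names and the mean-centering convention are ours):

```python
import numpy as np

def isolate_circle(E: np.ndarray, pair: int = 0) -> np.ndarray:
    """Rank-2 approximation of the embedding matrix E [p, d], keeping only
    principal components 2*pair and 2*pair+1 (one embedded 'circle')."""
    mean = E.mean(axis=0)
    E_centered = E - mean
    # SVD of the centered embeddings; rows of Vt are principal directions.
    U, S, Vt = np.linalg.svd(E_centered, full_matrices=False)
    idx = [2 * pair, 2 * pair + 1]
    # Project onto the chosen component pair and map back to d dimensions.
    return E_centered @ Vt[idx].T @ Vt[idx] + mean
```

Plotting the two projection coordinates for \(t=0,\dots,p-1\) reveals the circle; running the network with the isolated embeddings substituted in reproduces the per-circle logit patterns discussed next.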
Even after this drastic modification to the trained models' parameters, both Model A and Model B continue to behave in interpretable ways: a subset of predictions remains highly accurate, with this subset determined by the periodicity \(k\) of the isolated circle. As predicted by the _Pizza_ and _Clock_ algorithms described in Figure 1, Model A's accuracy drops to zero at specific values of \(a-b\), while Model B's accuracy is invariant in \(a-b\). Applying circle isolation to Model A on the first two principal components (one circle) yields a model with \(32.8\%\) overall accuracy, while retaining the first six principal components (three circles) yields an overall accuracy of \(91.4\%\). See Appendix D for more discussion. By contrast, Model B achieves \(100\%\) accuracy when embeddings are truncated to the first six principal components. Circle isolation thus reveals an _error correction_ mechanism achieved via ensembling: when an algorithm (clock or pizza) exhibits systematic errors on a subset of inputs, models can implement multiple algorithm variants in parallel to obtain more robust predictions. Using these isolated embeddings, we may additionally calculate the isolated logits directly with the formulas in Figure 1 and compare them with the actual logits from Model A. Results are displayed in Table 2. We find that \(Q_{abc}(Pizza)\) explains substantially more variance than \(Q_{abc}(\textit{Clock})\). **Why do we only analyze correct logits?** The logits from the _Pizza_ algorithm are given by \(Q_{abc}(Pizza)=|\cos(w_{k}(a-b)/2)|\cos(w_{k}(a+b-c))\). By contrast, the _Clock_ algorithm has logits \(Q_{abc}(\textit{Clock})=\cos(w_{k}(a+b-c))\). In a word, \(Q_{abc}(Pizza)\) has an extra multiplicative factor \(|\cos(w_{k}(a-b)/2)|\) compared to \(Q_{abc}(\textit{Clock})\). By constraining \(c=a+b\) (thus \(\cos(w_{k}(a+b-c))=1\)), the factor \(|\cos(w_{k}(a-b)/2)|\) can be identified. **(Unexpected) dependence of logits \(Q_{abc}(\textit{Clock})\) on \(a+b\)**: Although our analysis above predicts that \(Q_{abc}(\textit{Clock})\) does not depend on \(a-b\), it does not predict any dependence on \(a+b\). In Figure 5, we surprisingly find that \(Q_{abc}(\textit{Clock})\) is sensitive to this sum. Our conjecture is that Step 1 and Step 2 of the _Clock_ are implemented (almost) noiselessly, such that same-label samples collapse to the same point after Step 2. However, Step 3 (classification) is imperfect after circle isolation, resulting in fluctuations of the logits. Inputs with common sums \(a+b\) produce the same logits. Figure 4: Correct logits of Model A (_Pizza_) after circle isolation. The rightmost pizza is accompanying the third pizza (discussed in Section 3.4 and Appendix D). _Top:_ The logit pattern depends on \(a-b\). _Bottom:_ Embeddings for each circle. ### 3.4 Third Evidence for _Pizza_: Accompanied & Accompanying Pizza The Achilles' heel of the _Pizza_ algorithm is antipodal pairs. If two inputs \((a,b)\) happen to lie antipodally, then their midpoint will lie at the origin, where the correct "pizza slice" is difficult to identify. For example, in Figure 1 (right), antipodal pairs are (1,7), (2,8), (3,9), etc., whose midpoints all collapse to the origin, but their class labels are different. Therefore models cannot distinguish between, and thus correctly classify, these pairs. Even for prime \(p\), where there are no strict antipodal pairs, approximately antipodal pairs are also more likely to be classified incorrectly than non-antipodal pairs.
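A quick numeric illustration of the antipodal failure mode, using the \(p=12\) clock-face example above (a self-contained check of ours, not the authors' code):

```python
import numpy as np

p, k = 12, 1
w = 2 * np.pi * k / p
embed = lambda t: np.array([np.cos(w * t), np.sin(w * t)])

# Antipodal pairs such as (1, 7) and (2, 8) have midpoints at the origin,
# so the Pizza algorithm cannot tell which "slice" they belong to ...
for a, b in [(1, 7), (2, 8)]:
    mid = (embed(a) + embed(b)) / 2
    print(a, b, np.round(mid, 6), np.linalg.norm(mid))  # norm ~ 0

# ... while a non-antipodal pair keeps a usable midpoint direction.
mid = (embed(1) + embed(3)) / 2
print(1, 3, np.round(mid, 3), np.linalg.norm(mid))      # norm = |cos(w)| ~ 0.866
```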
Intriguingly, neural networks find a clever way to compensate for this failure mode. We find that pizzas usually come with "accompanying pizzas". An accompanied pizza and its accompanying pizza complement each other in the sense that near-antipodal pairs in the accompanied pizza become adjacent or close (i.e., very non-antipodal) in the accompanying pizza. If we denote the difference between adjacent numbers on the circle as \(\delta\), with \(\delta_{1}\) and \(\delta_{2}\) for the accompanied and accompanying pizzas, respectively, then \(\delta_{1}=2\delta_{2}\pmod{p}\). In the experiment, we found that pizzas #1/#2/#3 in Figure 4 all have accompanying pizzas, which we call pizzas #4/#5/#6 (see Appendix D for details). However, these accompanying pizzas do not play a significant role in final model predictions1. We conjecture that the training dynamics are as follows: (1) At initialization, pizzas #1/#2/#3 correspond to three different "lottery tickets" [9]. (2) In early stages of training, to compensate for the weaknesses (antipodal pairs) of pizzas #1/#2/#3, pizzas #4/#5/#6 are formed. (3) As training goes on (in the presence of weight decay), the neural network gets pruned. As a result, pizzas #4/#5/#6 are not used much for prediction, although they continue to be visible in the embedding space. \begin{table} \begin{tabular}{l|l|l|l} Circle & \(w_{k}\) & \(Q_{abc}(\text{clock})\) FVE & \(Q_{abc}(\text{pizza})\) FVE \\ \hline \#1 & \(2\pi/59\cdot 17\) & 75.41\% & 99.18\% \\ \hline \#2 & \(2\pi/59\cdot 3\) & 75.62\% & 99.18\% \\ \hline \#3 & \(2\pi/59\cdot 44\) & 75.38\% & 99.28\% \\ \end{tabular} \end{table} Table 2: After isolating circles in the input embedding, the fraction of variance explained (FVE) of **all** of Model A's output logits (\(59\times 59\times 59\) of them) by various formulas. Both the model output logits and the formula results are normalized to mean \(0\) and variance \(1\) before computing the FVE. \(w_{k}\)'s are calculated according to the visualization. For example, the distance between \(0\) and \(1\) in Circle #1 is \(17\), so \(w_{k}=2\pi/59\cdot 17\). Figure 5: Correct logits of Model B (_Clock_) after circle isolation. _Top:_ The logit pattern depends on \(a+b\). _Bottom:_ Embeddings for each circle. ## 4 The Algorithmic Phase Space In Section 3, we have demonstrated a typical _Pizza_ (Model A) and a typical _Clock_ (Model B). In this section, we study how architectures and hyperparameters govern the selection of these two algorithmic "phases". In Section 4.1, we propose quantitative metrics that can distinguish between _Pizza_ and _Clock_. In Section 4.2, we observe how these metrics behave with different architectures and hyperparameters, demonstrating sharp phase transitions. The results in this section focus on _Clock_ and _Pizza_ models, but other algorithmic solutions to modular addition are also discovered, and explored in more detail in Appendix B. ### 4.1 Metrics We want to study the distribution of _Pizza_ and _Clock_ algorithms statistically, which requires us to distinguish between the two algorithms automatically. In order to do so, we formalize our observations in Sections 2.2 and 2.3, arriving at two metrics: **gradient symmetricity** and **distance irrelevance**. #### 4.1.1 Gradient Symmetricity To measure the symmetricity of the gradients, we select some input-output groups \((a,b,c)\), compute the gradient vectors of the output logit at position \(c\) with respect to the input embeddings, and then compute their cosine-similarity. Taking the average over many pairs yields the gradient symmetricity.
**Definition 4.1** (Gradient Symmetricity). _For a fixed set \(S\subseteq\mathbb{Z}_{p}^{3}\) of input-output triples, define the **gradient symmetricity** of a network \(M\) with embedding layer \(E\) as_

Footnote 2: To speed up the calculations, in our experiments \(S\) is taken to be a random subset of \(\mathbb{Z}_{p}^{3}\) of size \(100\).

\[s_{g}\equiv\frac{1}{|S|}\sum_{(a,b,c)\in S}\text{sim}\left(\frac{\partial Q_{abc}}{\partial\mathbf{E}_{a}},\frac{\partial Q_{abc}}{\partial\mathbf{E}_{b}}\right),\]

_where \(\text{sim}(a,b)=\frac{a\cdot b}{|a||b|}\) is the cosine similarity and \(Q_{abc}\) is the logit for class \(c\) given inputs \(a\) and \(b\). It is clear that \(s_{g}\in[-1,1]\)._

As discussed in Section 2.2, the _Pizza_ algorithm has symmetric gradients while the _Clock_ algorithm has asymmetric ones. Model A and Model B in Section 3 have gradient symmetricity \(99.37\%\) and \(33.36\%\), respectively (Figure 2).

#### 4.1.2 Distance Irrelevance

To measure the dependence of the correct logits on the difference between the two inputs (which reflects the distance between the inputs on the circles), we measure how much of the variance of the correct logit matrix is explained by this difference. We do so by comparing the average standard deviation of correct logits over inputs with the same difference against the standard deviation over all inputs.

**Definition 4.2** (Distance Irrelevance). _For a network \(M\) with correct logit matrix \(L\) (\(L_{i,j}=Q_{ij,i+j}\)), define its **distance irrelevance** as_

\[q\equiv\frac{\frac{1}{p}\sum_{d\in\mathbb{Z}_{p}}\operatorname{std}\left(L_{i,i+d}\mid i\in\mathbb{Z}_{p}\right)}{\operatorname{std}\left(L_{i,j}\mid i,j\in\mathbb{Z}_{p}\right)},\]

_where \(\operatorname{std}\) computes the standard deviation of a set. It is clear that \(q\in[0,1]\)._

Model A and Model B in Section 3 give distance irrelevance 0.17 and 0.85, respectively (Figure 3). A typical distance irrelevance from the _Pizza_ algorithm ranges from 0 to 0.5, while a typical distance irrelevance from the _Clock_ algorithm ranges from 0.5 to 1.

### 4.2 Phase Transition Results

We want to study how models "choose" whether to implement the _Clock_ or the _Pizza_ algorithm. We do so by interpolating between Model A (transformer without attention) and Model B (transformer with attention). To this end, we introduce a new hyperparameter \(\alpha\), which we call the **attention rate**. For a model with attention rate \(\alpha\), we modify the attention matrix \(M\) of each attention head to be \(M^{\prime}=M\alpha+I(1-\alpha)\). In other words, we take a linear interpolation between the identity matrix and the original (post-softmax) attention, with the rate \(\alpha\) controlling how much of the attention is kept. The transformers with and without attention correspond to \(\alpha=1\) (attention kept) and \(\alpha=0\) (constant attention matrix), respectively. With this parameter, we can control the balance of attention versus linear layers in transformers.

We performed the following set of experiments on transformers (see Appendix F.1 for architecture and training details): (1) one-layer transformers with width \(128\) and attention rate uniformly sampled in \([0,1]\) (Figure 6); (2) one-layer transformers with width log-uniformly sampled in \([32,512]\) and attention rate uniformly sampled in \([0,1]\) (Figure 6); (3) transformers with \(2\) to \(4\) layers, width \(128\), and attention rate uniformly sampled in \([0,1]\) (Figure 10).
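To make these metrics concrete, the following is a minimal PyTorch sketch (ours, not the authors' released code) of how gradient symmetricity and distance irrelevance could be computed. It assumes a hypothetical interface where `model(ea, eb)` maps two embedding rows to the \(p\) output logits and `embed` is the \((p, d)\) embedding matrix; sampling 100 random triples mirrors the subset size used in the experiments.

```python
import torch
import torch.nn.functional as F

def gradient_symmetricity(model, embed, p, n_samples=100):
    """Average cosine similarity between dQ_abc/dE_a and dQ_abc/dE_b
    over random triples (a, b, c)."""
    sims = []
    for _ in range(n_samples):
        a, b, c = torch.randint(0, p, (3,)).tolist()
        E = embed.detach().clone().requires_grad_(True)
        logits = model(E[a], E[b])   # assumed to return the p logits
        logits[c].backward()
        sims.append(F.cosine_similarity(E.grad[a], E.grad[b], dim=0).item())
    return sum(sims) / len(sims)

def distance_irrelevance(L):
    """L is the (p, p) correct-logit matrix with L[i, j] = Q_{ij,(i+j) mod p}.
    Returns mean per-difference std divided by the overall std."""
    p = L.shape[0]
    i = torch.arange(p)
    per_diff_std = torch.stack([L[i, (i + d) % p].std() for d in range(p)])
    return (per_diff_std.mean() / L.std()).item()
```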
**The _Pizza_ and the _Clock_ algorithms are the dominant algorithms with circular embeddings.** For circular models, most observed models either have low gradient symmetricity (corresponding to the _Clock_ algorithm) or low distance irrelevance (corresponding to the _Pizza_ algorithm).

**Two-dimensional phase change observed for attention rate and layer width.** For the fixed-width experiment, we observed a clear phase transition from the _Pizza_ algorithm to the _Clock_ algorithm (characterized by gradient symmetricity and distance irrelevance). We also observe an almost linear phase boundary with respect to both attention rate and layer width. In other words, the attention-rate transition point increases as the model gets wider.

Figure 6: Training results from 1-layer transformers. Each point in the plots represents a training run reaching circular embeddings and 100% validation accuracy. See Appendix C for additional plots. _Top:_ Model width fixed to be 128. _Bottom:_ Model width varies. The phase transition lines are calculated by logistic regression (classifying the runs by whether gradient symmetricity \(>98\%\) and whether distance irrelevance \(<0.6\)).

**Dominance of linear layers determines whether the _Pizza_ or the _Clock_ algorithm is preferred.** For one-layer transformers, we study the transition point against the attention rate and the width:

* The _Clock_ algorithm dominates when the attention rate is higher than the phase change point, and the _Pizza_ algorithm dominates when the attention rate is lower than that point. Our explanation: at a high attention rate, the attention mechanism is more prominent in the network, giving rise to the _Clock_ algorithm; at a low attention rate, the linear layers are more prominent, giving rise to the _Pizza_ algorithm.
* The phase change point gets higher as the model width increases. Our explanation: when the model gets wider, the linear layers become more capable, while the attention mechanism receives less benefit (attention weights remain scalars while outputs from linear layers become wider vectors). The linear layers therefore gain prominence in wider models.

**Existence of non-circular algorithms.** Although our presentation focuses on circular algorithms (i.e., those whose embeddings are circular), we find non-circular algorithms (i.e., those whose embeddings do not form a circle when projected onto any plane) to be present in neural networks. See Appendix B for preliminary findings. We find that deeper networks are more likely to form non-circular algorithms, and we also observe non-circular networks at low attention rates. Nevertheless, the _Pizza_ algorithm continues to be observed (low distance irrelevance, high gradient symmetricity).

## 5 Related Work

**Mechanistic interpretability** aims to mechanistically understand neural networks by reverse engineering them [2; 3; 5; 4; 10; 11; 12; 13; 14]. One can either look for patterns in weights and activations by studying single-neuron behavior (superposition [11], monosemantic neurons [15]), or study meaningful modules or circuits formed by groups of neurons [4; 14]. Mechanistic interpretability is closely related to training dynamics [8; 13; 1].

**Learning mathematical tasks**: Mathematical tasks provide useful benchmarks for neural network interpretability, since the tasks themselves are well understood. The setup could be learning from images [16; 17], with trainable embeddings [18], or with numbers as inputs [19; 5].
Beyond arithmetic relations, machine learning has been applied to learn other mathematical structures, including geometry [20], knot theory [21], and group theory [22].

**Algorithmic phase transitions**: Phase transitions are present in classical algorithms [23] and in deep learning [6; 24; 25]. Usually a phase transition means that algorithmic performance changes sharply when a parameter is varied (e.g., the amount of data, network capacity, etc.). However, the phase transition studied in this paper is _representational_: both the clock and the pizza give perfect accuracy but arrive at their answers via different internal computations. These model-internal phase transitions are harder to study, but closer to corresponding phenomena in physical systems [24].

**Algorithm learning in neural networks**: Emergent abilities in deep neural networks, especially large language models, have recently attracted significant attention [26]. An ability is "emergent" if the performance on a subtask suddenly increases with growing model size, though such claims depend on the choice of metric [27]. It has been hypothesized that the emergence of a specific capability in a model corresponds to the emergence of a modular circuit responsible for that capability, and that the emergence of some model behaviors thus results from a sequence of quantized circuit discovery steps [5].

## 6 Conclusions

We have offered a closer look at recent findings that familiar algorithms arise in neural networks trained on specific algorithmic tasks. In modular arithmetic, we have shown that such algorithmic discoveries are not inevitable: in addition to the _Clock_ algorithm reverse-engineered by [1], we find other algorithms (including a _Pizza_ algorithm, and more complicated procedures) to be prevalent in trained models. These different algorithmic phases can be distinguished using a variety of new and existing interpretability techniques, including logit visualization, isolation of principal components in embedding space, and gradient-based measures of model symmetry. These techniques make it possible to _automatically_ classify trained networks according to the algorithms they implement, and reveal algorithmic phase transitions in the space of model hyperparameters. Here we found specifically that the emergence of a _Pizza_ or _Clock_ algorithm depends on the relative strength of linear layers and attention outputs. We additionally showed that these algorithms are not implemented in isolation; instead, networks sometimes ensemble multiple copies of an algorithm in parallel. These results offer exciting new challenges for mechanistic interpretability: (1) How can we find, classify, and interpret unfamiliar algorithms in a systematic way? (2) How can we disentangle multiple, parallel algorithm implementations in the presence of ensembling?

**Limitations.** We have focused on a single learning problem: modular addition. Even in this restricted domain, qualitatively different model behaviors emerge across architectures and seeds. Significant additional work is needed to scale these techniques to the even more complex models used in real-world tasks.

**Broader Impact.** We believe interpretability techniques can play a crucial role in creating and improving safe AI systems. However, they may also be used to build more accurate systems, with the attendant risks inherent in all dual-use technologies. It is therefore necessary to exercise caution and responsible decision-making when deploying such techniques.
## Acknowledgement We would like to thank Mingyang Deng for valuable discussions and MIT SuperCloud for providing computation resources. ZL and MT are supported by the Foundational Questions Institute, the Rothberg Family Fund for Cognitive Science and IAIFI through NSF grant PHY-2019786. JA is supported by a gift from the OpenPhilanthropy Foundation.
2309.09025
Efficient Privacy-Preserving Convolutional Spiking Neural Networks with FHE
With the rapid development of AI technology, we have witnessed numerous innovations and conveniences. However, along with these advancements come privacy threats and risks. Fully Homomorphic Encryption (FHE) emerges as a key technology for privacy-preserving computation, enabling computations while maintaining data privacy. Nevertheless, FHE has limitations in processing continuous non-polynomial functions as it is restricted to discrete integers and supports only addition and multiplication. Spiking Neural Networks (SNNs) operate on discrete spike signals, naturally aligning with the properties of FHE. In this paper, we present a framework called FHE-DiCSNN. This framework is based on the efficient TFHE scheme and leverages the discrete properties of SNNs to achieve high prediction performance on ciphertexts. Firstly, by employing bootstrapping techniques, we successfully implement computations of the Leaky Integrate-and-Fire neuron model on ciphertexts. Through bootstrapping, we can facilitate computations for SNNs of arbitrary depth. This framework can be extended to other spiking neuron models, providing a novel framework for the homomorphic evaluation of SNNs. Secondly, inspired by CNNs, we adopt convolutional methods to replace Poisson encoding. This not only enhances accuracy but also mitigates the issue of prolonged simulation time caused by random encoding. Furthermore, we employ engineering techniques to parallelize the computation of bootstrapping, resulting in a significant improvement in computational efficiency. Finally, we evaluate our model on the MNIST dataset. Experimental results demonstrate that, with the optimal parameter configuration, FHE-DiCSNN achieves an accuracy of 97.94% on ciphertexts, with a loss of only 0.53% compared to the original network's accuracy of 98.47%. Moreover, each prediction requires only 0.75 seconds of computation time.
Pengbo Li, Huifang Huang, Ting Gao, Jin Guo, Jinqiao Duan
2023-09-16T15:37:18Z
http://arxiv.org/abs/2309.09025v1
# Efficient Privacy-Preserving Convolutional Spiking Neural Networks with FHE

###### Abstract

With the rapid development of AI technology, we have witnessed numerous innovations and conveniences. However, along with these advancements come privacy threats and risks. Fully Homomorphic Encryption (FHE) emerges as a key technology for privacy-preserving computation, enabling computations while maintaining data privacy. Nevertheless, FHE has limitations in processing continuous non-polynomial functions as it is restricted to discrete integers and supports only addition and multiplication operations. Spiking Neural Networks (SNNs) operate on discrete spike signals, naturally aligning with the properties of FHE. In this paper, we present a framework called FHE-DiCSNN. This framework is based on the efficient TFHE scheme and leverages the discrete properties of SNNs to achieve remarkable prediction performance on ciphertexts (up to 97.94% accuracy) with a time efficiency of 0.75 seconds per prediction. Firstly, by employing bootstrapping techniques, we successfully implement computations of the Leaky Integrate-and-Fire (LIF) neuron model on ciphertexts. Through bootstrapping, we can facilitate computations for SNNs of arbitrary depth. This framework can be extended to other spiking neuron models, providing a novel framework for the homomorphic evaluation of SNNs. Secondly, inspired by Convolutional Neural Networks (CNNs), we adopt convolutional methods to replace Poisson encoding. This not only enhances accuracy but also mitigates the issue of prolonged simulation time caused by random encoding. Furthermore, we employ engineering techniques to parallelize the computation of bootstrapping, resulting in a significant improvement in computational efficiency. Finally, we evaluate our model on the MNIST dataset. Experimental results demonstrate that, with the optimal parameter configuration, FHE-DiCSNN achieves an accuracy of 97.94% on ciphertexts, with a loss of only 0.53% compared to the original network's accuracy of 98.47%. Moreover, each prediction requires only 0.75 seconds of computation time.

## 1 Introduction

**Privacy-Preserved AI.** In recent years, privacy preservation has garnered significant attention in the field of machine learning. Fully Homomorphic Encryption (FHE) has emerged as the most suitable tool for facilitating Privacy-Preserving Machine Learning (PPML) due to its robust encryption security and efficient communication capabilities. The foundation of FHE was established in 2009 when Gentry introduced the first fully homomorphic encryption scheme [1, 2] capable of evaluating arbitrary circuits. His pioneering work not only proposed an FHE scheme but also outlined a method for constructing a full FHE scheme from a scheme with limited, yet sufficient, homomorphic evaluation capacity. Inspired by Gentry's groundbreaking contributions, second-generation schemes such as BGV [3] and FV [4] were subsequently proposed. The evolution of FHE schemes continued with third-generation schemes such as FHEW [5], TFHE [6], and Gao [7, 8], which offer rapid bootstrapping and support an unlimited number of operations. The CKKS scheme [9, 10] has attracted considerable interest as a suitable tool for PPML implementation, given its natural handling of encrypted real numbers.
However, existing FHE schemes primarily support arithmetic operations such as addition and multiplication, while widely used activation functions such as ReLU, sigmoid, leaky ReLU, and ELU are non-arithmetic. To overcome this challenge, Dowlin et al. [11] introduced CryptoNets, which utilized neural networks, particularly artificial feedforward neural networks trained on plaintext data, to provide accurate predictions on homomorphically encrypted data. Nonetheless, CryptoNets faced performance limitations due to the replacement of the sigmoid activation function and the associated computational overhead. Zhang et al. [12] proposed a privacy-preserving deep learning model called the dual-projection deep computation model, which utilizes cloud outsourcing to enhance learning efficiency and combines it with the BGV scheme for training. Building upon CryptoNets, Brutzkus et al. [13] developed an enhanced version that reduces latency and optimizes memory usage. Furthermore, Lee et al. [14] demonstrated the potential of applying FHE (with bootstrapping) to deep neural network models by implementing ResNet-20 on the CKKS scheme. In a distinct study [15], the authors developed the FHE-DiNN framework, a discrete neural network framework predicated on the TFHE scheme. Unlike traditional neural networks, FHE-DiNN discretizes network weights into integers and uses the sign function as the activation function. The sign function is computed through bootstrapping on ciphertexts, and each neuron's output has its noise refreshed, thereby enabling the neural network to extend computations to any depth. Although FHE-DiNN offers high computational speed, it compromises model prediction accuracy. Given the close resemblance between the sign function and the output of Spiking Neural Network (SNN) neurons, this work provides a compelling basis for investigating efficient homomorphic evaluations of SNNs in the context of PPML.

**CNNs and SNNs.** Convolutional Neural Networks (CNNs) have emerged as powerful tools in the field of computer vision, offering exceptional accuracy and an automated feature extraction process [16]. The unique structures of CNNs, including convolutional and pooling layers, are built upon three key concepts: (a) local receptive fields, (b) weight sharing, and (c) spatial subsampling. These elements eliminate the need for explicit feature extraction and reduce training time, making CNNs highly suitable for visual recognition tasks [17]. In recent years, CNNs have found widespread applications in various domains, such as image classification and recognition [18, 19, 20], Natural Language Processing (NLP) [21, 22], object detection [23], and video classification [24, 25]. The widespread adoption of CNNs has played a significant role in the advancement of deep learning.

Spiking Neural Networks (SNNs), regarded as the third generation of neural networks [26], operate in a manner more akin to biological reality than their predecessors. Unlike conventional Artificial Neural Networks (ANNs), SNNs process information in both space and time, capturing the temporal dynamics of biological neurons. Neuron models, the fundamental units of SNNs, have been constructed by neurophysiologists in numerous forms.
Among these, the most influential models include the Hodgkin-Huxley (H-H) model [27], the leaky integrate-and-fire (LIF) model [28], the Izhikevich model [29], and the spike response model (SRM) [30]. These models are distinguished by their use of spikes or 'action potentials' for information communication, closely emulating the behavior of neurons in the brain. This temporal aspect of information processing enables SNNs to manage time-series data more naturally and efficiently than traditional artificial neural networks.

Convolutional Spiking Neural Networks (CSNNs) integrate these two powerful models, bringing together the spatial feature learning capabilities of CNNs and the temporal dynamic processing of SNNs. This combination allows CSNNs to process spatiotemporal data more efficiently and accurately, making them particularly suitable for tasks such as video processing, speech recognition, and other real-time sensory data processing. Zhou et al. [31] built upon [32] to create a sophisticated architecture for SNNs, utilizing the VGG16 model for CIFAR10 [18, 33] and the GoogleNet model for ImageNet [19]. In parallel, Zhang et al. [34] devised a deep convolutional spiking neural network encompassing two convolutional layers and two hidden layers, employing a ReL-PSP-based spiking neuron model and training the network through temporal BP with recursive backward gradients. Further, a range of CSNN works [35, 36, 37, 38] represent converted variants of conventional CNNs, while others [39, 40] apply BP directly to the network using rate coding or multi-peak-per-neuron strategies.

Traditional neural networks rely on real numbers for computation, with neuron outputs and network weights represented as continuous values. However, homomorphic encryption algorithms cannot operate directly on real numbers. Consequently, to perform homomorphic computations, the outputs and weights of the neural network must be discretized into integers. In contrast, Discretized Convolutional Spiking Neural Networks (DiCSNNs) are characterized by neuron outputs that fundamentally consist of discrete-valued signals, necessitating only the discretization of weights. From this standpoint, SNNs demonstrate greater suitability for homomorphic computation than traditional neural networks.

**Our Contribution.** In this paper, we propose the FHE-DiCSNN framework. Built upon the efficient TFHE scheme and incorporating convolutional operations from CNNs, this framework harnesses the discrete nature of SNNs to achieve excellent prediction performance on ciphertexts (with a maximum accuracy of 97.94%) while maintaining a time efficiency of 0.75 seconds per prediction.

1. By implementing the FHE-Fire and FHE-Reset functions using bootstrapping techniques, we enable computations of LIF neurons on ciphertexts. This approach can be extended to other SNN neuron models, offering a novel solution for privacy protection in third-generation neural networks (SNNs).
2. LIF neurons serve as activation functions in deep networks, forming the Spiking Activation Layer. The bootstrapped LIF model generates ciphertexts with minimal initial noise. By ensuring that the accumulated noise after linear layer operations remains below a predefined threshold, subsequent Spiking Activation Layers share the same initial noise, enabling further computations. Our framework thus allows the network to expand to any depth without noise-related concerns.
3. To convert signals into spikes, we replace Poisson encoding with convolutional methods. This not only enhances accuracy but also mitigates the issue of prolonged simulation time caused by randomness. Additionally, we employ engineering techniques to parallelize the bootstrapping computation, resulting in a significant improvement in computational efficiency.

We conducted experiments on the MNIST dataset to validate the advantages of FHE-DiCSNN. Firstly, using the Spikingjelly package, we trained CSNNs with different parameters, including LIF and IF models. The results indicate that the decay factor \(\tau\) of the LIF model significantly affects accuracy. Next, we discretized the trained network and determined FHE parameters based on experimental results and theoretical analysis. Finally, we evaluated the accuracy and time efficiency of the FHE-DiCSNN framework on ciphertexts. Experimental results demonstrate that, with the optimal parameter configuration, FHE-DiCSNN achieves a ciphertext accuracy of 97.94%, with only a 0.53% loss compared to the original network's accuracy of 98.47%. Moreover, each prediction requires only 0.75 seconds of computation time.

**Outline of the paper.** The paper is structured as follows. Section 2 provides definitions and explanations of SNNs and TFHE, including a brief introduction to the bootstrapping process of TFHE. In Section 3, we present our method of constructing Discretized Convolutional Spiking Neural Networks and prove that the discretization error can be controlled. In Section 4, we highlight the challenges of evaluating DiCSNNs homomorphically and provide a detailed explanation of our proposed solution. In Section 5, we present comprehensive experimental results to validate the proposed framework, and we discuss challenges and possible future work in Section 6.

## 2 Preliminary Knowledge

In this section, we offer a comprehensive explanation of the bootstrapping operations within the TFHE scheme. Additionally, we provide an in-depth exposition of the background knowledge pertinent to Spiking Neural Networks (SNNs).

### Programmable Bootstrapping

First, let us define some mathematical symbols and concepts used in FHE. Let \(\mathbb{Z}_{p}=\left\{-\frac{p}{2}+1,\ldots,\frac{p}{2}\right\}\) denote the ring of integers modulo \(p\). The message space for homomorphic encryption is defined within this finite ring \(\mathbb{Z}_{p}\). Consider \(N=2^{k}\) and the cyclotomic polynomial \(X^{N}+1\); then

\[R_{q,N}\triangleq R/qR\equiv\mathbb{Z}_{q}[X]/\left(X^{N}+1\right)\equiv\mathbb{Z}[X]/\left(X^{N}+1,q\right).\]

Similarly, we can define the polynomial ring \(R_{p,N}\). Before discussing the programmable bootstrapping theorem, we introduce the three homomorphic encryption schemes used.

* **LWE (Learning With Errors).** We revisit the encryption form of LWE [41], as shown in Figure 1, which is employed to encrypt a message \(m\in\mathbb{Z}_{p}\) as \[LWE_{s}(m)=(\mathbf{a},b)=\left(\mathbf{a},\langle\mathbf{a},\mathbf{s}\rangle+e+\left\lfloor\frac{q}{p}m\right\rfloor\right)\bmod q,\] where \(\mathbf{a}\in\mathbb{Z}_{q}^{n}\), \(b\in\mathbb{Z}_{q}\), and the secret key is a vector \(\mathbf{s}\in\mathbb{Z}_{q}^{n}\).
The ciphertext \((\mathbf{a},b)\) is decrypted via:

\[\left\lfloor\frac{p}{q}(b-\langle\mathbf{a},\mathbf{s}\rangle)\right\rceil\bmod p=\left\lfloor m+\frac{p}{q}e\right\rceil=m.\]

* **RLWE (Ring Learning With Errors) [42].** An RLWE ciphertext of a message \(m(X)\in R_{p,N}\) can be obtained as follows: \[RLWE_{s}(m(X))=\left(a(X),b(X)\right),\quad\text{where }b(X)=a(X)\cdot s(X)+e(X)+\left\lfloor\frac{q}{p}m(X)\right\rfloor,\] where \(a(X)\leftarrow R_{q,N}\) is chosen uniformly at random and \(e(X)\leftarrow\chi_{\sigma}^{n}\) is drawn from a discrete Gaussian distribution with parameter \(\sigma\). The decryption algorithm for RLWE is similar to that of LWE.

* **GSW.** As one of the third-generation fully homomorphic encryption schemes, the GSW scheme [43] exhibits advantages in both efficiency and security. Furthermore, its variant RGSW [43] has been widely applied in practice. Given a plaintext \(m\in\mathbb{Z}_{p}\), the plaintext is embedded into the power of a polynomial to obtain \(X^{m}\in R_{p,N}\), which is then encrypted as \(RGSW(X^{m})\). RGSW enables efficient computation of homomorphic multiplication, denoted \(\diamond\), while effectively controlling noise growth: \[RGSW\left(X^{m_{0}}\right)\diamond RGSW\left(X^{m_{1}}\right)=RGSW\left(X^{m_{0}+m_{1}}\right),\] \[RLWE\left(X^{m_{0}}\right)\diamond RGSW\left(X^{m_{1}}\right)=RLWE\left(X^{m_{0}+m_{1}}\right).\]

Now we present the programmable bootstrapping theorem.

**Theorem 2.1** (Programmable Bootstrapping [44]). _TFHE/FHEW bootstrapping enables the computation of any function \(g:\mathbb{Z}_{p}\rightarrow\mathbb{Z}_{p}\) with the property \(g\left(v+\frac{p}{2}\right)=-g(v)\). The function \(g\) is referred to as the program function of the bootstrapping. Given an LWE ciphertext \(LWE_{s}(m)=(\mathbf{a},b)\), where \(m\in\mathbb{Z}_{p}\), \(\mathbf{a}\in\mathbb{Z}_{q}^{n}\), and \(b\in\mathbb{Z}_{q}\), it is possible to bootstrap it into \(LWE_{s}(g(m))\) with very low initial noise._

Fig. 1: The partitioning of the circle serves to reflect the mapping relationship between \(\mathbb{Z}_{p}\) and \(\mathbb{Z}_{q}\).

The bootstrapping process relies on the Homomorphic Accumulator, denoted \(ACC_{g}\). Using the notation of [45], the bootstrapping process can be divided into the following steps:

* **Initialization.** Obtain the initial polynomial: \[ACC_{g}[-b]=X^{-b}\cdot\sum_{i=0}^{N-1}g\left(\left\lfloor\frac{i\cdot p}{2N}\right\rfloor\right)X^{i}\bmod\left(X^{N}+1\right).\]

* **Blind Rotation.** The update \(ACC_{g}\overset{+}{\leftarrow}-a_{i}\cdot\mathrm{ek}_{i}\) modifies the content of the accumulator from \(ACC_{g}[-b]\) to \(ACC_{g}\left[-b+\sum a_{i}s_{i}\right]=ACC_{g}[-m-e]\), where \[\mathrm{ek}=\left(RGSW\left(X^{s_{1}}\right),\ldots,RGSW\left(X^{s_{n}}\right)\right)\text{ over }R_{q,N}.\]

* **Sample Extraction.** \(ACC_{g}=\left(a(X),b(X)\right)\) is an RLWE ciphertext with component polynomials \(a(X)=\sum_{0\leq i\leq N-1}a_{i}X^{i}\) and \(b(X)=\sum_{0\leq i\leq N-1}b_{i}X^{i}\). The extraction operation outputs the LWE ciphertext \[RLWE_{z}\xrightarrow{\text{Sample Extraction}}LWE_{z}(g(m))=\left(\mathbf{a},b_{0}\right),\] where \(\mathbf{a}=\left(a_{0},\ldots,a_{N-1}\right)\) is the coefficient vector of \(a(X)\) and \(b_{0}\) is the constant coefficient of \(b(X)\).
* **Key Switching.** Key switching transforms the key of the LWE instance from the vector \(\mathbf{z}\) back to the original vector \(\mathbf{s}\) while preserving the plaintext message \(m\): \[LWE_{\mathbf{z}}(g(m))\xrightarrow{\text{Key Switching}}LWE_{\mathbf{s}}(g(m)).\]

Taking a bootstrapping key and a key-switching key as input, bootstrapping can be defined as:

\[\text{bootstrapping}=\text{KeySwitch}\circ\text{Extract}\circ\text{BlindRotate}\circ\text{Initialize}.\]

Given a program function \(g\), bootstrapping takes an LWE ciphertext \(LWE_{s}(m)\) as input and outputs \(LWE_{s}(g(m))\) under the original secret key \(s\):

\[\text{bootstrapping}(LWE_{s}(m))=LWE_{s}(g(m)).\]

This property will be used extensively in what follows. Since bootstrapping does not modify the secret key, we will use the shorthand \(LWE(m)\) for an LWE ciphertext throughout the rest of the text.

### Leaky Integrate-and-Fire Neuron Model

Neurophysiologists have developed a range of models to capture the dynamic characteristics of neuronal membrane potentials, which are essential for constructing SNNs and determining their fundamental dynamical properties. Prominent models that have had a significant impact on neural networks include the Hodgkin-Huxley (H-H) model [46], the leaky integrate-and-fire (LIF) model [47], the Izhikevich model [48], and the spike response model (SRM) [49], among others. In this study, we select the leaky integrate-and-fire (LIF) model, shown in Eq. (1), as the primary focus, owing to its simplicity and its ability to effectively describe the dynamic behavior of biological neurons:

\[\tau\frac{\mathrm{d}V}{\mathrm{d}t}=V_{\text{rest}}-V+RI, \tag{1}\]

where \(\tau\) is the membrane time constant governing the decay rate of the membrane potential, \(V_{\text{rest}}\) is the resting potential, and \(R\) and \(I\) denote the membrane impedance and input current, respectively. The LIF model greatly simplifies the process of action potential generation while retaining three key features of actual neuronal membrane potentials: leakage, accumulation, and threshold excitation. Building upon this foundation, there exists a series of variant models, such as the second-order LIF model [50], the exponential LIF model [51], and the adaptive exponential LIF model [52]. These variants describe the details of neuronal pulse activity more closely and further enhance the biological plausibility of the LIF model at the cost of additional implementation complexity.

In practical applications, it is common to employ discrete difference equations to approximate the equations governing neuronal electrical activity. While the specific accumulation equations for various neuronal membrane potentials may vary, the threshold excitation and reset equations for the membrane potential remain consistent.
Consequently, the neuronal electrical activity can be simplified into three distinct stages: charging, firing, and resetting, as follows:

\[\begin{cases}H[t]=V[t-1]+\frac{1}{\tau}\left(-(V[t-1]-V_{\text{reset}})+I[t]\right),\\ S[t]=\text{Fire}\left(H[t]-V_{\text{th}}\right),\\ V[t]=\text{Reset}(H[t])=\begin{cases}V_{\text{reset}},&\text{if }H[t]\geq V_{\text{th}};\\ H[t],&\text{if }V_{\text{reset}}\leq H[t]<V_{\text{th}};\\ V_{\text{reset}},&\text{if }H[t]\leq V_{\text{reset}}.\end{cases}\end{cases} \tag{2}\]

Generally, we set \(R=1\), and \(\text{Fire}(\cdot)\) is the step function:

\[\text{Fire}(x)=\begin{cases}1,&\text{if }x\geq 0;\\ 0,&\text{if }x<0.\end{cases}\]

Here, \(I[t]=\sum_{j}\omega_{j}x_{j}[t]\) denotes the cumulative membrane current resulting from external inputs originating from pre-synaptic neurons or image pixels. The input of each Leaky Integrate-and-Fire (LIF) neuron is obtained through a weighted sum, which can be computed in either convolutional layers or linear (fully connected) layers, since both involve the weighted sum of their inputs; we refer to this operation as WeightSum. In this context, \(x_{j}\) represents the respective input value, while \(\omega_{j}\) is the weight associated with each input. In the notation \(\sum_{j}\omega_{j}\), the index \(j\) ranges over the neurons of the previous layer when \(\omega_{j}\) are the parameters of a fully connected layer; for convolutional or average pooling layers, \(j\) ranges over the entries of the corresponding filter (the square of the filter size). These symbols will continue to be used in subsequent discussions.

### Spiking Neural Networks

Due to the non-differentiable nature of spikes [53], the conventional backpropagation (BP) algorithm cannot be directly applied to SNNs [54]. Training SNNs is an active research direction; commonly used methods include ANN-to-SNN conversion and unsupervised training with STDP. In this study, we adopt the surrogate gradient method, whose main idea is to replace the spike function or its derivative with a similar continuous function, resulting in a spike-based BP algorithm. Wu et al. [28] introduce four curves to approximate the derivative of spike activity, denoted \(f_{1},f_{2},f_{3},f_{4}\), as follows:

\[f_{1}(V)=\frac{1}{a_{1}}\operatorname{sign}\left(\left|V-V_{th}\right|<\frac{a_{1}}{2}\right),\]
\[f_{2}(V)=\left(\frac{\sqrt{a_{2}}}{2}-\frac{a_{2}}{4}\left|V-V_{th}\right|\right)\operatorname{sign}\left(\frac{2}{\sqrt{a_{2}}}-\left|V-V_{th}\right|\right),\]
\[f_{3}(V)=\frac{1}{a_{3}}\frac{e^{\frac{V_{th}-V}{a_{3}}}}{\left(1+e^{\frac{V_{th}-V}{a_{3}}}\right)^{2}},\]
\[f_{4}(V)=\frac{1}{\sqrt{2\pi a_{4}}}e^{-\frac{\left(V-V_{th}\right)^{2}}{2a_{4}}}.\]

In general, the training of SNNs adheres to three fundamental principles: (1) Spiking neurons generate binary output that is susceptible to noise; the temporal firing frequency serves as a representative measure of the strength of category responses for classification tasks. (2) The primary objective is to ensure that only the correct neuron fires at the highest frequency, while other neurons remain quiescent; Mean Squared Error (MSE) loss is frequently employed for training, as it has demonstrated enhanced performance. (3) Resetting the network state after each simulation is crucial.
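As a concrete illustration of Eq. (2), here is a minimal plaintext sketch (our own, with illustrative names and toy inputs, not code from the paper) of one simulation step of a LIF neuron with \(V_{\text{rest}}=V_{\text{reset}}=0\) and \(R=1\):

```python
import numpy as np

def lif_step(v, x, w, tau=2.0, v_th=1.0, v_reset=0.0):
    """One discrete LIF update following Eq. (2).
    v: membrane potential from the previous step;
    x: input vector from the previous layer; w: synaptic weights."""
    i_t = np.dot(w, x)                                   # WeightSum: I[t] = sum_j w_j x_j[t]
    h = v + (1.0 / tau) * (-(v - v_reset) + i_t)         # charging
    s = 1.0 if h >= v_th else 0.0                        # firing: Fire(H[t] - V_th)
    v_next = v_reset if (h >= v_th or h <= v_reset) else h   # resetting
    return s, v_next

# Toy usage: drive one neuron with random binary spikes for five steps.
rng = np.random.default_rng(0)
w, v = rng.normal(size=8), 0.0
for t in range(5):
    s, v = lif_step(v, rng.integers(0, 2, size=8).astype(float), w)
    print(t, s, round(v, 3))
```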
Moreover, SNNs perform poorly on real-valued data, such as image pixels and floating-point values. To address various stimulus patterns effectively, SNNs commonly employ a range of encoding methods, including rate coding, temporal coding, bursting coding, and population coding [55], to process input stimuli. In our study, the inputs are encoded into rate-based spike trains by a Poisson process, termed Poisson encoding. Given a time interval \(\Delta t\), the reaction time is divided evenly into \(T\) intervals. During each time step \(t\), a random matrix \(M_{t}\) is generated from a uniform distribution on \([0,255]\). We then compare the original pixel matrix \(X_{o}\) with \(M_{t}\) to determine whether a spike occurs at the current time \(t\). The final encoded spike \(X\) is calculated as:

\[X(i,j)=\begin{cases}0,&X_{o}(i,j)\leq M_{t}(i,j),\\ 1,&X_{o}(i,j)>M_{t}(i,j),\end{cases}\]

where \(i\) and \(j\) are the coordinates of the pixels in the image. In this way, the encoded spikes follow a Poisson distribution.

## 3 Discretized Convolutional Spiking Neural Network

### Convolutional Spiking Neural Networks

Convolutional Neural Networks (CNNs) capitalize on the local perception and weight-sharing characteristics inherent in convolution operations, enabling efficient extraction of image features using a limited number of convolution kernels. Consequently, CNNs can extract and learn image features effectively without relying on the complexity and high computational cost associated with random coding. In contrast, Poisson encoding is a simple random coding technique for converting continuous signals into pulse signals; it cannot itself extract image features. Due to its stochastic nature, Poisson encoding requires a large number of pulse samples to preserve the relevant information. This, in turn, leads to longer simulation times for accurate extraction of image features, increasing computational cost and time overhead.

Convolutional Spiking Neural Networks (CSNNs) combine CNNs and SNNs. In CSNNs, the LIF model or other spiking models mentioned earlier simulate the electrical activity of neurons, forming the Spiking Activation Layers of the network. Combined with convolutional layers, CSNNs extract image features and encode them into spike signals. By leveraging the spatial feature extraction capability of CNNs and the spike transmission characteristics of SNNs, CSNNs benefit from both convolution operations and discrete spike transmission.

The architecture of the CSNN is presented in Figure 2 and described in detail as follows:

Fig. 2: Visualization of the CSNN architecture. Different colors signify distinct layers: red designates the convolutional layer, cyan the pooling layer, green a separate convolutional layer, and purple the linear layer.

* Convolutional layer: The input image has a size of \(28\times 28\) with a padding of 1. The convolution window (kernel) size is \(8\times 8\) with a stride of \((2,2)\), resulting in 10 feature maps; the output size of this layer is \(10\times 12\times 12\).
* Spiking Activation Layer: Each input node is activated using the LIF neuron model.
* Scale average pooling layer: This layer applies a window of size \(2\times 2\), leading to an output size of \(10\times 6\times 6\).
* Fully connected (linear) layer: This layer connects the 360 input nodes to 160 output nodes, which is equivalent to multiplication by a \(160\times 360\) matrix.
* Spiking Activation Layer: Each input node is activated using the LIF neuron model.
* Fully connected (linear) layer: This layer connects the 160 input nodes to 10 output nodes.
* Spiking Activation Layer: The LIF neuron activation is applied to each of the 10 input values.

### Discretized CSNN

Traditional neural networks rely on real numbers for computation, with neuron outputs and network weights represented as continuous values. However, homomorphic encryption algorithms cannot operate directly on real numbers. Consequently, to perform homomorphic computations, the outputs and weights of the neural network must be discretized into integers. In contrast, discretized CSNNs have neuron outputs that already consist of discrete-valued signals, so only the weights need to be discretized. From this standpoint, CSNNs are better suited to homomorphic computation than traditional neural networks.

**Definition 3.1**: _A Discretized Convolutional Spiking Neural Network (DiCSNN) is a feed-forward spiking neural network wherein all weights, inputs, and outputs of the neuron model are discretized, so that they are represented as elements of the finite set \(\mathbb{Z}_{p}\) of integers modulo \(p\)._

We use fixed-precision real numbers and apply suitable scaling to convert the weights into integers, effectively discretizing CSNNs into DiCSNNs. We denote this discretization by the function

\[\hat{x}\triangleq\text{Discret}(x,\theta)=\lfloor x\cdot\theta\rceil,\]

where \(\theta\in\mathbb{Z}\) is referred to as the scaling factor and \(\lfloor\cdot\rceil\) denotes rounding to the nearest integer. The discretized version of \(x\) is denoted \(\hat{x}\). Alternative methods exist to accomplish this objective.

Within the encryption process, all relevant numerical values are defined on the finite ring \(\mathbb{Z}_{p}\). Hence, it is imperative to carefully monitor numerical fluctuations throughout the computation to prevent reductions modulo \(p\), as such reductions could give rise to unanticipated errors in the results.

Note that the computation of LIF neurons is affected by the discretization of the weights and requires corresponding modifications. The essence of discretization lies in accommodating the requirements of FHE. Specifically, we address two aspects. Firstly, the first equation of the LIF neuron (Eq. (2)) involves a division operation, which is problematic on ciphertexts; we therefore need an alternative that avoids explicit division, ensuring compatibility with FHE. Secondly, the Fire function of the LIF neuron, a step function, is realized via the program function \(g\) employed in bootstrapping. To satisfy the condition \(g(x)=-g(\frac{p}{2}+x)\), the Sign function proves more suitable than the Fire function, so we use the Sign function as a replacement for the Fire function.
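As a small illustration of the Discret function, the following sketch (ours; the toy weight values are made up) quantizes a layer's trained weights with a given scaling factor \(\theta\):

```python
import numpy as np

def discret(x, theta):
    """Discret(x, theta) = round(x * theta): map real weights to integers."""
    return np.rint(np.asarray(x) * theta).astype(np.int64)

# Illustrative use: quantize toy trained weights with scaling factor theta = 40.
theta = 40
w_real = np.array([[0.12, -0.53], [0.98, 0.07]])
w_hat = discret(w_real, theta)
print(w_hat)   # [[  5 -21] [ 39   3]]
```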
To provide comprehensive insights, we present Theorem 3.1, which elucidates the discretization of the LIF neuron weights.

**Theorem 3.1**: _Under the conditions \(V_{reset}=0\) and \(V_{th}=1\), Eq. (2) can be discretized into the following equivalent form with a discretization scaling factor \(\theta\):_

\[\begin{cases}\hat{H}[t]=\hat{V}[t-1]+\hat{I}[t],\\ 2S[t]=\text{Sign}\left(\hat{H}[t]-\hat{V}_{th}^{\tau}\right)+1,\\ \hat{V}[t]=\text{Reset}(\hat{H}[t])=\begin{cases}\hat{V}_{reset},&\text{if }\hat{H}[t]\geq\hat{V}_{th}^{\tau};\\ \left\lfloor\frac{\tau-1}{\tau}\hat{H}[t]\right\rceil,&\text{if }\hat{V}_{reset}\leq\hat{H}[t]<\hat{V}_{th}^{\tau};\\ \hat{V}_{reset},&\text{if }\hat{H}[t]\leq\hat{V}_{reset}.\end{cases}\end{cases} \tag{3}\]

_Here, the hat symbol denotes discretized values, and \(\hat{V}_{th}^{\tau}=\theta\cdot\tau\). Moreover, \(\hat{I}[t]=\sum_{j}\hat{\omega}_{j}S_{j}[t]\)._

_Proof._ We multiply both sides of the LIF equations (Eq. (2)) by \(\tau\) and substitute the Sign function for the Fire function, which yields:

\[\left\{\begin{array}{l}\tau H[t]=(\tau-1)V[t-1]+I[t],\\ 2S[t]=\text{Sign}\left(\tau H[t]-V_{\text{th}}^{\tau}\right)+1,\\ (\tau-1)V[t]=\text{Reset}(\tau H[t])=\begin{cases}0,&\text{if }\tau H[t]\geq V_{\text{th}}^{\tau};\\ \frac{\tau-1}{\tau}(\tau H[t]),&\text{if }0\leq\tau H[t]\leq V_{\text{th}}^{\tau};\\ 0,&\text{if }\tau H[t]\leq 0.\end{cases}\end{array}\right.\]

Then we treat \(\tau H[t]\) and \((\tau-1)V[t-1]\) as the iterated quantities, so we can rewrite \(\tau H[t]\) as \(H[t]\) without ambiguity, and likewise \((\tau-1)V[t]\) as \(V[t]\). Finally, multiplying by the corresponding discretization factor \(\theta\) yields Eq. (3). Note that since the division by \(\tau\) has been moved into the Reset function, rounding is applied there during discretization.

In Eq. (3), the LIF model degenerates into the Integrate-and-Fire (IF) model when \(V_{\text{th}}^{\tau}=1\) and \(\tau=\infty\). To facilitate further discussion, we refer to this set of equations as the LIF (IF) function:

\[2S[t]=LIF(I[t]),\qquad 2S[t]=IF(I[t]). \tag{4}\]

### Multi-Level Discretization

In Eq. (4), the LIF (IF) function outputs twice the spike signal. If left unaddressed, the next Spiking Activation Layer would receive twice the intended input. To tackle this issue, we propose a multi-level discretization method, which also resolves the division problem in average pooling. First, we rewrite WeightSum as follows:

\[I[t]=\sum\omega_{j}S_{j}[t]=\sum\frac{\omega_{j}}{2}\,2S_{j}[t].\]

Then, by reducing the scaling factor \(\theta\) of the corresponding weights to \(\theta/2\), we obtain

\[\hat{I}[t]\triangleq\sum\text{Discret}\left(\frac{\omega_{j}}{2},\theta\right)\cdot 2S_{j}=\sum\text{Discret}\left(\omega_{j},\frac{\theta}{2}\right)\cdot 2S_{j}\approx\theta\cdot I[t].\]

This approach extends to the treatment of average pooling. The layer following the average pooling layer is either a convolutional or a linear layer, which then feeds into a Spiking Activation Layer. Average pooling involves a division operation, which is not conducive to FHE. Consequently, we apply the same strategy and transfer this division operation to the subsequent linear layer.
This process can be outlined as follows:

\[I[t]=\sum_{j}\omega_{j}\frac{\sum_{k}S_{k}[t]}{n}=\sum_{j}\frac{\omega_{j}}{n}\left(\sum_{k}S_{k}[t]\right),\]

where \(\omega_{j}\) denotes the corresponding weight parameter and \(n\) represents the divisor of the average pooling layer. Subsequently, by decreasing the scaling factor of the weights from \(\theta\) to \(\theta/n\), we obtain:

\[\hat{I}[t]\triangleq\sum_{j}\text{Discret}\left(\frac{\omega_{j}}{n},\theta\right)\cdot\left(\sum_{k}S_{k}[t]\right)=\sum_{j}\text{Discret}\left(\omega_{j},\frac{\theta}{n}\right)\cdot\left(\sum_{k}S_{k}[t]\right)\approx\theta\cdot I[t].\]

For each Spiking Activation Layer, the input is thus approximately \(\theta\) times that of the original network; we call this property scale-invariance. It is crucial for FHE, as it guarantees that the message arriving at every Spiking Activation Layer is the original value scaled by the same factor \(\theta\). By selecting suitable parameters, we can therefore perform homomorphic evaluation of networks of arbitrary depth.

## 4 Homomorphic Evaluation of DiCSNNs

In this section, we present FHE-DiCSNN, a framework for performing forward propagation on ciphertexts. The section is divided into two parts. Firstly, we discuss the computation of convolutional layers, average pooling layers, and linear (fully connected) layers on ciphertexts. While the WeightSum operation is inherently supported by FHE, it is crucial to track the maximum value and the noise growth of ciphertexts during computation to avoid errors. In the second part, we employ the programmable bootstrapping techniques of [6] to homomorphically compute the Fire and Reset functions of LIF neurons, referred to as the FHE-Fire and FHE-Reset functions, respectively. Bootstrapping refreshes the ciphertext noise after each Spiking Activation Layer, eliminating the need for fixed constraints on the network depth; our framework therefore offers flexibility in selecting network depths, facilitating the evaluation of neural networks of varying depths.

### Homomorphic Computation of WeightSum

WeightSum multiplies the value vector of the previous layer by the weight vector and sums the results; the weights remain fixed during prediction. Essentially, WeightSum is the dot product between the weight vector and the value vector of the input layer. In the ciphertext domain, this computation can be expressed as:

\[\sum\hat{\omega}_{j}LWE(x_{j})=LWE\left(\sum\hat{\omega}_{j}x_{j}\right). \tag{5}\]

Here we omit the specific summation dimensions, which are easily determined for the convolutional, linear, and average pooling layers. WeightSum is inherently supported by FHE. To ensure correctness of the computation, i.e., \(Dec(\sum\hat{\omega}_{j}LWE(x_{j}))=\sum_{j}\hat{\omega}_{j}x_{j}\), two conditions must be satisfied: (1) \(\sum_{j}\hat{\omega}_{j}x_{j}\in\left[-\frac{p}{2},\frac{p}{2}\right]\); (2) the noise remains within the noise bound. The first condition is easily fulfilled by selecting a sufficiently large message space \(\mathbb{Z}_{p}\). Regarding the ciphertext noise, after the WeightSum operation the noise grows to \(\sigma\sum_{j}\left|\hat{\omega}_{j}\right|\), assuming each \(LWE(x_{j})\) has initial noise \(\sigma\). This assumption is reasonable because the \(LWE(x_{j})\) are generated by Spiking Activation Layers evaluated through bootstrapping.
It is observed that the noise maximum is proportional to the discretization parameter \(\theta\). One approach to control the noise growth is to decrease \(\theta\), although this may reduce accuracy. Another strategy is to trade off the security level by reducing the initial noise \(\sigma\).

### Homomorphic Computation of LIF Neuron Model

The Fire and Reset functions of Eq. (3), being non-polynomial, require programmable bootstrapping (Theorem 2.1) for their computation. To this end, we propose the FHE-Fire and FHE-Reset functions, which implement the Fire and Reset functions on ciphertexts. We define the program function \(g\) as

\[g(m)=\left\{\begin{array}{rl}1,&\text{if }m\in\left[0,\frac{p}{2}\right);\\ -1,&\text{if }m\in\left[-\frac{p}{2},0\right),\end{array}\right.\]

which satisfies the required condition \(g\left(v+\frac{p}{2}\right)=-g(v)\). The Fire function can then be computed as:

\[\begin{split}\text{FHE-Fire}(LWE(m))&\triangleq\text{bootstrap}(LWE(m))+1\\ &=\begin{cases}LWE(2),&\text{if }m\in\left[0,\frac{p}{2}\right);\\ LWE(0),&\text{if }m\in\left[-\frac{p}{2},0\right)\end{cases}\\ &=LWE(\text{Sign}(m)+1)\\ &=LWE(2\cdot\text{Spike}).\end{split} \tag{6}\]

Similar to the FHE-Fire function, the FHE-Reset function can be computed by defining the program function \(g\) for bootstrapping as

\[g(m)\triangleq\begin{cases}0,&\text{if }m\in\left[\hat{V}_{\text{th}},\frac{p}{2}\right);\\ \left\lfloor\frac{\tau-1}{\tau}m\right\rceil,&\text{if }m\in\left[0,\hat{V}_{\text{th}}\right);\\ 0,&\text{if }m\in\left[\hat{V}_{\text{th}}-\frac{p}{2},0\right);\\ \frac{p}{2}-\left\lfloor\frac{\tau-1}{\tau}m\right\rceil,&\text{if }m\in\left[-\frac{p}{2},\hat{V}_{\text{th}}-\frac{p}{2}\right),\end{cases}\]

where \(g(x)=-g\left(x+\frac{p}{2}\right)\) must again be satisfied. Then the FHE-Reset function is computed as:

\[\text{FHE-Reset}\left(LWE(m)\right)\triangleq\text{bootstrap}\left(LWE(m)\right)=\begin{cases}LWE(0),&m\in\left[\hat{V}_{\text{th}},\frac{p}{2}\right);\\ LWE\left(\left\lfloor\frac{\tau-1}{\tau}m\right\rceil\right),&m\in\left[0,\hat{V}_{\text{th}}\right);\\ LWE(0),&m\in\left[\hat{V}_{\text{th}}-\frac{p}{2},0\right);\\ LWE\left(\frac{p}{2}-\left\lfloor\frac{\tau-1}{\tau}m\right\rceil\right),&m\in\left[-\frac{p}{2},\hat{V}_{\text{th}}-\frac{p}{2}\right).\end{cases} \tag{7}\]

Note that if \(m\) falls into the interval \(\left[-\frac{p}{2},\hat{V}_{\text{th}}-\frac{p}{2}\right)\), the FHE-Reset function produces incorrect results. We therefore need to ensure that \(\hat{H}[t]\) never falls into this interval. The following theorem shows that this condition is easily satisfied.

**Theorem 4.1**: _If \(\mathrm{M}\triangleq\hat{V}_{th}+\max_{t}(|\hat{I}[t]|)\) and \(\mathrm{M}\leq\frac{p}{2}\), then \(\hat{H}[t]\in\left[\hat{V}_{th}-\frac{p}{2},\frac{p}{2}\right)\)._

_Proof._

\[\max(\hat{H}[t])=\max(\hat{V}[t]+\hat{I}[t])\leq\frac{\tau-1}{\tau}\hat{V}_{th}+\max_{t}(|\hat{I}[t]|)<\mathrm{M}\leq\frac{p}{2}, \tag{8}\]

\[\min(\hat{H}[t])=\min(\hat{V}[t]+\hat{I}[t])\geq 0-\max_{t}(|\hat{I}[t]|)\geq-\frac{p}{2}+\hat{V}_{th}\geq-\frac{p}{2}. \tag{9}\]

The theorem states that as long as \(\mathrm{M}\leq\frac{p}{2}\), the maximum and minimum values of \(\hat{H}[t]\) fall within the interval \(\left[\hat{V}_{th}-\frac{p}{2},\frac{p}{2}\right)\).
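For intuition, here is a small plaintext simulation (our own sketch, not the paper's implementation) of the two program functions \(g\) behind FHE-Fire and FHE-Reset. In an actual TFHE evaluation these tables would be encoded into the bootstrapping accumulator; here messages are taken as signed representatives in \([-p/2, p/2)\), and the parameter values are illustrative.

```python
def g_fire(m, p):
    """Program function for FHE-Fire: Sign over the signed range [-p/2, p/2)."""
    return 1 if 0 <= m < p // 2 else -1

def g_reset(m, p, v_th, tau):
    """Program function for FHE-Reset (Eq. (7)), with leak factor (tau-1)/tau."""
    leak = lambda x: round((tau - 1) / tau * x)
    if v_th <= m < p // 2:
        return 0                      # fired: reset to 0
    if 0 <= m < v_th:
        return leak(m)                # no spike: leaky decay
    if v_th - p // 2 <= m < 0:
        return 0                      # clamp below V_reset
    return p // 2 - leak(m)           # negacyclic branch, m in [-p/2, v_th - p/2)

# Plaintext check: FHE-Fire(m) corresponds to g_fire(m) + 1 = 2 * Spike.
p, v_th, tau = 1024, 80, 2
for m in (-450, -100, 0, 40, 200):
    print(m, g_fire(m, p) + 1, g_reset(m, p, v_th, tau))
```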
It can be readily shown that the maximum value arising in the computation of a CSNN is guaranteed to occur in the variable \(\hat{H}[t]\). This finding not only confirms the validity of the FHE-Reset function but also yields an estimate of the maximum value within the message space, providing a convenient criterion for selecting the message-space parameter \(p\). Furthermore, the FHE-Fire and FHE-Reset functions not only compute the Fire and Reset functions on ciphertexts but also refresh the ciphertext noise. This property is crucial, as it ensures the resulting ciphertexts have minimal initial noise. By keeping the accumulated noise after linear layer operations below a predetermined upper bound, subsequent layers in the CSNN share the same initial noise, enabling accurate computations. In essence, our framework allows network expansion to arbitrary depth without noise concerns.

## 5 Experiments

In this section, we empirically demonstrate the performance of FHE-DiCSNN in terms of accuracy and time efficiency. Firstly, we analyze the time efficiency of FHE-DiCSNN. Secondly, through theoretical analysis, we show that the maximum value within the message space and the maximum noise growth are both directly proportional to the discretization factor \(\theta\); we design experiments to determine the corresponding proportionality coefficients, allowing us to select appropriate FHE parameters for a given \(\theta\). Finally, we experimentally evaluate the accuracy and time efficiency of FHE-DiCSNN under different combinations of the decay factor \(\tau\) and the discretization factor \(\theta\).

### Time Consumption

The structure of the CSNN was discussed extensively in Section 3. The convolutional layer plays a crucial role in extracting key image features and, combined with LIF neurons, enables spike encoding specific to image features, replacing stochastic Poisson encoding. If we replace the convolution process of the CSNN in Figure 2 with the Poisson encoding of Section 2.3, we obtain a fully connected SNN driven by Poisson encoding. However, Poisson encoding introduces randomness, and obtaining stable experimental results requires a sufficiently large simulation time \(T\) (which can be understood as the number of cycles for processing a single image), significantly increasing the time consumption. In contrast, spike encoding based on the convolutional layer extracts features stably, allowing the simulation time to be reduced to 2 cycles (enough for LIF neurons to accumulate sufficient membrane potential to generate spikes).

The simulation time \(T\) is a crucial factor for time efficiency: it determines the number of cycles of the network and hence the number of bootstrapping operations. Bootstrapping is the most time-consuming step in FHE, and in FHE-DiCSNN it arises in the computation of LIF neurons. Therefore, the number of bootstrapping operations can serve as a simple estimate of time consumption. For each LIF neuron, two bootstrappings are required to compute FHE-Fire and FHE-Reset. On the other hand, Poisson encoding essentially involves one comparison and can be implemented using the Sign function, requiring one bootstrapping.
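To make this estimate explicit, here is a quick count of bootstrappings per simulated timestep (our own arithmetic; layer sizes from Figure 2, matching the expressions in Table 1 below):

```python
# Back-of-the-envelope bootstrapping counts per simulated timestep.
conv_spikes = 10 * 12 * 12                      # first Spiking Activation Layer: 1440 LIF neurons
csnn = 2 * conv_spikes + 2 * 160 + 2 * 10       # 2 bootstrappings (Fire + Reset) per LIF neuron
poisson_snn = 784 + 2 * 160 + 2 * 10            # 1 bootstrapping per Poisson-encoded pixel
print(csnn, poisson_snn)                        # 3220 1124
```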
The following table provides a simple estimation for the CSNN defined in Figure 2 and an equivalently dimensioned Poisson-encoded SNN:

\begin{table} \begin{tabular}{c c c} \hline \hline & Poisson-encoded SNN & CSNN \\ \hline bootstrapping & \((784+2\times 160+2\times 10)\times T_{1}\) & \((1440\times 2+2\times 160+2\times 10)\times T_{2}\) \\ Spiking Activation Layer & \(5\times T_{1}\) & \(6\times T_{2}\) \\ \hline \hline \end{tabular} \end{table} Table 1: \(T_{1}\) and \(T_{2}\) represent the simulation time required for Poisson-encoded SNN and CSNN, respectively, to achieve their respective peak accuracy performances. Typically, \(T_{1}\) falls within the range of [20-100], while \(T_{2}\) falls within the range of [2-4].

If we do not consider parallel computing, the number of bootstrapping operations can be used as a simple estimate of the network's time consumption. In this case, both the Poisson-encoded SNN and the CSNN would have a time consumption in the order of thousands. However, since the bootstrapping operations of the Spiking Activation Layers and of Poisson encoding can be performed in parallel, the time consumption will be proportional to the number of corresponding layers. In the case of parallel computing, CSNN exhibits a time efficiency that is 10 times higher than that of the Poisson-encoded SNN.

### Parameters Selection

In this part, we discuss the selection of FHE parameters. We begin with the message space \(\mathbb{Z}_{p}\). In the encryption scheme, \(p\) acts as the modulus, ensuring all operations occur within the finite field \(\mathbb{Z}_{p}\). It is crucial to monitor numerical growth and prevent subtraction operations from exceeding \(p\) to avoid unexpected outcomes. Theorem 4.1 provides an easy criterion to find the maximum value. As long as it is satisfied that \[\hat{V}_{\text{th}}+\max_{t}(|\hat{I}[t]|)\leq\frac{p}{2}, \tag{10}\] the value of the intermediate variable will not exceed the message space \(\mathbb{Z}_{p}\). The formula \[\hat{V}_{\text{th}}+\max_{t}(|\hat{I}[t]|)\approx\theta\cdot(V_{th}+\max_{t}(|I[t]|))=\theta\cdot\Big(V_{th}+\max_{t}\big|\sum_{j}w_{j}S_{j}[t]\big|\Big)\] indicates that \(\hat{I}\) is proportional to the discretization parameter \(\theta\). We estimated the true maximum value of \(V_{th}+\max_{t}(|I[t]|)\) on the training set, and the findings are summarized in Table 2. Note that this estimation is based on the training set, so certain samples from the test set may cause intermediate variables to exceed the predefined limits; fortunately, the probability of such an occurrence is very low.

\begin{table} \begin{tabular}{c c c c} \hline \hline \(V_{th}+\max_{t}(|I[t]|)\) & Spiking Activation Layer1 & Spiking Activation Layer2 & Spiking Activation Layer3 \\ \hline \(\tau=2\) & 29.03 & 9.25 & 4.93 \\ \(\tau=3\) & 36.13 & 10.85 & 6.44 \\ \(\tau=4\) & 2.8 & 0.96 & 0.17 \\ \(\tau=\infty(\text{IF})\) & 23.00 & 11.64 & 6.39 \\ \hline \hline \end{tabular} \end{table} Table 2: Under the given conditions of decay parameter \(\tau=2,3,4\), and \(\tau=\infty(\text{IF})\), we recorded the maximum values of the inputs to each Spiking Activation Layer during the forward propagation of the CSNN on the training set. After scaling the aforementioned values by \(\theta\), we can estimate the maximum values that may arise in FHE-DiCSNN.

A technique was proposed in the paper DiNN [15] to save computational cost by dynamically adjusting the size of the message space. This technique is also applicable to our work, so that a smaller plaintext space can be selected to reduce the growth rate of noise.
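As a quick sanity check of this criterion, the following sketch picks a plaintext modulus \(p\) from Eq. (10) and the Table 2 estimates, and bounds the post-WeightSum noise using the estimate of Eq. (11) below. The power-of-two choice of \(p\) and all names are our own assumptions, not prescribed by the paper.

```python
# Table 2 (tau = 2): V_th + max_t |I[t]| per Spiking Activation Layer.
LAYER_MAX = [29.03, 9.25, 4.93]
# Table 3 (tau = 2): max over neurons of sum_j |w_j|, used in Eq. (11) below.
MAX_ABS_WEIGHT_SUM = 17.42

def min_modulus(theta, layer_max=LAYER_MAX):
    """Smallest power-of-two p with theta * (V_th + max|I|) <= p / 2 (Eq. 10)."""
    need = 2 * theta * max(layer_max)
    p = 2
    while p < need:
        p *= 2
    return p

def noise_after_weightsum(sigma, theta, s=MAX_ABS_WEIGHT_SUM):
    """Worst-case noise after one WeightSum: ~ theta * sigma * sum_j |w_j|."""
    return theta * sigma * s

for theta in (10, 20, 40):
    print(theta, min_modulus(theta), noise_after_weightsum(1.0, theta))
```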
Accurately locating the noise growth is another problem we need to solve. The noise of the ciphertext only increases during the calculation of WeightSum. For a single WeightSum operation, since its inputs are ciphertexts with initial noise \(\sigma\), the noise of the ciphertext will increase to \[\sigma\sum_{j}\left|\hat{w}_{j}\right|\approx\theta\cdot\sigma\sum_{j}\left|w_{j}\right|. \tag{11}\] The above equation demonstrates that the maximum value of the noise growth can be obtained by calculating \(\max\sum_{j}\left|w_{j}\right|\), which can be determined at setup time because the weights are known. The experimental results are presented in Table 3:

\begin{table} \begin{tabular}{c c c c c} \hline & \(\tau=2\) & \(\tau=3\) & \(\tau=4\) & \(\tau=\infty\)(IF) \\ \hline \(\max\sum_{j}\left|w_{j}\right|\) & 17.42 & 18.87 & 10.24 & 11.89 \\ \hline \end{tabular} \end{table} Table 3: Based on the noise estimation formula mentioned above, the quantity \(\max\sum_{j}\left|w_{j}\right|\) can be employed to estimate the growth of noise in DiCSNNs for various \(\theta\).

From the above discussion, it is evident that both the size of the message space and the upper bound of the noise exhibit a direct proportionality to \(\theta\). The experimental results presented provide the corresponding scaling factors, enabling us to estimate the upper bounds associated with different \(\theta\) values. With this information, suitable FHE parameters can be chosen, or standard parameter sets such as STD128 [45] can be utilized.

### Experimental Results

Following the process depicted in Fig. 3, we conducted the experiments on an Intel Core i7-7700HQ CPU @ 2.80 GHz. The procedure can be outlined as follows: 1. The grayscale handwritten digit image is encrypted into LWE ciphertext. 2. The ciphertext undergoes multiplication with discretized weights and is forwarded to the Spiking Activation Layer. 3. Within the Spiking Activation Layer, the LIF neuron model executes the FHE-Fire and FHE-Reset procedures on the ciphertext. Acceleration of the bootstrapping operations is achieved through FFT technology and parallel computing. 4. Steps 1-3 are repeated \(T\) times, and the resulting outputs are accumulated as classification scores. 5. Decryption is performed, and the highest score is selected as the classification result. We selected different combinations of the decay parameter \(\tau\) and discretization scaling factor \(\theta\); the experimental results are presented in Table 4.

\begin{table} \begin{tabular}{c c c c c c} \hline \hline & \(\tau=2\) & \(\tau=3\) & \(\tau=4\) & \(\tau=\infty\)(IF) & Time/per image \\ \hline \(\theta=10\) & 87.81\% & 75.62\% & 8.49\% & 97.10\% & \\ \(\theta=20\) & 92.67\% & 76.98\% & 9.13\% & 97.67\% & 0.75s \\ \(\theta=40\) & 94.77\% & 79.35\% & 9.80\% & 97.94\% & \\ \hline CSNNs & 95.53\% & 89.94\% & 9.80\% & 98.47\% & \\ \hline \hline \end{tabular} \end{table} Table 4: During the evaluation of the FHE-DiCSNN network on the encrypted test set, we performed tests using different values of \(\theta\). The last row showcases the performance of the original CSNN on the plaintext test set; this converged CSNN was trained using the Spikingjelly [56] framework.
Fig. 3: This figure showcases the practical application scenario of FHE-DiCSNN. The encrypted image, along with its corresponding evaluation key, is uploaded by the local user to the cloud server. Equipped with powerful computational capabilities, the server conducts the network calculations. Subsequently, the classification scores are returned to the local user, who decrypts them to obtain the classification results.

The experimental findings highlight the significant negative impact of the decay factor \(\tau\) on accuracy. Specifically, when \(\tau=4\), the network becomes inactive. Analysis of the network's intermediate variables revealed a lack of spike generation by the neurons, and the weights in the second and third layers almost completely decay to zero. Thus, in CSNNs, ensuring the excitation of spikes is crucial. The size of the threshold voltage \(\hat{V}_{th}^{\tau}\) directly influences spike generation, with larger \(\hat{V}_{th}^{\tau}\) values making it more challenging to trigger spikes. Conversely, the IF model with the smallest \(\hat{V}_{th}^{\tau}\) exhibits the highest accuracy. On the other hand, the impact of \(\theta\) on accuracy is positive, as larger \(\theta\) values result in higher precision within the network. It is vital to emphasize that the choice of \(\theta\) must be compatible with the size of the message space; otherwise, an excessively large \(\theta\) can cause the maximum value to exceed the range, leading to a detrimental effect on accuracy. When selecting a smaller upper bound for the noise, differences in spike generation frequency were observed between FHE-DiCSNN and DiCSNN during network computations. This implies that certain ciphertexts may experience noise overflow, leading to incorrect classification results. However, this has a negligible impact on the final classification outcome: it occurs only at the edges of the threshold, where slight noise overflow happens with very low probability, resulting in occasional anomalous spike transitions. This intriguing experimental observation indicates that FHE-DiCSNN exhibits a certain level of noise tolerance.

## 6 Conclusion

This paper introduces the FHE-DiCSNN framework, which is built upon the efficient TFHE scheme and incorporates convolutional operations from CNN. The framework leverages the discrete nature of SNNs to achieve exceptional prediction accuracy and time efficiency in the ciphertext domain. The homomorphic computation of LIF neurons can be extended to other SNN models, offering a novel solution for privacy protection in third-generation neural networks. Furthermore, by replacing Poisson encoding with convolutional methods, it improves accuracy and mitigates the issue of excessive simulation time caused by randomness. Parallelizing the bootstrapping computation through engineering techniques significantly enhances computational efficiency. Additionally, we provide upper bounds on the maximum value of homomorphic encryption and the growth of noise, supported by experimental results and theoretical analysis, which guide the selection of suitable homomorphic encryption parameters and validate the advantages of the FHE-DiCSNN framework. There are also promising avenues for future research: 1. Exploring homomorphic computation of non-linear spiking neuron models, such as QIF and EIF. 2.
Investigating alternative encoding methods to completely alleviate simulation time concerns for SNNs. 3. Exploring intriguing extensions, such as combining SNNs with RNNs or reinforcement learning and homomorphically evaluating these AI algorithms.
2305.00416
Quaternion Matrix Completion Using Untrained Quaternion Convolutional Neural Network for Color Image Inpainting
The use of quaternions as a novel tool for color image representation has yielded impressive results in color image processing. By considering the color image as a unified entity rather than separate color space components, quaternions can effectively exploit the strong correlation among the RGB channels, leading to enhanced performance. Especially, color image inpainting tasks are highly beneficial from the application of quaternion matrix completion techniques, in recent years. However, existing quaternion matrix completion methods suffer from two major drawbacks. First, it can be difficult to choose a regularizer that captures the common characteristics of natural images, and sometimes the regularizer that is chosen based on empirical evidence may not be the optimal or efficient option. Second, the optimization process of quaternion matrix completion models is quite challenging because of the non-commutativity of quaternion multiplication. To address the two drawbacks of the existing quaternion matrix completion approaches mentioned above, this paper tends to use an untrained quaternion convolutional neural network (QCNN) to directly generate the completed quaternion matrix. This approach replaces the explicit regularization term in the quaternion matrix completion model with an implicit prior that is learned by the QCNN. Extensive quantitative and qualitative evaluations demonstrate the superiority of the proposed method for color image inpainting compared with some existing quaternion-based and tensor-based methods.
Jifei Miao, Kit Ian Kou, Liqiao Yang, Juan Han
2023-04-30T07:20:22Z
http://arxiv.org/abs/2305.00416v1
Quaternion Matrix Completion Using Untrained Quaternion Convolutional Neural Network for Color Image Inpainting ###### Abstract The use of quaternions as a novel tool for color image representation has yielded impressive results in color image processing. By considering the color image as a unified entity rather than separate color space components, quaternions can effectively exploit the strong correlation among the RGB channels, leading to enhanced performance. Especially, color image inpainting tasks are highly beneficial from the application of quaternion matrix completion techniques, in recent years. However, existing quaternion matrix completion methods suffer from two major drawbacks. First, it can be difficult to choose a regularizer that captures the common characteristics of natural images, and sometimes the regularizer that is chosen based on empirical evidence may not be the optimal or efficient option. Second, the optimization process of quaternion matrix completion models is quite challenging because of the non-commutativity of quaternion multiplication. To address the two drawbacks of the existing quaternion matrix completion approaches mentioned above, this paper tends to use an untrained quaternion convolutional neural network (QCNN) to directly generate the completed quaternion matrix. This approach replaces the explicit regularization term in the quaternion matrix completion model with an implicit prior that is learned by the QCNN. Extensive quantitative and qualitative evaluations demonstrate the superiority of the proposed method for color image inpainting compared with some existing quaternion-based and tensor-based methods. keywords: Color image inpainting, quaternion convolutional neural network (QCNN), quaternion matrix completion. ## 1 Introduction Color image inpainting is used to repair missing or damaged areas of a color image caused by sensor noise, compression artifacts, or other distortion. It can also restore color images with missing regions due to occlusions or other factors. In general, image inpainting can be used to improve the visual quality and completeness of images, and it is a useful tool for many fields including photography, film, and video production, as well as medical imaging and forensics. Various methods have been proposed to address color image inpainting, with some of the popular ones including deep learning-based techniques [1, 2, 3], tensor completion methods [4, 5, 6, 7], and quaternion matrix completion methods [8, 9, 10, 11, 12]. These methods have been extensively researched and have shown significant improvement in the performance of color image inpainting tasks. Although deep learning-based methods often exhibit highly competitive results in color image inpainting, they still have some limitations in some cases. Firstly, training in deep learning methods requires a large amount of labeled data, which may be difficult to obtain in some scenarios. Secondly, deep learning methods often require a significant amount of computational resources and time for training and inference, which may be infeasible for resource-constrained applications that require fast image inpainting. Moreover, since deep learning methods are often trained based on specific data distributions [13], they may perform poorly under different data distributions. 
Therefore, non-deep learning methods based on tensor completion and quaternion matrix completion are highly popular in the application of color image inpainting due to their fast computation speed, good interpretability, and excellent performance on small datasets. Although both third-order tensors and quaternion matrices can be used to represent color images, quaternion matrices, as a novel representation, have more reasonable characteristics and advantages in representing color images. When processing color pixels with RGB channels, third-order tensors may not be able to fully utilize the high correlation among the three channels. This is because the third-order tensors represent color images by simply stacking RGB channels together, which treats the relationship between the RGB channels (referred to as the "intra-channel relationship") and the relationship between pixels (referred to as the "spatial relationship") equally. Figure 1(a) shows the _tensor perspective_. Therefore, any unfolding (matrixization or vectorization) operation of the tensor can break this intra-channel relationship because it is not treated differently from the spatial relationship under the tensor perspective. By contrast, the quaternion always treats the three channels of color pixels as a whole [9, 12, 14], so it can preserve this intra-channel relationship well; _see_ Figure 1(b), showing the _quaternion perspective_. Due to the superiority of quaternions in representing color pixels, quaternion matrix completion methods have recently achieved excellent results in color image inpainting. To complete quaternion matrices, there are primarily two approaches: minimizing the nuclear norm of the quaternion matrix [8, 15, 16] or decomposing the matrix into low-rank quaternion matrices [9, 12]. Nevertheless, currently available techniques for completing quaternion matrices have two notable limitations. Firstly, selecting an appropriate regularizer to capture the fundamental features of color images can be problematic, and in some cases, the selected regularizer (_e.g._ rank functions or total variation norm [17]) based on empirical observations may not be the most efficient or optimal. Secondly, the optimization process for quaternion matrix completion models is arduous due to the non-commutative nature of quaternion multiplication. Consequently, this paper aims to overcome the limitations of the current quaternion matrix completion methods mentioned above by utilizing an untrained quaternion convolutional neural network (QCNN) to produce the completed quaternion matrix directly. The proposed method has the following advantages: 1) Compared with traditional deep learning-based methods, our method directly exploits the deep priors of color images [18], and does not require a large amount of data to pre-train the network. 2) Using a network based on QCNN [19; 20; 21] to generate color images has advantages over traditional CNN, including prevention of over-fitting, fewer parameters, and most importantly, the ability to fully simulate the inherent relationships between color image channels. A detailed explanation of QCNN can be found in Section 2.2. 3) This method will effectively avoid the limitations of existing quaternion matrix completion methods. 
Specifically, the QCNN learns an implicit prior that replaces the explicit regularization term in the quaternion matrix completion model, which will liberate researchers from the anguish of searching for suitable regularization terms and designing complex optimization algorithms for quaternion matrix completion models. The remaining chapters of this paper are organized as follows: Section 2 introduces quaternions and QCNN. Section 3 provides details of the proposed color image inpainting approach. The qualitative and quantitative experiments are presented and analyzed in Section 4. Finally, some conclusions are drawn in Section 5. ## 2 Quaternions and Quaternion Convolutional Neural Network In this section, we will provide a concise overview of quaternion algebra before delving into a thorough analysis of the key components that make up the QCNN. Figure 1: The difference between tensors and quaternions representing color pixels. \(r_{m}\), \(g_{m}\), and \(b_{m}\) respectively denote the RGB channels of the color pixel \(\mathbf{p}_{m}=(r_{m},g_{m},b_{m})\) under _tensor perspective_ (or \(\dot{p}_{m}=r_{m}i+g_{m}j+b_{m}k\) under _quaternion perspective_) for \(m=1,2\). The relationship between pixels, _e.g._, \(\mathbf{p}_{1}\) and \(\mathbf{p}_{2}\) (or \(\dot{p}_{1}\) and \(\dot{p}_{2}\) in the quaternion matrix), is called “spatial relationship”, and the relationship between color channels is called “intra-channel relationship”. Under _tensor perspective_ (a), the intra-channel and spatial relationships are obviously treated equally; that is, the relationship between \(r_{1}\) and \(g_{1}\) is the same as that between \(r_{1}\) and \(r_{2}\), which may not be appropriate. Under _quaternion perspective_ (b), the three channels are always treated as a whole, and the intra-channel relationship (bundling with the three imaginary parts of a quaternion) is distinguished from the spatial relationship and can be maintained well. ### Quaternion Algebra As a natural extension of the complex domain, a quaternion \(\dot{q}\in\mathbb{H}\) consisting of one real part and three imaginary parts is defined as \[\dot{q}=\underbrace{q_{0}}_{\mathrm{Re}(\dot{q})}+\underbrace{q_{1}\mathtt{i}+q_ {2}\mathtt{j}+q_{3}\mathtt{k}}_{\mathrm{Im}(\dot{q})}, \tag{1}\] where \(q_{l}\in\mathbb{R}\left(l=0,1,2,3\right)\), and \(\mathtt{i},\mathtt{j},\mathtt{k}\) are imaginary number units and obey the quaternion rules that \[\left\{\begin{array}{l}\mathtt{i}^{2}=\mathtt{j}^{2}=\mathtt{k}^{2}=\mathtt{ i}\mathtt{j}\mathtt{k}=-1,\\ \mathtt{i}\mathtt{j}=-\mathtt{j}\mathtt{i}=\mathtt{k},\mathtt{j}\mathtt{k}=- \mathtt{k}\mathtt{j}=\mathtt{i},\mathtt{k}\mathtt{i}=-\mathtt{i}\mathtt{k}= \mathtt{j}.\end{array}\right.\] If the real part \(q_{0}:=\mathrm{Re}(\dot{q})=0\), then \(\dot{q}=q_{1}\mathtt{i}+q_{2}\mathtt{j}+q_{3}\mathtt{k}:=\mathrm{Im}(\dot{q})\) is named a pure quaternion. 
The conjugate and the modulus of a quaternion \(\dot{q}\) are, respectively, defined as \[\dot{q}^{*}=q_{0}-q_{1}\mathtt{i}-q_{2}\mathtt{j}-q_{3}\mathtt{k}\quad\text{and}\quad|\dot{q}|=\sqrt{q_{0}^{2}+q_{1}^{2}+q_{2}^{2}+q_{3}^{2}}.\] Given two quaternions \(\dot{p}\) and \(\dot{q}\in\mathbb{H}\), their multiplication is \[\begin{split}\dot{p}\dot{q}=&(p_{0}q_{0}-p_{1}q_{1}-p_{2}q_{2}-p_{3}q_{3})\\ &+(p_{0}q_{1}+p_{1}q_{0}+p_{2}q_{3}-p_{3}q_{2})\mathtt{i}\\ &+(p_{0}q_{2}-p_{1}q_{3}+p_{2}q_{0}+p_{3}q_{1})\mathtt{j}\\ &+(p_{0}q_{3}+p_{1}q_{2}-p_{2}q_{1}+p_{3}q_{0})\mathtt{k},\end{split} \tag{2}\] which is also referred to as the Hamilton product [19]. Analogously, a quaternion matrix \(\dot{\mathbf{Q}}=(\dot{q}_{mn})\in\mathbb{H}^{M\times N}\) is written as \(\dot{\mathbf{Q}}=\mathbf{Q}_{0}+\mathbf{Q}_{1}\mathtt{i}+\mathbf{Q}_{2}\mathtt{j}+\mathbf{Q}_{3}\mathtt{k}\), where \(\mathbf{Q}_{l}\in\mathbb{R}^{M\times N}\left(l=0,1,2,3\right)\); \(\dot{\mathbf{Q}}\) is named a pure quaternion matrix when \(\mathbf{Q}_{0}=\mathrm{Re}(\dot{\mathbf{Q}})=\mathbf{0}\).

### Quaternion Convolutional Neural Networks

QCNN has quaternionic model parameters, inputs, activations, and outputs. In the following, we recall and analyze the key components of QCNN used in this paper, _e.g._, quaternion convolution, quaternion activation functions, and quaternion-valued backpropagation.

#### 2.2.1 Quaternion Convolution

Convolution in the quaternion domain can formally be defined in the same way as in the real domain [20; 22; 23]. Letting \(\dot{\mathbf{K}}=(\dot{k}_{mn})=\mathbf{K}_{0}+\mathbf{K}_{1}\mathtt{i}+\mathbf{K}_{2}\mathtt{j}+\mathbf{K}_{3}\mathtt{k}\) be a quaternion convolution kernel matrix, and \(\dot{\mathbf{Y}}=(\dot{y}_{mn})=\mathbf{Y}_{0}+\mathbf{Y}_{1}\mathtt{i}+\mathbf{Y}_{2}\mathtt{j}+\mathbf{Y}_{3}\mathtt{k}\) be a quaternion input matrix, their convolution in deep learning is computed as \[(\dot{\mathbf{K}}\vartriangle\dot{\mathbf{Y}})(r_{1},r_{2})=\sum_{m}\sum_{n}\dot{k}_{mn}\dot{y}_{r_{1}+m,r_{2}+n}, \tag{3}\] where \(\vartriangle\) denotes the convolution operation. Deconvolution, strided convolution, dilated convolution, and padding in the quaternion domain are also defined analogously to real-valued convolution. Assume that \(\dot{\mathbf{X}}=\mathbf{X}_{0}+\mathbf{X}_{1}\mathbf{i}+\mathbf{X}_{2}\mathbf{j}+\mathbf{X}_{3}\mathbf{k}\) is a certain window (patch) of \(\dot{\mathbf{Y}}\) and has the same size as \(\dot{\mathbf{K}}\). Based on the Hamilton product (2), the convolution of \(\dot{\mathbf{K}}\) and \(\dot{\mathbf{X}}\) can be written as \[\begin{split}\dot{\mathbf{K}}\vartriangle\dot{\mathbf{X}}=&\left(\mathbf{K}_{0}\vartriangle\mathbf{X}_{0}-\mathbf{K}_{1}\vartriangle\mathbf{X}_{1}-\mathbf{K}_{2}\vartriangle\mathbf{X}_{2}-\mathbf{K}_{3}\vartriangle\mathbf{X}_{3}\right)\\ &+\left(\mathbf{K}_{0}\vartriangle\mathbf{X}_{1}+\mathbf{K}_{1}\vartriangle\mathbf{X}_{0}+\mathbf{K}_{2}\vartriangle\mathbf{X}_{3}-\mathbf{K}_{3}\vartriangle\mathbf{X}_{2}\right)\mathtt{i}\\ &+\left(\mathbf{K}_{0}\vartriangle\mathbf{X}_{2}-\mathbf{K}_{1}\vartriangle\mathbf{X}_{3}+\mathbf{K}_{2}\vartriangle\mathbf{X}_{0}+\mathbf{K}_{3}\vartriangle\mathbf{X}_{1}\right)\mathtt{j}\\ &+\left(\mathbf{K}_{0}\vartriangle\mathbf{X}_{3}+\mathbf{K}_{1}\vartriangle\mathbf{X}_{2}-\mathbf{K}_{2}\vartriangle\mathbf{X}_{1}+\mathbf{K}_{3}\vartriangle\mathbf{X}_{0}\right)\mathtt{k}.\end{split} \tag{4}\] From (4), one can see that if \(\dot{\mathbf{X}}\) and \(\dot{\mathbf{K}}\) are real-valued matrices, _i.e._, \(\mathbf{X}_{1}=\mathbf{X}_{2}=\mathbf{X}_{3}=\mathbf{K}_{1}=\mathbf{K}_{2}=\mathbf{K}_{3}=\mathbf{0}\), then the convolution degrades into the traditional real-valued case. Collecting the four components of the convolution into a vector, we can express the quaternion convolution formula (4) in a way similar to matrix multiplication: \[\begin{bmatrix}\mathrm{Re}(\dot{\mathbf{K}}\vartriangle\dot{\mathbf{X}})\\ \mathrm{Im}_{\mathtt{i}}(\dot{\mathbf{K}}\vartriangle\dot{\mathbf{X}})\\ \mathrm{Im}_{\mathtt{j}}(\dot{\mathbf{K}}\vartriangle\dot{\mathbf{X}})\\ \mathrm{Im}_{\mathtt{k}}(\dot{\mathbf{K}}\vartriangle\dot{\mathbf{X}})\end{bmatrix}=\begin{bmatrix}\mathbf{K}_{0}&-\mathbf{K}_{1}&-\mathbf{K}_{2}&-\mathbf{K}_{3}\\ \mathbf{K}_{1}&\mathbf{K}_{0}&-\mathbf{K}_{3}&\mathbf{K}_{2}\\ \mathbf{K}_{2}&\mathbf{K}_{3}&\mathbf{K}_{0}&-\mathbf{K}_{1}\\ \mathbf{K}_{3}&-\mathbf{K}_{2}&\mathbf{K}_{1}&\mathbf{K}_{0}\end{bmatrix}\vartriangle\begin{bmatrix}\mathbf{X}_{0}\\ \mathbf{X}_{1}\\ \mathbf{X}_{2}\\ \mathbf{X}_{3}\end{bmatrix}. \tag{5}\] In addition, one can visually see the differences between real-valued convolution and quaternion convolution in Figure 2. From formula (5) and Figure 2, we can notice that the quaternion convolution forces each component of the quaternion kernel \(\dot{\mathbf{K}}\) to interact with each component of the input quaternion feature map \(\dot{\mathbf{X}}\); a small numerical sketch of this expansion is given below.
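The following is a minimal NumPy sketch of Eqs. (4)-(5), implementing the quaternion "convolution" of Eq. (3) (which, as in most deep-learning frameworks, is actually a cross-correlation) as four stacked real-valued correlations combined via the Hamilton product. All function names are our own, not from a quaternion library.

```python
import numpy as np
from scipy.signal import correlate2d

def qconv2d(K, X):
    """Quaternion convolution (Eq. (4)): K and X are 4-tuples of real arrays
    (K0, K1, K2, K3) and (X0, X1, X2, X3)."""
    K0, K1, K2, K3 = K
    X0, X1, X2, X3 = X
    # Eq. (3) is sum_{m,n} k_{mn} y_{r1+m, r2+n}, i.e., a cross-correlation.
    c = lambda a, b: correlate2d(b, a, mode="valid")
    return (c(K0, X0) - c(K1, X1) - c(K2, X2) - c(K3, X3),   # real part
            c(K0, X1) + c(K1, X0) + c(K2, X3) - c(K3, X2),   # i part
            c(K0, X2) - c(K1, X3) + c(K2, X0) + c(K3, X1),   # j part
            c(K0, X3) + c(K1, X2) - c(K2, X1) + c(K3, X0))   # k part

rng = np.random.default_rng(0)
K = tuple(rng.normal(size=(3, 3)) for _ in range(4))
X = tuple(rng.normal(size=(8, 8)) for _ in range(4))
out = qconv2d(K, X)  # four 6x6 real feature maps, one per quaternion component
```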
This kind of interaction mechanism forces the kernel to capture internal latent relations among the different channels of the feature map, since each characteristic in a channel will have an influence on the other channels through the common kernel. Different from real-valued convolution, which simply multiplies each kernel with the corresponding feature map, quaternion convolution is akin to a mixture of standard convolutions. Such a mixture can faithfully simulate the potential relationship between color image channels. No real-valued CNN would make such connections without the inspiration from quaternion convolutions, although it is feasible to incorporate this mixture into three real-valued CNNs using supplementary connections. Furthermore, when the quaternion convolution layer has the same output dimensions (a quaternion has four dimensions) as the real-valued convolution layer, the parameters that need to be learned for the quaternion convolution are only \(\frac{1}{4}\) of those of the real-valued convolution, which has great potential to avoid the over-fitting phenomenon. These exciting characteristics of quaternion convolution are our main motivation for applying QCNN to color image inpainting tasks.

Figure 2: The differences between (a) real-valued convolution and (b) quaternion convolution.

#### 2.2.2 Quaternion activation functions

Many quaternion activation functions have been investigated; the split activation [19; 24], a more frequent and simpler solution, is applied in our proposed model. Let \(Q_{\mathfrak{f}}(\dot{\mathbf{Y}})\) be a split activation function applied to the quaternion \(\dot{\mathbf{Y}}=\mathbf{Y}_{0}+\mathbf{Y}_{1}\mathbf{i}+\mathbf{Y}_{2}\mathbf{j}+\mathbf{Y}_{3}\mathbf{k}\), such that \[Q_{\mathfrak{f}}(\dot{\mathbf{Y}})=\mathfrak{f}(\mathbf{Y}_{0})+\mathfrak{f}(\mathbf{Y}_{1})\mathbf{i}+\mathfrak{f}(\mathbf{Y}_{2})\mathbf{j}+\mathfrak{f}(\mathbf{Y}_{3})\mathbf{k},\] with \(\mathfrak{f}\) corresponding to any traditional real-valued activation function, _e.g._, ReLU, LeakyReLU, sigmoid, _etc._ Thus, \(Q_{\mathfrak{f}}\) can be \(Q_{ReLU}\), \(Q_{LeakyReLU}\), \(Q_{sigmoid}\), \(Q_{Tanh}\), _etc._

#### 2.2.3 Quaternion-Valued Backpropagation

The quaternion-valued backpropagation is just an extension of the method for its real-valued counterpart [19]. The gradient of a general quaternion loss function \(\mathcal{L}\) is computed for each component of the quaternion kernel matrix \(\dot{\mathbf{K}}\) as \[\frac{\nabla\mathcal{L}}{\nabla\dot{\mathbf{K}}}=\frac{\nabla\mathcal{L}}{\nabla\mathbf{K}_{0}}+\frac{\nabla\mathcal{L}}{\nabla\mathbf{K}_{1}}\mathbf{i}+\frac{\nabla\mathcal{L}}{\nabla\mathbf{K}_{2}}\mathbf{j}+\frac{\nabla\mathcal{L}}{\nabla\mathbf{K}_{3}}\mathbf{k},\] where \(\nabla\) denotes the gradient operator. Afterwards, the gradient is propagated back based on the chain rule. Thus, a QCNN can be trained as easily as a real-valued CNN following backpropagation.

## 3 Color Image Inpainting

As quaternions offer a superior method of representing color pixels, every pixel in an RGB color image can be encoded as a pure quaternion, that is, \[\dot{q}=0+q_{r}\mathbf{i}+q_{g}\mathbf{j}+q_{b}\mathbf{k}, \tag{6}\] where \(\dot{q}\) denotes a color pixel, and \(q_{r}\), \(q_{g}\), and \(q_{b}\) are respectively the pixel values in the red, green, and blue channels. Naturally, the given color image with spatial resolution of \(M\times N\) pixels can
be represented by a pure quaternion matrix \(\dot{\mathbf{Q}}=(\dot{q}_{mn})\in\mathbb{H}^{M\times N}\), \(1\leq m\leq M\), \(1\leq n\leq N\), as follows: \[\dot{\mathbf{Q}}=\mathbf{0}+\mathbf{Q}_{r}\mathbf{i}+\mathbf{Q}_{g}\mathbf{j}+\mathbf{Q}_{b}\mathbf{k}, \tag{7}\] where \(\mathbf{Q}_{r},\mathbf{Q}_{g},\mathbf{Q}_{b}\in\mathbb{R}^{M\times N}\) contain respectively the pixel values in the red, green, and blue channels. Figure 3 shows an example of using a quaternion matrix to represent a color image.

Figure 3: Color image represented by quaternion matrix.

### Optimization Model

In the quaternion domain, a general quaternion matrix completion model for color image inpainting can be developed as \[\min_{\dot{\mathbf{X}}}\ \frac{1}{2}\|\mathcal{P}_{\Omega}(\dot{\mathbf{X}}-\dot{\mathbf{Q}})\|_{F}^{2}+\lambda\Phi(\dot{\mathbf{X}}), \tag{8}\] where \(\lambda\) is a nonnegative parameter, \(\dot{\mathbf{X}}\in\mathbb{H}^{M\times N}\) is the desired completed quaternion matrix, \(\dot{\mathbf{Q}}\in\mathbb{H}^{M\times N}\) is an observed quaternion matrix with missing entries, \(\Phi(\cdot)\) is a regularization operator, and \(\mathcal{P}_{\Omega}\) is the unitary projection onto the linear space of matrices supported on the entry set \(\Omega\), defined as \[(\mathcal{P}_{\Omega}(\dot{\mathbf{X}}))_{mn}=\left\{\begin{array}{ll}\dot{x}_{mn},&(m,n)\in\Omega,\\ 0,&(m,n)\notin\Omega.\end{array}\right.\] In model (8), \(\Phi(\cdot)\) can be the rank function, as used in recent quaternion matrix completion models [15; 25; 26], or any other suitable regularizer, or a combination of them. However, selecting an appropriate regularizer to capture the generic prior of natural images can be challenging, and in some cases, the empirically chosen regularizer may not be the most suitable or effective one. Furthermore, optimizing regularizers in the quaternion domain is a challenging task because of the non-commutativity of quaternion multiplication. Thus, in this paper, we propose to replace the explicit regularization term \(\Phi(\cdot)\) with an implicit prior learned by the QCNN. As a result, the model (8) becomes \[\min_{\dot{\theta}}\ \|\mathcal{P}_{\Omega}(f_{\dot{\theta}}(\dot{\mathbf{Z}})-\dot{\mathbf{Q}})\|_{F}^{2},\quad\text{and}\quad\dot{\mathbf{X}}_{opt}=f_{\dot{\theta}_{opt}}(\dot{\mathbf{Z}}), \tag{9}\] where \(\dot{\mathbf{Z}}\), a random initialization with the same size as \(\dot{\mathbf{Q}}\), is passed as input to the QCNN \(f_{\dot{\theta}}(\dot{\mathbf{Z}})\), which is parameterized by \(\dot{\theta}\). Since there should not be any change in the uncorrupted regions, the final output pixels outside of the corrupted areas are replaced with the original input values. Therefore, once we get \(\dot{\mathbf{X}}_{opt}\), the final inpainted color image is \[\dot{\mathbf{X}}_{inpainted}=\mathcal{P}_{\Omega}(\dot{\mathbf{Q}})+\mathcal{P}_{\Omega^{c}}(\dot{\mathbf{X}}_{opt}), \tag{10}\] where \(\Omega^{c}\) is the complement of \(\Omega\).

### The Designed QCNN

The designed QCNN is an encoder-decoder, which includes an encoding part with four times downsampling and a decoding part with four quaternion deconvolution (Qdeconv) layers for returning to the original size of the color image. The architecture and details of the designed QCNN can be seen in Figure 4 and Table 1, respectively. The quaternion batch normalization (QBN) method studied in [20; 27] is also used in the designed QCNN to stabilize and speed up the process of generating the quaternion matrix \(\dot{\mathbf{X}}_{opt}\).
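To make the training objective of Eq. (9) concrete, here is a minimal deep-image-prior-style loop in PyTorch. `net` stands in for the QCNN of Figure 4 (with the quaternion components stored as four real channels), the learning rate follows the Adam setting reported in Section 4.1, and everything else (names, step count) is our own illustrative assumption.

```python
import torch

def inpaint(net, Q, mask, steps=3000, lr=0.01):
    """Deep-prior sketch of model (9): Q holds the observed quaternion matrix
    as a 4-channel tensor (real part zero for a color image), and mask is 1
    on observed entries (the set Omega) and 0 elsewhere."""
    Z = torch.randn_like(Q)                     # random quaternion-valued input
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        X = net(Z)
        loss = ((mask * (X - Q)) ** 2).sum()    # ||P_Omega(f_theta(Z) - Q)||_F^2
        loss.backward()
        opt.step()
    with torch.no_grad():
        X = net(Z)
    return mask * Q + (1 - mask) * X            # Eq. (10): keep observed pixels
```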
## 4 Experiments ### Experiment Settings The experiments for our proposed QCNN-based method for color image inpainting were conducted in an environment with "torch==1.13.1+cu116". During backpropagation, we have chosen the Adam optimizer with a learning rate of 0.01. In addition, the input of the QCNN, \(\dot{\mathbf{Z}}\), is a random quaternion matrix with the same spatial size as the color image to be inpainted. The output of the QCNN is a completed quaternion matrix whose imaginary parts correspond to the inpainted RGB color image. Figure 4: The architecture of the designed QCNN. \begin{table} \begin{tabular}{l c c c} \hline \hline Module types & Kernel size & Stride & Output channels \\ \hline Qconv+QBN+\(Q_{LeakyReLU}\) & \(3\times 3\) & \(1\times 1\) & 64 \\ Qconv+QBN+\(Q_{LeakyReLU}\) & \(3\times 3\) & \(2\times 2\) & 64 \\ Qconv+QBN+\(Q_{LeakyReLU}\) & \(3\times 3\) & \(2\times 2\) & 64 \\ Qconv+QBN+\(Q_{LeakyReLU}\) & \(3\times 3\) & \(2\times 2\) & 64 \\ Qconv+QBN+\(Q_{LeakyReLU}\) & \(3\times 3\) & \(2\times 2\) & 64 \\ Qconv+QBN+\(Q_{LeakyReLU}\) & \(3\times 3\) & \(1\times 1\) & 64 \\ Qdeconv+QBN+\(Q_{LeakyReLU}\) & \(3\times 3\) & \(2\times 2\) & 64 \\ Qdeconv+QBN+\(Q_{LeakyReLU}\) & \(3\times 3\) & \(2\times 2\) & 64 \\ Qdeconv+QBN+\(Q_{LeakyReLU}\) & \(3\times 3\) & \(2\times 2\) & 64 \\ Qdeconv+QBN+\(Q_{LeakyReLU}\) & \(3\times 3\) & \(2\times 2\) & 64 \\ Qconv & \(3\times 3\) & \(1\times 1\) & 1 \\ \hline \hline \end{tabular} \end{table} Table 1: The details of the designed QCNN. Figure 5: Tested color images. ### Datasets We evaluate the proposed method on eight common-used color images (including "baboon", "monarch", "airplane", "peppers", "sailboat", "lena", "panda", and "barbara", _see_ Figure 5) with spatial size \(256\times 256\). For random missing, we set three levels of sampling rates (SRs) which are \(\text{SR}=10\%\), \(\text{SR}=30\%\), and \(\text{SR}=50\%\). For structural missing, we use two kinds of cases, _see_ the observed color images in Figure 10. ### Comparison We compare our QCNN-based method with its real-domain counterpart, _i.e._, the CNN-based method with the same number of learnable parameters as our model. Additionally, we compare our method with several well-known tensor and quaternion-based techniques that utilize low-rank regularization, namely t-SVD [28], TMac-TT [29], LRQA-2 [15], LRQMC [9], and TQLNA [30]. The implementation of all the comparison methods follow their original papers and source codes. For measuring the quality of the results, two metrics, peak signal-to-noise ratio (PSNR) and structure similarity index (SSIM) [31] are used in this paper. 
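For reference, a standard implementation of the first metric might look as follows; this is the common PSNR definition, since the paper only cites the metric without spelling out its implementation.

```python
import numpy as np

def psnr(x, y, peak=255.0):
    """Peak signal-to-noise ratio between two color images (8-bit assumed)."""
    mse = np.mean((x.astype(np.float64) - y.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```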
\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline Methods: & CNN & t-SVD [28] & TMac-TT [29] & LRQA-2 [15] & LRQMC [9] & TQLNA [30] & **Ours** \\ \hline \hline Images: & \multicolumn{6}{|c|}{SR = 10\%} \\ \hline baboon & 18.456/0.572 & 17.325/0.505 & 19.242/0.579 & 17.941/0.527 & 18.065/0.546 & 18.075/0.537 & **20.301/0.647** \\ monarch & 19.589/0.860 & 17.325/0.643 & 17.306/0.763 & 14.561/0.642 & 14.752/0.684 & 13.534/0.579 & **20.292/0.874** \\ airplane & 21.001/0.675 & 17.765/0.377 & 20.503/0.588 & 18.601/0.424 & 18.715/0.480 & 18.442/0.416 & **22.258/0.715** \\ peppers & 23.328/0.929 & 15.993/0.718 & 20.416/0.871 & 17.111/0.761 & 16.206/0.740 & 17.145/0.764 & **23.672/0.932** \\ sailboat & 20.018/0.781 & 16.514/0.502 & 17.606/0.670 & 17.103/0.550 & 17.295/0.581 & 16.711/0.526 & **20.714/0.785** \\ lena & 22.773/0.913 & 17.659/0.798 & 21.616/0.877 & 18.607/0.811 & 18.553/0.825 & 18.528/0.802 & **23.724/0.925** \\ panda & 23.399/0.695 & 18.074/0.492 & 22.518/0.628 & 19.205/0.514 & 19.212/0.562 & 19.149/0.513 & **24.408/0.764** \\ barbara & 21.524/0.754 & 16.894/0.513 & 20.373/0.723 & 17.917/0.533 & 17.947/0.585 & 17.996/0.527 & **22.943/0.783** \\ \hline \hline Images: & \multicolumn{6}{|c|}{SR = 30\%} \\ \hline baboon & 21.800/0.761 & 20.657/0.703 & 21.801/0.751 & 20.685/0.695 & 21.279/0.727 & 20.878/0.705 & **22.565/0.775** \\ monarch & 25.163/0.950 & 19.003/0.833 & 22.487/0.910 & 19.582/0.842 & 19.725/0.851 & 19.876/0.848 & **25.278/0.952** \\ airplane & 25.301/0.843 & 22.555/0.671 & 24.096/0.821 & 22.982/0.681 & 23.183/0.724 & 23.250/0.702 & **26.119/0.852** \\ peppers & 28.281/0.973 & 22.287/0.908 & 25.458/0.951 & 23.330/0.923 & 23.671/0.930 & 23.976/0.934 & **28.536/0.976** \\ sailboat & 24.374/0.904 & 20.958/0.767 & 22.819/0.862 & 21.343/0.779 & 21.634/0.806 & 21.609/0.792 & **24.723/0.906** \\ lena & 27.519/0.963 & 23.217/0.917 & 26.136/0.951 & 23.729/0.921 & 24.173/0.931 & 24.059/0.927 & **27.730/0.966** \\ panda & 27.416/0.850 & 23.698/0.730 & 26.602/0.837 & 24.297/0.739 & 24.479/0.772 & 24.721/0.753 & **27.828/0.862** \\ barbara & 25.804/0.875 & 22.737/0.771 & 25.338/0.866 & 23.403/0.781 & 23.584/0.799 & 23.885/0.797 & **26.550/0.883** \\ \hline \hline Images: & \multicolumn{6}{|c|}{SR = 50\%} \\ \hline baboon & 24.263/0.858 & 21.837/0.764 & 23.547/0.839 & 23.004/0.812 & 23.704/0.839 & 23.099/0.816 & **24.588/0.865** \\ monarch & 28.834/0.976 & 23.558/0.927 & 26.256/0.957 & 24.089/0.932 & 24.079/0.935 & 24.587/0.938 & **29.015/0.977** \\ airplane & 29.525/0.928 & 26.626/0.835 & 28.217/0.922 & 26.768/0.822 & 27.195/0.869 & 27.438/0.850 & **29.705/0.931** \\ peppers & 31.622/0.987 & 27.287/0.966 & 29.461/0.980 & 27.936/0.971 & 28.171/0.973 & 28.749/0.976 & **31.675/0.989** \\ sailboat & 27.541/0.950 & 24.723/0.890 & 26.251/0.932 & 25.000/0.892 & 25.525/0.910 & 25.414/0.902 & **27.816/0.953** \\ lena & 30.749/0.980 & 27.385/0.962 & 29.360/0.975 & 27.476/0.962 & 28.189/0.968 & 28.128/0.967 & **30.991/0.982** \\ panda & 30.414/0.914 & 27.746/0.856 & 29.458/0.905 & 27.985/0.853 & 28.438/0.880 & 28.438/0.865 & **30.548/0.916** \\ barbara & 28.259/0.924 & 26.838/0.884 & 27.836/0.918 & 27.252/0.888 & 27.891/0.905 & 27.719/0.897 & **29.041/0.931** \\ \hline \end{tabular} \end{table} Table 2: The PSNR and SSIM values of different methods on the eight color images with three levels of sampling rates (the format is PSNR/SSIM, and **bold** fonts denote the best performance). Figure 6: Recovered two color images (baboon and monarch) for random missing with \(\text{SR}=10\%\). 
Figure 7: Recovered two color images (airplane and panda) for random missing with \(\text{SR}=10\%\). Figure 8: Recovered two color images (airplane and peppers) for random missing with \(\text{SR}=50\%\). Figure 9: Recovered two color images (lena and barbara) for random missing with \(\text{SR}=50\%\). Figure 10: Recovered two color images (baboon and lena) for structural missing pixels. ### Results Analysis Table 2 lists the PSNR and SSIM values of different methods on the eight color images with three levels of sampling rates. Figure 6 and Figure 7 display the recovered results of four color images by different methods for random missing with \(\text{SR}=10\%\). Figure 8 and Figure 9 display the recovered results of four color images by different methods for random missing with \(\text{SR}=50\%\). Experimental results of two kinds of structural missing cases are given in Figure 10. From all the experimental results, we can observe and summarize the following points: * Our QCNN-based color image inpainting method has advantages over the CNN-based method, especially in the case of a large number of lost pixels (_e.g._, SR=10%), the advantages of our QCNN-based method are obvious. As shown in Figures 6 and 7, when SR=10%, the color images generated by the CNN-based method shows an obvious color difference compared with the original images. In addition, our QCNN-based method has advantages over CNN-based methods in preserving color image details (_see_ Figures 8 and 9). The QCNN-based method has advantages over the CNN-based method mainly because the CNN-based method, for each kernel, simply merges the RGB three channels to sum the convolution results up, without considering the complicated interrelationship between different channels. This may result in the loss of important structural information. * Compared with existing methods based on low-rank approximation of quaternion matrices, our QCNN-based method replaced the explicit regularization term with an implicit prior learned by the QCNN. Therefore, our QCNN-based method has obvious advantages over the existing methods based on quaternion matrix low-rank approximation, both in terms of evaluation indicators and visually (_see_ Table 2 and Figures 6-10). ## 5 Conclusion For color image inpainting, this paper has proposed a quaternion matrix completion method using untrained QCNN. This approach enhances the quaternion matrix completion model by substituting the explicit regularization term with an implicit prior, which is acquired through learning by the QCNN. This method eliminates the need for researchers to spend time searching for appropriate regularization terms and designing intricate optimization algorithms for quaternion matrix completion models. From the experimental results, it can be seen that the method is very competitive with the existing quaternion matrix completion methods in the task of color image inpainting. This paper represents the first attempt to apply QCNN to the quaternion matrix completion problem, and may therefore provide new insights for researchers exploring quaternion matrix completion methods, as well as other quaternion matrix optimization problems. ## Acknowledgment This work was supported by University of Macau (File no. MYRG2019-00039-FST, MYRG2022-00108-FST ), Science and Technology Development Fund, Macau S.A.R (File no.FDCT/0036/2021/AGJ).
2310.20570
Correlation-pattern-based Continuous-variable Entanglement Detection through Neural Networks
Entanglement in continuous-variable non-Gaussian states provides irreplaceable advantages in many quantum information tasks. However, the sheer amount of information in such states grows exponentially and makes a full characterization impossible. Here, we develop a neural network that allows us to use correlation patterns to effectively detect continuous-variable entanglement through homodyne detection. Using a recently defined stellar hierarchy to rank the states used for training, our algorithm works not only on any kind of Gaussian state but also on a whole class of experimentally achievable non-Gaussian states, including photon-subtracted states. With the same limited amount of data, our method provides higher accuracy than usual methods to detect entanglement based on maximum-likelihood tomography. Moreover, in order to visualize the effect of the neural network, we employ a dimension reduction algorithm on the patterns. This shows that a clear boundary appears between the entangled states and others after the neural network processing. In addition, these techniques allow us to compare different entanglement witnesses and understand their working. Our findings provide a new approach for experimental detection of continuous-variable quantum correlations without resorting to a full tomography of the state and confirm the exciting potential of neural networks in quantum information processing.
Xiaoting Gao, Mathieu Isoard, Fengxiao Sun, Carlos E. Lopetegui, Yu Xiang, Valentina Parigi, Qiongyi He, Mattia Walschaers
2023-10-31T16:00:25Z
http://arxiv.org/abs/2310.20570v1
# Correlation-pattern-based Continuous-variable Entanglement Detection through Neural Networks ###### Abstract Entanglement in continuous-variable non-Gaussian states provides irreplaceable advantages in many quantum information tasks. However, the sheer amount of information in such states grows exponentially and makes a full characterization impossible. Here, we develop a neural network that allows us to use correlation patterns to effectively detect continuous-variable entanglement through homodyne detection. Using a recently defined stellar hierarchy to rank the states used for training, our algorithm works not only on any kind of Gaussian state but also on a whole class of experimentally achievable non-Gaussian states, including photon-subtracted states. With the same limited amount of data, our method provides higher accuracy than usual methods to detect entanglement based on maximum-likelihood tomography. Moreover, in order to visualize the effect of the neural network, we employ a dimension reduction algorithm on the patterns. This shows that a clear boundary appears between the entangled states and others after the neural network processing. In addition, these techniques allow us to compare different entanglement witnesses and understand their working. Our findings provide a new approach for experimental detection of continuous-variable quantum correlations without resorting to a full tomography of the state and confirm the exciting potential of neural networks in quantum information processing. _Introduction.--_The study of quantum entanglement is experiencing a thorough theoretical development and an impressive experimental progress [1; 2], leading to important applications in quantum cryptography [3], quantum metrology [4] and quantum computation [5]. It is, therefore, crucial to find reliable and practical methods to detect entanglement. Especially in the continuous variable (CV) regime, significant breakthroughs have recently been achieved in the experimental preparation of non-Gaussian entanglement [6; 7]. Such entangled states have been proven to be essential resource for entanglement distillation [8; 9; 10], quantum-enhanced imaging [11; 12] and sensing [13; 14], and to reach a quantum computational advantage [15]. However, entanglement detection in such complex systems turns out to be a challenging problem. The conventional entanglement criteria which rely on the knowledge of reconstructed density matrix, such as the positive partial transpose (PPT) criterion [16] or the quantum Fisher information (QFI) criterion proposed in Ref. [17], are experimentally infeasible for general non-Gaussian states. A common thought is to avoid the time-consuming process of two-mode tomography [18], which requires performing joint measurements on many quadrature combinations [19]. Only for some states with specific analytic structures [20], this demanding procedure can be simplified to a series of single-mode homodyne tomography [21]. An innovative approach to overcome this issue is provided by deep neural networks [22], which can work with limited amounts of data from actual measurements. Recently, neural networks have found extensive applications in quantum physics and quantum information science [23; 24], including detecting quantum features [25; 26; 27; 28], characterizing quantum states and quantum channels [29; 30; 31; 32; 33; 34; 35], and solving many-body problems [36; 37; 38]. 
A key step thus lies in selecting an appropriate training data set to ensure that the networks can effectively and universally learn the features of the quantum system. Keeping our focus on the homodyne measurements which are feasible in CV experiments, we seek to answer the following question in this paper: Can neural networks be used to detect entanglement for general non-Gaussian states? In this work, we develop a deep learning algorithm to detect entanglement for arbitrary two-mode CV states, including experimentally relevant photon-subtracted states [7]. Instead of extracting entanglement properties from the density matrices, our neural network is only based on four correlation patterns, which can be easily measured via homodyne measurements. It can be found that our algorithm achieves much higher accuracy than PPT and QFI criteria based on the maximum likelihood (MaxLik) algorithm with the same homodyne data. Our network also shows strong robustness for single-photon-subtracted states. Furthermore, with a visualization tool, namely a t-SNE algorithm, we show that elusive entanglement information is hidden in the correlation patterns, hence in the joint probability distributions. It can be seen that the neural network is indeed able to correctly sort out data: clusters of entangled states clearly emerge after neural network processing. Therefore, our findings provide an approach for detecting CV entanglement in experimentally-accessible ways and confirm the deep neural network approach as a powerful tool in quantum information processing. _Generation and selection of CV states._--We start by generally characterizing two-mode CV quantum states in order to generate an abundant training data set. To do so, we rely on the recently developed stellar formalism [39, 40, 41]. In this formalism, we analyse a pure state \(|\psi\rangle\) in terms of its stellar function \(F_{\psi}^{\star}(\mathbf{z})\). To define this function, we start by considering the state's decomposition in the Fock basis, i.e., \(|\psi\rangle=\sum\limits_{\mathbf{n}\geq 0}\psi_{\mathbf{n}}|\mathbf{n}\rangle\in\mathcal{H}^{\otimes 2}\) with \(\mathbf{n}=(n_{1},n_{2})\), such that the stellar function can be written as \[F_{\psi}^{\star}(\mathbf{z})\equiv e^{\frac{1}{2}\|\mathbf{z}\|^{2}}\,\langle\mathbf{z}^{*}|\psi\rangle=\sum\limits_{n_{1},n_{2}}\frac{\psi_{\mathbf{n}}}{\sqrt{n_{1}!n_{2}!}}z_{1}^{n_{1}}z_{2}^{n_{2}},\ \ \forall\ \mathbf{z}=(z_{1},z_{2})\in\mathbb{C}^{2} \tag{1}\] where \(|\mathbf{z}\rangle=\mathrm{e}^{-\frac{1}{2}\|\mathbf{z}\|^{2}}\sum\limits_{n_{1},n_{2}}\frac{z_{1}^{n_{1}}z_{2}^{n_{2}}}{\sqrt{n_{1}!n_{2}!}}|n_{1},n_{2}\rangle\) is a coherent state of complex amplitude \(\mathbf{z}\). The stellar rank \(r\) of \(|\psi\rangle\) is defined as the number of zeros of its stellar function, representing a minimal non-Gaussian operational cost. For instance, \(r=0\) means that the state is Gaussian, while \(r=1\) corresponds to a class of non-Gaussian states that contains both single-photon added and subtracted states [41]. Any multimode pure state \(|\psi\rangle\) with finite stellar rank \(r\) can be decomposed into \(|\psi\rangle=\hat{G}|C\rangle\), where \(\hat{G}\) is a given Gaussian operator acting onto the state \(|C\rangle\), which is called a core state; it is a normalized pure quantum state whose stellar function is a multivariate polynomial of degree \(r\), equal to the stellar rank of the state. It then follows immediately that Gaussian operations \(\hat{G}\) must preserve the stellar rank [40].
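As an illustration of the sampling described next, a random core state of stellar rank at most \(r\) can be drawn as a normalized Fock superposition whose coefficient polynomial has total degree \(\leq r\). This is our own minimal sketch under that degree assumption, not the authors' exact sampler.

```python
import numpy as np

def random_core_state(r=2, seed=None):
    """Random two-mode core state |C>: coefficients c_{n1 n2} on |n1, n2>
    with n1 + n2 <= r (stellar function of total degree <= r), normalized."""
    rng = np.random.default_rng(seed)
    C = np.zeros((r + 1, r + 1), dtype=complex)
    for n1 in range(r + 1):
        for n2 in range(r + 1 - n1):
            C[n1, n2] = rng.normal() + 1j * rng.normal()
    return C / np.linalg.norm(C)

C = random_core_state(r=2, seed=1)  # nonzero only on the n1 + n2 <= r triangle
```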
We generate states \(\hat{\rho}\) in our data set by first creating a core state \(|C\rangle\) with a given stellar rank \(r\leq 2\) and random complex coefficients for the superposition in the Fock basis. Then, according to the Bloch-Messiah decomposition, any multimode Gaussian unitary operation \(\hat{G}\) can be decomposed as \(\hat{G}=\hat{\mathcal{U}}(\varphi)\hat{\mathcal{S}}(\xi)\hat{\mathcal{D}}( \alpha)\hat{\mathcal{V}}(\phi)\), where \(\hat{\mathcal{U}}\) and \(\hat{\mathcal{V}}\) are beam-splitter operators, \(\hat{\mathcal{S}}\) is a squeezing operator and \(\hat{\mathcal{D}}\) is a displacement operator. We choose random values for the parameters \(\varphi\), \(\xi\), \(\alpha\) and \(\phi\) of the different operators and apply the corresponding operation \(\hat{G}\) to the core state \(|C\rangle\) to produce the random state \(|\psi\rangle=\hat{G}|C\rangle\). Finally, by adding optical losses to simulate an experimental loss channel, a lossy two-mode state \(\hat{\rho}\) is generated. More details can be found in A. _Homodyne data._--The aim of our work is to feed our neural network with data that are directly accessible in experiments. To this goal, we focus on quadrature statistics, which in quantum optics experiments can be directly obtained through homodyne detection. The quadrature observables \(\hat{x}_{k}\) and \(\hat{p}_{k}\) in the modes \(k=1,2\) can be defined as the real and imaginary parts of the photon annihilation operator \(\hat{a}_{k}\) such that \(\hat{x}_{k}\equiv(\hat{a}_{k}+\hat{a}_{k}^{\dagger})\) and \(\hat{p}_{k}\equiv i(\hat{a}_{k}^{\dagger}-\hat{a}_{k})\). Homodyne detection then corresponds to a projective measurement on eigenstates of these quadrature operators. Hence, we define these eigenstates as \(\hat{x}_{k}|X_{k}\rangle=X_{k}|X_{k}\rangle\) and \(\hat{p}_{k}|P_{k}\rangle=P_{k}|P_{k}\rangle\), where \(X_{k}\) and \(P_{k}\) describe the continuum of real measurement outcomes for the quadrature measurements in the mode \(k\). For any given state \(\hat{\rho}\), we can obtain the joint quadrature statistics \(\mathcal{P}(X_{1},X_{2})\equiv\langle X_{1};X_{2}|\hat{\rho}|X_{1};X_{2}\rangle\) (other joint statistics are defined analogously). The density matrix is known in its Fock basis decomposition, we can explicitly calculate the joint quadrature statistics as \[\mathcal{P}(X_{1},X_{2})=\sum\limits_{\begin{subarray}{c}n_{1},n_{2}\\ n_{1},n_{2}\end{subarray};n_{1}^{\prime},n_{2}^{\prime}}\langle X_{1}|n_{1} \rangle\langle X_{2}|n_{2}\rangle\langle X_{1}|n_{1}^{\prime}\rangle^{*} \langle X_{2}|n_{2}^{\prime}\rangle^{*}. \tag{2}\] These quantities can be evaluated directly using the wave functions of Fock states which are given by \(\langle X_{k}|n_{k}\rangle=H(n_{k},X_{k}/\sqrt{2})e^{-\frac{\pi^{2}}{4}}/[(2\pi )^{1/4}\sqrt{2^{n_{k}}n_{k}!}]\), where \(H(n_{k},X_{k}/\sqrt{2})\) denotes a Hermite polynomial. Other joint quadrature distributions can be calculated analogously, using \(\langle P_{k}|n_{k}\rangle=i^{n_{k}}H(n_{k},P_{k}/\sqrt{2})e^{-\frac{\pi^{2}}{4 }}/[(2\pi)^{1/4}\sqrt{2^{n_{k}}n_{k}!}]\). _Training process._--The method described above to generate random states is repeated \(15,000\) times to obtain a set \(\varrho\) with a wide variety of density matrices. To evaluate the performance of the model on an independent data set, we use a cross-validation method to tune the hyperparameters of the model and split the entire data set into two parts: 70% for training and 30% for validation. 
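A small numerical sketch of Eq. (2) for the \(x\)-\(x\) statistics follows; the code and its indexing conventions are our own assumptions, using the quadrature convention \(\hat{x}=\hat{a}+\hat{a}^{\dagger}\) stated above.

```python
import numpy as np
from math import factorial, pi
from scipy.special import eval_hermite

def fock_wavefunction(n, X):
    """<X|n> for the x quadrature (vacuum variance 1), as in the text."""
    return (eval_hermite(n, X / np.sqrt(2)) * np.exp(-X ** 2 / 4)
            / ((2 * pi) ** 0.25 * np.sqrt(2.0 ** n * factorial(n))))

def joint_xx_distribution(rho, X1, X2):
    """P(X1, X2) from a two-mode density matrix rho indexed as
    rho[n1, n2, n1p, n2p] in a truncated Fock basis (Eq. (2))."""
    d = rho.shape[0]
    psi1 = np.array([fock_wavefunction(n, X1) for n in range(d)])
    psi2 = np.array([fock_wavefunction(n, X2) for n in range(d)])
    return np.real(np.einsum("abcd,a,b,c,d->", rho,
                             psi1, psi2, psi1.conj(), psi2.conj()))
```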
For each density matrix \(\hat{\rho}\), four joint quadrature statistics \(\mathcal{P}(X_{1},X_{2})\), \(\mathcal{P}(X_{1},P_{2})\), \(\mathcal{P}(P_{1},X_{2})\), \(\mathcal{P}(P_{1},P_{2})\), see Eq. (2), and the corresponding output entanglement label \(\mathcal{E}_{\hat{\rho}}^{\text{True}}\) are calculated; see Fig. 1 for a scheme of the training data processing.

Figure 1: Scheme of the training data processing. The generation of the training data set begins with a series of random density matrices \(\varrho\). Then for each density matrix one generates \(24\times 24\times 4\)-dimensional correlation patterns \(\mathcal{M}_{\varrho}\) as input data of the neural network. At the output, 3 entanglement labels \(\mathcal{E}_{\varrho}^{\text{True}}\) are computed from \(\varrho\) and fed into the neural network for training. The loss function is evaluated between the true entanglement labels \(\mathcal{E}_{\varrho}^{\text{True}}\) and the predicted labels \(\mathcal{E}^{\text{Pred}}\) output from the neural network.

Since the joint probability distributions are continuous over the whole phase space, they need to be discretized before we can feed the neural network with them. To that end, we restrict the region of phase space to values going from -6 to 6 and bin each distribution into a \(24\times 24\) correlation pattern. Every pixel is given by the median value of the joint probability distribution in the corresponding grid. For each state \(\hat{\rho}\), we thus obtain a \(24\times 24\times 4\)-dimensional tensor \(\mathcal{M}_{\hat{\rho}}\). Then, for the full set of density matrices, \(\mathcal{M}_{\varrho}\) together with the entanglement labels \(\mathcal{E}_{\varrho}^{\text{True}}\) are used for training the neural network with the Adam optimization algorithm. As shown in Fig. 1, each node of the three hidden fully connected layers performs a unique linear transformation on the input vector from its previous layer, followed by a nonlinear ReLU activation function. The loss function is evaluated between the true entanglement labels \(\mathcal{E}_{\varrho}^{\text{True}}\) and the predicted labels \(\mathcal{E}^{\text{Pred}}\) output from the neural network with the binary cross-entropy. The backward processes are iterated until the loss function converges. The binary entanglement labels \(\mathcal{E}_{\varrho}^{\text{True}}=\{E_{\text{PPT}},E_{\text{QFI}}^{(1)},E_{\text{QFI}}^{(2)}\}_{\varrho}\) are obtained for the classification task to detect whether the state in set \(\varrho\) is entangled or not via the PPT [16] and QFI [17] criteria. In the PPT criterion, the two-mode state is entangled if the partial transpose over one of its modes has negative eigenvalues, which leads to \(\|\hat{\rho}^{T_{B}}\|_{1}>1\). Here, we label the states that satisfy this inequality as \(E_{\text{PPT}}=1\) and the rest as \(E_{\text{PPT}}=0\). The QFI criterion, based on metrologically useful entanglement, relies on estimating a parameter \(\theta\) which is implemented through \(\hat{\rho}_{\theta}=e^{-i\theta\hat{A}}\hat{\rho}e^{i\theta\hat{A}}\), where \(\hat{A}=\sum_{i=1}^{2}\hat{A}_{i}\) is a sum of arbitrary local observables \(\hat{A}_{i}\) for the \(i\)th mode. The intuitive idea behind the QFI criterion is to detect entanglement by showing that \(\hat{\rho}\) allows us to estimate \(\theta\) with a precision that is higher than what could have been achieved with a separable state with the same variances for the generators.
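Before turning to the precise form of the QFI witness, here is a minimal sketch of the PPT label just defined, for a two-mode state truncated in the Fock basis; the truncation and indexing are our own assumptions.

```python
import numpy as np

def ppt_label(rho, d):
    """E_PPT for a two-mode state rho given as a (d*d, d*d) Fock-basis matrix:
    1 if ||rho^{T_B}||_1 > 1 (entangled by PPT), else 0. The trace norm is
    only approximate when the Fock basis is truncated."""
    r = rho.reshape(d, d, d, d)          # indices (n1, n2, n1', n2')
    r_tb = r.transpose(0, 3, 2, 1)       # partial transpose on mode 2
    trace_norm = np.abs(np.linalg.eigvalsh(r_tb.reshape(d * d, d * d))).sum()
    return int(trace_norm > 1 + 1e-9)
```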
More rigorously, the criterion is given by \(E[\hat{\rho},\hat{A}]=F_{Q}(\hat{\rho},\hat{A})-4\sum_{i=1}^{2}\text{Var}[\hat{\rho},\hat{A}_{i}]\), where \(F_{Q}\) is the quantum Fisher information of the state \(\hat{\rho}\), and \(\text{Var}[\cdot]\) denotes the variance. The entanglement witness depends on the choice of the observables \(\hat{A}_{i}\), which can be constructed by optimizing over an arbitrary linear combination of operators (namely, \(\hat{A}_{i}=\mathbf{n}\cdot\mathbf{H}=\sum_{k}n_{k}\hat{H}_{k}\)) [17]. At first order, \(\mathbf{H}\) takes the form \(\mathbf{H}^{(1)}=(\hat{x}_{i},\hat{p}_{i})\) with \(i=1,2\). To capture more correlations of non-Gaussian states, we can extend the set of operators by adding three second-order nonlinear operators: \(\mathbf{H}^{(2)}=(\hat{x}_{i},\hat{p}_{i},\hat{x}_{i}^{2},\hat{p}_{i}^{2},(\hat{x}_{i}\hat{p}_{i}+\hat{p}_{i}\hat{x}_{i})/2)\). If \(E[\hat{\rho},\hat{A}]>0\), the state is identified as QFI-type entangled and labeled as \(E_{\text{QFI}}^{(n)}=1\); otherwise it is labeled as \(E_{\text{QFI}}^{(n)}=0\), where \(n\) refers to the set \(\mathbf{H}^{(n)}\) used to compute \(E[\hat{\rho},\hat{A}]\). After \(3,000\) epochs of training, the loss function has converged and the network has captured the features mapping the correlation patterns to the entanglement labels, without being provided the full density matrices \(\varrho\). This is a crucial element, since experiments generally do not have access to the full density matrix of a produced state; what can be acquired is partial correlation information from measurements. _Testing process._--To test the network with experimental-like data, we simulate the homodyne measurement outcomes via a Monte Carlo sampling method. The test data are obtained from previously unseen quantum states, denoted as \(\varrho_{\text{test}}\). For each pattern of a state in \(\varrho_{\text{test}}\), we perform \(N\) repetitions of sampling to simulate the joint measurement events for each mode, forming a \(2\times N\)-dimensional array of outcomes that is used to recover the joint probability distributions. However, directly feeding the raw sampling results into the neural network is infeasible, as the input layer of our trained network requires \(24\times 24\times 4\)-dimensional data. Thus, we also bin each set of \(2\times N\) sampling points into a \(24\times 24\)-dimensional matrix. Figure 2(a) shows the discretized correlation patterns for different numbers of sampling points \(N\). The plot with \(N=\infty\) is directly obtained by discretizing the theoretical joint probability distributions. As the number of samples \(N\) increases from \(10\) to \(100,000\), the Monte Carlo sampling result converges towards the \(N=\infty\) case. We compare the performance of the neural network with PPT and QFI entanglement predictions estimated from the MaxLik algorithm by quantifying the ratio of states that the different algorithms correctly classify within the set \(\varrho_{\text{test}}\). The MaxLik algorithm is a statistical method that can process the same Monte Carlo sampling outcomes to conduct state tomography and reconstruct the corresponding density matrices \(\varrho_{\text{MaxLik}}\) [18]. After \(20\) iterations of the algorithm, the entanglement labels \(\mathcal{E}^{\text{MaxLik}}\) can be derived using the PPT and QFI criteria.
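The binning of the simulated homodyne outcomes into network-ready patterns can be sketched as follows; normalizing the histogram to an empirical joint probability is our own assumption about an unspecified detail.

```python
import numpy as np

def bin_homodyne_samples(x1, x2, n_bins=24, lim=6.0):
    """Bin N joint homodyne outcomes (x1[i], x2[i]) into an n_bins x n_bins
    correlation pattern over the phase-space window [-lim, lim]^2."""
    edges = np.linspace(-lim, lim, n_bins + 1)
    hist, _, _ = np.histogram2d(x1, x2, bins=[edges, edges])
    return hist / hist.sum()               # empirical joint probability per pixel

# Stacking the four quadrature pairs (X1X2, X1P2, P1X2, P1P2) along a last
# axis yields the 24 x 24 x 4 input tensor expected by the trained network.
```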
A significant difference between deep learning and MaxLik is that the former excels at extracting features and insights from a large data set, enabling the neural network to extrapolate and make accurate predictions for unseen data, while the latter reconstructs each state separately without relying on prior experience. In Figs. 2(b) and (c), the orange and blue lines show the accuracy of \(\mathcal{E}^{\text{Pred}}\) predicted by our neural network, while the gray lines show the accuracy of \(\mathcal{E}^{\text{MaxLik}}\) based on MaxLik. For PPT-type entanglement \(E_{\text{PPT}}\), the accuracy of the neural network reaches \(0.993\) when the number of Monte Carlo sampling points is \(N=100,000\). In contrast, even though the average fidelity between \(\varrho_{\text{test}}\) and the reconstructed density matrices \(\varrho_{\text{MaxLik}}\) increases from \(0.734\) to \(0.945\), the accuracy of MaxLik remains unchanged as the number of samples increases. For QFI-type entanglement \(E_{\text{QFI}}^{(n)}\), the accuracy of the neural network reaches \(0.913\) and \(0.923\) for the first and second order when \(N=100,000\), respectively.

Figure 2: (a) Binned correlation patterns of a state \(\hat{\rho}\in\varrho_{\text{test}}\) when the number of Monte Carlo sampling points is \(N=10\), \(100\), \(1,000\), \(10,000\) and \(100,000\). The plot with \(N=\infty\) shows the correlation patterns directly discretized from the theoretical joint probability distributions. (b) Accuracy of the PPT-type entanglement prediction from the neural network (orange line) and the MaxLik algorithm (gray line) against the same values of \(N\) as in (a). (c) Accuracy of the QFI-type entanglement prediction from the neural network (blue lines) and the MaxLik algorithm (gray lines) against \(N\). Solid and dashed lines represent the first- and second-order QFI, respectively.

With the same amount of joint homodyne measurement data, the neural network shows better performance than the standard state reconstruction process through the MaxLik algorithm. Furthermore, we test our network on lossy single-photon-subtracted states [42]. This class of states is included in our training data set since it can be decomposed as \[\ket{\psi}_{\text{sub}} \propto(\text{cos}\gamma\hat{a}_{1}+\text{sin}\gamma\hat{a}_{2}) \hat{S}_{1}(\xi_{1})\hat{S}_{2}(\xi_{2})\ket{00} \tag{3}\] \[=\hat{S}_{1}(\xi_{1})\hat{S}_{2}(\xi_{2})(\text{cos}\gamma\text{sinh}r_{1}\text{e}^{i\omega_{1}}\ket{10}+\text{sin}\gamma\text{sinh}r_{2}\text{e}^{i\omega_{2}}\ket{01}),\] where \(\hat{S}_{i}(\xi_{i})\) is the single-mode squeezing on mode \(i\) with squeezing parameter \(\xi_{i}=r_{i}\text{e}^{i\omega_{i}}\), and \(\gamma\) is the angle controlling the probability of the mode in which the single photon is subtracted. Hence we can also test how our network performs for this kind of state. The accuracy of predicting PPT-type entanglement reaches \(0.985\) when the number of sampling points is \(N=100,000\). We compare the robustness of our network with another existing experiment-friendly criterion based on Fisher information, reported in Ref. [43], for a state \(\ket{\psi}_{\text{sub}}\) with squeezing parameters \(r_{1}=2\,\text{dB}\) and \(r_{2}=-3\,\text{dB}\), and \(\omega_{1}=\omega_{2}=0\). While the other criterion cannot detect the entanglement when the loss coefficient \(\eta\) is larger than \(0.06\), our neural network performs with much stronger robustness, leading to a loss threshold of \(\eta=0.33\) for the first-order QFI, and beyond \(\eta=0.5\) for PPT and second-order QFI (see Fig. 6 of the Appendix).
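For reference, the QFI witness used for the labels above can be evaluated numerically from a density matrix via the standard spectral formula for the quantum Fisher information; this is a generic sketch rather than the paper's code, and the local observables passed in are assumed to be already embedded in the full two-mode Hilbert space.

```python
import numpy as np

def qfi(rho, A, tol=1e-12):
    """Quantum Fisher information via the spectral formula
    F_Q = 2 * sum_{k,l} (lam_k - lam_l)^2 / (lam_k + lam_l) |<k|A|l>|^2."""
    lam, V = np.linalg.eigh(rho)
    A_kl = V.conj().T @ A @ V
    num = (lam[:, None] - lam[None, :]) ** 2
    den = lam[:, None] + lam[None, :]
    weight = np.where(den > tol, num / np.maximum(den, tol), 0.0)
    return 2.0 * np.sum(weight * np.abs(A_kl) ** 2)

def variance(rho, A):
    """Var[rho, A] = tr(rho A^2) - tr(rho A)^2."""
    return np.real(np.trace(rho @ A @ A) - np.trace(rho @ A) ** 2)

def qfi_witness(rho, A_locals):
    """E[rho, A] = F_Q(rho, sum_i A_i) - 4 * sum_i Var[rho, A_i]; a positive
    value flags QFI-type entanglement."""
    A_total = sum(A_locals)
    return qfi(rho, A_total) - 4.0 * sum(variance(rho, Ai) for Ai in A_locals)
```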
_Clustering process._--To visualise how our neural network processes input correlation patterns, we use t-Distributed Stochastic Neighbor Embedding (t-SNE) [44] as a dimension-reduction technique to compare our data before and after processing by the neural network. Unlike supervised classifier neural networks, the t-SNE algorithm is an unsupervised learning approach dealing with unlabeled data. This visualization tool has been widely used in many biology and marketing problems, such as cell population identification [45; 46] and customer segmentation [47]. It can be seen that clusters of entangled states clearly emerge after neural network processing. Figure 3(a) shows two \(24\times 24\times 4\)-dimensional correlation patterns, one for a state with non-entangled labels \(\{0,0,0\}\) and one for a state with entangled labels \(\{1,1,1\}\), marked with red and yellow triangles, respectively. These high-dimensional data can be mapped onto a two-dimensional plane through the t-SNE algorithm, and the results are shown in Figs. 3(b), (c) and (d). The left plot of Fig. 3(b) exhibits clusters formed from the discretized correlation patterns of the whole training data set (\(n=15,000\)). Each point represents a quantum state, colored by its PPT-type entanglement label \(E_{\text{PPT}}=0\) (dark green) or \(E_{\text{PPT}}=1\) (light green). The right plot of Fig. 3(b) reveals clusters formed from the output vectors of the last hidden layer of the neural network, which has 64 neurons. After the neural network processing, the two overlapping dark green and light green clouds have largely detached from each other, forming disjoint clusters. We can clearly see that the two triangles are now well separated, each belonging to its respective cluster. Similarly, the clusters in Figs. 3(c) and (d) have the same shape as in (b), but are colored with the first-order and second-order QFI-type entanglement labels \(E_{\text{QFI}}^{(1)}\) and \(E_{\text{QFI}}^{(2)}\). Again, the two clouds are better separated after the neural network processing [right plots of (c) and (d)]. Comparing (c) with (d), we can see that the light green cluster in (d) covers more area than in (c), which intuitively shows that, as expected, the second-order QFI-type criterion finds more entangled states than the first-order one. These results clearly show how different clusters of states have different metrological capabilities. Even though there is no explicit boundary between the two classes (entangled or not) in the left cluster of Fig. 3(b), where the neural network has not yet been applied, the two classes of states already tend to cluster together. This implies that the correlation patterns inherently contain entanglement-related regularities for the studied states, which makes it feasible for the deep learning algorithm to learn from the training data. Therefore, this visualization method provides us with a technique to select appropriate input data when detecting a specific quantum feature through a deep learning algorithm.
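The clustering views described here can be reproduced with an off-the-shelf t-SNE implementation; this sketch assumes scikit-learn and uses random placeholder features in place of the network's 64-dimensional last-hidden-layer activations.

```python
import numpy as np
from sklearn.manifold import TSNE

# Placeholder for the 64-dimensional last-hidden-layer activations of the
# 15,000 training states; in practice these come from the trained network.
features = np.random.rand(15000, 64)

# Map the high-dimensional points to 2D while preserving pairwise similarities.
embedding = TSNE(n_components=2, perplexity=30.0, init='pca',
                 random_state=0).fit_transform(features)
# embedding has shape (15000, 2); coloring points by E_PPT or E_QFI labels
# reproduces the cluster views of Fig. 3.
```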
Figure 3: Two-dimensional clusters of two-mode CV states. (a) Examples of \(24\times 24\times 4\)-dimensional correlation patterns for two input states. (b) Left: the two-dimensional clustering of states before being fed into the network, where t-SNE preserves the pairwise similarities between data points. Right: the same dimension-reduction process conducted on the 64-dimensional array from the last hidden layer of the neural network. Points representing PPT-type entangled states (\(E_{\text{PPT}}=1\)) are colored in light green, while others are colored in dark green. (c) and (d) use the same method as (b) but are colored according to the first-order and second-order QFI-type entanglement labels \(E_{\text{QFI}}^{(1)}\) and \(E_{\text{QFI}}^{(2)}\), respectively.

In conclusion, we develop a neural network to detect CV entanglement for general two-mode states with finite stellar rank, using only correlation patterns obtained through homodyne detection. We test the performance of the network on patterns generated by Monte Carlo sampling with different numbers of sampling points. With the same limited patterns, our method provides higher accuracy compared to the entanglement that can be extracted through a full state tomography. Meanwhile, the neural network shows strong robustness to losses, which we illustrate for the specific case of single-photon-subtracted states. Finally, we use the t-SNE algorithm to visualize the clusters formed from abundant correlation patterns before and after they are fed into the network. This helps us validate the suitability of the input for detecting target quantum features. This can be further used to identify and detect more refined kinds of quantum correlations, like non-passive separability, i.e., the fact that the entanglement cannot be undone with passive optical transformations (beam splitters, phase shifters), a strong feature of non-Gaussian states necessary to reach quantum advantage [48]. ###### Acknowledgements. We acknowledge enlightening discussions with Nicolas Treps. This work is supported by the National Natural Science Foundation of China (Grants No. 11975026, No. 12125402, No. 12004011, and No. 12147148), Beijing Natural Science Foundation (Grant No. Z190005), and the Innovation Program for Quantum Science and Technology (Grant No. 2021ZD0301500). M.I., C.E.L., and M.W. received funding through the ANR JCJC project NoRdiC (ANR-21-CE47-0005). V.P. and M.W. acknowledge financial support from the European Research Council under the Consolidator Grant COQCOoN (Grant No. 820079).
2309.05863
The bionic neural network for external simulation of human locomotor system
Muscle forces and joint kinematics estimated with musculoskeletal (MSK) modeling techniques offer useful metrics describing movement quality. Model-based computational MSK models can interpret the dynamic interaction between the neural drive to muscles, muscle dynamics, body and joint kinematics, and kinetics. Still, such a set of solutions suffers from high computational time and muscle recruitment problems, especially in complex modeling. In recent years, data-driven methods have emerged as a promising alternative due to the benefits of flexibility and adaptability. However, a large amount of labeled training data is not easy to acquire. This paper proposes a physics-informed deep learning method based on MSK modeling to predict joint motion and muscle forces. The MSK model is embedded into the neural network as an ordinary differential equation (ODE) loss function with physiological parameters of muscle activation dynamics and muscle contraction dynamics to be identified. These parameters are automatically estimated during the training process, which guides the prediction of muscle forces combined with the MSK forward dynamics model. Experimental validations on two groups of data, including one benchmark dataset and one self-collected dataset from six healthy subjects, are performed. The results demonstrate that the proposed deep learning method can effectively identify subject-specific MSK physiological parameters and that the trained physics-informed forward-dynamics surrogate yields accurate motion and muscle force predictions.
Yue Shi, Shuhao Ma, Yihui Zhao
2023-09-11T23:02:56Z
http://arxiv.org/abs/2309.05863v1
# The bionic neural network for external simulation of human locomotor system ###### Abstract Muscle forces and joint kinematics estimated with musculoskeletal (MSK) modeling techniques offer useful metrics describing movement quality. Model-based computational MSK models can interpret the dynamic interaction between the neural drive to muscles, muscle dynamics, body and joint kinematics, and kinetics. Still, such a set of solutions suffers from high computational time and muscle recruitment problems, especially in complex modeling. In recent years, data-driven methods have emerged as a promising alternative due to the benefits of flexibility and adaptability. However, a large amount of labeled training data is not easy to acquire. This paper proposes a physics-informed deep learning method based on MSK modeling to predict joint motion and muscle forces. The MSK model is embedded into the neural network as an ordinary differential equation (ODE) loss function with physiological parameters of muscle activation dynamics and muscle contraction dynamics to be identified. These parameters are automatically estimated during the training process, which guides the prediction of muscle forces combined with the MSK forward dynamics model. Experimental validations on two groups of data, including one benchmark dataset and one self-collected dataset from six healthy subjects, are performed. The results demonstrate that the proposed deep learning method can effectively identify subject-specific MSK physiological parameters and that the trained physics-informed forward-dynamics surrogate yields accurate motion and muscle force predictions. Electromyography (EMG) exoskeleton online learning adversarial learning edge-computing parallel computing

## 1 Introduction

Accurate estimation of muscle forces holds pivotal significance in diverse domains. As a powerful computational simulation tool, the musculoskeletal (MSK) model can be applied for detailed biomechanical analysis to understand muscle-generated forces thoroughly, which is beneficial to various applications ranging from designing efficacious rehabilitation protocols Dong et al. (2006) and optimizing motion control algorithms Ding et al. (2010), Luo et al. [2023], to enhancing clinical decision-making Azimi et al. [2020], Karthick et al. [2018], Sekiya et al. [2019] and improving the performance of athletes Zaman et al. [2022], McErlain-Naylor et al. [2021]. Thus far, the majority of MSK models are based on physics-based modeling techniques to interpret the transformation among neural excitation, muscle dynamics, joint kinematics, and kinetics. It is challenging to offer a biologically consistent rationale for the choice of any objective function Kim et al. [2009], Modenese et al. [2011], given our lack of knowledge about the method used by the central nervous system (CNS) Rane et al. [2019]. Moreover, the estimation of muscle forces and joint motion during inverse dynamics analysis suffers from high computational time and muscle recruitment problems, especially in complex modeling Trinler et al. [2019], Lee et al. [2009], Lund et al. [2015]. The surface electromyography (sEMG) signal is a non-invasive technique that measures the electrical activity of muscles. The sEMG signal reflects the motor intention of the human 20 ms to 200 ms prior to the actual initiation of the joint motion and muscle activation Campanini et al. [2020], Teramae et al. [2018]. The MSK forward dynamics model is an effective method for mapping sEMG to muscle forces and joint motion.
The sEMG-based forward dynamics approaches calculate the muscle forces based on the sEMG signals and muscle properties, which directly reflect the force and velocity of muscle contraction without assuming optimization criteria or constraints. For instance, Zhao et al. proposed an MSK model driven by sEMG for estimating the continuous motion of the wrist joint, with a genetic algorithm for parameter optimization Zhao et al. [2020a]. Thomas et al. presented an MSK model to predict joint moments and related muscle forces of the knee and elbow Buchanan et al. [2004]. Although MSK forward dynamics approaches are effective, access to individualised physiological parameters is challenging. Optimisation algorithms are commonly used for parameter identification, which need the muscle forces or joint moments calculated from inverse dynamics as targets. However, the process of obtaining these targets is time-consuming Simonetti et al. [2021], Silvestros et al. [2019]. To address the time-consuming and uncertainty issues of model-based methods, data-driven methods have also been explored to establish relationships between movement variables and neuromuscular status, i.e., from sEMG to joint kinematics and muscle forces Su et al. [2021], Wu et al. [2021]. Burton et al. [2021] implemented four machine/deep learning methods, including a recurrent neural network (RNN), a convolutional neural network (CNN), a fully-connected neural network (FNN), and principal component regression, to predict the trend and magnitude of the estimated joint contact and muscle forces. Zhang et al. proposed a physics-informed deep learning framework for the prediction of muscle forces and joint kinematics, where a new loss function derived from the motion equation was designed as a soft constraint to regularize the data-driven model Zhang et al. [2023]. Shi et al. seamlessly integrated Lagrange's equation of motion and an inverse dynamic muscle model into the generative adversarial network (GAN) framework for sEMG-based estimation of muscle forces and joint kinematics Shi et al. [2023]. However, despite their adaptability and flexibility, deep learning methods still suffer from potential limitations. Deep learning is a 'black box' method: the intermediate processes of data-driven models do not consider the physical significance underlying the modeling process. More importantly, the above deep learning methods use labeled input and output data for training, and it is difficult to obtain the values of the muscle forces corresponding to the sEMG. Given the limitations of both model-based and data-driven models, in this work we propose a novel PINN framework combining the information from MSK forward dynamics for the estimation of the joint angle and related muscle forces, simultaneously identifying the physiological parameters. We find that the entire MSK forward dynamics system can be regarded as an ordinary differential equation (ODE) system with unknown physiological parameters, where the independent variable is the joint angle. Specifically, a fully connected neural network (FNN) estimates the MSK forward dynamics model. The physiological parameters are identified during back-propagation. In addition, muscle forces are predicted through the guidance of the muscle contraction dynamics. The main contributions of this paper include:

- The PINN framework is proposed for the estimation of joint motion and related muscle forces, simultaneously identifying the physiological parameters.
- In the absence of true values, muscle forces are estimated based on the embedded muscle contraction dynamics through sEMG signals and joint motion signals.

## 2 Methods

In this section, an introduction to each sub-process of the MSK forward dynamics system is first presented in Part A. Then, we demonstrate a novel PINN framework in Part B.

### Musculoskeletal forward dynamics

#### 2.1.1 Muscle activation dynamics

Muscle activation dynamics refer to the process of transforming sEMG signals \(e_{n}\) into muscle activation signals \(a_{n}\), which can be estimated through Eq. 1: \[a_{n}=\frac{e^{Ae_{n}}-1}{e^{A}-1} \tag{1}\] where preprocessed sEMG signals \(e_{n}\) are utilized as neural activation signals \(u_{n}\) for estimating muscle activation signals Pau et al. (2012) to simplify the model. \(A\) is a nonlinear shape factor ranging from highly nonlinear (-3) to linear (0.01) Buchanan et al.

#### 2.1.2 Muscle-Tendon Model

Hill's modelling technique is used to compute the muscle-tendon force \(F^{mt}\), which consists of an elastic tendon in series with a muscle fibre. The muscle fibre includes a contractile element (CE) in parallel with a passive elastic element (PE), as shown in Fig. 1. \(l_{n}^{mt}\), \(l_{n}^{m}\) and \(l_{n}^{t}\) are the muscle-tendon length, muscle fibre length and tendon length of the nth muscle, respectively. The model is parameterized by the physiological parameters: the maximum isometric muscle force \(F_{0,n}^{m}\), the optimal muscle fibre length \(l_{0,n}^{m}\), the maximum contraction velocity \(v_{0,n}\), the slack length of the tendon \(l_{s,n}^{t}\) and the pennation angle \(\varphi_{0,n}\), which are difficult to measure in vivo and vary with age and gender. We employ the following vector \(\chi_{n}\) to represent the personalized physiological parameters of the nth muscle that require identification: \[\chi_{n}=[l_{0,n}^{m},v_{0,n},F_{0,n}^{m},l_{s,n}^{t},\varphi_{0,n}]\] The pennation angle \(\varphi_{n}\) is the angle between the orientation of the muscle fibre and the tendon, and the pennation angle at the current muscle fibre length \(l_{n}^{m}\) is calculated by \[\varphi_{n}(l_{n}^{m};\chi_{n})=\sin^{-1}(\frac{l_{0,n}^{m}\sin\varphi_{0,n}}{l_{n}^{m}}) \tag{2}\] In this study, the tendon is assumed to be rigid Millard et al. (2013), and thus the tendon length \(l_{n}^{t}=l_{s,n}^{t}\) is adopted. The consequent fibre length can be calculated as \[l_{n}^{m}=(l_{n}^{mt}-l_{n}^{t})/\cos\varphi_{n} \tag{3}\] \(F_{CE,n}\) is the active force generated by CE, which can be written as \[F_{CE,n}(a_{n},l_{n}^{m},v_{n};\chi_{n})=a_{n}f_{v}(\overline{v}_{n})f_{a}(\overline{l}_{n,a}^{m})F_{0,n}^{m} \tag{4}\] The function \(f_{a}(\cdot)\) represents the active force-length relationship at different muscle fibre lengths and muscle activations, written as \[f_{a}(\overline{l}_{n,a}^{m})=e^{-(\overline{l}_{n,a}^{m}-1)^{2}k^{-1}} \tag{5}\] where \(\overline{l}_{n,a}^{m}=l_{n}^{m}/[l_{0,n}^{m}(\lambda(1-a_{n})+1)]\) is the normalized muscle fibre length with respect to the corresponding activation level, and \(\lambda\) is a constant, set to 0.15 Lloyd and Besier (2003). \(k\) is a constant approximating the force-length relationship, set to 0.45 Thelen (2003). The function \(f_{v}(\overline{v}_{n})\) represents the force-velocity relationship between \(F_{n}^{m}\) and the normalized contraction velocity \(\overline{v}_{n}\) Zhao et al. (2020)
\[f_{v}(\overline{v}_{n})=\left\{\begin{array}{cc}\frac{0.3(\overline{v}_{n}+1)}{-\overline{v}_{n}+0.3}&\overline{v}_{n}\leq 0\\ \frac{2.3\overline{v}_{n}+0.039}{1.3\overline{v}_{n}+0.039}&\overline{v}_{n}>0\end{array}\right. \tag{6}\] where \(\overline{v}_{n}=v_{n}/v_{0,n}\), and \(v_{0,n}\) is typically set as 10 \(l_{0,n}^{m}\)/sec Zajac (1989). \(v_{n}\) is the derivative of the muscle fibre length with respect to time \(t\).

Figure 1: Wrist model with the primary muscle units associated with wrist flexion and extension.

Note that the passive force \(F_{PE,n}\) is the force produced by PE, which can be calculated as \[F_{PE,n}(l_{n}^{m};\chi_{n})=\left\{\begin{array}{cc}0&l_{n}^{m}\leq l_{0,n}^{m}\\ f_{P}(\overline{l}_{n}^{m})F_{0,n}^{m}&l_{n}^{m}>l_{0,n}^{m}\end{array}\right. \tag{7}\] where \(\overline{l}_{n}^{m}=l_{n}^{m}/l_{0,n}^{m}\) is the normalized muscle fibre length. \(f_{P}(\cdot)\) is given by \[f_{P}(\overline{l}_{n}^{m})=\frac{e^{10(\overline{l}_{n}^{m}-1)}}{e^{5}} \tag{8}\] \(F_{n}^{mt}\) is the summation of the active force \(F_{CE,n}\) and the passive force \(F_{PE,n}\), which can be written as \[F_{n}^{mt}(a_{n},l_{n}^{m},v_{n};\chi_{n})=(F_{CE,n}+F_{PE,n})\cos\varphi_{n} \tag{9}\]

#### 2.1.3 Joint Kinematic Modelling Technique

The single-joint configuration is presented to estimate the wrist's continuous joint motion. The muscle-tendon length \(l_{n}^{mt}\) and moment arm \(r_{n}\) against the wrist joint angle \(q\) are calculated using the polynomial equation and the scale coefficient Ramsay et al. (2009). The total joint torque is calculated as \[\tau(\mathbf{a},\mathbf{r},\mathbf{l}^{m},\mathbf{v};\tilde{\chi})=\sum_{n=1}^{N}F_{n}^{mt}(a_{n},l_{n}^{m},v_{n};\chi_{n})r_{n} \tag{10}\] where \(N\) represents the number of muscles included, and \(\mathbf{a},\mathbf{r},\mathbf{l}^{m},\mathbf{v},\tilde{\chi}\) represent the sets of muscle activations, moment arm lengths, muscle fibre lengths, muscle contraction velocities and the physiological parameters of all included muscles, respectively. Since the muscle activation level is not directly related to the joint motion, it is necessary to compute the joint acceleration using forward dynamics. In this research, we only consider the flexion and extension of a simple single hinge joint and take the model of the wrist joint's flexion as an example. The wrist joint is assumed to be a single hinge joint, and the palm and fingers are assumed to form a rigid segment rotating around the wrist joint in the sagittal plane. Thus, we have the following relationship based on the Lagrange equation: \[\ddot{q}=\frac{\tau-mgL\sin q-C\dot{q}}{I} \tag{11}\] where \(q,\dot{q},\ddot{q}\) are the joint angle, joint angular velocity and joint angular acceleration. \(I\) is the moment of inertia of the hand, equal to \(mL^{2}+I_{p}\), where \(I_{p}\) is the moment of inertia about the principal axis parallel to the flexion/extension axis. \(m\) and \(L\) are the mass of the hand and the length between the rotation centre and the hand's centre of mass, which are measured from the subjects. \(C\) is the damping coefficient representing the elastic and viscous effects from tendons and ligaments, and \(\tau\) is calculated from Eq. 10. The three parts mentioned above constitute the main components of the MSK forward dynamics model, which establishes the mapping from sEMG to joint motion in a model-based manner. We introduce the model-based approach here because it is the foundation on which the PINN framework is built: the entire MSK forward dynamics model is embedded into the neural network.
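To make the muscle-tendon pipeline of Eqs. (1)-(9) concrete, the following NumPy sketch evaluates the activation and the Hill-type force; the function names and vectorized branching are our own choices, and the inputs are assumed to be in consistent SI units.

```python
import numpy as np

def muscle_activation(e, A=-2.0):
    """Eq. (1): enveloped sEMG e in [0, 1] -> activation a, with shape factor A."""
    return (np.exp(A * e) - 1.0) / (np.exp(A) - 1.0)

def hill_muscle_force(a, l_m, v_m, l0_m, v0, F0_m, phi0, k=0.45, lam=0.15):
    """Eqs. (2)-(9): active + passive fibre forces projected through the
    pennation angle; all arguments may be NumPy arrays of equal shape."""
    phi = np.arcsin(l0_m * np.sin(phi0) / l_m)                       # Eq. (2)
    l_bar_a = l_m / (l0_m * (lam * (1.0 - a) + 1.0))                 # activation-scaled length
    f_a = np.exp(-((l_bar_a - 1.0) ** 2) / k)                        # Eq. (5)
    v_bar = v_m / v0
    f_v = np.where(v_bar <= 0.0,                                     # Eq. (6)
                   0.3 * (v_bar + 1.0) / (0.3 - v_bar),
                   (2.3 * v_bar + 0.039) / (1.3 * v_bar + 0.039))
    l_bar = l_m / l0_m
    f_p = np.where(l_bar > 1.0,
                   np.exp(10.0 * (l_bar - 1.0)) / np.exp(5.0), 0.0)  # Eqs. (7)-(8)
    F_ce = a * f_v * f_a * F0_m                                      # Eq. (4)
    F_pe = f_p * F0_m                                                # Eq. (7)
    return (F_ce + F_pe) * np.cos(phi)                               # Eq. (9)
```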
### PINN framework

The proposed PINN framework for the estimation of the joint angle and muscle forces, together with physiological parameter identification, is introduced here. The computational graph is illustrated in Fig. 2. Specifically, in the data-driven component, a fully connected neural network (FNN) is utilised to automatically extract high-level features and build the relationship between sEMG signals and the joint angle/muscle forces, while the physics-informed component entails the underlying physical relationship between the joint angle and muscle forces. In this manner, the recorded sEMG signals and discrete time steps \(t\) are first fed into the FNN. With the features extracted by the FNN and the integration of the MSK forward dynamics model, the predicted muscle forces and joint angles can be obtained.

#### 2.2.1 Architecture and Training of FNN

To demonstrate the effectiveness of the proposed physics-informed deep learning framework, we chose an FNN architecture composed of four fully-connected blocks and two regression blocks. Specifically, each fully-connected block has one linear layer, one ReLU layer and one dropout layer. Similarly, each regression block contains one ReLU layer and one dropout layer. Of the two regression blocks, one is for the prediction of the joint angle and the other is for the prediction of the muscle forces. The training was performed using the Adam algorithm with an initial learning rate of \(1\times 10^{-4}\). In the model training phase, the batch size is set as 1, and the FNN is trained by stochastic gradient descent with momentum. Additionally, the maximum iteration is 1000 and the dropout rate is 0.3.

#### 2.2.2 Design of Loss Functions

Unlike state-of-the-art methods, the loss function of the proposed framework consists of the MSE loss and the physics-based losses. The MSE loss minimises the mean square error between the ground truth and the prediction, while the physics-informed losses are responsible for the parameter identification and for guiding the muscle force prediction. The total loss is shown below: \[L_{total}=L_{q}+L_{r,1}+L_{r,2} \tag{12}\] \[L_{q}=MSE(q) \tag{13}\] \[L_{r,1}=R_{1}(q) \tag{14}\] \[L_{r,2}=R_{2}(q) \tag{15}\] where \(L_{q}\) denotes the MSE loss function of the joint angle, since its ground truth is available from measurement, while \(L_{r,1}\) represents the first residual loss function imposed by the MSK forward dynamics introduced in Sec. 2.1, which helps identify the physiological parameters. \(L_{r,2}\) denotes the second residual loss function, which is utilized for the prediction of muscle forces. \(R_{1}(q)\) and \(R_{2}(q)\) both indicate functions of the predicted joint angle. The mathematical formula for each part of the loss is specified next. MSE Loss: The MSE loss is calculated by \[MSE(q)=\frac{1}{T}\sum_{t=1}^{T}(\hat{q}_{t}-q_{t})^{2} \tag{16}\] where \(q_{t}\) denotes the ground truth of the joint angle at time step \(t\), \(\hat{q}_{t}\) is the predicted joint angle from the network, and \(T\) denotes the total sample number.
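A minimal PyTorch sketch of the surrogate architecture described in Sec. 2.2.1 follows; the input dimension (one time step plus five sEMG channels) and the hidden width are our own assumptions.

```python
import torch.nn as nn

def block(d_in, d_out, p=0.3):
    """One fully-connected block: Linear + ReLU + Dropout, as in Sec. 2.2.1."""
    return [nn.Linear(d_in, d_out), nn.ReLU(), nn.Dropout(p)]

class MSKSurrogate(nn.Module):
    """Four fully-connected blocks feeding two regression heads: one for the
    joint angle, one for the muscle forces."""
    def __init__(self, d_in=6, n_muscles=5, h=128):
        super().__init__()
        self.backbone = nn.Sequential(*block(d_in, h), *block(h, h),
                                      *block(h, h), *block(h, h))
        self.angle_head = nn.Sequential(nn.ReLU(), nn.Dropout(0.3), nn.Linear(h, 1))
        self.force_head = nn.Sequential(nn.ReLU(), nn.Dropout(0.3), nn.Linear(h, n_muscles))

    def forward(self, x):                 # x: (batch, 6) = [t, 5 sEMG channels]
        z = self.backbone(x)
        return self.angle_head(z), self.force_head(z)
```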
Physics-informed Loss: Physics-informed governing laws, reflecting the underlying relationships between the muscle forces and kinematics in human motion, are converted into constraints during the neural network's training phase. The first residual loss function, designed from the Lagrange equation expressing the motion state of the MSK system, is shown below: \[R_{1}(q)=\frac{1}{T}\sum_{t=1}^{T}\left(I\ddot{\hat{q}}_{t}+C\dot{\hat{q}}_{t}+mgL\sin\hat{q}_{t}-\tau_{t}(\mathbf{a},\mathbf{r},\mathbf{l}^{m},\mathbf{v};\tilde{\chi})\right) \tag{17}\]

Figure 2: A computational graph of the proposed PINN.

\(\tau_{t}\) represents the joint torque, calculated by Eq. 10. The moment arm \(\mathbf{r}\) against the joint angle is calculated using the polynomial equation and the scale coefficient Ramsay et al. (2009), and can be represented as \(\mathbf{r}(q)\). We use \(\mathbf{l}^{mt}(q)\) to represent the relationship between the joint angle and the muscle-tendon length, which is exported from OpenSim. Therefore, the muscle fibre length \(\mathbf{l}^{m}(q)\) can also be expressed in terms of the joint angle \(q\) from Eq. 3. \(\mathbf{v}(q)\) is the time derivative of the muscle fibre length \(\mathbf{l}^{m}(q)\), which can be represented in terms of \(q\) and \(\dot{q}\). As mentioned for Eq. 11, this equation expresses the dynamic equilibrium that the dynamical system should satisfy at time \(t\); the same equation must hold for the MSK forward dynamics surrogate given by the neural network. When the predicted angle of the neural network is \(\hat{q}_{t}\) at time \(t\), the joint torque at this time can be expressed as \(\tau_{t}(\mathbf{a},\mathbf{r}(\hat{q}_{t}),\mathbf{l}^{mt}(\hat{q}_{t}),\mathbf{v}(\hat{q}_{t});\tilde{\chi})=\tau_{t}(\mathbf{a},\hat{q}_{t},\dot{\hat{q}}_{t};\tilde{\chi})\), which can be regarded as an ordinary differential equation (ODE) with unknown physiological parameters \(\tilde{\chi}\) with respect to the predicted joint angle \(\hat{q}_{t}\). During the training phase, \(\dot{\hat{q}}_{t}\) and \(\ddot{\hat{q}}_{t}\) can be automatically derived through backpropagation. The physiological parameters \(\tilde{\chi}\) are defined as weights of the surrogate neural network and are automatically updated as the network minimises the residual loss function \(L_{r,1}\) during backpropagation. The estimation of \(\hat{\chi}\) can be written as \[\hat{\chi}=\underset{\tilde{\chi}}{\arg\min}\,L_{r,1} \tag{18}\] Due to the time-varying nature of muscle forces and the difficulty of measuring them, it is challenging to obtain measured muscle forces as the ground truth for training. The second residual loss function \(L_{r,2}\), built on \(L_{r,1}\), is therefore designed to guide the muscle force prediction. During each iteration, we identify physiological parameters through \(L_{r,1}\); these potential physiological parameters are used to calculate the muscle forces through the embedded MSK forward dynamics model, serving as the learning target of the neural network. The loss function \(L_{r,2}\) is given by \[R_{2}(q)=\frac{1}{T}\sum_{t=1}^{T}\sum_{n=1}^{N}(\hat{F}_{t,n}^{mt}-F_{t,n}^{mt}(a_{t,n},l_{t,n}^{m},v_{t,n}^{m};\hat{\chi}_{n})) \tag{19}\] where \(\hat{F}_{t,n}^{mt}\) is the estimated muscle force from the network at time step \(t\), \(F_{t,n}^{mt}\) represents the muscle force calculated by Eq. 9, and \(l_{t,n}^{m},v_{t,n}^{m}\) indicate the muscle fibre length and muscle contraction speed of the nth muscle at time \(t\). They can all be expressed in terms of \(\hat{q}_{t}\) and \(\dot{\hat{q}}_{t}\), and \(\hat{\chi}_{n}\) stands for the physiological parameters of the nth muscle identified by Eq. 18. Specifically, the muscle force of the nth muscle at time \(t\) can be expressed as \(F_{t,n}^{mt}(a_{t,n},\hat{q}_{t},\dot{\hat{q}}_{t};\hat{\chi}_{n})=F_{t,n}^{mt}(a_{t,n},l_{t,n}^{m},v_{t,n}^{m};\hat{\chi}_{n})\), which can be regarded as another ODE with respect to the joint angle \(\hat{q}_{t}\), guiding the muscle force prediction in the absence of the ground truth.
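The derivatives \(\dot{\hat{q}}_{t}\) and \(\ddot{\hat{q}}_{t}\) entering \(L_{r,1}\) can be obtained by automatic differentiation with respect to the time input; a hedged sketch follows, where `model` is assumed to be a two-output surrogate like the one sketched above and `torque_fn` is a placeholder for the embedded torque of Eq. (10) carrying the trainable parameters \(\tilde{\chi}\).

```python
import torch

def physics_residual(model, t, semg, I, C, m, g, L, torque_fn):
    """L_{r,1} of Eq. (17): mean residual of I*q'' + C*q' + m*g*L*sin(q) - tau."""
    t = t.clone().requires_grad_(True)           # t: (batch, 1)
    q_hat, _ = model(torch.cat([t, semg], dim=1))
    # Each sample's q depends only on its own t, so summed-gradient gives
    # the per-sample time derivatives.
    dq = torch.autograd.grad(q_hat.sum(), t, create_graph=True)[0]
    ddq = torch.autograd.grad(dq.sum(), t, create_graph=True)[0]
    tau = torque_fn(semg, q_hat, dq)             # Eq. (10), depends on chi
    return (I * ddq + C * dq + m * g * L * torch.sin(q_hat) - tau).mean()
```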
## 3 Dataset and Experimental Methods

A self-collected dataset of wrist motions is utilised to demonstrate the feasibility of the proposed framework.

#### 3.0.1 Selection and initialisation of physiological parameters to be identified

Among the physiological parameters of the muscle-tendon units involved, we only chose the maximum isometric muscle force \(F_{0,n}^{m}\) and the optimal muscle fibre length \(l_{0,n}^{m}\) for identification, in order to increase the generality of the model. The nonlinear shape factor \(A\) in the activation dynamics also needs to be identified. Physiological parameters other than these were obtained by linear scaling based on the initial values of the generic model from OpenSim. The selection and initialisation of all the physiological parameters are summarized in Table 1. Since there may be differences in magnitude and scale between the parameters because of their different physiological nature, it is necessary to rescale them to approximately the same interval before training the network.

#### 3.0.2 Self-Collected Dataset

Approved by the MaPS and Engineering Joint Faculty Research Ethics Committee of the University of Leeds (MEEC18-002), 6 subjects participated in this experiment. Consent forms were signed by all subjects. We recorded each subject's weight and hand length in order to calculate the moment of inertia of the hand. In the experiment, subjects were instructed to maintain a fully straight torso with the shoulder abducted at \(90^{\circ}\) and the elbow joint flexed at \(90^{\circ}\). The continuous wrist flexion/extension motion was recorded using the VICON motion capture system. The joint angle was computed at 250 Hz through the upper limb model using 16 reflective markers. Meanwhile, sEMG signals were recorded by Avanti sensors at 2000 Hz from the main wrist muscles, including the FCR, FCU, ECRL, ECRB and ECU. The sEMG signals and motion data were synchronised and re-sampled at 1000 Hz. Five repetitive trials were performed for each subject, with a three-minute break between trials to prevent muscle fatigue. The measured sEMG signals were band-pass filtered (20 Hz to 450 Hz), fully rectified, and low-pass filtered (6 Hz). They were then normalised with respect to the maximum voluntary contraction recorded before the experiment, resulting in the enveloped sEMG signals. Each wrist motion trial, consisting of time steps \(t\), pre-processed sEMG signals and wrist joint angles, is formed into a \(t\times 7\) matrix.
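The sEMG preprocessing described above can be sketched as follows; the fourth-order Butterworth filters and the max-based MVC normalization are our own assumptions about unspecified details.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess_semg(raw, fs=2000.0):
    """Band-pass 20-450 Hz, full-wave rectify, low-pass 6 Hz envelope,
    then normalize by maximum voluntary contraction (MVC)."""
    b, a = butter(4, [20.0 / (fs / 2), 450.0 / (fs / 2)], btype='bandpass')
    x = filtfilt(b, a, raw)
    x = np.abs(x)                          # full-wave rectification
    b, a = butter(4, 6.0 / (fs / 2), btype='lowpass')
    env = filtfilt(b, a, x)
    return env / env.max()                 # placeholder MVC normalization
```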
#### 3.0.3 Evaluation Criteria

To quantify the estimation performance of the proposed framework, the root mean square error (RMSE) is first used as a metric. Specifically, RMSE indicates the discrepancies in amplitude between the estimated variables and the ground truth, and can be calculated by \[RMSE=\sqrt{\frac{1}{T}\sum_{t=1}^{T}(\hat{y}_{t}-y_{t})^{2}} \tag{20}\] where \(y_{t}\) and \(\hat{y}_{t}\) indicate the ground truth and the corresponding predicted value, respectively. The coefficient of determination, denoted \(R^{2}\), is used as another metric, calculated by \[R^{2}=1-\frac{\sum_{t=1}^{T}(y_{t}-\hat{y}_{t})^{2}}{\sum_{t=1}^{T}(y_{t}-\bar{y})^{2}} \tag{21}\] where \(T\) is the number of samples, \(y_{t}\) is the measured ground truth at time \(t\), \(\hat{y}_{t}\) is the corresponding estimation of the model, and \(\bar{y}\) is the mean value of all the samples.

#### 3.0.4 Baseline Methods and Parameters Setting

To verify the effectiveness of the proposed physics-informed deep learning framework in the prediction of the muscle forces and joint angle, several state-of-the-art methods, including FNN and SVR Zhang et al. (2020), are considered as baseline methods for comparison. Specifically, the FNN baseline has the same neural network architecture as our proposed model, with four fully-connected blocks and two regression blocks, but without the physics-informed component. Stochastic gradient descent with the Adam optimiser is employed for FNN training; the batch size is set as 1, the maximum iteration is set as 1000, and the initial learning rate is 0.001. The radial basis function (RBF) is selected as the kernel function of the SVR method; the parameter \(C\), which controls the tolerance of the training samples, is set as 100, and the kernel parameter \(\gamma\), which controls the range of the kernel function's influence, is set to 1.
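For completeness, the two metrics of Eqs. (20) and (21) and the SVR baseline configuration can be written compactly as follows; scikit-learn is an assumed implementation choice, not one stated in the text.

```python
import numpy as np
from sklearn.svm import SVR

def rmse(y_true, y_pred):
    return np.sqrt(np.mean((y_pred - y_true) ** 2))          # Eq. (20)

def r_squared(y_true, y_pred):
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot                              # Eq. (21)

svr_baseline = SVR(kernel='rbf', C=100, gamma=1)              # Sec. 3.0.4 settings
```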
## 4 Results

In this section, we evaluate the performance of the proposed framework on the self-collected dataset detailed in Sec. 3 by comparing it with the selected baseline methods. Specifically, we first present the process of parameter identification by our model and verify the reliability of the identification results. Then, the overall comparisons depict the outcomes of both the proposed framework and the baseline methods, including representative results showcasing the included muscles' forces and joint angles, alongside comprehensive predictions for six healthy subjects. In addition, we investigate the robustness and generalization performance of the proposed framework in the intrasession scenario. Finally, the effects of the network architectures and hyperparameters on our model are evaluated separately. The proposed framework and baseline methods are trained using PyTorch on a laptop with a GeForce RTX 3070 Ti graphics card and 32 GB RAM.

Table 1: Physiological parameters involved in the forward dynamics setup of wrist flexion-extension motion for subject 1. The nonlinear shape factor \(A\) is initialised to 0.01, shared across muscles.

| Parameters | FCR | FCU | ECRL | ECRB | ECU |
| --- | --- | --- | --- | --- | --- |
| \(F_{0,n}^{m}\) (N) | 407 | 479 | 337 | 252 | 192 |
| \(l_{0,n}^{m}\) (m) | 0.062 | 0.051 | 0.081 | 0.058 | 0.062 |
| \(v_{0,n}\) (m/s) | 0.62 | 0.51 | 0.81 | 0.58 | 0.62 |
| \(l_{s,n}^{t}\) (m) | 0.24 | 0.26 | 0.24 | 0.22 | 0.2285 |
| \(\varphi_{0,n}\) (rad) | 0.05 | 0.2 | 0 | 0.16 | 0.06 |

### Parameters Identification

The subject-specific physiological parameters are identified during the training. Table 2 presents the estimation and the variation of the parameters of subject one. The variation pertains to the extent of disparity between the initial guess and the value determined by our model. The initial guess of the parameters is given in Table 1. Physiological boundaries of the parameters are chosen according to Saul et al. (2015). The boundaries of the maximum isometric force \(F_{0}^{m}\) are set to \(50\%\) of the initial guess, while the boundaries of the optimal muscle fibre length \(l_{0}^{m}\) are set to \(\pm 0.001\) of the initial guess. As shown in Table 2, the parameters identified through our proposed framework are all within the physiological range and possess physiological consistency. Furthermore, the optimal fibre lengths exhibit minimal disparity from the initial values, while the maximum isometric forces diverge significantly from the initial values. Besides the two main muscle-tendon physiological parameters, the identified nonlinear shape factor \(A\) of the muscle activation dynamics is -2.29, which is physiologically acceptable within the range of -3 to 0.01. Figures 3 and 4 show the evolution of the physiological parameters identified during the training process. In conjunction with checking that the results obtained through the PINN framework lie within physiological ranges, we establish the credibility of the results from another perspective: we set the estimation of the physiological parameters from the MSK forward dynamics model optimized by a Genetic Algorithm (GA) as a reference for comparison Zhao et al. (2020). As depicted in Figs. 3 and 4, the red dashed lines represent the reference values estimated by the GA optimization method, the black dashed lines indicate the estimates from our proposed framework, and the blue solid lines illustrate the variation of the parameters. The comparison reveals small differences between the values obtained from the two methods.

Figure 3: Evolution of the maximum isometric muscle force \(F_{0}^{m}\) identified during the training process of subject 1.

Figure 4: Evolution of the optimal muscle fibre length \(l_{0}^{m}\) identified during the training process of subject 1.

Table 2: Identified physiological parameters of subject one (estimation and variation from the initial guess).

| Muscle | \(l_{0}^{m}\) (m) | Variation | \(F_{0}^{m}\) (N) | Variation |
| --- | --- | --- | --- | --- |
| FCR | 0.056 | 90.78% | 475.2 | 116.71% |
| FCU | 0.061 | 120.62% | 644.1 | 129.37% |
| ECRB | 0.050 | 86.15% | 166.7 | 66.05% |
| ECRL | 0.082 | 101.85% | 475.2 | 140.91% |
| ECU | 0.052 | 84.51% | 286.6 | 148.58% |

### Overall Comparisons

The overall comparisons between the proposed framework and the baseline methods are then performed. Fig. 5 depicts the representative results of the proposed framework for the wrist joint prediction, including the wrist angle and the muscle forces of the FCR, FCU, ECRL, ECRB and ECU, respectively.

Figure 5: Representative results of the wrist joint through the proposed data-driven model. The predicted outputs include the wrist angle, FCR muscle force, FCU muscle force, ECRL muscle force, ECRB muscle force, and ECU muscle force.
Table 3: RMSE and \(R^{2}\) of the proposed framework and baseline methods on the wrist joint (entries given as RMSE/\(R^{2}\)).

| Subject | Method | FCR (N) | FCU (N) | ECRL (N) | ECRB (N) | ECU (N) | Angle (rad) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| S1 | Ours | 4.99/0.98 | 4.62/0.97 | 4.35/0.98 | 5.33/0.96 | 3.12/0.96 | 0.09/0.98 |
| S1 | FNN | 3.27/0.99 | 5.37/0.96 | 4.20/0.98 | 4.29/0.98 | 3.03/0.96 | 0.09/0.98 |
| S1 | SVR | 6.02/0.97 | 5.59/0.96 | 6.05/0.96 | 4.50/0.98 | 5.21/0.94 | 0.11/0.97 |
| S2 | Ours | 6.01/0.96 | 4.47/0.97 | 3.71/0.98 | 4.95/0.97 | 2.56/0.97 | 0.08/0.98 |
| S2 | FNN | 5.57/0.97 | 4.69/0.97 | 2.79/0.99 | 5.76/0.96 | 2.04/0.98 | 0.06/0.99 |
| S2 | SVR | 7.35/0.95 | 5.58/0.96 | 5.93/0.96 | 6.01/0.96 | 4.31/0.96 | 0.09/0.98 |
| S3 | Ours | 5.43/0.98 | 3.74/0.98 | 3.49/0.97 | 3.91/0.98 | 3.20/0.96 | 0.13/0.96 |
| S3 | FNN | 6.25/0.97 | 4.27/0.98 | 5.29/0.97 | 5.59/0.96 | 2.97/0.96 | 0.13/0.96 |
| S3 | SVR | 7.63/0.95 | 6.91/0.96 | 7.97/0.95 | 6.25/0.96 | 3.21/0.94 | 0.14/0.96 |
| S4 | Ours | 5.54/0.97 | 5.21/0.97 | 5.31/0.97 | 3.58/0.97 | 3.92/0.97 | 0.04/0.99 |
| S4 | FNN | 5.38/0.97 | 5.82/0.96 | 5.57/0.97 | 4.77/0.96 | 4.28/0.96 | 0.08/0.98 |
| S4 | SVR | 6.27/0.96 | 6.35/0.96 | 6.45/0.96 | 5.42/0.96 | 5.39/0.95 | 0.10/0.97 |
| S5 | Ours | 5.41/0.96 | 5.23/0.97 | 3.98/0.98 | 3.87/0.97 | 4.14/0.95 | 0.11/0.97 |
| S5 | FNN | 4.57/0.97 | 4.61/0.97 | 4.53/0.97 | 4.41/0.97 | 3.82/0.96 | 0.14/0.96 |
| S5 | SVR | 8.65/0.95 | 5.34/0.97 | 6.16/0.96 | 5.79/0.96 | 5.75/0.94 | 0.18/0.94 |
| S6 | Ours | 3.21/0.99 | 3.27/0.98 | 5.61/0.97 | 6.28/0.94 | 3.97/0.95 | 0.06/0.99 |
| S6 | FNN | 3.97/0.98 | 5.11/0.97 | 5.50/0.97 | 6.24/0.94 | 4.32/0.94 | 0.05/0.99 |
| S6 | SVR | 7.09/0.94 | 6.19/0.96 | 7.15/0.95 | 5.72/0.95 | 4.97/0.93 | 0.07/0.98 |

Figure 6: Average RMSEs across all the subjects in (a) included muscles' force and (b) joint angle, respectively.

For the prediction of the joint angle, we use the measured value as the ground truth. However, for the prediction of the muscle forces, since direct measurement is challenging, we use the muscle forces calculated from the sEMG-driven MSK forward dynamics model as the ground truth Zhao et al. (2020). As presented in Fig. 5, the predicted values of the muscle forces and joint angle fit the ground truths well, indicating the strong dynamic tracking capability of the proposed framework. To quantitatively evaluate the performance of our model, detailed comparisons of all the subjects between the PINN and the baseline methods are presented in Table 3. We use data with the same flexion speed to train and test the proposed framework and the baseline methods on the wrist joint. According to Table 3, the deep learning-based methods, including the proposed framework and FNN, achieve better predictive performance than the machine learning-based SVR, as evidenced by smaller RMSEs and higher \(R^{2}\) in most cases, because these deep learning-based methods can automatically extract high-level features from the collected data. Fig. 6(a) illustrates the average RMSEs of the muscle forces of the PINN framework and the baseline methods across all the subjects.
Among deep learning-based methods, our proposed framework achieves an overall performance similar to that of the FNN without direct reliance on actual muscle force labels. Fig. 6(b) illustrates the average RMSEs of the joint angle. Our model achieves satisfactory performance with lower standard deviations, and its predicted results show smaller fluctuations compared with the pure FNN method and the machine learning method. In the training process, the labelled joint angles are not only trained by the MSE loss but also enhanced by the physics-informed losses. The embedded physics laws provide the potential relationships between the output variables as learnable features for the training process.

### Evaluation of Intrasession Scenario

The intrasession scenario is also considered to validate the robustness of the proposed framework. For each subject, we train the model with data of one flexion speed and then test the performance using data of a different flexion speed, which can be seen as a dataset that our trained model is unfamiliar with. Fig. 7 depicts the corresponding experimental results. Our model demonstrates exceptional performance on datasets with different distributions, but the predicted results of some baseline methods are degraded. In particular, concerning the prediction of the muscle forces of the ECRB and ECU, the results yielded by our proposed framework demonstrate a notably enhanced congruence with the underlying ground truth. The SVR method also demonstrates its efficacy in accurately tracking specific predictions, such as the muscle forces of the FCU and ECRL. Nevertheless, errors remain in its other prediction results. Hence, the application of this method is constrained by various factors, including the nature of the prediction target, leading to both instability and inherent limitations. Although the FNN method shows favourable performance on the training data, in the intrasession scenario its performance becomes much worse than on the training data. Our proposed framework manifests a discernible capability to predict muscle forces and joint angles on data with diverse distributions, in the absence of the true values of the muscle forces.

Figure 7: Evolution of the physiological parameters identified during the training process.

### Effects of Hyperparameters

To investigate the effects of hyperparameters, including the learning rate, the type of activation function, and the batch size, on the performance of the proposed framework, detailed results are shown in Table 4, Table 5, and Table 6. Specifically, we consider three different learning rates, i.e., 0.01, 0.001 and 0.0001, with the maximum iteration set as 1000. According to Table 4, the proposed framework achieves better performance with a smaller learning rate. As observed in Table 5, the proposed framework with ReLU and Tanh achieves comparable performance, better than the one using Sigmoid as the activation function. In Table 6, the performance of the proposed framework improves as the batch size decreases, achieving its best performance when the batch size is 1.
Even though training the deep learning framework with a batch size of 1 can in principle retain drawbacks such as over-fitting, poor generalisation, and unstable training, the previous experiments have shown that these are not issues here. Neural network architectures designed for time-series sequences, such as the Recurrent Neural Network (RNN) and Long Short-Term Memory (LSTM), are another possible solution to this problem. For our proposed framework, the number of physiological parameters that can be efficiently identified is limited, considering the network's performance and the time cost of model training. However, a more complex but physiologically consistent model with more parameters would directly benefit the accuracy of the muscle force prediction in our proposed model. Therefore, in future work, we will devote ourselves to enhancing the model's performance to effectively identify a broader range of physiological parameters, which will enable a deeper integration of the MSK forward dynamics model into our framework. In order to improve the experimental feasibility and generalisation of the model, we partially simplified the MSK forward dynamics model by reducing the number of individualised physiological parameters and by choosing a rigid rather than an elastic tendon. A more physiologically accurate representation of muscle tissues, with connective tissues and muscle fibres, could inform the muscles' length- and velocity-dependent force generation capacity. In addition, we will pursue precise physiological parameter identification and muscle force prediction on more accurate, complex MSK models in the future.

## 6 Conclusion

This paper presents a novel physics-informed deep-learning framework for joint angle and muscle force estimation. Specifically, our proposed model uses the ODE based on the MSK forward dynamics as a residual loss for the identification of personalized physiological parameters, and another residual constraint based on the muscle contraction dynamics for the estimation of muscle forces in the absence of the ground truth. Comprehensive experiments on the wrist joint for muscle force and joint angle prediction indicate the feasibility of the proposed framework. However, it is worth noting that the entire MSK forward dynamics model embedded in the framework needs to be adjusted when the proposed approach is extended to other application cases.
2309.07912
An Observationally Driven Multifield Approach for Probing the Circum-Galactic Medium with Convolutional Neural Networks
The circum-galactic medium (CGM) can feasibly be mapped by multiwavelength surveys covering broad swaths of the sky. With multiple large datasets becoming available in the near future, we develop a likelihood-free Deep Learning technique using convolutional neural networks (CNNs) to infer broad-scale physical properties of a galaxy's CGM and its halo mass for the first time. Using CAMELS (Cosmology and Astrophysics with MachinE Learning Simulations) data, including IllustrisTNG, SIMBA, and Astrid models, we train CNNs on Soft X-ray and 21-cm (HI) radio 2D maps to trace hot and cool gas, respectively, around galaxies, groups, and clusters. Our CNNs offer the unique ability to train and test on ''multifield'' datasets comprised of both HI and X-ray maps, providing complementary information about physical CGM properties and improved inferences. Applying eRASS:4 survey limits shows that X-ray is not powerful enough to infer individual halos with masses $\log(M_{\rm{halo}}/M_{\odot}) < 12.5$. The multifield improves the inference for all halo masses. Generally, the CNN trained and tested on Astrid (SIMBA) can most (least) accurately infer CGM properties. Cross-simulation analysis -- training on one galaxy formation model and testing on another -- highlights the challenges of developing CNNs trained on a single model to marginalize over astrophysical uncertainties and perform robust inferences on real data. The next crucial step in improving the resulting inferences on physical CGM properties hinges on our ability to interpret these deep-learning models.
Naomi Gluck, Benjamin D. Oppenheimer, Daisuke Nagai, Francisco Villaescusa-Navarro, Daniel Anglés-Alcázar
2023-09-14T17:58:49Z
http://arxiv.org/abs/2309.07912v2
An Observationally Driven Multifield Approach for Probing the Circum-Galactic Medium with Convolutional Neural Networks ###### Abstract The circum-galactic medium (CGM) can feasibly be mapped by multiwavelength surveys covering broad swaths of the sky. With multiple large datasets becoming available in the near future, we develop a likelihood-free Deep Learning technique using convolutional neural networks (CNNs) to infer broad-scale physical properties of a galaxy's CGM and its halo mass for the first time. Using CAMELS (Cosmology and Astrophysics with MachinE Learning Simulations) data, including IllustrisTNG, SIMBA, and Astrid models, we train CNNs on Soft X-ray and 21-cm (HI) radio 2D maps to trace hot and cool gas, respectively, around galaxies, groups, and clusters. Our CNNs offer the unique ability to train and test on "multifield" datasets comprised of both HI and X-ray maps, providing complementary information about physical CGM properties and improved inferences. Applying eRASS:4 survey limits shows that X-ray is not powerful enough to infer individual halos with masses \(\log(M_{\rm halo}/M_{\odot})<12.5\). The multifield improves the inference for all halo masses. Generally, the CNN trained and tested on Astrid (SIMBA) can most (least) accurately infer CGM properties. Cross-simulation analysis - training on one galaxy formation model and testing on another - highlights the challenges of developing CNNs trained on a single model to marginalize over astrophysical uncertainties and perform robust inferences on real data. The next crucial step in improving the resulting inferences on physical CGM properties hinges on our ability to interpret these deep-learning models. keywords: galaxies: general, groups, clusters, intergalactic medium - X-ray: general - radio lines: general - software: simulations ## 1 Introduction New telescopes are currently engaged in comprehensive surveys across large sky areas and reaching previously unobtainable depths, aiming to map the region beyond the galactic disk but within the galaxy's virial radius: the circum-galactic medium, or the CGM (Tumlinson et al., 2017). However, these telescopes have inherent limitations in detecting emissions from gaseous halos surrounding typical galaxies. Nevertheless, they offer an exceptional opportunity to characterize the broad properties of the CGM that extend beyond their original scientific scope. The CGM contains multiphase gas, partly accreted from the filaments of the cosmic web that is continuously being reshaped, used in star formation, and enriched by astrophysical feedback processes occurring within the galaxy (Keres et al., 2005; Oppenheimer et al., 2016; Christensen et al., 2016; Angles-Alcazar et al., 2017; Hafen et al., 2019). A simple way to characterize the CGM is by temperature. The cool phase gas has a temperature of approximately \(T\sim 10^{4}\) K and has been the focus of UV absorption line measurements (e.g., Cooksey et al., 2010; Tumlinson et al., 2013; Werk et al., 2013; Johnson et al., 2015; Keeney et al., 2018). The hot phase of the CGM, with temperatures \(T>10^{6}\) K, is observable via X-ray facilities (e.g., Bregman et al., 2018; Bogdan et al., 2018; Mathur et al., 2023) and can contain the majority of a galaxy's baryonic content. 
Understanding both the cool and hot phases of the CGM may answer questions regarding where we may find baryons (Anderson and Bregman, 2011; Werk et al., 2014; Li et al., 2017; Oppenheimer et al., 2018), how galaxy quenching proceeds (Tumlinson et al., 2011; Somerville et al., 2015), and how the metal products of stellar nucleosynthesis are distributed (Peeples et al., 2014). New, increasingly large datasets that chart the CGM across multiple wavelengths already exist. In particular, two contrasting wavelengths map diffuse gas across nearby galaxies: the X-ray and the 21-cm (neutral hydrogen, HI) radio. First, the _eROSITA_ mission has conducted an all-sky X-ray survey, enabling the detection of diffuse emission from hot gas associated with groups and clusters and potentially massive galaxies (Predehl et al., 2021). Secondly, in the 21-cm radio domain, the pursuit of detecting cool gas encompasses initiatives that serve as precursors to the forthcoming Square Kilometer Array (SKA) project. Notable among these are ASKAP (Johnston et al., 2007) and MeerKAT (Jonas and MeerKAT Team, 2016), both of which have already conducted comprehensive surveys of HI gas in galaxy and group environments through deep 21-cm pointings.

Cosmological simulations provide theoretical predictions of CGM maps, yet divergences arise due to varying hydrodynamic solvers and subgrid physics modules employed in galaxy formation simulations (Somerville et al., 2015; Tumlinson et al., 2017; Dave et al., 2020). As a result, we see very different predictions for the circumgalactic reservoirs surrounding galaxies. Distinctively, the publicly available simulations such as IllustrisTNG (Nelson et al., 2018; Pillepich et al., 2018), SIMBA (Dave et al., 2019), Astrid (Bird et al., 2022; Ni et al., 2022), among others (e.g., Schaye et al., 2015; Hopkins et al., 2018; Wetzel et al., 2023), are valuable resources for generating CGM predictions.

CAMELS\({}^{2,3}\) (Cosmology and Astrophysics with MachinE Learning Simulations) is the first publicly available suite of simulations that includes thousands of parameter and model variations designed to train machine learning models (Villaescusa-Navarro et al., 2021, 2022). It contains four different simulation _sets_ covering distinct cosmological and astrophysical parameter distributions: LH (Latin Hypercube, 1,000 simulations), 1P (1-Parameter variations, 61 simulations), CV (Cosmic Variance, 27 simulations), and EX (Extreme, 4 simulations). Of these, the CV set is uniquely significant as it fixes cosmology and astrophysics to replicate the observable properties of galaxies best, providing a fiducial model. We exclude the numerous CAMELS simulations that vary cosmology and astrophysical feedback to prevent unrealistic galaxy statistics. Thus, utilizing the diverse CAMELS CV sets, we explore three universe realizations that make distinct predictions for the CGM.

Footnote 2: CAMELS Project Website: [https://www.camel-simulations.org](https://www.camel-simulations.org)

Footnote 3: CAMELS Documentation available at [https://camels.readthedocs.io/en/latest/index.html](https://camels.readthedocs.io/en/latest/index.html)

In this study, we develop an image-based Convolutional Neural Network (CNN) to infer CGM properties from the CAMELS IllustrisTNG, SIMBA, and Astrid CV-set simulations. The definitions and ranges for all CGM properties are outlined in Table 1.
Two significant and differently structured astrophysical feedback parameters that impact CGM properties, stellar and AGN feedback, remain predominantly unconstrained. The CV set does not explore the range of CAMELS feedback parameters like the other sets. However, we choose the CV set as a proof-of-concept and plan to include the much larger LH set that completely marginalizes over astrophysics (Villaescusa-Navarro et al., 2021) in the future. The CNN is trained and tested on diverse simulations, yielding valuable insights into the CGM properties. Additionally, we apply observational multiwavelength survey limits to the CNN for each field, guiding the design and approach of new instruments and novel surveys, maximizing their scientific returns on CGM properties, and significantly advancing our understanding of galaxy formation and large-scale structure.

This paper is outlined as follows. §2 lays out the methods used to complete this work and includes subsections on specific simulation information (§2.1), dataset generation (§2.2), CNNs (§2.3), and network outputs (§2.4). We begin §3 by presenting results using individual simulations to infer first the entire halo mass (§3.1), then a global CGM property, the mass of the CGM over the mass of the halo, or \(f_{\rm cgm}\) (§3.2), and the metallicity of the CGM (§3.3), which exhibits large variation. We show results based on idealized soft X-ray and HI images and assess the impact of realistic observations with observational survey limits (§3.4). We also perform _cross simulation inference_, where one trains a CNN on one galaxy formation model or simulation and tests on another to gauge its robustness (§3.5). We discuss the interpretability of the cross-simulation inference analysis (§4.1), the applicability and limitations of CNNs applied to the CGM (§4.2), and a possible avenue for future work as an expansion of this analysis (§4.3). Lastly, §5 concludes.

## 2 Methods

In this section, we introduce the simulations (§2.1), followed by how our halo-centric "map" datasets are generated and a description of the global properties we are training the network to infer (§2.2). Then, §2.3 describes the neural network applied to these datasets. Finally, we specify the network outputs, including statistical measures, to evaluate the performance of the CNN (§2.4).

We define some vocabulary and common phrases within this work. _Fields_ refer to X-ray and 21-cm HI (hereafter HI), where using one field corresponds to either X-ray or HI; two fields, X-ray and HI, make up the multifield. With our CNN architecture, the number of fields is equivalent to the number of channels. _Parameters_ and _hyperparameters_ define the inner workings of the CNN, where the latter must be optimized. This should not be confused with parameters in the context of astrophysical feedback. _Properties_ describe the attributes of the CGM that are inferred by the network: \(M_{\rm halo}\), \(f_{\rm cgm}\), \(\log(Z_{\rm cgm})\), \(M_{\rm cgm}\), \(f_{\rm cool}\), and \(\log(T_{\rm cgm})\). The _parameter space_ reflects the range of values for the CGM properties (between the 16\({}^{\rm th}\) and 84\({}^{\rm th}\) percentiles) that each simulation encapsulates.
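To make the field-to-channel correspondence concrete, the following minimal sketch stacks per-halo maps into a multifield input. The array files and names are hypothetical stand-ins for the map datasets described in §2.2, not part of the CAMELS data release:

```python
import numpy as np

# Hypothetical per-halo map arrays of shape (N, 128, 128); see Section 2.2.
xray_maps = np.load("xray_maps.npy")
hi_maps = np.load("hi_maps.npy")

single_field = hi_maps[:, np.newaxis, :, :]          # (N, 1, 128, 128): one channel
multifield = np.stack([hi_maps, xray_maps], axis=1)  # (N, 2, 128, 128): two channels
```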
### Simulations

We use the CV ("Cosmic Variance") set from three simulation suites, each of which uses a different hydrodynamic scheme: CAMELS-IllustrisTNG (referred to as IllustrisTNG) utilizing AREPO (Springel, 2010; Weinberger et al., 2020), CAMELS-SIMBA (referred to as SIMBA) utilizing GIZMO (Hopkins, 2015), and CAMELS-Astrid (referred to as Astrid) utilizing MP-Gadget (Springel, 2005). These simulations encompass 27 volumes spanning \((25h^{-1}{\rm Mpc})^{3}\) with fixed cosmological parameters (\(\Omega_{\rm M}=0.3\) and \(\sigma_{8}=0.8\)) with varying random seeds for each volume's initial condition. The CAMELS astrophysical parameters for feedback are set to their fiducial values. We exclusively use the \(z=0\) snapshots for this work.

| **Property** | **Definition** | **Range** |
| --- | --- | --- |
| \(M_{\rm halo}\) | Logarithmic halo mass within \(R_{200c}\) | \(11.5\)-\(14.3\) |
| \(f_{\rm cgm}\) | Mass ratio of CGM gas to total mass within \(R_{200c}\) | \(0.0\)-\(0.23\) |
| \(\log(Z_{\rm cgm})\) | Logarithmic CGM metallicity within 200 kpc | \(-3.6\)-\(-1.3\) |
| \(M_{\rm cgm}\) | Logarithmic CGM mass within 200 kpc | \(8.0\)-\(12.5\) |
| \(f_{\rm cool}\) | Ratio of cool, low-ionized CGM gas to all CGM gas within 200 kpc | \(0.0\)-\(1.0\) |
| \(\log(T_{\rm cgm})\) | Logarithmic CGM temperature within 200 kpc | \(3.9\)-\(7.6\) |

Table 1: Definitions and global value ranges of the CGM properties to be inferred and constrained by the network. These are the global value ranges, encompassing the individual ranges of IllustrisTNG, SIMBA, and Astrid. They remain consistent throughout any combination of simulations during training and testing. Properties are further distinguished by those radially defined by \(R_{200c}\) and those by 200 kpc.

IllustrisTNG is an adaptation of the original simulation as described in Nelson et al. (2019) and Pillepich et al. (2018), using the AREPO (Springel, 2010) magnetohydrodynamics code employing the N-body tree-particle-mesh approach for solving gravity and magnetohydrodynamics via moving-mesh methods. Like all simulation codes used here, IllustrisTNG has subgrid physics modules encompassing stellar processes (formation, evolution, and feedback) and black hole processes (seeding, accretion, and feedback). Black hole feedback uses a dual-mode approach that applies thermal feedback for high-Eddington accretion rates and kinetic feedback for low-Eddington accretion rates. The kinetic mode is directionally pushed and is more mechanically efficient than the thermal mode (Weinberger et al., 2017).

SIMBA, introduced in Dave et al. (2019), uses the hydrodynamics-based "Meshless Finite Mass" GIZMO code (Hopkins, 2015, 2017), with several unique subgrid prescriptions. It includes more physically motivated implementations of 1) AGN feedback and 2) black hole growth. SIMBA's improved subgrid physics model for AGN feedback is based on observations, utilizing kinetic energy outflows for both radiative and jet feedback modes operating at high and low Eddington ratios, respectively. Additionally, it applies observationally motivated X-ray feedback to quench massive galaxies. SIMBA's black hole growth model is phase-dependent. Cool gas accretion onto BHs is achieved through a torque-limited accretion model (Angles-Alcazar et al., 2017), and when accreting hot gas, SIMBA transitions to Bondi accretion.
### Dataset Generation

To create our halo-centric map datasets, we use \(\rm{yt}\)-based software (Turk et al., 2011) that allows for consistent and uniform analysis across different simulation codes. We generate maps of all halos within the CV set with masses of at least \(M_{\rm halo}=10^{11.5}\) \(\rm{M_{\odot}}\) along the three cardinal axes. There are approximately 5,000 halos for each simulation. The highest halo mass is \(10^{14.3}\) \(\rm{M_{\odot}}\), for a nearly 3 dex span in halo mass. Refer to Table 2 for additional details. We categorize all the halos within the simulations by halo mass, where Sub-L* halos are within the range \(11.5\leq\log(M_{\rm halo}/M_{\odot})\leq 12\), \(\rm{L^{*}}\) halos are within the range \(12\leq\log(M_{\rm halo}/M_{\odot})\leq 13\), and groups are within the range \(13\leq\log(M_{\rm halo}/M_{\odot})\leq 14.3\); a sketch of this binning follows below.

The relationship between \(\log(M_{\rm halo}/M_{\odot})\) and \(\log(M_{\rm cgm})\), \(\log(T_{\rm cgm})\), \(f_{\rm cgm}\), and \(\log(Z_{\rm cgm})\) for all simulations, the parameter space, is shown in Figure 1. The mean for each property is indicated with a solid line. Shaded regions represent the \(16^{\rm th}-84^{\rm th}\) percentiles, and dotted points indicate the "statistically low" region for halos with halo masses above \(\log(M_{\rm halo}/M_{\odot})>13.0\). In agreement with previous work (Oppenheimer et al., 2021; Delgado et al., 2023; Ni et al., 2023; Gebhardt et al., 2023), we illustrate how the properties of gas beyond the galactic disk can differ significantly between feedback implementations. For \(\log(M_{\rm cgm})\) (top left), Astrid (blue) shows little scatter below \(\log(M_{\rm halo}/M_{\odot})=12.5\), IllustrisTNG (pink) shows similar but less extreme scatter, and SIMBA (purple) has consistent scatter throughout. In \(\log(T_{\rm cgm})\) (top right), Astrid again has low scatter throughout the entire \(M_{\rm halo}\) range. This scatter increases slightly for IllustrisTNG and again for SIMBA. Astrid has the most scatter for \(f_{\rm cgm}\) (bottom left), whereas IllustrisTNG and SIMBA display comparable scatter for lower masses, tapering off for higher masses. Finally, \(\log(Z_{\rm cgm})\) illustrates that all three simulations have significant and similar scatter. For \(M_{\rm cgm}\), \(\log(T_{\rm cgm})\), and \(f_{\rm cgm}\), Astrid has higher values for the entire \(M_{\rm halo}\) range, followed by IllustrisTNG and SIMBA. This is not the case in \(\log(Z_{\rm cgm})\), where there is significant overlap. The scatter on \(M_{\rm halo}\) was also computed with respect to the total flux per map, corresponding to the sum of all pixel values in X-ray and HI separately. When binned by \(M_{\rm halo}\), there are correlations only with IllustrisTNG and Astrid for X-ray (see Fig. A2). A more detailed discussion about the map trends and pixel counts is in Appendix A.
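The Sub-L*, L*, and Group bins defined above recur throughout the results; a minimal sketch of the binning (the function and the stand-in catalog are our own illustration, not the authors' pipeline):

```python
import numpy as np

def halo_category(log_mhalo):
    """Return the mass bin used in this work for log10(M_halo / Msun);
    halos below 11.5 are not mapped. Boundaries follow the text."""
    if 11.5 <= log_mhalo < 12.0:
        return "Sub-L*"
    if 12.0 <= log_mhalo < 13.0:
        return "L*"
    if 13.0 <= log_mhalo <= 14.3:
        return "Group"
    return None

# Example: label a stand-in catalog of ~5,000 halos per simulation.
log_masses = np.random.uniform(11.5, 14.3, size=5000)
labels = np.array([halo_category(m) for m in log_masses])
```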
From the snapshot data obtained from the X-ray and HI maps, we provide an equation describing the calculation of each CGM property (\(M_{\rm halo}\), \(f_{\rm cgm}\), \(Z_{\rm cgm}\), \(M_{\rm cgm}\), \(f_{\rm cool}\), and \(T_{\rm cgm}\)):

\[M_{\rm halo}=\sum m_{\rm DM}(r<R_{200\rm c})+\sum m_{\rm gas}(r<R_{200\rm c})+\sum m_{\rm star}(r<R_{200\rm c}) \tag{1}\]

\[f_{\rm cgm}=\frac{\sum m_{\rm cgm}(r<R_{200\rm c})}{M_{\rm halo}} \tag{2}\]

\[Z_{\rm cgm}=\frac{\sum\left(z_{\rm cgm}\times m_{\rm cgm}\right)(r<200\,{\rm kpc})}{\sum m_{\rm cgm}(r<200\,{\rm kpc})} \tag{3}\]

\[M_{\rm cgm}=\sum m_{\rm cgm}(r<200\,{\rm kpc}) \tag{4}\]

\[f_{\rm cool}=\frac{\sum m_{\rm cool}(r<200\,{\rm kpc})}{\sum m_{\rm cgm}(r<200\,{\rm kpc})} \tag{5}\]

\[T_{\rm cgm}=\frac{\sum\left(t_{\rm cgm}\times m_{\rm cgm}\right)(r<200\,{\rm kpc})}{\sum m_{\rm cgm}(r<200\,{\rm kpc})} \tag{6}\]

where \(m\) is the mass of dark matter (DM), gas, or stellar (star) particles enclosed within the aperture indicated in each equation. The subscript "cgm" refers to any gas that is not star-forming. \(z_{\rm cgm}\) is the particle gas metallicity. \(m_{\rm cool}\) is CGM gas with \(T<10^{6}\) K. \(t_{\rm cgm}\) is the particle gas temperature. Equations 3 and 6 are thus mass-weighted CGM averages. For the definitions and numerical ranges of the above CGM properties, see Table 1.

We generate one channel for each field (HI or X-ray), adding them together in the multifield case (HI+X-ray). Each map utilizes values obtained through mock observation, as described below. For X-ray, we map X-ray surface brightness emission in the soft band between 0.5 and 2.0 keV. HI, or "Radio," refers to the 21-cm emission-based measurement that returns column density maps, which is a data reduction output of 21-cm mapping techniques. Each map is 128×128 pixels, spanning \(512\times 512\) kpc\({}^{2}\) with a 4 kpc resolution. The depth spans \(\pm 1000\) kpc from the center of the halo. Two types of maps are generated for each field: those with no observational limits, called idealized maps, and maps with observational limits imposed.

We first explain the generation of idealized maps. X-ray maps are created using the pyXSIM package (ZuHone & Hallman, 2016). While pyXSIM can generate lists of individual photons, we use it in a non-stochastic manner to map the X-ray emission across the kernel of the fluid element. Therefore, our X-ray maps are idealized in their ability to map arbitrarily low emission levels. Radio-based HI column density maps are created using the Trident package (Hummels et al., 2017), where the Haardt & Madau (2012) ionization background is assumed with the self-shielding criterion of Rahmati et al. (2013) applied.

Figure 2: Each column illustrates maps for IllustrisTNG, SIMBA, and Astrid, respectively. Each row corresponds to maps for idealized X-rays, X-rays with observational limits, idealized HI, and HI with observational limits. These maps display the same halo across the CV set of the three simulations.

Figure 2 depicts maps of the same massive halo in the three simulations: IllustrisTNG, SIMBA, and Astrid, from left to right, respectively. The four rows illustrate 1) idealized X-ray, 2) observationally limited X-ray, 3) idealized HI, and 4) observationally limited HI. For X-ray with observational limits, we set the surface brightness limit to \(2.0\times 10^{-13}\) erg s\({}^{-1}\) cm\({}^{-2}\) arcmin\({}^{-2}\), corresponding to the observing depth of the _eROSITA_ eRASS:4 all-sky survey (Predehl et al., 2021).
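As an illustration of this map-making pipeline, the sketch below produces an idealized HI column density map with yt and Trident and then imposes a detection floor. The snapshot path is a hypothetical stand-in, the ionization-background and self-shielding configuration is omitted, and the X-ray side is left out because pyXSIM setup is version-dependent; this is a sketch, not the authors' code:

```python
import numpy as np
import yt
import trident

ds = yt.load("snapshot_033.hdf5")          # hypothetical CAMELS CV snapshot
trident.add_ion_fields(ds, ions=["H I"])   # provides ('gas', 'H_p0_number_density')

def make_hi_map(center_kpc, npix=128, width_kpc=512.0, depth_kpc=2000.0):
    """Project HI number density onto an npix x npix grid (4 kpc/pixel),
    integrating +/- 1000 kpc along the line of sight as in the text."""
    c = ds.arr(center_kpc, "kpc")
    half = ds.arr([width_kpc / 2, width_kpc / 2, depth_kpc / 2], "kpc")
    region = ds.box(c - half, c + half)  # restrict the projection depth
    proj = ds.proj(("gas", "H_p0_number_density"), "z", data_source=region)
    frb = proj.to_frb((width_kpc, "kpc"), npix, center=c)
    return np.array(frb[("gas", "H_p0_number_density")])  # N_HI in cm^-2

def apply_floor(image, floor):
    """Impose a survey detection floor, e.g., the eRASS:4 surface brightness
    or the N_HI column density limits quoted in the text."""
    return np.maximum(image, floor)
```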
For HI with observational limits, we set the column density limit to \(N_{\rm HI}=10^{19.0}\) cm\({}^{-2}\), which is approximately the limit expected for the 21-cm HI MHONGOOSE Survey at a 15" beam size similar to the _eROSITA_ survey (de Blok et al., 2016). The observational limits are implemented by setting a lower limit floor corresponding to the detectability of the telescope. Accessing the same halo across the three simulations is possible since the CV set shares the same initial conditions between the different simulation suites. The X-ray maps tracing gas primarily above \(T>10^{6}\) K are brightest for Astrid and dimmest for SIMBA, a trend also seen when the observational limits are imposed. The HI maps, probing \(T\sim 10^{4}\) K gas, are less centrally concentrated than the X-ray and often trace gas associated with satellites.

We expand upon the first column in Fig. 2 in Fig. A1, formatted similarly, for a range of halo masses within IllustrisTNG from \(\log(M_{\rm halo}/M_{\odot})=13.83\) (leftmost) to \(\log(M_{\rm halo}/M_{\odot})=11.68\) (rightmost). X-ray emission, which traces gas with a temperature above \(10^{6}\) K, indicates a strong correlation with halo mass. Features seen here include wind-blown bubbles (Predehl et al., 2020), satellite galaxies creating bow shocks (Kang et al., 2007; Bykov et al., 2008; Zinger et al., 2018; Li et al., 2022), and emissions associated with galaxies themselves. HI does not have this same correlation with halo mass, supporting our choice to create the HI+X-ray multifield.

### Convolutional Neural Network

The advantage of employing CNNs lies in their capacity to simultaneously learn multiple features from various channels or fields (X-ray and HI). Fields can be used either independently or together for training, validating, and testing without modifications to the network architecture and only minor changes in the scripts wherever necessary. This work adopts likelihood-free inference methods, suitable for cases where determining a likelihood function for large and complex datasets is computationally demanding or unattainable.

Our CNN architecture is based on the architecture used with the CAMELS Multifield Dataset (CMD) in Villaescusa-Navarro et al. (2021), inferring two cosmological and four astrophysical feedback parameters. Aside from now inferring six CGM _properties_, the remaining modifications stem from replacing the LH set (Villaescusa-Navarro et al., 2022) with the CV set. The network must accommodate unevenly distributed, discrete-valued data in the form of "halo-centric" points, as seen in a property like halo mass. This is in contrast to the LH-based CMD dataset, made to infer the evenly distributed cosmological and astrophysical feedback parameters by design. Our CNN makes two main adjustments: 1) the kernel size is changed from 4 to 2 to accommodate a smaller initial network input, and 2) the padding mode is altered from "circular" to "zeros." The padding mode is crucial in guiding the network when the image dimensions decrease, as it no longer perfectly fits the original frame. Changing to "zeros" means filling the reduced areas with zeros to maintain the network's functionality. The CNN architecture is outlined in greater detail in Appendix C, in Table C1 for the main body of the CNN and Table C2 for additional functions utilized after the main body layers.
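The full layer specification lives in Tables C1 and C2, which are not reproduced here; the PyTorch sketch below is not that architecture, only a minimal illustration of the two stated modifications (kernel size 2, "zeros" padding mode) and of fields entering as input channels. The channel widths and layer count are illustrative assumptions:

```python
import torch
import torch.nn as nn

class CGMNet(nn.Module):
    """Illustrative CNN: maps (batch, n_fields, 128, 128) images to a
    predicted mean and standard deviation for each of n_props properties."""
    def __init__(self, n_fields=2, n_props=6, hidden=16, dropout=0.2):
        super().__init__()
        blocks, ch = [], n_fields
        for i in range(5):  # 128 -> 64 -> 32 -> 16 -> 8 -> 4 spatially
            out = hidden * 2 ** i
            blocks += [
                # padding_mode="zeros" as stated in the text (it matters
                # whenever padding > 0); kernel size 2 halves each dimension.
                nn.Conv2d(ch, out, kernel_size=2, stride=2,
                          padding=0, padding_mode="zeros"),
                nn.BatchNorm2d(out),
                nn.LeakyReLU(0.2),
            ]
            ch = out
        self.conv = nn.Sequential(*blocks)
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(dropout),                 # dropout on fully connected layers
            nn.Linear(ch * 4 * 4, 2 * n_props),  # means and (log-)errors
        )
        self.n_props = n_props

    def forward(self, x):
        y = self.head(self.conv(x))
        mu = y[:, : self.n_props]
        sigma = torch.exp(y[:, self.n_props :])  # keep predicted errors positive
        return mu, sigma
```

A single-field run would use n_fields=1 and the HI+X-ray multifield n_fields=2, with no other architectural change, consistent with the statement above that fields can be swapped with only minor script changes.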
The CNN also includes hyperparameters: 1) the _maximum learning rate_ (also referred to as step size), which defines how the application of weights changes during training, 2) the _weight decay_ as a regularization tool to prevent over-fitting by reducing the model complexity, 3) the _dropout value_ (for fully connected layers) as random neuron disabling to prevent over-fitting, and 4) the _number of channels_ in the CNNs (set to an integer larger than one). To optimize these hyperparameters, we employ Optuna (Akiba et al., 2019)\({}^{4}\), a tool that efficiently explores the parameter space and identifies the values that return the lowest validation loss, thereby achieving the best performance.

Footnote 4: [https://github.com/pfnet/optuna/](https://github.com/pfnet/optuna/)

We divide the full dataset into a training set (60%), validation set (20%), and testing set (20%). Only the training set contains the same halo along three different axis projections (setting the network parameter split=3). In contrast, the latter sets include neither the axis projections of any halo nor the original image of the halos assigned to the training set. The split is performed during each new training instance for a new combination of fields and simulations. We set the same random seed across all network operations to drastically reduce the probability of overlap between the training, validation, and testing sets. Without the same random seed, the dataset will not be split the same way each time, and one halo could appear in two or more sets, causing inaccurate results. This process is necessary to ensure that the network does not perform additional "learning" in one phase.

### Network Outputs

Here, the "moment" network (Jeffrey and Wandelt, 2020) takes advantage of only outputting the mean, \(\mu\), and standard deviation, \(\sigma\), of each property for increased efficiency, instead of a full posterior. The minimum and maximum values used for calculating the network error for the six CGM properties are kept the same throughout this work, regardless of which simulation is used for training. Doing so ensures that the results are comparable in the cross-simulation analysis, or training on one simulation and testing on another.

We additionally include four metrics to determine the accuracy and precision of the CNN's outputs for each CGM property: the root mean squared error (RMSE), the coefficient of determination (\(R^{2}\)), the mean relative error (\(\epsilon\)), and the reduced chi-squared (\(\chi^{2}\)). In the formulae below, we use the subscript \(i\) to correspond with the index value of the properties [1-6], the marginal posterior mean, \(\mu_{i}\), and standard deviation, \(\sigma_{i}\). Four different statistical measurements are used to make such conclusions, and \({\rm TRUE}_{i}\) is used to denote the true value of any given CGM property with respect to the simulation-based maps.

**Root mean squared error, RMSE**:

\[{\rm RMSE}_{i}=\sqrt{\langle({\rm TRUE}_{i}-\mu_{i})^{2}\rangle} \tag{7}\]

where smaller RMSE values can be interpreted as increased model accuracy in units that can be related to the measured property.

**Coefficient of determination, \(R^{2}\)**:

\[R_{i}^{2}=1-\frac{\Sigma_{i}({\rm TRUE}_{i}-\mu_{i})^{2}}{\Sigma_{i}({\rm TRUE}_{i}-\overline{{\rm TRUE}}_{i})^{2}} \tag{8}\]

representing the scale-free version of the RMSE, where the closer \(R^{2}\) is to one, the more accurate the model.
**Mean relative error, \(\epsilon\)**:

\[\epsilon_{i}=\left\langle\frac{\sigma_{i}}{\mu_{i}}\right\rangle \tag{9}\]

where smaller \(\epsilon_{i}\) values can be interpreted as increased model precision. This is also the type of error predicted by the CNN.

**Reduced chi-squared, \(\chi^{2}\)**:

\[\chi_{i}^{2}=\frac{1}{N}\sum_{i=1}^{N}\left(\frac{{\rm TRUE}_{i}-\mu_{i}}{\sigma_{i}}\right)^{2} \tag{10}\]

where this quantifies how "trustworthy" the posterior standard deviation is, such that values close to one indicate a properly quantified error. Values greater than one indicate that the errors are underestimated, and those smaller than one are overestimated. We do not expect deviations far from one when analyzing inferences from CNNs trained and tested on the same simulation. However, values much greater than one are expected if network training and testing occur on different simulations. The variation in parameter spaces between simulations can be seen in Fig. 1. It is also important to note that the values reported in subsequent figures correspond to the subset of the data that has been plotted, not the entire set unless otherwise noted. To achieve such a reduced set, we randomly select a fraction of data points that varies with halo mass - for example, approximately \((1/30)^{\rm th}\) of \(\log(M_{\rm halo}/M_{\odot})=11.5\) halos, but all halos above \(\log(M_{\rm halo}/M_{\odot})=13.0\) are plotted.

## 3 Results

We present our main results in this section. First, we discuss training and testing on one idealized field at a time for the same simulation (single-simulation analysis), focusing on three inferred properties: 1) halo mass in \(R_{200c}\) (\(M_{\rm halo}\) in §3.1), 2) the mass ratio of CGM gas to the total mass inside \(R_{200c}\) (\(f_{\rm cgm}\) in §3.2), and 3) the metallicity of the CGM inside 200 kpc (\(\log(Z_{\rm cgm})\) in §3.3). We do not display the case of a multifield here, as the results do not indicate significant improvement. Following this, we show the results for the observationally limited case (for properties \(M_{\rm halo}\) and \(M_{\rm cgm}\)), strengthening the motivation for using a multifield (§3.4). We also organize the network errors (RMSE) by mass bin (see Table 2) and by simulation (training and testing on the same simulation) for the multifield case with observational limits (§3.4.1). Finally, we provide the results of a cross-simulation analysis encompassing all three simulations, with and without observational limits for comparison (§3.5).

We utilize Truth-Inference scatter plots to display the inferences on the CGM properties. Each plot consists of multiple panels distinguished by field, simulation, or both. Panels visualize the true value, \({\rm TRUE}_{i}\), on the x-axis and the inferred posterior mean, \(\mu_{i}\), on the y-axis, with error bars corresponding to the posterior standard deviation, \(\sigma_{i}\). Four statistics (for the subset of data that is plotted and _not_ the entire dataset) are also provided for each panel: the root-mean-square-error (RMSE, Equation 7), coefficient of determination (\(R^{2}\), Equation 8) values, mean relative error (\(\epsilon\), Equation 9), and reduced \(\chi^{2}\) values (Equation 10). Definitions and equations for each are in §2.4. The black diagonal line also represents a "perfect inference" one-to-one line between the true and inferred values.
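For reference, a sketch of the moment-network objective (one common form of it; see Jeffrey & Wandelt 2020) and of the four panel statistics of Equations 7-10; the function names and array shapes are our own stand-ins:

```python
import numpy as np
import torch

def moment_loss(mu, sigma, theta):
    """Moment-network objective: the first term drives mu toward the
    posterior mean; the second drives sigma toward the residual spread.
    mu, sigma, theta are tensors of shape (batch, n_props)."""
    r2 = (theta - mu) ** 2
    return (torch.log(r2.sum(dim=0)) +
            torch.log(((r2 - sigma ** 2) ** 2).sum(dim=0))).sum()

def panel_stats(true, mu, sigma):
    """RMSE, R^2, mean relative error, and reduced chi^2 (Eqs. 7-10)
    for one CGM property, given NumPy arrays over the plotted halos."""
    rmse = np.sqrt(np.mean((true - mu) ** 2))                                # Eq. 7
    r2 = 1.0 - np.sum((true - mu) ** 2) / np.sum((true - true.mean()) ** 2)  # Eq. 8
    eps = np.mean(sigma / mu)                                                # Eq. 9
    chi2 = np.mean(((true - mu) / sigma) ** 2)                               # Eq. 10
    return rmse, r2, eps, chi2
```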
### Inferred Halo Mass

Halo mass emerges as a readily interpretable property, directly deducible from the network, owing to its clear expectations: "true" high-mass halos should yield correspondingly high "inferred" halo masses, regardless of the simulation used for training and testing. Figure 3 illustrates the Truth-Inference plots for \(M_{\rm halo}\) across all three simulations for a subset of the data. We define halo mass in Equation 1 as the sum of dark matter, gas, and stars within \(r<R_{200c}\). The top row corresponds to the results using idealized X-ray maps to infer \(M_{\rm halo}\), and similarly for the bottom row using HI maps. Columns are ordered by the simulation used for training and testing: IllustrisTNG (left), SIMBA (middle), and Astrid (right). The points are colored by halo mass throughout.

We examine the CNN with input X-ray maps first. In the first panel, we train and test on IllustrisTNG and obtain inferences that indicate a relatively well-constrained monotonic relationship. Some points remain further away from the "truth" values set by the black diagonal line in the low-mass end, suggesting that X-ray may not be the best probe for these low-mass halos. The next panel visualizes the training and testing results on SIMBA, with a slight improvement in the higher mass range and an overall relatively well-constrained, monotonic trend with a few outliers that stray far from the black line. The monotonic trend is exactly what is expected and holds across simulations and fields. There are a few more outliers than IllustrisTNG and slightly larger error bars across the entire mass range. Finally, the third panel demonstrates that the CNN trained on and tested with Astrid has excellent inference power, as indicated by smaller error bars throughout the mass range.

We can now look at the results obtained using HI input maps. In the first panel, with training and testing on IllustrisTNG, we obtain a clear and well-constrained monotonic relationship with relatively little scatter. There is a slight improvement in the error predictions when using HI instead of X-ray, as indicated by the change in the \(\chi^{2}\) value from 0.918 with X-ray to 0.938 with HI. Visually, we can see the improvement in the lower mass range, as there is less scatter. The middle panel shows training and testing on SIMBA, where the inference made is significantly worse than it is with X-ray throughout the entire parameter space, especially at intermediate to low masses, with increased scatter and larger error bars. The last panel shows training and testing on Astrid. Training and testing the CNN on Astrid with HI input maps yields the best inference of the three galaxy formation models, indicated by the highest \(R^{2}\) and lowest \(\epsilon\). However, the inference made with HI is outperformed by that made with X-ray in the intermediate to high mass range.

Quantitatively, Astrid provides the most accurate and precise inference for both fields following the RMSE and \(\epsilon\) values, respectively. It also has the highest \(R^{2}\) score, indicating a CNN trained and tested on Astrid can best explain the variability in the data. SIMBA has the lowest \(R^{2}\) value overall with HI input maps, making it the least accurate in this case. Investigating the \(\chi^{2}\) values, CNNs trained and tested on 1) IllustrisTNG consistently overestimate the error (\(\chi^{2}<1\)), 2) SIMBA consistently underestimate the error (\(\chi^{2}>1\)), and 3) Astrid overestimate the X-ray error but underestimate the HI error.
### Inferred CGM Gas Fraction

Figure 4 shows the Truth-Inference plots for \(f_{\rm cgm}\) in the same format as Fig. 3, with the color bar still indicating \(M_{\rm halo}\). We see that \(f_{\rm cgm}\) does not have a monotonic trend, seen explicitly in Fig. 1. Higher masses tend to be more constrained, illustrated by less deviation from the black line and smaller errors than the lower mass halos. However, this is likely due to having fewer higher mass halos for the network to learn from. We define \(f_{\rm cgm}\) in Equation 2 as the sum of non-star-forming gas within \(r<R_{200c}\) divided by halo mass.

The CNN performs poorly with IllustrisTNG on idealized X-ray maps, resulting in scattered points with large error bars. The network underestimates the error bars, as indicated by a \(\chi^{2}\) value greater than one. The next panel shows the results with SIMBA, for which there is better agreement and less scatter toward the higher and intermediate halo masses. However, for the low-mass halos, there is no distinctive trend, though the network predicts the values well overall, albeit with somewhat large (and underestimated) error bars. SIMBA also has slightly lower \(f_{\rm cgm}\) values than IllustrisTNG (c.f., Fig. 1). Finally, a CNN with Astrid provides an excellent inference for \(f_{\rm cgm}\) and accurately estimates the network error. The values are systematically larger, matching Fig. 1.

Similarly, we display HI in the bottom row, with overall trends matching those seen with X-ray. However, HI offers tighter constraints at lower mass halos (higher \(f_{\rm cgm}\) values). This indicates that HI is a slightly better probe for \(f_{\rm cgm}\) than X-ray. Interestingly, SIMBA now overestimates the network error (\(\chi^{2}\) value less than one), while IllustrisTNG and Astrid underestimate the errors.

Overall, \(f_{\rm cgm}\) performs worse than \(M_{\rm halo}\), but the CNN is learning to infer this property using a single idealized field. It does not appear that the quality of inference by the CNN depends on where the range of \(f_{\rm cgm}\) lies with respect to the entire value space spanned by all three simulations - IllustrisTNG returns the worst performance but has intermediate \(f_{\rm cgm}\) values, with an underestimate of the error. Astrid yields the most accurate and precise inferences for X-ray and HI fields, with lower scatter and error values in predicting \(M_{\rm halo}\) compared to IllustrisTNG and SIMBA. While SIMBA generally performs worse, it exhibits relatively good results in this case, especially with HI.

### Inferred Metallicity

Figure 5 shows the Truth-Inference plots for metallicity, plotted as the logarithm of the absolute value of \(Z\) (note that \(\log(Z_{\odot})=-1.87\) on this scale; Asplund et al., 2009). Metallicity presents an interesting challenge to our CNN as there is often \(\sim 1\) dex of scatter in \(Z\) at the same halo mass with no obvious trend (see Fig. 1). When training and testing on IllustrisTNG (top left), we see that higher-mass halos are slightly better constrained than low-mass halos, which are more scattered and have larger (and overall underestimated) error bars. We define the metallicity of the CGM in Equation 3 as the mass-weighted mean of the gas particle metallicity of non-star-forming gas within \(r<200\) kpc. Training and testing on SIMBA results in significant scatter across the entire mass range, with larger and underestimated error bars.
\(L^{*}\) and group halos have higher metallicity values overall than in the previous panel. The last panel shows training and testing with Astrid, returning the best overall inference on \(\log(Z_{\rm cgm})\) across the entire mass range. Even though the error is underestimated, Astrid has much higher accuracy and precision based on RMSE, \(R^{2}\), and \(\epsilon\) values. We argue this is quite an impressive demonstration of our CNN's ability to predict a value with significant scatter at a single halo mass.

Figure 3: The Truth-Inference plots for \(M_{\rm halo}\) when training and testing on the same simulation using idealized single-field data. IllustrisTNG, SIMBA, and Astrid are shown from left to right, and X-ray and HI are shown on the top and bottom rows, respectively. The data is at \(z=0.0\). We plot a mass-dependent fractional sample of halos from the testing set.

The bottom row illustrates this same inference but now using HI, where we see similar trends as with X-ray, though slightly more constrained in the case of IllustrisTNG and SIMBA and slightly less constrained in the case of the low mass end of Astrid. The same upward shift for \(L^{*}\) and group halos is seen with SIMBA, alluding to SIMBA's strong astrophysical feedback prescriptions that impact higher mass halos. This is also reflected in the lower (higher) \(\chi^{2}\) values for IllustrisTNG and SIMBA (Astrid). We conclude that neither X-ray nor HI is powerful enough on its own to infer \(\log(Z_{\rm cgm})\). Surprisingly, the entire metallicity of the CGM can be well inferred using HI, especially in the case of Astrid, despite HI being a small fraction of overall hydrogen, which itself is a primordial element.

### Observational Limits and Multifield Constraints

Simulation predictions must account for the limitations of current and future observational multiwavelength surveys, so that a one-to-one comparison between them and the developing models can exist. The specific limits used in this work come from the _eROSITA_ eRASS:4 X-ray surface brightness limit of \(2\times 10^{-13}\) erg s\({}^{-1}\) cm\({}^{-2}\) arcmin\({}^{-2}\), and the typical radio telescope column density for measurements of HI as \(10^{19}\) cm\({}^{-2}\). Again, we include RMSE values, \(R^{2}\), \(\epsilon\), and \(\chi^{2}\), which are especially important to distinguish between single and multifield inferences.

The top row of Figure 6 displays the Truth-Inference plots, highlighting the power of using multiple fields to infer \(M_{\rm halo}\) by training and testing a CNN with the IllustrisTNG observationally limited datasets. Utilizing X-ray (top left), it is clear that we cannot make an inference toward lower halo masses (Sub-L*). This is expected, given the _eROSITA_-inspired limits, as X-ray emission strongly correlates with halo mass, following Fig. A1. The inability to make a clear inference in this mass regime despite providing the CNN with the most information (nearly 3500 separate Sub-L* halos, see Table 2) reiterates the weaknesses of X-ray. The X-ray inference improves in the L* range, but is still highly scattered. Chadayammuri et al. (2022) targeted L* galaxies by stacking _eROSITA_ halos and found a weak signal, which appears to be supported by the assessment here. Groups provide a much better inference for \(M_{\rm halo}\) given that these objects should be easily detectable via _eROSITA_. In the middle panel, we explore HI with observational limits to infer \(M_{\rm halo}\).
Interestingly, HI does a far better job for sub-L* halos, as these are robustly detected in the 21-cm mapping (see Fig. A1). The inference worsens for L* halos and much of the Group range. Thus far, HI shows improvements via a lower RMSE value, an \(R^{2}\) value closer to 1, and a lower \(\epsilon\) value. It also indicates that the network predicts a greater error underestimation due to a higher \(\chi^{2}\) value. As neither X-ray nor HI is robust enough to infer \(M_{\rm halo}\) alone properly, we now train and test the network on the combined HI+X-ray "multifield".

Figure 4: The Truth-Inference plots for \(f_{\rm cgm}\), with idealized X-ray (top) and idealized HI (bottom), where the color bar still represents halo mass. Astrid performs the best with the tightest constraints and smallest errors, while IllustrisTNG performs the worst, likely due to the sharp rise of \(f_{\rm cgm}\) at low mass.

The multifield approach is specifically used when one field alone may not be enough to constrain a property fully or only constrain a property within a certain range of values. The secondary or tertiary fields would then be able to fill in some gaps or tighten the constraints within the inference. Additionally, with the ability to adjust the network based on current observations, we form computational counterparts to future surveys to aid their construction. We achieve stronger constraints throughout the entire mass range, even with observational limits from both X-ray and HI. X-ray probes the L* and Group mass range well, while HI probes the sub-L* mass well, alleviating the previously unresolved noise of the left panel. We see quantitative improvement in the multifield approach through lower RMSE and \(\epsilon\) values and an \(R^{2}\) closer to one, which comes at the price of increased underestimation of errors, seen in a slight increase in the \(\chi^{2}\) value.

The bottom row of Fig. 6 provides similar results and trends for \(M_{\rm cgm}\) (defined in Equation 4) via IllustrisTNG with X-ray, HI, and the multifield using observational limits. X-ray here is also not powerful enough as a probe to infer this property, especially in the low halo mass region. We then look at HI, where there is a better overall inference in the low halo mass region. However, HI produces more scatter towards the high halo masses than with X-ray. The last panel displays results from the HI+X-ray multifield, which is an overall improvement compared to either field alone. The constraints are tighter overall, and the scatter is reduced, as seen in the RMSE values, \(R^{2}\), \(\epsilon\), and \(\chi^{2}\). Additional Truth-Inference multifield plots with observational limits for the remaining CGM properties can be found in Appendix B.

#### 3.4.1 Visualizing the CNN Error

To quantify the CNN error across all six CGM properties (\(M_{\rm halo}\), \(f_{\rm cgm}\), \(\log(Z_{\rm cgm})\), \(M_{\rm cgm}\), \(f_{\rm cool}\), and \(\log(T_{\rm cgm})\)), we plot the error on each property binned by halo mass. In the left panel of Figure 7, we plot the error (neural network error, or mean relative error) for each property when considering the observational limits on HI, X-ray, and the multifield HI+X-ray for a CNN that is trained and tested on IllustrisTNG. Panels are separated by halo mass, where we use the full dataset instead of the subset in the Truth-Inference plots. We outline the general trends of this figure and point out interesting features.
In the Sub-L* panel, X-ray maps alone provide the highest error, followed by the multifield, and then HI with the lowest error, as expected. Note that there is a negligible difference between the multifield and using HI alone. In the second panel (L*), the margin of error between X-ray and HI is decreasing, meaning that X-ray is becoming increasingly more important in the intermediate halo mass range. The multifield is also strictly improving upon using HI alone. With Groups, the multifield offers a greater improvement over either field alone, except for \(\log(M_{\rm cgm})\), where X-ray has a slightly lower error.

Focusing on \(f_{\rm cgm}\), the errors are generally smaller than those from \(M_{\rm halo}\), but this may reflect the quantity range that is inferred, as \(f_{\rm cgm}\) is mainly between \(0.0-0.16\) while \(M_{\rm halo}\) varies between \(11.5-14.3\). Meanwhile, \(\log(Z_{\rm cgm})\) has similar error levels between HI and X-ray, with a small improvement for the multifield for sub-L* and L*. The errors in \(\log(Z_{\rm cgm})\) vary between \(0.16-0.24\), so measuring metallicity at this level of accuracy is promising, but distinguishing high values of metallicity from low ones is disappointing for IllustrisTNG (see Appendix B).

Figure 5: The Truth-Inference plots for metallicity, with idealized X-ray (top) and idealized HI (bottom), where the color bar still represents halo mass. Astrid performs the best, while SIMBA performs the worst, as it has the most varied \(Z\) values across the mass range, while Astrid has the most confined values.

The last three sets of properties, \(\log(M_{\rm cgm})\), \(f_{\rm cool}\), and \(\log(T)\), have not been illustrated previously as Truth-Inference plots. They depict similar trends and show general multifield improvement. HI infers sub-L* the best, while X-ray infers Groups the best. The multifield is most important for L* halos, and across all six properties, there is a significant improvement in the inference. Other halo categories usually do not result in as much improvement; in some cases, the multifield performs slightly worse. We note that inference of \(f_{\rm cool}\) for groups is a significant improvement, from 0.102 (X-ray) and 0.125 (HI) to 0.084 (multifield), which reflects that the CNN integrates both observations of cool gas (HI) and hot gas (X-ray) in this fraction.

The right panel of Fig. 7 outlines the errors in IllustrisTNG, SIMBA, and Astrid for the multifield HI+X-ray with observational limits for all six properties. Halo mass again separates the three panels. Generally, a CNN trained and tested on SIMBA has the highest error over the entire mass range, while a CNN trained and tested on Astrid returns a better inference. \(f_{\rm cgm}\) breaks this trend, as it is significantly worse for L* mass halos when using IllustrisTNG, which is directly due to the drastic inflection point seen in Fig. 1. Additionally, Astrid can infer \(\log(Z_{\rm cgm})\) remarkably well for L* mass halos, compared to the high error when using IllustrisTNG. This can be seen in Appendix B, where IllustrisTNG has much more scatter across the entire mass range, while Astrid shows little scatter.

### Cross Simulation Inference

Until now, each Truth-Inference plot has been created by training and testing on the same simulation. In this section, we provide the results obtained when training on one simulation or galaxy formation model and testing on another to assess the robustness of any particular simulation.
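Schematically, this cross-simulation analysis amounts to a 3x3 train/test grid, sketched below; load_maps and train_cnn are hypothetical placeholders standing in for the dataset and training code of §2, and panel_stats is the sketch from §2.4 above:

```python
sims = ["IllustrisTNG", "SIMBA", "Astrid"]
results = {}
for train_sim in sims:
    maps, true = load_maps(train_sim, split="train")   # hypothetical loader
    model = train_cnn(maps, true)                      # hypothetical trainer
    for test_sim in sims:
        test_maps, test_true = load_maps(test_sim, split="test")
        mu, sigma = model(test_maps)                   # inferred mean and error
        results[(train_sim, test_sim)] = panel_stats(test_true, mu, sigma)
```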
We do this both for X-ray with observational limits and for the multifield with observational limits. In Figure 8, we demonstrate the cross-simulation inference between IllustrisTNG, SIMBA, and Astrid, using X-ray with observational limits only on the \(M_{\rm halo}\) property. The plots on the diagonal correspond to training and testing on IllustrisTNG, SIMBA, and Astrid from upper left to lower right (repeated from the upper panels of Fig. 3).

Figure 6: Truth-Inference figures for X-ray (left), HI (middle), and HI+X-ray (right) with observational limits imposed on \(M_{\rm halo}\) (_top row_) and \(M_{\rm cgm}\) (_bottom row_) using IllustrisTNG. X-ray provides a poor inference, especially for lower mass galaxies, as there are very few, sometimes no emission lines detected if they are too faint. On the other hand, the inference produced from HI results in more uniform errors throughout the mass range, as HI is detected around both low and high-mass halos. Combined with their observational limits, the inference is enhanced with tighter constraints at all mass scales.

The top row refers to CNNs trained on IllustrisTNG, where each panel from left to right has been tested on IllustrisTNG, SIMBA, and Astrid, respectively. When tested on SIMBA, most points are close to the black line but with significantly more scatter. When tested on Astrid, we can only recover good constraints for the high halo mass range. There is much more scatter in the low mass range, as a majority of them are overestimated, except for a few outliers, most likely resulting from the inability of X-ray to probe the low halo mass range.

When training on SIMBA but then testing on IllustrisTNG, there is still quite a bit of scatter in the low-mass halos, and the high-mass halos are now overestimated. This matches the expectations from brightness differences between IllustrisTNG (brighter) and SIMBA (dimmer). When testing on Astrid, all points are shifted up and overestimate halo mass.

Finally, training on Astrid and testing on IllustrisTNG cannot recover any of the results. There is a lot of scatter for the low halo mass range with large error bars, with points that do not follow the expected trends in IllustrisTNG for intermediate and high masses. Astrid underestimates the majority of the halo masses. When testing on SIMBA, the results cannot be recovered either, as most points underestimate halo mass. Even though training and testing on Astrid seem to provide the best constraints on halo mass with X-ray observational limits, it is the least robust simulation out of the three, as measured by its ability to be applied to other simulations as a training set. In contrast, other models trained on the Astrid LH set (Ni et al., 2023; de Santi et al., 2023) are the most robust, as the parameter variations produce the widest variation in galaxy properties, in turn making ML models more robust to changes in baryonic physics. IllustrisTNG is the most robust in this case, as it returns the results of the other two simulations with the least amount of scatter. One oddity in the statistical measurements produced comes from training on either IllustrisTNG or SIMBA and testing on Astrid, which results in a negative \(R^{2}\) value, indicating a significant mismatch in the models.
Another unusual statistic is in the extremely high \(\chi^{2}\) values from three cases: 1) training on IllustrisTNG and testing on Astrid, 2) training on SIMBA and testing on Astrid, and 3) training on Astrid and testing on either IllustrisTNG or SIMBA. Each reiterates the lack of robust results that can be achieved with Astrid.

Figure 9 illustrates the cross-simulation results on \(M_{\rm halo}\) with observational limits on the multifield HI+X-ray for IllustrisTNG, SIMBA, and Astrid. The top left panel shows this multifield, trained on and tested with IllustrisTNG, where overall, \(M_{\rm halo}\) can somewhat be constrained throughout the entire parameter space. The second panel on the diagonal corresponds to the same multifield but is now trained on and tested with SIMBA. The constraints here are weaker throughout the entire parameter space as there is more overall scatter, though the trend is the same as expected. The last panel on the diagonal shows the network trained on and tested with Astrid, where we can obtain the tightest constraints overall, especially in the higher halo mass range. The few outliers towards the mid (L*) to low (Sub-L*) mass range with larger error bars may need further investigation.

The top row shows training with IllustrisTNG and testing on IllustrisTNG, SIMBA, and Astrid, respectively. When training on IllustrisTNG and testing on SIMBA, we expect that for a given mass halo in IllustrisTNG, that same halo will look dimmer and, therefore, less massive in SIMBA. This is seen here, as most halos are below the black line. When the network is now tested on Astrid, a similar but opposite expectation is met. With the knowledge that for a given halo mass in IllustrisTNG, that same halo will look brighter and, therefore, more massive in Astrid, this trend also makes sense, as we see a large majority of the points shifted above the black line. We can conclude that with observational limits on the multifield, training on IllustrisTNG can return the trends in SIMBA and Astrid, but there is an offset in recovered \(M_{\rm halo}\) explainable by the shift in observables.

Figure 7: _Left:_ Average RMSE values split by halo category for training and testing on IllustrisTNG, with fields X-ray, HI, and the multifield HI+X-ray with observational limits, for all six properties. These bars are representative of the _full_ dataset. We provide a dashed vertical line to distinguish between properties that are radially bound by \(R_{200c}\) and those by 200 kpc. _Right:_ Average RMSE values split by simulation (training and testing on IllustrisTNG, SIMBA, or Astrid), with HI+X-ray and observational limits, for all six properties. These bars are representative of the _full_ dataset. Neither panel is entirely comparable to the Truth-Inference plots, as these categorize errors by halo mass and are for the full dataset.

The middle row shows training with SIMBA and testing on IllustrisTNG, SIMBA, and Astrid, respectively. When the network is tested on IllustrisTNG, it can recover the inference and achieve good constraints. The same halo in SIMBA will seem brighter in IllustrisTNG, so the shift of most points upward above the black line is, therefore, as expected. When testing on Astrid, we still recover the inference and achieve good constraints, but we see the same shift as we saw when training on IllustrisTNG and testing on Astrid. This also aligns with the expectations, as halos in Astrid will seem much brighter than those in SIMBA.
We can conclude that with observational limits on the multifield, SIMBA is also robust enough to recover the inference and constraints for \(M_{\rm halo}\).

The bottom row shows training with Astrid and testing on IllustrisTNG, SIMBA, and Astrid, respectively. When the network tests on IllustrisTNG, we can recover the general trend with slightly weaker constraints; the same holds when the network tests on SIMBA. Halos in Astrid will be brighter than the same halos in IllustrisTNG and SIMBA, so the majority of the points fall below the black line when testing on IllustrisTNG and SIMBA. We can conclude that a CNN trained on Astrid cannot recover the inference and constraints for \(M_{\rm halo}\). We see the same statistical nuances as in the previous figure: negative \(R^{2}\) values and large \(\chi^{2}\) values in the same configurations. By adding observational constraints for both HI and X-ray, the simulations gain a further level of similarity, which enhances their constraining power in the cross-simulation analysis.

Figure 8: Cross-simulation results for IllustrisTNG, SIMBA, and Astrid on X-ray for \(M_{\rm halo}\), with observational limits. The x-axis of each panel corresponds to the true values of \(M_{\rm halo}\), and the y-axis corresponds to the inference values of \(M_{\rm halo}\), as before. The y-axis labels indicate that the panels on the top row were trained on IllustrisTNG, the middle row on SIMBA, and the bottom row on Astrid. The columns are labeled such that the panels in the first column were tested on IllustrisTNG, the second column's panels on SIMBA, and the third on Astrid. The diagonal panels are the result of training and testing on the same simulation. Training and testing on Astrid provide the tightest constraints and the best inference. These points are a fraction of the full dataset.

Figure 10 shows the results from using the multifield (HI+X-ray) approach with observational limits to infer \(M_{\rm cgm}\). The layout of the plot is analogous to that of Fig. 9. Training on IllustrisTNG (top row) over-predicts the results for intermediate and low mass halos when testing on SIMBA and under-predicts the same results when testing on Astrid. This aligns with the expectations in the top left panel of Figure 1, outlining the relationship between halo mass and \(M_{\rm cgm}\). Training on SIMBA (middle row) under-predicts intermediate and low mass halo results when testing on IllustrisTNG and Astrid. Note that there is much more scatter when testing on IllustrisTNG, especially for objects with low \(M_{\rm cgm}\) values. Training on Astrid (bottom row) does reasonably well when testing on IllustrisTNG, with some scatter in the intermediate and low mass halos. However, it over-predicts these intermediate and low mass halos when testing on SIMBA. Although able to return similar trends, cross-simulation training and testing display offsets related to the different CGMs across the simulations.

Figure 9: Cross-simulation results to infer \(M_{\rm halo}\) using the multifield with observational limits for IllustrisTNG, SIMBA, and Astrid. The layout is the same as in Fig. 8. Even with the observational limits of HI and X-ray, training and testing on Astrid have the best overall inference for \(M_{\rm halo}\).
However, it is enlightening to see that the cross-simulation inference improves when more bands are included, which indicates broad properties like \(M_{\rm halo}\) and \(M_{\rm cgm}\) are more robustly characterized by observing in multiple bands.

## 4 Discussions

In this section, we discuss the interpretation of cross-simulation analysis (§4.1), the applications and limitations of CNNs when applied to the CGM (§4.2), and an intriguing direction for future work (§4.3).

### Cross-Simulation Interpretability

In the previous sections, we explored the robustness of simulations by examining cross-simulation inference with and without observational limits. Fig. 9 presents cross-simulation inferences for the multifield HI+X-ray with observational limits on \(M_{\rm halo}\). Upon initial inspection, training and testing on Astrid offers the tightest constraints across the entire mass range. In general, a test simulation will overpredict (underpredict) properties when trained on a simulation with CGM observables that are dimmer (brighter). Among the three simulations, a CNN trained on IllustrisTNG is the most robust, as it accurately captures the differences between halo mass measurements when testing on SIMBA and Astrid. However, more work must be done to show that a CNN trained on IllustrisTNG will produce the most robust predictions when applied to real observational data. A novel aspect to further explore is training and testing on multiple simulations, varying the feedback parameters such that the CNN would marginalize over the uncertainties in baryonic physics. The effort to train and test on different simulations mimics training on a simulation and predicting real observational data.

Figure 10: Cross-simulation results to infer \(M_{\rm cgm}\) using the multifield with observational limits for IllustrisTNG, SIMBA, and Astrid. The layout is the same as in Figure 8.

Although it is disappointing to see such deviations in the results of cross-simulation analysis, we know that some simulations outperform others. Using observational limits that resemble the ranges of detection of current instruments as a simulation constraint, we can begin directly comparing simulations and observations. From the results of Oppenheimer et al. (2021), we know that IllustrisTNG reproduces X-ray properties of group-mass halos better than SIMBA (c.f. Fig. 9). Astrid likely has a high \(f_{\rm gas}\); however, no mock X-ray observations have yet been conducted for this simulation. Ultimately, no simulation is a perfect representation of the real universe, and it is thus crucial to develop CNNs that can marginalize over uncertainties in subgrid physics.

Robustness quantification, or how well a network trained on one simulation can infer a given quantity when tested on another, is crucial for furthering the development of any set of simulations and machine learning algorithms, including the CAMELS suites (Villaescusa-Navarro et al., 2021; Villanueva-Domingo and Villaescusa-Navarro, 2022; Echeverri et al., 2023; de Santi et al., 2023). Lack of robustness can be due to either 1) differences between simulations, 2) networks learning from numerical effects or artifacts, or 3) lack of overlap between simulations in the high-dimensional parameter space. These reasons are not surprising given the use of the CV set within CAMELS, and there could be slight variations in feedback that are unaccounted for. Using the LH set instead would improve the results obtained in this work.
Additionally, precision (smaller error bars) without accuracy (recovering the "true" values) is meaningless. So, although Astrid generally has the smallest error bars, it still shows strong biases when tested on the other models. Future work can be done to address the inability to obtain robust constraints while performing cross-simulation analysis. One avenue is through domain adaptation (Ganin et al., 2015), which allows for a smoother transition between training and testing on different simulations such that we obtain robust results.

### Applicability and Limitations of CNNs Applied to the CGM

We have applied a CNN following the structural format used by Villaescusa-Navarro et al. (2022) and modified it to infer underlying _properties_ of the CGM of individual halos with fixed cosmology and astrophysics within the CAMELS CV set. The former CNN infers six independent parameters (two cosmological and four astrophysical feedback) _by the design of the LH simulation set_. Our trained CGM CNN learns to predict properties with high co-dependencies (e.g., \(\log(M_{\rm halo})\) and \(\log(T_{\rm cgm})\)) and related quantities (\(f_{\rm cgm}\) and \(M_{\rm cgm}\)). In the latter case, there are two different ways to quantify CGM mass in two distinct apertures: \(M_{\rm cgm}\) is the CGM mass inside 200 kpc, and \(f_{\rm cgm}\) is CGM mass over total mass inside \(R_{200c}\). We attempted to infer one property at a time instead of all six and found only a marginal improvement. The CNN implemented in this work, categorized as a moment network (Jeffrey and Wandelt, 2020), has the flexibility of inferring multiple properties simultaneously, but requires a rigorous hyperparameter search as detailed in §2.3. A concern that often appears with any simulation-based approach is the possibility of biases creeping into the result, generally due to incomplete modeling of physical processes. We aim to alleviate this concern first by using the CV set within the CAMELS simulations, where the values of cosmological and astrophysical feedback parameters are fixed to their fiducial values. The LH set, which was not used in this work (but could easily be integrated as part of future efforts), increases the chances of successful cross-simulation analysis as the astrophysical dependencies are completely marginalized over. From this standpoint, the CV set is not best suited to produce robust cross-simulation analysis. Utilizing the CV set, we gain valuable insights into the distinctions among the simulations and their effects on the results of CGM properties in this study. In addition to using the LH set, we can explore training and testing on more than one simulation or performing a similar analysis on the broader parameter space of TNG-SB28 (Ni et al., 2023). We apply CNNs to the CGM datasets to 1) determine the degree to which physical properties of the CGM can be inferred given a combination of fields and simulations and 2) examine different observing strategies to determine how combining different wavebands can infer underlying CGM properties. We demonstrate the feasibility of applying a CNN to observational datasets and return values and errors for CGM properties, including \(M_{\rm halo}\) and \(M_{\rm cgm}\). Additionally, the inference of \(M_{\rm halo}\) is more robustly determined when another field, along with its associated observational limits, is added.
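The moment-network construction just mentioned can be stated concretely. Below is a minimal TensorFlow sketch of the moment-network objective (Jeffrey and Wandelt, 2020), under the assumption that the network emits a mean `mu` and standard deviation `sigma` for each target property; implementations in CAMELS-style analyses may differ in normalization (e.g., optimizing logarithms of batch sums).

```python
import tensorflow as tf

def moment_loss(y_true, mu, sigma):
    """Moment-network objective: mu learns the posterior mean of each target
    property, and sigma learns the posterior standard deviation."""
    sq_err = tf.square(y_true - mu)                 # shapes: (batch, n_properties)
    loss_mean = tf.reduce_mean(sq_err)              # drives mu toward the mean
    loss_disp = tf.reduce_mean(tf.square(sq_err - tf.square(sigma)))
    return loss_mean + loss_disp                    # drives sigma^2 toward (y - mu)^2
```

This is what allows a single forward pass to return both values and error bars for properties such as \(M_{\rm halo}\) and \(M_{\rm cgm}\).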
However, training on one simulation and testing on another upholds the notion that predictions can produce significantly divergent results compared to the true values, as seen in Figs. 8 and 9. As mentioned in §4.1, although IllustrisTNG, SIMBA, and Astrid have been tuned to reproduce galaxies' and some gas properties, they make varied predictions for gaseous halos. In future efforts to improve this work, the LH set would replace the CV set, with the expectation of improvement as all astrophysics is marginalized over. Should this not be the case, domain adaptation is the longer-term solution to help bridge the many gaps between different subgrid physics models. Another interesting future direction would include training and testing on combinations of simulations, though this is ideally performed with the LH set.

### Future Work

In expanding the scope of this work to additional wavelengths in the future, we also aim to advance our understanding of where the CNN extracts important information from within a given map. We can use the information gained from this type of analysis, which has not been applied to CGM data before this work, to inform future observational surveys on how best to achieve the greatest scientific returns given wavelength, survey depth, and other specifications. Additionally, this type of analysis will be necessary for machine-learning verification and validation. To achieve this, we hypothesize that moving towards higher resolution simulations, including IllustrisTNG-100 and EAGLE, among others, will have a more significant impact across a wide range of scales, especially in the case of observational limits.

## 5 Conclusions

In this study, we use convolutional neural networks (CNNs) trained and tested on CAMELS simulations based on the IllustrisTNG, SIMBA, and Astrid galaxy formation models to infer six broad-scale properties of the circum-galactic medium (CGM). We focus on halo mass, CGM mass, CGM mass fraction, metallicity, temperature, and cool gas fraction. We simulate two observational fields, X-ray and 21-cm HI radio, that together represent the broad temperature range of the CGM. We test our CNN on datasets with and without (idealized) observational limits. Our key findings include:

1. When training and testing the CNN on the same simulation:
    1. Comparing all the CGM properties the CNN is trained to infer, it performs best overall on \(M_{\rm halo}\) and \(M_{\rm cgm}\), both with and without observational limits. For IllustrisTNG with observational limits, the RMSE values returned for \(M_{\rm halo}\) are \(\sim 0.14\) dex, and for \(M_{\rm cgm}\) \(\sim 0.11\) dex, when combining X-ray and HI data.
    2. The "multifield" CNN trained simultaneously on X-ray and HI data with observational limits allows for the best inference across the entire mass range using the same inputs, without the discontinuities seen when training on one field alone. Obtaining interpretable inferences on halo mass for the continuous range \(11.5\leq\log(M_{\rm halo}/M_{\odot})\leq 14.5\) requires a multifield, even though various combinations may be better over smaller mass bins than others. Sub-\(L^{*}\) halos (\(M_{\rm halo}=10^{11.5-12}\) M\({}_{\odot}\)) are only marginally better inferred with HI than with the multifield. Moving to \(L^{*}\) halos (\(M_{\rm halo}=10^{12-13}\) M\({}_{\odot}\)) and the more massive groups (\(M_{\rm halo}>10^{13}\) M\({}_{\odot}\)), there is a drastic improvement when using the multifield over both X-ray and HI alone. Our exploration demonstrates that a CNN fed multiple observational fields with detectable signals can continuously improve the inference of CGM properties over a large mass range given the same input maps.
    3. The performance of CNNs tends to improve when training on simulations that produce less scatter in CGM properties at fixed halo mass, with Astrid generally providing more accurate results compared to CNNs trained on TNG or SIMBA (and tested on the same model). Generally, \(Z_{\rm cgm}\) has the worst performance, especially for SIMBA and IllustrisTNG. However, the CNN trained on Astrid displays superior performance for metallicity despite significant scatter at a given halo mass. Generally, the narrower the dispersion of properties, the better the CNN performs.
    4. When adding observational limits to the multifield CNN, the inference accuracy declines but still returns RMSE values indicating success. Recovering total mass from observations appears to be feasible with our CNN. HI mapping is especially critical for recovering CGM properties of sub-\(L^{*}\) and \(L^{*}\) galaxies.
2. For CNN cross-simulation analysis (training on one simulation and testing on another):
    1. The inferred values generally correlate with the true physical properties, but they are frequently offset, indicating strong biases and overall poor statistical performance.
    2. Interestingly, the cross-simulation analysis reveals that using the HI+X-ray multifield with observational limits improves the halo mass inference compared to that from X-ray maps alone. In the process of adding constraints in this case, the difference between the individual simulation parameter spaces becomes smaller and acts as a tighter set of boundary conditions for the network.

Our results have broader implications for applying deep learning algorithms to the CGM than those outlined here. First, performing a cross-simulation analysis and determining that the CNN is robust opens the possibility of replacing one of the simulations with real data to infer the actual physical properties of observed systems. Second, adding more wavelengths is easily implemented within image-based neural networks. To continue making connections to both current and future multiwavelength surveys, we can expand the number of fields used in this architecture beyond X-ray and HI, including image-based CGM probes like the Dragonfly Telescope, which can map the CGM in optical ions such as H\(\alpha\) and NII (Lokhorst et al., 2022), and UV emission from ground- or space-based probes (Johnson et al., 2014, 2018; Burchett et al., 2018; Peroux and Howk, 2020). Most importantly, this method would allow simulation differences to be marginalized over while still obtaining correlations and constraints. We can overcome the current challenges of cross-simulation analysis by training our CNN on multiple CAMELS simulations and parameter variations, existing and in production (including expanding to EAGLE (Schaye et al., 2015), RAMSES (Teyssier, 2010), Enzo (Bryan et al., 2014), and Magneticum\({}^{5}\)), while integrating additional wavebands. As the number of simulations and wavelengths used as inputs grows, it is crucial to identify the primary source of information for the CNN. Future work includes performing saliency analysis with integrated gradients to determine the most important pixels in a given map, allowing for more targeted and efficient adjustments to improve inferences.
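As a sketch of how such a saliency analysis could look, the snippet below approximates integrated gradients (Sundararajan et al., 2017) for a trained Keras-style CNN; the `model` object, the zero baseline, and attributing the first output (e.g., \(M_{\rm halo}\)) are illustrative assumptions, not details from this paper.

```python
import tensorflow as tf

def integrated_gradients(model, x, baseline=None, steps=64):
    """Approximate per-pixel attribution for one input map x of shape (H, W, C)."""
    if baseline is None:
        baseline = tf.zeros_like(x)
    # Riemann-sum approximation of the path integral of gradients from baseline to x.
    alphas = tf.reshape(tf.linspace(0.0, 1.0, steps), (steps, 1, 1, 1))
    path = baseline + alphas * (x - baseline)       # (steps, H, W, C)
    with tf.GradientTape() as tape:
        tape.watch(path)
        preds = model(path)[:, 0]                   # attribute the first output property
    grads = tape.gradient(preds, path)
    return (x - baseline) * tf.reduce_mean(grads, axis=0)
```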
Such saliency analysis can reveal what underlying physical properties are universally recoverable and robustly predictable in observations. The CGM demarcates a region of space defined by nebulous boundaries, which poses a unique challenge to traditional analysis techniques like Principal Component Analysis (PCA). Moreover, there are no established methods for characteristically analyzing the CGM. The phrase "characteristically analyzing" implies categorizing entities distinctly. For instance, traditional analysis can be used with galaxies to classify them into various categories based on their unique evolutionary traits, as evidenced by Lotz et al. (2004). However, the CGM refers to the area surrounding the galactic disk out to the accretion shock radius, where neither boundary is precisely defined, as they cannot be directly observed. Applying the same traditional analysis approach to a CGM dataset would require a rigid pipeline, making it difficult to incorporate new simulations or wavelengths without extensive reconfiguration. Deep learning offers a more flexible and versatile solution.

Footnote 5: [http://www.magneticum.org/](http://www.magneticum.org/)

## Acknowledgements

We thank Shy Genel and Matthew Ho for valuable feedback and suggestions for the paper. The CAMELS simulations were performed on the supercomputing facilities of the Flatiron Institute, which is supported by the Simons Foundation. This work is supported by the NSF grant AST 2206055 and the Yale Center for Research Computing facilities and staff. The work of FVN is supported by the Simons Foundation. The CAMELS project is supported by the Simons Foundation and the NSF grant AST 2108078. DAA acknowledges support by NSF grants AST-2009687 and AST-2108944, CXO grant TM2-23006X, Simons Foundation Award CCA-1018464, and Cottrell Scholar Award CS-CSA-2023-028 by the Research Corporation for Science Advancement.

## Data Availability

CAMELS data is publicly available at [https://camels.readthedocs.io/en/latest/](https://camels.readthedocs.io/en/latest/). Original data are available from the authors upon request by emailing naomi.gluck@yale.edu.
2309.17240
Data-driven localized waves and parameter discovery in the massive Thirring model via extended physics-informed neural networks with interface zones
In this paper, we study data-driven localized wave solutions and parameter discovery in the massive Thirring (MT) model via deep learning in the framework of the physics-informed neural networks (PINNs) algorithm. Abundant data-driven solutions, including solitons of bright/dark type, breathers and rogue waves, are simulated accurately and analyzed contrastively with relative and absolute errors. For higher-order localized wave solutions, we employ the extended PINNs (XPINNs) with domain decomposition to capture the complete pictures of dynamic behaviors such as soliton collisions, breather oscillations and rogue-wave superposition. In particular, we modify the interface line in the domain decomposition of XPINNs into a small interface zone and introduce the pseudo initial, residual and gradient conditions as interface conditions linked adjacently with individual neural networks. Then this modified approach is applied successfully to various solutions ranging from the bright-bright soliton, dark-dark soliton, dark-antidark soliton and general breather to the Kuznetsov-Ma breather and second-order rogue wave. Experimental results show that this improved version of XPINNs reduces the computational complexity with a faster convergence rate, while also preserving the quality of the learned solutions with smoother stitching performance. For the inverse problems, the unknown coefficient parameters of the linear and nonlinear terms in the MT model are identified accurately with and without noise by using the classical PINNs algorithm.
Junchao Chen, Jin Song, Zijian Zhou, Zhenya Yan
2023-09-29T13:50:32Z
http://arxiv.org/abs/2309.17240v1
Data-driven localized waves and parameter discovery in the massive Thirring model via extended physics-informed neural networks with interface zones

###### Abstract

In this paper, we study data-driven localized wave solutions and parameter discovery in the massive Thirring (MT) model via deep learning in the framework of the physics-informed neural networks (PINNs) algorithm. Abundant data-driven solutions, including solitons of bright/dark type, breathers and rogue waves, are simulated accurately and analyzed contrastively with relative and absolute errors. For higher-order localized wave solutions, we employ the extended PINNs (XPINNs) with domain decomposition to capture the complete pictures of dynamic behaviors such as soliton collisions, breather oscillations and rogue-wave superposition. In particular, we modify the interface line in the domain decomposition of XPINNs into a small interface zone and introduce the pseudo initial, residual and gradient conditions as interface conditions linked adjacently with individual neural networks. Then this modified approach is applied successfully to various solutions ranging from the bright-bright soliton, dark-dark soliton, dark-antidark soliton and general breather to the Kuznetsov-Ma breather and second-order rogue wave. Experimental results show that this improved version of XPINNs reduces the computational complexity with a faster convergence rate, while also preserving the quality of the learned solutions with smoother stitching performance. For the inverse problems, the unknown coefficient parameters of the linear and nonlinear terms in the MT model are identified accurately with and without noise by using the classical PINNs algorithm.

keywords: Deep learning, XPINNs algorithm with interface zones, Massive Thirring model, Data-driven localized waves, Parameter discovery

## 1 Introduction

The application of deep learning in the form of neural networks (NNs) to solve partial differential equations (PDEs) has recently been explored in the field of scientific machine learning due to the universal approximation capability and powerful performance of NNs [1; 2; 3]. In particular, by adding a residual term from the given PDE to the loss function with the aid of automatic differentiation, the physics-informed neural networks (PINNs) approach has been proposed to accurately solve forward problems of predicting the solutions and inverse problems of identifying the model parameters from the measured data [4]. Unlike traditional numerical techniques such as the finite element and finite difference methods, the PINNs method is a mesh-free algorithm and requires relatively small amounts of data [5; 6]. Based on these advantages, PINNs have also been applied extensively to different types of PDEs, including integro-differential equations [6], fractional PDEs [7], and stochastic PDEs [8; 9]. However, one of the main disadvantages of PINNs is the high computational cost associated with training NNs, which can lead to poor performance and low accuracy. In particular, for modelling problems involving the long-term integration of PDEs, the large amount of data leads to a rapid increase in the cost of training. To reduce the computational cost, several improved methods based on PINNs have been introduced to accelerate the convergence of models without loss of performance.
The conservative PINNs (cPINNs) method has been established on discrete subdomains obtained after dividing the computational domain for nonlinear conservation laws, where the subdomains are stitched together on the basis of flux continuity across subdomain interfaces [10]. The extended PINNs (XPINNs) approach with generalized domain decomposition has been proposed for any type of PDEs, where the XPINNs formulation offers highly irregular, convex/non-convex space-time domain decomposition, thereby reducing the training cost effectively [11]. The _hp_-variational PINNs (_hp_-VPINNs) framework, based on the nonlinear approximation of NNs and _hp_-refinement via domain decomposition, has been formulated to solve PDEs using the variational principle [12]. Based on cPINNs and XPINNs, a parallel PINNs (PPINNs) approach via domain decomposition has recently been developed to effectively address multi-scale and multi-physics problems [13; 14]. The augmented PINNs (APINNs) has been proposed with soft and trainable domain decomposition and flexible parameter sharing [15]. The PINNs/XPINNs algorithm has been employed to approximate supersonic compressible flows [16] and high-speed aerodynamic flows [17], and to quantify the microstructure of polycrystalline nickel [18]. The highly ill-posed inverse water wave problems governed by the Serre-Green-Naghdi equations have been solved by means of PINNs/XPINNs to choose the optimal location of sensors [19]. The unified scalable framework for PINNs with temporal domain decomposition has been introduced for time-dependent PDEs [20]. The multi-stage training algorithm consisting of individual PINNs has recently been presented via domain decomposition only in the direction of time [21; 22]. Indeed, theoretical analyses of PINNs/XPINNs, including convergence and generalization properties, have been carried out for linear and nonlinear PDEs through rigorous error estimates [23; 24; 25], and the conditions under which XPINNs improve generalization have been discussed by proving certain generalization bounds [26]. In addition, the activation function is one of the important hyperparameters in NNs, which usually acts on the affine transformation. The activation function introduces nonlinearity into NNs in order to capture nonlinear features, and popular activation functions include rectified linear units, maxout units, the logistic sigmoid, the hyperbolic tangent function and so on [27]. In order to improve the performance of NNs and the convergence speed of the objective function, global and local adaptive activation functions have been proposed by introducing scalable parameters into the activation function and adding a slope recovery term to the loss function [28; 29]. Furthermore, a generalized Kronecker NN for arbitrary adaptive activation functions has been established by Jagtap et al. [30]. A comprehensive survey, an applications-based taxonomy of activation functions, and a systematic comparison of various fixed and adaptive activation functions, including the state-of-the-art adaptive ones, have been presented in [27]. These adaptive activation functions have also been used extensively in XPINNs (for example, on the inverse supersonic flows [16]). As a special class of nonlinear PDEs, integrable systems possess remarkable mathematical structures such as infinitely many conservation laws, Lax pairs, abundant symmetries, various exact solutions and so on.
For such types of nonlinear systems, the PINNs algorithm could benefit greatly from the underlying integrable properties and achieve a more accurate numerical prediction. In particular, a rich variety of solutions with local features, such as solitons, breathers, rogue waves and periodic waves, can be derived exactly by using the classical methods for studying integrable equations, including the inverse scattering transformation, the Darboux transformation, the Hirota bilinear method and the Backlund transformation. These solutions potentially provide us with a large number of training samples for numerical experiments in the framework of PINNs. For example, the classical and improved PINNs methods based on explicit solutions have been used to study data-driven nonlinear waves and parameter discovery in many integrable equations, including the KdV equation [4; 31], the mKdV equation [31], the nonlinear Schrodinger (NLS) equation [4; 32; 33; 34], the derivative NLS equation [35; 36], the Manakov system [37; 21; 38], the Yajima-Oikawa system [39], the Camassa-Holm equation [40], etc. It is worth noting that a modified PINNs approach, developed by introducing conserved quantities from integrable theory into NNs, has been shown to achieve higher prediction accuracy [41; 42]. Moreover, the PINNs scheme has been applied to discover the Backlund transformation and Miura transform [43], which are two typical transformations in the field of integrable systems. PINNs schemes based on Miura transformations have been established as an implementation method of unsupervised learning for predicting new local wave solutions [44]. In addition, PINN deep learning has been employed to solve forward and inverse problems of the defocusing NLS equation with a rational potential [45], the logarithmic NLS with \(\mathcal{PT}\)-symmetric harmonic potential [46], the \(\mathcal{PT}\)-symmetric saturable NLS equation [47] and other nearly integrable models [48, 49, 50, 51, 22]. More recently, the deep learning method with Fourier neural operator was used to study solitons of both integrable nonlinear wave equations [52] and integrable fractional nonlinear wave equations [53]. The massive Thirring (MT) model takes the following form in laboratory coordinates \[\begin{split}\mathrm{i}u_{x}+v+u|v|^{2}&=0,\\ \mathrm{i}v_{t}+u+v|u|^{2}&=0.\end{split} \tag{1}\] This two-component nonlinear wave evolution model is known to be completely integrable in the sense of possessing infinitely many conserved quantities [54] and by means of the inverse scattering transform method [55, 56]. It reduces to the massless Thirring model [57] if the linear terms \(u\) and \(v\) are ignored. In field theory, the MT model (1) corresponds to an exactly solvable example of a one-dimensional nonlinear Dirac system arising as a relativistic extension of the NLS equation [57]. In nonlinear optics, the two components in Eq. (1) represent envelopes of the forward and backward waves, respectively [58]. Indeed, the MT model (1) is a particular case of the coupled mode equations with self-phase modulation [58]. These coupled mode equations have been extensively used to describe pulse propagation in periodic or Bragg nonlinear optical media [59, 60, 61, 62, 63], deep water waves over a periodic bottom in the ocean [64, 65], and superpositions of two hyperfine Zeeman sublevels in atomic Bose-Einstein condensates [66]. The Coleman correspondence between the sine-Gordon equation and the MT model has been found and used to generate solutions in [67].
Various solutions of the MT model, like soliton, rogue wave and algebro-geometric solutions, as well as other integrable properties, have been widely studied by using many different techniques such as the Darboux transformation, Backlund transformation, Hirota bilinear method and dressing method [68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83]. With the aid of the bilinear Kadomtsev-Petviashvili (KP) hierarchy reduction approach, one of the authors has systematically constructed tau-function solutions of the MT model (1), including bright and dark solitons and breathers as well as rogue waves, in the compact form of determinants [84, 85]. In this paper, we aim to explore data-driven localized waves such as solitons, breathers and rogue waves, as well as parameter discovery, in the MT model (1) via the deep learning algorithm. In forward problems of predicting nonlinear waves, we will employ the traditional PINNs framework to learn the simple solutions, including the one-soliton of bright/dark type and the first-order rogue wave. For the solutions of two-soliton, one-breather and second-order rogue wave with complicated structures, it is necessary to enlarge the computational domain to capture the complete pictures of dynamic behaviors such as soliton collisions, breather oscillations and rogue-wave superpositions. These requirements force us to utilize the XPINNs approach with domain decomposition for the corresponding data-driven experiments. More specifically, the computational domain is divided into a number of subdomains along the time variable \(t\) when we treat the MT model as a time evolution problem; hence the XPINNs architecture can also be regarded as a multi-stage training algorithm in this situation. In particular, we slightly modify the interface line in the domain decomposition of XPINNs into a small interface zone shared by two adjacent subdomains. In the training process, the pseudo initial, residual and gradient points are randomly selected in the small interface zones and are subjected to three types of interface conditions for a better stitching effect and faster convergence. In inverse problems of discovering parameters, the classical PINNs algorithm is applied to identify the coefficients of the linear and nonlinear terms in the MT model (1) in the absence and presence of noise. The remainder of the paper is organized as follows. In Section 2, we first introduce the classical PINNs method and develop the modified XPINNs approach with small interface zones, in which the domain decomposition with sampling points and the schematic of the individual PINN with interface conditions are illustrated in detail. Then dynamic behaviors of data-driven localized waves in the MT model (1), ranging widely over the one-soliton of bright/dark type, bright-bright soliton, dark-dark soliton, dark-antidark soliton, general breather, Kuznetsov-Ma breather and rogue waves of first and second order, are presented via the traditional PINNs and modified XPINNs methods. In Section 3, based on three types of localized wave solutions, we discuss data-driven parameter discovery in the MT model (1) with and without noise via the classical PINNs algorithm. Conclusions and discussions are given in the last section.

## 2 Data-driven localized waves of the MT model

In this section, we will use PINNs and XPINNs to solve forward problems of the MT model (1).
More precisely, we will focus on the MT model (1) with initial and Dirichlet boundary conditions as follows: \[\left\{\begin{array}{l}\mathrm{i}u_{x}+v+u|v|^{2}=0,\\ \mathrm{i}v_{t}+u+v|u|^{2}=0,\\ u(x,T_{0})=u_{0}(x),\;\;v(x,T_{0})=v_{0}(x),\;\;x\in[L_{0},L_{1}],\\ u(L_{0},t)=u_{lb}(t),\;\;v(L_{0},t)=v_{lb}(t),\\ u(L_{1},t)=u_{ub}(t),\;\;v(L_{1},t)=v_{ub}(t),\end{array}\right.\quad t\in[T_{0},T_{1}]. \tag{2}\] By writing the complex-valued solutions as \(u=p+\mathrm{i}q\) and \(v=r+\mathrm{i}s\), with the real-valued functions \((p,q)\) and \((r,s)\) being the real and imaginary parts of \(u\) and \(v\) respectively, the MT model (2) is decomposed into four real equations. According to the idea of PINNs, we need to construct a complex-valued neural network to approximate the solutions \(u\) and \(v\). The complex-valued PINNs for the MT model (2) can be defined as \[\begin{array}{l}f_{u}:=\mathrm{i}\hat{u}_{x}+\hat{v}+\hat{u}|\hat{v}|^{2},\\ f_{v}:=\mathrm{i}\hat{v}_{t}+\hat{u}+\hat{v}|\hat{u}|^{2}.\end{array} \tag{3}\] By rewriting \(\hat{u}=\hat{p}+\mathrm{i}\hat{q}\) and \(\hat{v}=\hat{r}+\mathrm{i}\hat{s}\) with the real-valued functions \((\hat{p},\hat{q})\) and \((\hat{r},\hat{s})\), the above models are converted into the following real-valued PINNs: \[\begin{array}{l}f_{p}:=\hat{p}_{x}+\hat{s}+\hat{q}(\hat{r}^{2}+\hat{s}^{2}),\;\;f_{q}:=-\hat{q}_{x}+\hat{r}+\hat{p}(\hat{r}^{2}+\hat{s}^{2}),\\ f_{r}:=\hat{r}_{t}+\hat{q}+\hat{s}(\hat{p}^{2}+\hat{q}^{2}),\;\;f_{s}:=-\hat{s}_{t}+\hat{p}+\hat{r}(\hat{p}^{2}+\hat{q}^{2}),\end{array} \tag{4}\] which possess two inputs \((x,t)\) and four outputs \((\hat{p},\hat{q},\hat{r},\hat{s})\).

### Methodology

#### 2.1.1 Physics-informed neural networks

Based on the algorithm of PINNs, we first establish a fully connected NN of depth \(L\) with an input layer, \(L-1\) hidden layers and an output layer. Assuming that the \(l\)th hidden layer possesses \(N_{l}\) neurons, the \(l\)th layer and the previous layer can be connected by using the affine transformation \(\mathcal{A}_{l}\) and the activation function \(\sigma\): \[\mathbf{x}^{l}=\sigma(\mathcal{A}_{l}(\mathbf{x}^{l-1}))=\sigma(\mathbf{W}^{l}\mathbf{x}^{l-1}+\mathbf{b}^{l}), \tag{5}\] where the weight matrix and bias vector are denoted by \(\mathbf{W}^{l}\in R^{N_{l}\times N_{l-1}}\) and \(\mathbf{b}^{l}\in R^{N_{l}}\), respectively. Hence, the whole NN can be written as \[\hat{\mathcal{U}}(\mathbf{x};\Theta)=(\mathcal{A}_{L}\circ\sigma\circ\mathcal{A}_{L-1}\circ\cdots\circ\sigma\circ\mathcal{A}_{1})(\mathbf{x}), \tag{6}\] where \(\hat{\mathcal{U}}(\mathbf{x};\Theta)\) represents the four outputs \((\hat{p},\hat{q},\hat{r},\hat{s})\) for predicting the solutions \((p,q,r,s)\), and \(\mathbf{x}\) stands for the two inputs \((x,t)\). The set \(\Theta=\{\mathbf{W}^{l},\mathbf{b}^{l}\}_{l=1}^{L}\in\hat{\mathcal{P}}\) contains the trainable parameters with \(\hat{\mathcal{P}}\) being the parameter space. Together with the initial and boundary value conditions, the loss function, which consists of different types of mean squared error (MSE), is defined as \[\mathcal{L}(\Theta)=Loss=\mathbf{W}_{R}MSE_{R}+\mathbf{W}_{IB}MSE_{IB}, \tag{7}\] where \(\mathbf{W}_{R}\) and \(\mathbf{W}_{IB}\) are the adjustable weights for the residual and the initial-boundary terms, respectively.
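Concretely, the residuals in Eq. (4) can be evaluated with automatic differentiation. Below is a minimal TensorFlow sketch in the spirit of the experiments reported later (Python with TensorFlow, float64); the network `net`, assumed here to map stacked \((x,t)\) pairs to the four outputs \((\hat{p},\hat{q},\hat{r},\hat{s})\), is an illustrative placeholder.

```python
import tensorflow as tf

def mt_residuals(net, x, t):
    """Residuals (f_p, f_q, f_r, f_s) of Eq. (4) via automatic differentiation.
    x, t: 1-D tensors of collocation coordinates."""
    with tf.GradientTape(persistent=True) as tape:
        tape.watch([x, t])
        p, q, r, s = tf.unstack(net(tf.stack([x, t], axis=1)), axis=1)
    p_x = tape.gradient(p, x); q_x = tape.gradient(q, x)
    r_t = tape.gradient(r, t); s_t = tape.gradient(s, t)
    del tape
    f_p = p_x + s + q * (r**2 + s**2)
    f_q = -q_x + r + p * (r**2 + s**2)
    f_r = r_t + q + s * (p**2 + q**2)
    f_s = -s_t + p + r * (p**2 + q**2)
    return f_p, f_q, f_r, f_s
```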
The terms \(MSE_{R}\) and \(MSE_{IB}\) are given by \[\begin{split} MSE_{R}&=\frac{1}{N_{R}}\sum_{i=1}^{N_{R}}\left(\left|f_{p}(x_{R}^{i},t_{R}^{i})\right|^{2}+\left|f_{q}(x_{R}^{i},t_{R}^{i})\right|^{2}+\left|f_{r}(x_{R}^{i},t_{R}^{i})\right|^{2}+\left|f_{s}(x_{R}^{i},t_{R}^{i})\right|^{2}\right),\\ MSE_{IB}&=\frac{1}{N_{IB}}\sum_{i=1}^{N_{IB}}\left(\left|\hat{p}(x_{IB}^{i},t_{IB}^{i})-p^{i}\right|^{2}+\left|\hat{q}(x_{IB}^{i},t_{IB}^{i})-q^{i}\right|^{2}+\left|\hat{r}(x_{IB}^{i},t_{IB}^{i})-r^{i}\right|^{2}+\left|\hat{s}(x_{IB}^{i},t_{IB}^{i})-s^{i}\right|^{2}\right),\end{split} \tag{8}\] where \(\{x_{R}^{i},t_{R}^{i}\}_{i=1}^{N_{R}}\) is the set of random residual points, and \(\{x_{IB}^{i},t_{IB}^{i}\}_{i=1}^{N_{IB}}\) is the set of training points randomly selected from the dataset of initial and boundary value conditions, with \((p^{i},q^{i},r^{i},s^{i})\equiv[p(x_{IB}^{i},t_{IB}^{i}),q(x_{IB}^{i},t_{IB}^{i}),r(x_{IB}^{i},t_{IB}^{i}),s(x_{IB}^{i},t_{IB}^{i})]\). Here \(N_{IB}\) and \(N_{R}\) represent the numbers of points in the two terms of the loss function. The term \(MSE_{R}\) measures the deviation from the differential equations of the MT model on the collocation points with the help of automatic differentiation, while the term \(MSE_{IB}\) enforces the given initial-boundary value conditions as a constraint ensuring the well-posedness of the MT model. The PINN algorithm is designed to find the optimized parameters in the set \(\Theta\) by minimizing the total loss function. Consequently, the NN outputs generate the approximate solutions \(\hat{u}=\hat{p}+\mathrm{i}\hat{q}\) and \(\hat{v}=\hat{r}+\mathrm{i}\hat{s}\), which not only obey the differential equations in the MT model, but also satisfy the initial-boundary value conditions in Eq. (2).

#### 2.1.2 Extended physics-informed neural networks with interface zones

The basic idea of XPINNs is to divide the computational domain into a number of subdomains and to employ a separate PINN in each subdomain [11]. Then the trained solutions, which are subject to proper interface conditions in the decomposed subdomains, are stitched together to obtain a solution over the entire domain. In XPINNs and cPINNs [10; 11], a sufficient number of points need to be selected from a common interface line between two adjacent domains and substituted into the interface conditions. Here we slightly modify this interface line into a small interface zone that is jointly located in two adjacent domains. The main reason for this modification is that, apart from the pseudo initial and residual conditions, the gradient condition in particular can then be introduced more reasonably as an interface condition for the smoothness of the learned solutions. This simple improvement results in better stitching performance and faster convergence, as shown below. To apply XPINNs with small interface zones to our model (2), the whole domain is first divided into \(N_{s}\) subdomains along the time interval \([T_{0},T_{1}]=[t_{0},t_{n}]=[t_{0},t_{1}]^{0}\bigcup[t_{1},t_{2}]^{1}\bigcup\cdots\bigcup[t_{n-1},t_{n}]^{N_{s}-1}\). The \(k\)-th subdomain (\(k=0,1,\cdots,N_{s}-1\), with \(N_{s}=n\)) is described as the set \(\Omega_{k}\equiv\{(x,t)|x\in[x_{lb},x_{ub}]=[L_{0},L_{1}],t\in[t_{k},t_{k+1}]\}\). Then the small interface zones are introduced between two adjacent domains.
As our division is along the time variable, the neighborhood of \(t_{k}\) determines the interface zone \(\Delta\Omega_{k}\equiv\{(x,t)|x\in[x_{lb},x_{ub}],t\in[t_{k}-e_{k}^{-},t_{k}+e_{k}^{+}]\}\) between the \((k-1)\)-th subdomain and the \(k\)-th one, where \(e_{k}^{+}\) and \(e_{k}^{-}\) are small parameters. In particular, if \(e_{k}^{\pm}=0\), the interface zones reduce to the interface lines of the original XPINNs. The schematic diagram of such domain decomposition with interface zones is displayed in Fig. 1.

Figure 1: The illustrated domain decomposition and distribution of sampling points at each stage.

This domain decomposition suggests that \(N_{s}\) stages of individual PINNs need to be employed successively, where the \(k\)-th PINN is implemented in the \(k\)-th subdomain correspondingly. For the 0-th stage, the original PINN is used to train the model in the 0-th subdomain, where the loss function is the same as Eq. (7) without additional terms. The dataset of the initial value is still generated from the initial value condition in the whole model (2), while the boundary values are obtained from the boundary value functions restricted to the time subinterval \(t\in[t_{0},t_{1}]\). For the \(k\)-th stage (\(k>0\)), the loss function of the PINN in the \(k\)-th subdomain is redefined by adding the interface conditions \[\mathcal{L}(\Theta_{k})=\textit{Loss}_{k}=\mathbf{W}_{R}^{(k)}MSE_{R}^{(k)}+\mathbf{W}_{PIB}^{(k)}MSE_{PIB}^{(k)}+\mathbf{W}_{IR}^{(k)}MSE_{IR}^{(k)}+\mathbf{W}_{IG}^{(k)}MSE_{IG}^{(k)}, \tag{9}\] where \(\mathbf{W}_{R}^{(k)}\), \(\mathbf{W}_{PIB}^{(k)}\), \(\mathbf{W}_{IR}^{(k)}\) and \(\mathbf{W}_{IG}^{(k)}\) are the adjustable weights for the residual, the pseudo initial-boundary and the interface terms, where the last two weights correspond to residual continuity and solution smoothness in the interface zone.
The four terms of MSE are expressed as follows: \[\begin{split} MSE_{R}^{(k)}&=\frac{1}{N_{R}^{(k)}}\sum_{i=1}^{N_{R}^{(k)}}\left(\left|f_{p}(x_{R_{k}}^{i},t_{R_{k}}^{i})\right|^{2}+\left|f_{q}(x_{R_{k}}^{i},t_{R_{k}}^{i})\right|^{2}+\left|f_{r}(x_{R_{k}}^{i},t_{R_{k}}^{i})\right|^{2}+\left|f_{s}(x_{R_{k}}^{i},t_{R_{k}}^{i})\right|^{2}\right),\\ MSE_{PIB}^{(k)}&=\frac{1}{N_{PIB}^{(k)}}\sum_{i=1}^{N_{PIB}^{(k)}}\left(\left|\hat{p}(x_{PIB_{k}}^{i},t_{PIB_{k}}^{i})-p_{k}^{i}\right|^{2}+\left|\hat{q}(x_{PIB_{k}}^{i},t_{PIB_{k}}^{i})-q_{k}^{i}\right|^{2}+\left|\hat{r}(x_{PIB_{k}}^{i},t_{PIB_{k}}^{i})-r_{k}^{i}\right|^{2}+\left|\hat{s}(x_{PIB_{k}}^{i},t_{PIB_{k}}^{i})-s_{k}^{i}\right|^{2}\right),\\ MSE_{IR}^{(k)}&=\frac{1}{N_{IR}^{(k)}}\sum_{i=1}^{N_{IR}^{(k)}}\left(\left|f_{p}(x_{IR_{k}}^{i},t_{IR_{k}}^{i})-f_{p^{-}}^{i}\right|^{2}+\left|f_{q}(x_{IR_{k}}^{i},t_{IR_{k}}^{i})-f_{q^{-}}^{i}\right|^{2}+\left|f_{r}(x_{IR_{k}}^{i},t_{IR_{k}}^{i})-f_{r^{-}}^{i}\right|^{2}+\left|f_{s}(x_{IR_{k}}^{i},t_{IR_{k}}^{i})-f_{s^{-}}^{i}\right|^{2}\right),\\ MSE_{IG}^{(k)}&=\frac{1}{N_{IG}^{(k)}}\sum_{i=1}^{N_{IG}^{(k)}}\left(\left|\partial_{t}\hat{p}(x_{IG_{k}}^{i},t_{IG_{k}}^{i})-\partial_{t}p^{i-}\right|^{2}+\left|\partial_{t}\hat{q}(x_{IG_{k}}^{i},t_{IG_{k}}^{i})-\partial_{t}q^{i-}\right|^{2}+\left|\partial_{t}\hat{r}(x_{IG_{k}}^{i},t_{IG_{k}}^{i})-\partial_{t}r^{i-}\right|^{2}+\left|\partial_{t}\hat{s}(x_{IG_{k}}^{i},t_{IG_{k}}^{i})-\partial_{t}s^{i-}\right|^{2}\right),\end{split} \tag{10}\] where \(\{x_{R_{k}}^{i},t_{R_{k}}^{i}\}_{i=1}^{N_{R}^{(k)}}\) is the set of random residual points in the \(k\)-th subdomain, and \(\{x_{PIB_{k}}^{i},t_{PIB_{k}}^{i}\}_{i=1}^{N_{PIB}^{(k)}}\) is the set of training points randomly selected from the dataset of pseudo initial-boundary value conditions. The values of \((p_{k}^{i},q_{k}^{i},r_{k}^{i},s_{k}^{i})\) consist of two parts: if the training points belong to the sub-boundary, i.e., \((x_{PIB_{k}}^{i},t_{PIB_{k}}^{i})\in\{(x,t)|x=x_{lb},x_{ub},\,t\in[t_{k},t_{k+1}]\}\), then \(p_{k}^{i}+\mathrm{i}q_{k}^{i}=u_{lb}(t_{PIB_{k}}^{i}),u_{ub}(t_{PIB_{k}}^{i})\) and \(r_{k}^{i}+\mathrm{i}s_{k}^{i}=v_{lb}(t_{PIB_{k}}^{i}),v_{ub}(t_{PIB_{k}}^{i})\); if the training points belong to the interface zone, i.e., \((x_{PIB_{k}}^{i},t_{PIB_{k}}^{i})\in\Delta\Omega_{k}\), then \((p_{k}^{i},q_{k}^{i},r_{k}^{i},s_{k}^{i})\) are called pseudo initial values, obtained as the predicted solution outputs \((\hat{p}_{k-1}^{i},\hat{q}_{k-1}^{i},\hat{r}_{k-1}^{i},\hat{s}_{k-1}^{i})\) at \((x_{PIB_{k}}^{i},t_{PIB_{k}}^{i})\) through the \((k-1)\)-th PINN with the last optimized parameters.

Fig. 2: The schematic of PINN for the MT model employed at a stage, where NN and physics-informed network share hyper-parameters with additional loss terms in the interface zone.

Furthermore, \(\{x_{IR_{k}}^{i},t_{IR_{k}}^{i}\}_{i=1}^{N_{IR}^{(k)}}\) and \(\{x_{IG_{k}}^{i},t_{IG_{k}}^{i}\}_{i=1}^{N_{IG}^{(k)}}\) denote the sets of randomly selected residual points and gradient points, respectively, in the interface zone \(\Delta\Omega_{k}\).
The \((f_{p^{-}}^{i},f_{q^{-}}^{i},f_{r^{-}}^{i},f_{s^{-}}^{i})\) and \((\partial_{t}p^{i-},\partial_{t}q^{i-},\partial_{t}r^{i-},\partial_{t}s^{i-})\) represent residual and gradient outputs at \((x_{IR_{k}}^{i},t_{IR_{k}}^{i})\) and \((x_{IG_{k}}^{i},t_{IG_{k}}^{i})\), respectively, from the \((k-1)\)-th PINN. Notice that due to the domain decomposition along the time direction, only the gradients with respect to the variable \(t\) need to be computed and then imposed as one of the interface conditions. The \(N_{R}^{(k)}\), \(N_{PIB}^{(k)}\), \(N_{IR}^{(k)}\) and \(N_{IG}^{(k)}\) represent the numbers of points in the corresponding sets. The distributions of these sampling points are illustrated in Fig. 1, where the (pseudo) initial-boundary points and residual points for each subdomain are marked with different color crosses and green dots, respectively. In the interface zone \(\Delta\Omega_{k}\), residual and gradient points are distinguished by using red and blue dots. In fact, we can see clearly that the pseudo initial points are also taken in the interface zone \(\Delta\Omega_{k}\), and the relevant MSE term can equivalently be seen as one of the interface conditions at the \(k\)-th stage. For the four terms in the loss function (9), the basic roles of \(MSE_{R}^{(k)}\) and \(MSE_{PIB}^{(k)}\) are the same as described before for the PINN algorithm on the \(k\)-th subdomain. The \(MSE_{IR}^{(k)}\) and \(MSE_{IG}^{(k)}\) correspond to the residual continuity and \(C^{1}\) solution continuity across the interface zone, which is jointly associated with two different sub-networks. These last two terms, together with the relevant MSE for pseudo initial points acting as interface conditions, are responsible for transmitting the physical information from the \((k-1)\)-th subdomain to the \(k\)-th one. To better understand the XPINN algorithm, a schematic of the modified PINN in a subdomain is shown in Fig. 2. Apart from the contributions of the NN part and the physics-informed part, the loss function is changed by adding certain interface conditions that ensure the quality of stitching and improve the convergence rate. By minimizing the loss function below a given tolerance \(\epsilon\) or up to a prescribed maximum number of iterations, we can find the optimal values of weights \(\mathbf{W}\) and biases \(\mathbf{b}\). Then the outputs \((I,f,\partial_{t})\circ(\hat{p},\hat{q},\hat{r},\hat{s})\) represent three parts: \(I\circ(\hat{p},\hat{q},\hat{r},\hat{s})=(\hat{p},\hat{q},\hat{r},\hat{s})\) are the predicted solutions in the subdomain, while \(f\circ(\hat{p},\hat{q},\hat{r},\hat{s})=(f_{p},f_{q},f_{r},f_{s})\) and \(\partial_{t}\circ(\hat{p},\hat{q},\hat{r},\hat{s})=(\partial_{t}\hat{p},\partial_{t}\hat{q},\partial_{t}\hat{r},\partial_{t}\hat{s})\) are the residuals and gradients on random points from the common interface zone.
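To make the bookkeeping explicit, the following is a compact sketch of the stage-\(k\) objective in Eq. (9). It reuses `mt_residuals` from the earlier sketch; `time_grads` (returning the \(\partial_{t}\) outputs of the network) and the `batch` dictionary of point sets and frozen stage-\((k-1)\) targets are assumed helpers, not the paper's code. The default weights follow the recommended values given below.

```python
import tensorflow as tf

def stage_loss(net_k, batch, w_r=1.0, w_pib=1.0, w_ir=1e-5, w_ig=1e-4):
    """Stage-k XPINN loss of Eq. (9); pseudo targets come from the frozen stage-(k-1) network."""
    mse = lambda a, b=0.0: tf.reduce_mean(tf.square(a - b))
    # Residual term over collocation points (x, t) inside the k-th subdomain.
    loss_r = sum(mse(f) for f in mt_residuals(net_k, *batch["res_xt"]))
    # Pseudo initial-boundary term: boundary data or stage-(k-1) predictions in the zone.
    outs = tf.unstack(net_k(batch["pib_xt"]), axis=1)
    loss_pib = sum(mse(o, tgt) for o, tgt in zip(outs, batch["pib_targets"]))
    # Interface conditions: residual continuity and d/dt (gradient) continuity.
    loss_ir = sum(mse(f, f_prev) for f, f_prev in
                  zip(mt_residuals(net_k, *batch["iface_xt"]), batch["iface_res_prev"]))
    loss_ig = sum(mse(g, g_prev) for g, g_prev in
                  zip(time_grads(net_k, *batch["iface_xt"]), batch["iface_grad_prev"]))
    return w_r * loss_r + w_pib * loss_pib + w_ir * loss_ir + w_ig * loss_ig
```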
Finally, these predicted solutions in each subdomain are stitched together, and the training errors are measured by the relative \(L^{2}\)-norm errors: \[\mathrm{Error}_{u}=\frac{\sqrt{\sum_{i=1}^{N}|u_{i}-\hat{u}_{i}|^{2}}}{\sqrt{\sum_{i=1}^{N}|u_{i}|^{2}}}\,,\qquad\mathrm{Error}_{v}=\frac{\sqrt{\sum_{i=1}^{N}|v_{i}-\hat{v}_{i}|^{2}}}{\sqrt{\sum_{i=1}^{N}|v_{i}|^{2}}}.\] In the above PINN algorithm, the Adam and limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) optimizers are employed to train parameters for minimizing the loss function, where the former is one version of the stochastic gradient descent method and the latter is a full-batch gradient descent optimization algorithm with the aid of a quasi-Newton method. For smooth and regular solutions, the Adam optimizer is first-order accurate and robust, while the L-BFGS optimizer can seek a better solution due to its second-order accuracy. Therefore, one would first apply the Adam optimizer to achieve a small value of the loss function, and then switch to the L-BFGS optimizer to pursue a smaller loss function [4, 16]. In addition, weights are provided with Xavier initialization, biases are initialized to zeros, and the activation function is selected as the hyperbolic tangent (tanh) function. In particular, it is better to take the optimized weights and biases from the previous stage as the initialization for the next stage in the XPINN algorithm when both networks have the same numbers of hidden layers and neurons per layer. Therefore, in the following experiments for several different types of data-driven solutions, we uniformly construct the NN with 7 hidden layers and 40 neurons per layer in each subdomain. The recommended weights in the loss function are taken as \(\mathbf{W}_{R}=\mathbf{W}_{IB}=1\) and \(\mathbf{W}_{R}^{(k)}=\mathbf{W}_{PIB}^{(k)}=1\), \(\mathbf{W}_{IR}^{(k)}=0.00001\), \(\mathbf{W}_{IG}^{(k)}=0.0001\) in these numerical simulations. At each stage, we first use 5000 steps of Adam optimization (1000 steps for the first stage) with the default learning rate \(10^{-3}\), and then set the maximum iterations of the L-BFGS optimization to 50000. It is mentioned that in the L-BFGS optimization the iteration stops when \[\frac{L_{k}-L_{k+1}}{\max\{|L_{k}|,|L_{k+1}|,1\}}\leq e_{m},\] where \(L_{k}\) denotes the loss in the \(k\)-th step of the L-BFGS optimization and \(e_{m}\) stands for the machine epsilon. Here all the code is based on Python 3.9 and TensorFlow 2.7, and the default float type is always set to 'float64'. In what follows, we will focus on the data-driven bright and dark solitons, breathers and rogue wave solutions of the MT model (1). These localized solutions of arbitrary order \(N\) have been derived exactly in terms of determinants by using the bilinear KP hierarchy reduction technique [84; 85]. For these types of solutions, only the data-driven lower-order cases are considered here; the higher-order cases can be performed similarly, but with more complicated computations. In particular, we will apply the classical PINN to the simple one-soliton and first-order rogue wave solutions, and the XPINN to two-soliton, one-breather and second-order rogue wave solutions.
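Before moving to the experiments, note that the reported errors can be reproduced with a one-line NumPy helper (a sketch, assuming the exact and learned solutions have been evaluated on the same grid):

```python
import numpy as np

def rel_l2(exact, pred):
    """Relative L^2-norm error Error_u (or Error_v) over the space-time grid."""
    return np.linalg.norm(exact - pred) / np.linalg.norm(exact)
```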
### Data-driven bright one- and two-soliton solutions

The formulae of the bright one-soliton solutions of the MT model (1) are expressed in the form [84] \[u=\frac{\mathrm{i}\alpha_{1}^{*}e^{\xi_{1}}}{\omega_{1}\left[1-\frac{\mathrm{i}|\alpha_{1}|^{2}\omega_{1}^{*}e^{\xi_{1}+\xi_{1}^{*}}}{(\omega_{1}+\omega_{1}^{*})^{2}}\right]},\qquad v=\frac{\alpha_{1}^{*}e^{\xi_{1}}}{1+\frac{\mathrm{i}|\alpha_{1}|^{2}\omega_{1}e^{\xi_{1}+\xi_{1}^{*}}}{(\omega_{1}+\omega_{1}^{*})^{2}}}, \tag{11}\] where \(\xi_{1}=\omega_{1}x-\frac{1}{\omega_{1}}t+\xi_{10}\), and \(\omega_{1}\), \(\alpha_{1}\), \(\xi_{10}\) are arbitrary complex constants. The symbol \({}^{*}\) stands for the complex conjugate hereafter. To simulate the bright one-soliton, the parameters in the exact solutions (11) are chosen as \(\alpha_{1}=1\), \(\omega_{1}=1+3\mathrm{i}\) and \(\xi_{10}=0\). The intervals of the computational domain \([L_{0},L_{1}]\) and \([T_{0},T_{1}]\) are taken as \([-2,2]\) and \([-3,3]\), respectively. Then the initial and Dirichlet boundary conditions in (2) are presented explicitly. By taking the grid points for the space-time region as \(400\times 600\) with equal step lengths, we get the datasets of the discretized initial-boundary values. For these simple one-soliton solutions, we employ the original PINN algorithm to conduct the numerical experiment. Here, \(N_{IB}=1000\) training points are randomly selected from the initial-boundary datasets and \(N_{R}=20000\) collocation points are generated in the whole domain by using the Latin hypercube sampling (LHS) method. By optimizing the learnable parameters continually in the NN, we arrive at the learned bright one-soliton solutions \(\hat{u}\) and \(\hat{v}\). The relative \(L^{2}\)-norm errors for \((|\hat{u}|,|\hat{v}|)\) are \((1.183\mathrm{e}{-03},7.085\mathrm{e}{-04})\). Figs. 3(a) and 3(b) exhibit the three-dimensional (3D) structure and two-dimensional (2D) density profiles of the learned bright one-soliton for the solutions \((|\hat{u}|,|\hat{v}|)\), respectively. Fig. 3(c) shows the 2D density profiles of the point-wise absolute errors between the exact and learned solutions. The maximum point-wise absolute errors are \((3.621\mathrm{e}{-03},2.046\mathrm{e}{-02})\), which appear near the central peaks of the soliton where the gradients are high.
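The initial-boundary datasets above are obtained by evaluating Eq. (11) on the grid. A minimal NumPy sketch with this subsection's parameter choices (\(\alpha_{1}=1\), \(\omega_{1}=1+3\mathrm{i}\), \(\xi_{10}=0\)) is given below; a quick sanity check is to substitute the returned values into Eq. (1) by finite differences.

```python
import numpy as np

a1, w1 = 1.0, 1.0 + 3.0j  # alpha_1 and omega_1 of the experiment

def bright_one_soliton(x, t):
    """Exact (u, v) of Eq. (11) on the given space-time points (xi_10 = 0)."""
    xi = w1 * x - t / w1
    E = np.exp(xi + np.conj(xi))                      # e^{xi_1 + xi_1^*}
    d = np.abs(a1)**2 * E / (w1 + np.conj(w1))**2
    u = 1j * np.conj(a1) * np.exp(xi) / (w1 * (1 - 1j * np.conj(w1) * d))
    v = np.conj(a1) * np.exp(xi) / (1 + 1j * w1 * d)
    return u, v
```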
The expressions for the bright two-soliton solutions of the MT model (1) are given by \[u=\frac{g}{f^{*}}\,,\quad v=\frac{h}{f}\,, \tag{12}\] where the tau functions \(f\), \(g\) and \(h\) are defined as \[\begin{split} f&=1+c_{11^{*}}e^{\xi_{1}+\xi_{1}^{*}}+c_{12^{*}}e^{\xi_{1}+\xi_{2}^{*}}+c_{21^{*}}e^{\xi_{2}+\xi_{1}^{*}}+c_{22^{*}}e^{\xi_{2}+\xi_{2}^{*}}+c_{121^{*}2^{*}}e^{\xi_{1}+\xi_{2}+\xi_{1}^{*}+\xi_{2}^{*}},\\ g&=\frac{\mathrm{i}\alpha_{1}^{*}}{\omega_{1}}e^{\xi_{1}}+\frac{\mathrm{i}\alpha_{2}^{*}}{\omega_{2}}e^{\xi_{2}}-\frac{\mathrm{i}\omega_{1}^{*}}{\omega_{1}\omega_{2}}c_{121^{*}}e^{\xi_{1}+\xi_{2}+\xi_{1}^{*}}-\frac{\mathrm{i}\omega_{2}^{*}}{\omega_{1}\omega_{2}}c_{122^{*}}e^{\xi_{1}+\xi_{2}+\xi_{2}^{*}},\\ h&=\alpha_{1}^{*}e^{\xi_{1}}+\alpha_{2}^{*}e^{\xi_{2}}+c_{121^{*}}e^{\xi_{1}+\xi_{2}+\xi_{1}^{*}}+c_{122^{*}}e^{\xi_{1}+\xi_{2}+\xi_{2}^{*}}\,,\end{split} \tag{13}\] with \[c_{ij^{*}}=\frac{\mathrm{i}\omega_{i}\alpha_{i}^{*}\alpha_{j}}{\left(\omega_{i}+\omega_{j}^{*}\right)^{2}},\quad c_{12i^{*}}=\left(\omega_{1}-\omega_{2}\right)\omega_{i}^{*}\left[\frac{\alpha_{2}^{*}c_{1i^{*}}}{\omega_{1}\left(\omega_{2}+\omega_{i}^{*}\right)}-\frac{\alpha_{1}^{*}c_{2i^{*}}}{\omega_{2}\left(\omega_{1}+\omega_{i}^{*}\right)}\right]\,,\] \[c_{121^{*}2^{*}}=\left|\omega_{1}-\omega_{2}\right|^{2}\left[\frac{c_{11^{*}}c_{22^{*}}}{\left(\omega_{1}+\omega_{2}^{*}\right)\left(\omega_{2}+\omega_{1}^{*}\right)}-\frac{c_{12^{*}}c_{21^{*}}}{\left(\omega_{1}+\omega_{1}^{*}\right)\left(\omega_{2}+\omega_{2}^{*}\right)}\right]\,.\] Here \(\xi_{i}=\omega_{i}x-\frac{1}{\omega_{i}}t+\xi_{i0}\), and \(\omega_{i}\), \(\alpha_{i}\), \(\xi_{i0}\) (\(i=1,2\)) are arbitrary complex constants.

Fig. 4: The train loss of the data-driven bright two-soliton solution in each subdomain with Adam (blue) and L-BFGS (orange) optimizations.

Fig. 3: The data-driven bright one-soliton solution of the MT model (2): (a) and (b) The 3D and 2D density profiles of the learned one-soliton, respectively; (c) The 2D density profiles of the point-wise absolute errors between the exact and learned solutions, \(\mathrm{Error}_{u}=|u-\hat{u}|\) and \(\mathrm{Error}_{v}=|v-\hat{v}|\).

For the bright two-soliton solutions, the parameters are taken as \(\alpha_{1}=\alpha_{2}=\frac{3}{2}\), \(\omega_{1}=\frac{3}{2}+\frac{3}{2}\mathrm{i}\), \(\omega_{2}=\frac{3}{5}+\frac{1}{2}\mathrm{i}\) and \(\xi_{10}=\xi_{20}=0\). The computational domains \([L_{0},L_{1}]\) and \([T_{0},T_{1}]\) are defined as \([-5,5]\) and \([-3,3]\), respectively; then we have the explicit initial and Dirichlet boundary conditions in (2). The grid points for the space-time region are set to \(600\times 1200\) with equal step lengths, so the datasets of the discretized initial-boundary values can be obtained. Compared with the bright one-soliton solutions, the bright two-soliton solutions possess relatively complicated structures. These features are not well captured by the traditional PINN algorithm, so we must apply the XPINN algorithm to perform this numerical experiment. The computational domain is divided into 5 subdomains as shown in Table 1, and \([e_{k}^{-},e_{k}^{+}]\) are chosen uniformly as \([5,3]\times\frac{3-(-3)}{1200}\) with \(k=1,2,3,4\) to determine the four small interface zones. At each stage, the numbers of training points \(N_{PIB}^{(k)}\) are given in Table 1, and the numbers of collocation points are set to \(N_{R}^{(k)}=20000\) uniformly.
These points are generated by using the LHS method. To guarantee smooth stitching at the line \(t=t_{k}\), it is necessary to take residual points and gradient points with equal grids in the interface zone \(\Delta\Omega_{k}\). Hence, we simply take these two types of points as the corresponding part of the grid points in the process of discretizing the exact solution. This implies that here the numbers of both types of sample points are \(N_{IR}^{(k)}=N_{IG}^{(k)}=8\times 600=4800\). The learned solutions in each subdomain are first obtained by training the learnable parameters in each NN. The loss function plots at each stage are given in Fig. 4. Furthermore, by stitching together these predicted solutions in sequence, we obtain the learned bright two-soliton solutions \(\hat{u}\) and \(\hat{v}\). The relative \(L^{2}\)-norm errors for \((|\hat{u}|,|\hat{v}|)\) in each subdomain and the whole domain are given in Table 1. The 3D structure and 2D density profiles of the learned bright two-soliton for the solutions \((|\hat{u}|,|\hat{v}|)\) in the whole domain are displayed in Figs. 5(a) and 5(b), respectively. The 2D density profiles of the point-wise absolute errors are depicted in Fig. 5(c), in which the maximum values \((2.312\mathrm{e}{-02},1.559\mathrm{e}{-02})\) occur near the central peaks of the solitons. As we can see from Fig. 5(c), besides the mismatches that appear in the high-gradient regions, slight mismatches along the interface lines emerge due to the different sub-networks. To reduce such mismatches for smoother stitching, one can adequately change the numbers of interface points or the weights of the interface terms in the loss function, or introduce higher-order gradient interface conditions. For the purpose of comparison, we first apply the PINN algorithm directly to learn the bright two-soliton solution. The network structures and numbers of sample points are taken to be the same as in the numerical experiment for the bright one-soliton, but the Adam optimization steps and the maximum iterations of the L-BFGS optimization are adjusted to 20000 and 50000, respectively. The results show that the relative \(L^{2}\)-norm errors and the maximum point-wise absolute errors for \((|\hat{u}|,|\hat{v}|)\) reach \((4.533\mathrm{e}{-01},5.376\mathrm{e}{-01})\) and \((2.951,3.141)\), respectively. Thus, the original PINN seems to be very poor at predicting the bright two-soliton solution. On the other hand, we conduct the numerical experiment for the bright two-soliton by using the XPINN algorithm with interface lines, namely \([e_{k}^{-},e_{k}^{+}]=[0,0]\), where the domain decomposition, all network settings and numbers of sample points are the same as those of Fig. 5. For the training results, the relative \(L^{2}\)-norm errors for \((|\hat{u}|,|\hat{v}|)\) in each subdomain and the whole domain are listed in Table 2, and the maximum values of the point-wise absolute errors reach \((2.410\mathrm{e}{-02},2.594\mathrm{e}{-02})\). Hence, the XPINN algorithm with small interface zones performs slightly better in prediction than the one with interface lines.

Fig. 5: The data-driven bright two-soliton solution of the MT model (2): (a) and (b) The 3D and 2D density profiles of the learned two-soliton, respectively; (c) The 2D density profiles of the point-wise absolute errors between the exact and learned solutions, \(\mathrm{Error}_{u}=|u-\hat{u}|\) and \(\mathrm{Error}_{v}=|v-\hat{v}|\).
Furthermore, we discuss the influence of the size of the small interface zone \(\Delta\Omega_{k}\), which is essentially governed by the thickness \([e_{k}^{-},e_{k}^{+}]\). Here we write \([e_{k}^{-},e_{k}^{+}]=[h_{k}^{-},h_{k}^{+}]\times\frac{3-(-3)}{1200}\) with integers \(h_{k}^{-}\) and \(h_{k}^{+}\). In order to preserve the smoothness of stitching at the line \(t=t_{k}\) thoroughly, we need to choose uniform sampling points with the same grid width. Hence, the residual points and gradient points in the interface zone \(\Delta\Omega_{k}\) are taken to be the same as those of the grid points in the discretization of the exact solution, which suggests that the numbers of both types of points can be calculated as \(N_{IR}^{(k)}=N_{IG}^{(k)}=(h_{k}^{-}+h_{k}^{+})\times 600\). Then, with respect to different sizes of the interface zone \(\Delta\Omega_{k}\), we perform the training experiments for the bright two-soliton, where the domain decomposition, network architectures, training-step settings and numbers of sample points are the same as those in Fig. 5. According to the results shown in Table 3, one can see that the relative \(L^{2}\)-norm errors increase gradually as the thickness of the interface zone enlarges.

### Data-driven dark one- and two-soliton solutions

The dark one-soliton solutions of the MT model (1) are written as [84] \[u=\rho_{1}e^{\mathrm{i}\phi}\frac{1-\frac{\mathrm{i}\omega_{1}^{*}K_{1}}{\omega_{1}+\omega_{1}^{*}}e^{\xi_{1}+\xi_{1}^{*}}}{1-\frac{\mathrm{i}\omega_{1}^{*}}{\omega_{1}+\omega_{1}^{*}}e^{\xi_{1}+\xi_{1}^{*}}},\qquad v=\rho_{2}e^{\mathrm{i}\phi}\frac{1-\frac{\mathrm{i}\omega_{1}^{*}H_{1}}{\omega_{1}+\omega_{1}^{*}}e^{\xi_{1}+\xi_{1}^{*}}}{1+\frac{\mathrm{i}\omega_{1}}{\omega_{1}+\omega_{1}^{*}}e^{\xi_{1}+\xi_{1}^{*}}},\qquad\phi=\rho\left(\frac{\rho_{2}}{\rho_{1}}x+\frac{\rho_{1}}{\rho_{2}}t\right), \tag{14}\] with \[K_{1}=-\frac{\omega_{1}-\mathrm{i}\alpha}{\omega_{1}^{*}+\mathrm{i}\alpha},\qquad H_{1}=-\frac{\omega_{1}-\mathrm{i}\alpha(1+\rho_{1}\rho_{2})}{\omega_{1}^{*}+\mathrm{i}\alpha(1+\rho_{1}\rho_{2})},\qquad\xi_{1}=\frac{\rho_{2}}{\alpha\rho_{1}}\omega_{1}x-\frac{\rho_{1}\alpha\rho}{\rho_{2}}\frac{t}{\omega_{1}}+\xi_{10}.\] Here \(\alpha\), \(\rho_{1}\), \(\rho_{2}\) are real constants with the restriction \(\rho=1+\rho_{1}\rho_{2}\neq 0\), and \(\omega_{1}\), \(\xi_{10}\) are complex constants. These parameters have to satisfy the following algebraic constraint: \[|\omega_{1}-\mathrm{i}\alpha\rho|^{2}=\alpha^{2}\rho_{1}\rho_{2}\rho\,.\] For the dark one-soliton solutions, the parameter values \(\rho_{1}=\rho_{2}=\alpha=1\), \(\omega_{1}=-\sqrt{2}+2\mathrm{i}\) and \(\xi_{10}=0\) are chosen for the following numerical simulation. In the computational domain, the intervals \([L_{0},L_{1}]\) and \([T_{0},T_{1}]\) correspond to \([-3,3]\) and \([-3,3]\). The initial and Dirichlet boundary conditions in (2) are obtained explicitly from the above exact solutions. Similarly, the datasets of the discretized initial-boundary values are provided by taking the grid points \(400\times 600\) with equal step lengths in the space-time region. In this simple case, the classical PINN algorithm is used to conduct the numerical experiment for the dark one-soliton solutions. Furthermore, \(N_{IB}=1000\) training points are randomly chosen from the initial-boundary datasets and \(N_{R}=20000\) collocation points are generated in the whole domain by using the LHS method.
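The paper does not tie the LHS sampling to a specific library; one minimal way to draw such collocation points, shown here with SciPy's quasi-Monte Carlo module for the \([-3,3]\times[-3,3]\) domain of this experiment, is:

```python
import numpy as np
from scipy.stats import qmc

# Latin hypercube sample of N_R = 20000 collocation points in [L0, L1] x [T0, T1].
sampler = qmc.LatinHypercube(d=2, seed=0)
pts = qmc.scale(sampler.random(n=20000), l_bounds=[-3.0, -3.0], u_bounds=[3.0, 3.0])
x_r, t_r = pts[:, 0], pts[:, 1]   # residual coordinates fed to the PINN
```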
After training the learnable parameters in the NN, the predicted dark one-soliton solutions \(\hat{u}\) and \(\hat{v}\) are successfully acquired. For these solutions, the relative \(L^{2}\)-norm errors for \((|\hat{u}|,|\hat{v}|)\) reach \((3.998\mathrm{e}{-04},7.319\mathrm{e}{-04})\). In Figs. 6(a) and 6(b), we show the 3D structure and 2D density profiles of the learned dark one-soliton solutions \((|\hat{u}|,|\hat{v}|)\), respectively. The 2D density profiles of the point-wise absolute errors are displayed in Fig. 6(c), in which the maximum point-wise absolute errors \((5.333\mathrm{e}{-03},4.492\mathrm{e}{-03})\) are found in the high-gradient regions near the central peak of the soliton. The dark two-soliton solutions of the MT model (1) are expressed as \[u=\rho_{1}e^{\mathrm{i}\phi}\frac{g}{f^{*}},\qquad v=\rho_{2}e^{\mathrm{i}\phi}\frac{h}{f},\qquad\phi=\rho\left(\frac{\rho_{2}}{\rho_{1}}x+\frac{\rho_{1}}{\rho_{2}}t\right), \tag{15}\] \begin{table} \begin{tabular}{c c|c c c c c c} \hline \multicolumn{2}{c|}{Layers-neurons} & 5-40 & 5-60 & 7-40 & 7-60 & 9-40 & 9-60 \\ \hline \multirow{2}{*}{\(L^{2}\) error} & \(|\hat{u}|\) & 3.151e-03 & 3.660e-03 & 1.942e-03 & 2.922e-03 & 3.794e-03 & 2.448e-03 \\ & \(|\hat{v}|\) & 3.372e-03 & 3.443e-03 & 1.992e-03 & 3.418e-03 & 3.968e-03 & 2.131e-03 \\ \hline \end{tabular} \end{table} Table 4: Relative \(L^{2}\)-norm errors of the data-driven bright two-soliton solution in the whole domain with different numbers of hidden layers and neurons. \begin{table} \begin{tabular}{c c|c c c} \hline \multicolumn{2}{c|}{Learning rate} & \(8\times 10^{-4}\) & \(10^{-3}\) & \(1.2\times 10^{-3}\) \\ \hline \multirow{2}{*}{\(L^{2}\) error} & \(|\hat{u}|\) & 3.322e-03 & 1.942e-03 & 2.766e-03 \\ & \(|\hat{v}|\) & 3.265e-03 & 1.992e-03 & 3.132e-03 \\ \hline \end{tabular} \end{table} Table 5: Relative \(L^{2}\)-norm errors of the data-driven bright two-soliton solution in the whole domain with different learning rates in Adam optimization. where tau functions \(f\), \(g\) and \(h\) are defined as \[\begin{split}& f=1+d_{11^{*}}e^{\xi_{1}+\xi_{1}^{*}}+d_{22^{*}}e^{\xi_{2}+\xi_{2}^{*}}+d_{11^{*}}d_{22^{*}}\Omega_{12}e^{\xi_{1}+\xi_{2}+\xi_{1}^{*}+\xi_{2}^{*}},\\ & g=1+d_{11^{*}}^{*}K_{1}e^{\xi_{1}+\xi_{1}^{*}}+d_{22^{*}}^{*}K_{2}e^{\xi_{2}+\xi_{2}^{*}}+d_{11^{*}}^{*}d_{22^{*}}^{*}K_{1}K_{2}\Omega_{12}e^{\xi_{1}+\xi_{2}+\xi_{1}^{*}+\xi_{2}^{*}},\\ & h=1+d_{11^{*}}^{*}H_{1}e^{\xi_{1}+\xi_{1}^{*}}+d_{22^{*}}^{*}H_{2}e^{\xi_{2}+\xi_{2}^{*}}+d_{11^{*}}^{*}d_{22^{*}}^{*}H_{1}H_{2}\Omega_{12}e^{\xi_{1}+\xi_{2}+\xi_{1}^{*}+\xi_{2}^{*}},\end{split} \tag{16}\] with \[\begin{split}& d_{ii^{*}}=\frac{\mathrm{i}\omega_{i}}{\omega_{i}+\omega_{i}^{*}},\;\;K_{i}=-\frac{\omega_{i}-\mathrm{i}\alpha}{\omega_{i}^{*}+\mathrm{i}\alpha},\;\;H_{i}=-\frac{\omega_{i}-\mathrm{i}\alpha\rho}{\omega_{i}^{*}+\mathrm{i}\alpha\rho},\\ &\Omega_{12}=\frac{|\omega_{1}-\omega_{2}|^{2}}{|\omega_{1}+\omega_{2}^{*}|^{2}},\;\;\xi_{i}=\frac{\rho_{2}}{\alpha\rho_{1}}\omega_{i}x-\frac{\alpha\rho_{1}\rho}{\rho_{2}}\frac{t}{\omega_{i}}+\xi_{i0},\;\;i=1,2.\end{split}\] Here \(\omega_{i}\), \(\xi_{i0}\) (\(i=1,2\)) are complex constants and \(\alpha\), \(\rho_{1}\), \(\rho_{2}\) are real constants with \(\rho=1+\rho_{1}\rho_{2}\neq 0\). These parameters need to satisfy the algebraic constraints: \[|\omega_{i}-\mathrm{i}\alpha\rho|^{2}=\alpha^{2}\rho_{1}\rho_{2}\rho,\qquad i=1,2.\] Following the detailed analysis of the maximum amplitudes of the dark one-soliton [84], we know that both components \(u\) and \(v\) allow dark and anti-dark solitons. 
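The discretized initial-boundary data sets used in all of these experiments are produced the same way: an exact solution is evaluated on an equally spaced space-time grid and the initial and Dirichlet boundary values are extracted. Below is a minimal sketch with names of our own choosing; `u_exact` stands for any callable evaluating one of the closed-form solutions above.

```python
import numpy as np

def initial_boundary_data(u_exact, x_range, t_range, nx, nt):
    # Evaluate u_exact(x, t) on an nx-by-nt grid and collect the initial
    # line t = t0 and the two boundary lines x = x0 and x = x1.
    x = np.linspace(*x_range, nx)
    t = np.linspace(*t_range, nt)
    init = np.column_stack([x, np.full(nx, t[0])])
    left = np.column_stack([np.full(nt, x[0]), t])
    right = np.column_stack([np.full(nt, x[-1]), t])
    pts = np.vstack([init, left, right])
    return pts, u_exact(pts[:, 0], pts[:, 1])

# Example: draw the N_IB = 1000 random training points used above.
# pts, vals = initial_boundary_data(u_dark1, (-3, 3), (-3, 3), 400, 600)
# idx = np.random.default_rng(0).choice(len(pts), 1000, replace=False)
```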
Thus the dark two-soliton solutions could support three types of collisions: dark-dark solitons, dark-anti-dark solitons, and anti-dark-anti-dark ones. For the purpose of illustration, we will proceed here to learn only two cases: the dark-dark solitons and the dark-anti-dark solitons. The parameters in both cases are chosen as \(\omega_{2}=-\sqrt{2}+2\mathrm{i}\), \(\xi_{10}=\frac{1}{4}\) for the dark-dark solitons and \(\omega_{2}=\sqrt{2}+2\mathrm{i}\), \(\xi_{10}=-\frac{1}{2}\) for the dark-anti-dark solitons, as well as the common values \(\rho_{1}=\rho_{2}=\alpha=1\), \(\omega_{1}=-1+\mathrm{i}\), \(\xi_{20}=0\). \begin{table} \begin{tabular}{c c c c c c c c} \hline \multirow{2}{*}{\(L^{2}\) error} & \(t\)-domain & \([-4,-2.5]\) & \([-2.5,-1]\) & \([-1,1]\) & \([1,2.6]\) & \([2.6,4]\) & \([-4,4]\) \\ \cline{2-8} & \(N_{IB}+(N_{PI})\) & 1000 & 2000 & 2500 & 3000 & 3500 & \\ \hline \(|\hat{u}|\) & & 1.270e-03 & 2.582e-03 & 2.637e-03 & 4.937e-03 & 7.284e-03 & 4.157e-03 \\ \(|\hat{v}|\) & & 3.838e-04 & 8.310e-04 & 1.874e-03 & 3.844e-03 & 7.434e-03 & 3.655e-03 \\ \hline \end{tabular} \end{table} Table 6: Relative \(L^{2}\)-norm errors of the data-driven dark-dark soliton solution in each subdomain and the whole domain. Figure 6: The data-driven dark one-soliton solution of the MT model (2): (a) and (b) The 3D and 2D density profiles of the learned one-soliton, respectively; (c) The 2D density profiles of the point-wise absolute errors between exact and learned solutions, \(\mathrm{Error}_{u}=|u-\hat{u}|\) and \(\mathrm{Error}_{v}=|v-\hat{v}|\). In the following data-driven experiments, the computational domains of both cases \([L_{0},L_{1}]\times[T_{0},T_{1}]\) are taken as \([-6,6]\times[-4,4]\) and \([-5,5]\times[-4,4]\), respectively. From the definitions given in (2), both types of initial-boundary conditions can be written explicitly. The data sets of the discretized initial-boundary values are generated by taking \(600\times 1200\) grid points with equal step lengths in the space-time regions. Similar to the bright two-soliton solutions, the dark two-soliton solutions with relatively complicated structures force us to use the XPINN algorithm to pursue better numerical simulation results. For both cases, we divide the whole computational domain into 5 subdomains as given in Tables 6 and 7, respectively, and define \([\epsilon_{k}^{-},\epsilon_{k}^{+}]=[5,3]\times\frac{T_{1}-T_{0}}{1200}\) uniformly with \(k=1,2,3,4\) for the corresponding small interface zones. In Tables 6 and 7, we list the numbers of training points \(N_{PIB}^{(k)}\) at each stage. The number of collocation points in each subdomain is \(N_{R}^{(k)}=20000\) uniformly, which are produced by using the LHS method. To ensure smooth stitching at the line \(t=t_{k}\), we take residual points and gradient points in each interface zone as the corresponding part of the grid points in the discretisation of the exact solution. Thus the numbers of the two types of sample points are \(N_{IR}^{(k)}=N_{IG}^{(k)}=4800\). After training the learnable parameters in each NN and stitching together the predicted solutions in all subdomains, both types of dark two-soliton solutions \(\hat{u}\) and \(\hat{v}\) are obtained in the whole computational domain. In each subdomain and the whole domain, the relative \(L^{2}\)-norm errors for \((|\hat{u}|,|\hat{v}|)\) are listed in Tables 6 and 7. 
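Once all sub-networks are trained, the whole-domain prediction is assembled by evaluating each grid point with the network that owns its time slice. A minimal sketch of this stitching step follows; the helper name and the convention that each model is a callable on `(x, t)` arrays are ours.

```python
import numpy as np

def stitch_predictions(models, t_edges, x, t):
    # models[k] predicts on subdomain t in [t_edges[k], t_edges[k+1]);
    # the last subdomain also owns its right endpoint.
    u = np.empty_like(t, dtype=complex)
    for k, model in enumerate(models):
        last = (k == len(models) - 1)
        mask = (t >= t_edges[k]) & ((t <= t_edges[k + 1]) if last
                                    else (t < t_edges[k + 1]))
        u[mask] = model(x[mask], t[mask])
    return u

# Example for the dark-dark case: five sub-networks over t in [-4, 4].
# u_hat = stitch_predictions(nets, [-4, -2.5, -1, 1, 2.6, 4],
#                            X.ravel(), T.ravel())
```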
For the dark-dark solitons and the dark-anti-dark solitons, the 3D structure and 2D density profiles of the learned solutions as well as the 2D density profiles of the point-wise absolute errors are shown in Figs. 7 and 8, respectively. In the comparisons between the exact solutions and the learned ones for the two cases, the maximum absolute errors of the solutions \((|\hat{u}|,|\hat{v}|)\) are \((4.940\text{e}{-}02,4.699\text{e}{-}02)\) and \((6.169\text{e}{-}02,3.082\text{e}{-}02)\), which occur near the central peaks and valleys of the solitons, corresponding to the high-gradient regions. In addition, we can observe that apart from the large errors occurring near the wave peaks and valleys, the adjacent sub-networks with different architectures lead to certain errors along the interface lines. This type of mismatch can be reduced by adding more interface information, such as increasing the numbers of interface points and the weights of the interface terms in the loss function, or imposing additional interface conditions. ### Data-driven one-breather solutions The one-breather solutions of the MT model (1) take the same form as the dark two-soliton solutions, namely \[u=\rho_{1}e^{\mathrm{i}\phi}\frac{g}{f^{*}},\qquad v=\rho_{2}e^{\mathrm{i}\phi}\frac{h}{f},\qquad\phi=\rho\left(\frac{\rho_{2}}{\rho_{1}}x+\frac{\rho_{1}}{\rho_{2}}t\right), \tag{17}\] where tau functions \(f\), \(g\) and \(h\) are given by \[\begin{split} f&=\frac{e^{\zeta_{1}+\zeta_{1}^{*}}}{\omega_{11}+\omega_{11}^{*}}\frac{\omega_{11}}{\omega_{21}}\frac{\omega_{11}^{*}}{\omega_{21}^{*}}+\frac{e^{\zeta_{1}}}{\omega_{11}+\omega_{21}^{*}}\frac{\omega_{11}}{\omega_{21}}+\frac{e^{\zeta_{1}^{*}}}{\omega_{21}+\omega_{11}^{*}}\frac{\omega_{11}^{*}}{\omega_{21}^{*}}+\frac{1}{\omega_{21}+\omega_{21}^{*}},\\ g&=\Theta_{1}\Theta_{1}^{*}\frac{e^{\zeta_{1}+\zeta_{1}^{*}}}{\omega_{11}+\omega_{11}^{*}}\frac{\omega_{11}}{\omega_{21}}\frac{\omega_{11}^{*}}{\omega_{21}^{*}}+\Theta_{1}\frac{e^{\zeta_{1}}}{\omega_{11}+\omega_{21}^{*}}\frac{\omega_{11}}{\omega_{21}}+\Theta_{1}^{*}\frac{e^{\zeta_{1}^{*}}}{\omega_{21}+\omega_{11}^{*}}\frac{\omega_{11}^{*}}{\omega_{21}^{*}}+\frac{1}{\omega_{21}+\omega_{21}^{*}},\\ h&=\Lambda_{1}\Lambda_{1}^{*}\frac{e^{\zeta_{1}+\zeta_{1}^{*}}}{\omega_{11}+\omega_{11}^{*}}\frac{\omega_{11}}{\omega_{21}}\frac{\omega_{11}^{*}}{\omega_{21}^{*}}+\Lambda_{1}\frac{e^{\zeta_{1}}}{\omega_{11}+\omega_{21}^{*}}\frac{\omega_{11}}{\omega_{21}}+\Lambda_{1}^{*}\frac{e^{\zeta_{1}^{*}}}{\omega_{21}+\omega_{11}^{*}}\frac{\omega_{11}^{*}}{\omega_{21}^{*}}+\frac{1}{\omega_{21}+\omega_{21}^{*}},\end{split} \tag{18}\] with \[\Theta_{1}=\frac{\omega_{11}-\mathrm{i}\alpha}{\omega_{21}-\mathrm{i}\alpha},\qquad\Lambda_{1}=\frac{\omega_{11}-\mathrm{i}\alpha\rho}{\omega_{21}-\mathrm{i}\alpha\rho},\qquad\zeta_{1}=(\omega_{11}-\omega_{21})\left(\frac{\rho_{2}}{\alpha\rho_{1}}x+\frac{\alpha\rho_{1}\rho}{\rho_{2}\,\omega_{11}\omega_{21}}t\right)+\zeta_{10}.\] Here \(\omega_{11}\), \(\omega_{21}\), \(\zeta_{10}\) are complex constants and \(\alpha\), \(\rho_{1}\), \(\rho_{2}\) are real constants with \(\rho=1+\rho_{1}\rho_{2}\neq 0\), in which these parameters need to obey the algebraic constraint: \[(\omega_{11}-\mathrm{i}\alpha\rho)(\omega_{21}-\mathrm{i}\alpha\rho)=-\alpha^{2}\rho_{1}\rho_{2}\rho.\] As reported in Ref. [84], the general breather of the MT model (1) contains two special cases: the Akhmediev breather and the Kuznetsov-Ma breather. 
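Before moving on to the breather experiments, the three interface terms used in the small interface zones (pseudo initial, residual continuity, and gradient continuity) can be written compactly with automatic differentiation. The following is a PyTorch sketch under our own naming, not the authors' released code; `residual_fn(net, xt)` is assumed to evaluate the PDE residual of a network at the given points.

```python
import torch

def interface_losses(net_prev, net_curr, xt_iface, residual_fn):
    # xt_iface: interface-zone points shared by two adjacent subdomains.
    xt = xt_iface.clone().requires_grad_(True)
    u_prev, u_curr = net_prev(xt), net_curr(xt)

    # Pseudo initial condition: the new network matches the trained one.
    mse_pi = ((u_curr - u_prev.detach()) ** 2).mean()

    # Residual continuity: both networks satisfy the PDE consistently.
    mse_res = ((residual_fn(net_curr, xt) - residual_fn(net_prev, xt)) ** 2).mean()

    # First-order gradient continuity across the zone.
    grad = lambda y: torch.autograd.grad(y.sum(), xt, create_graph=True)[0]
    mse_grad = ((grad(u_curr) - grad(u_prev)) ** 2).mean()
    return mse_pi, mse_res, mse_grad
```

The three terms are then added to the loss with their respective weights, which is consistent with the weighted-sum loss structure used elsewhere in the text.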
For the sake of illustration, we will only consider the general one-breather and the Kuznetsov-Ma breather here. In the two cases, the parameters are taken as \(\rho_{1}=2\rho_{2}=\alpha=1\), \(\omega_{11}=4+2\mathrm{i}\), \(\omega_{21}=-\frac{12}{65}+\frac{99}{65}\mathrm{i}\), \(\zeta_{10}=0\) for the general one-breather, and \(\rho_{1}=-2\rho_{2}=1\), \(\alpha=2\), \(\omega_{11}=5+\mathrm{i}\), \(\omega_{21}=\frac{1}{5}+\mathrm{i}\), \(\zeta_{10}=0\) for the Kuznetsov-Ma breather, respectively. \begin{table} \begin{tabular}{c c c c c c c c} \hline \multirow{2}{*}{\(L^{2}\) error} & \(t\)-domain & \([-7,-4.94]\) & \([-4.94,-2.88]\) & \([-2.88,-0.13]\) & \([-0.13,2.17]\) & \([2.17,4]\) & \([-7,4]\) \\ \cline{2-8} & \(N_{IB}+(N_{PI})\) & 1000 & 2000 & 2500 & 3000 & 3500 & \\ \hline \(|\hat{u}|\) & & 4.582e-04 & 2.405e-03 & 4.290e-03 & 3.358e-03 & 3.174e-03 & 3.112e-03 \\ \(|\hat{v}|\) & & 5.341e-04 & 4.362e-03 & 1.304e-02 & 6.867e-03 & 3.140e-03 & 7.575e-03 \\ \hline \end{tabular} \end{table} Table 8: Relative \(L^{2}\)-norm errors of the data-driven one-breather solution in each subdomain and the whole domain. \begin{table} \begin{tabular}{c c c c c c c c} \hline \multirow{2}{*}{\(L^{2}\) error} & \(t\)-domain & \([-4,-2.5]\) & \([-2.5,-1]\) & \([-1,1]\) & \([1,2.6]\) & \([2.6,4]\) & \([-4,4]\) \\ \cline{2-8} & \(N_{IB}+(N_{PI})\) & 1000 & 2000 & 2500 & 3000 & 3500 & \\ \hline \(|\hat{u}|\) & & 6.277e-04 & 1.468e-03 & 1.834e-03 & 2.622e-03 & 4.318e-03 & 2.449e-03 \\ \(|\hat{v}|\) & & 4.381e-04 & 8.711e-04 & 1.135e-03 & 2.161e-03 & 3.759e-03 & 1.959e-03 \\ \hline \end{tabular} \end{table} Table 7: Relative \(L^{2}\)-norm errors of the data-driven dark-antidark soliton solution in each subdomain and the whole domain. Fig. 8: The data-driven dark-antidark soliton solution of the MT model (2): (a) and (b) The 3D and 2D density profiles of the learned dark-antidark soliton, respectively; (c) The 2D density profiles of the point-wise absolute errors between exact and learned solutions, \(\text{Error}_{u}=|u-\hat{u}|\) and \(\text{Error}_{v}=|v-\hat{v}|\). For the data-driven experiments of the one-breather solutions, we set the computational domain \([L_{0},L_{1}]\times[T_{0},T_{1}]\) to \([-3,3]\times[-7,4]\) and \([-3,2]\times[-6,4]\) for the two examples. The initial-boundary conditions of the one-breather solutions can be obtained exactly from the formulae defined in (2). For both types of breather solutions, we still choose \(600\times 1200\) grid points with the same step size to generate the data sets of the discretized initial-boundary values. Similar to the bright and dark two-soliton solutions, we need to employ the XPINN algorithm to approximate the sophisticated breather solutions. More specifically, the whole domains for both types of breather are first divided into 5 subdomains as shown in Tables 8 and 9, respectively, and then the associated small interface zones are determined by defining \([\epsilon_{k}^{-},\epsilon_{k}^{+}]=[5,3]\times\frac{T_{1}-T_{0}}{1200}\) uniformly with \(k=1,2,3,4\). The numbers of training points \(N_{PIB}^{(k)}\) at each stage are given in Tables 8 and 9, and the number of collocation points \(N_{R}^{(k)}\) in each subdomain is set equally to 20000; these points are obtained by using the LHS method. In each interface zone, residual points and gradient points are taken as the corresponding part of the grid points in the discretisation of the exact solution for smooth stitching at the line \(t=t_{k}\). Hence the numbers of the two types of sample points are given by \(N_{IR}^{(k)}=N_{IG}^{(k)}=4800\). 
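Each sub-network in these experiments is optimized with the same two-stage recipe, an Adam warm-up followed by L-BFGS refinement. The sketch below shows one plausible PyTorch implementation; the step counts follow the settings quoted in the text, and the helper name is ours.

```python
import torch

def train_two_stage(net, loss_fn, adam_steps=20000, lbfgs_max_iter=50000):
    # Stage 1: Adam with the default learning rate 1e-3.
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for _ in range(adam_steps):
        opt.zero_grad()
        loss = loss_fn()   # total weighted loss over all sampled points
        loss.backward()
        opt.step()

    # Stage 2: full-batch L-BFGS refinement.
    lbfgs = torch.optim.LBFGS(net.parameters(), max_iter=lbfgs_max_iter,
                              history_size=50, line_search_fn="strong_wolfe")

    def closure():
        lbfgs.zero_grad()
        loss = loss_fn()
        loss.backward()
        return loss

    lbfgs.step(closure)
    return net
```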
By sequentially training the learnable parameters of each NN and combining the learned solutions throughout all subdomains, we obtain both types of predicted one-breather solutions \(\hat{u}\) and \(\hat{v}\) in the whole computational domain. In Tables 8 and 9, we list their relative \(L^{2}\)-norm errors for \((|\hat{u}|,|\hat{v}|)\) in each subdomain and the whole domain. Figs. 9 and 10 exhibit the 3D structure and 2D density profiles of the learned solutions as well as the 2D density profiles of the point-wise absolute errors for the general one-breather and the Kuznetsov-Ma breather, respectively. The maximum values of the absolute errors for the two cases are \((8.944\mathrm{e}{-02},5.460\mathrm{e}{-02})\) and \((3.080\mathrm{e}{-02},6.705\mathrm{e}{-02})\), which occur at the peaks and valleys of the local waves, where the absolute gradients are high. In analogy to the bright and dark two-soliton solutions, one can see that slight mismatches along the interface lines also appear due to the different sub-networks. As mentioned above, we can introduce more interface information to achieve smoother stitching; for example, this can be done by adequately adjusting the numbers of interface points, the weights of the interface terms in the loss function and the sizes of the interface zones, or by imposing additional smoothness conditions such as higher-order gradients in the NN of each subdomain. Figure 9: The data-driven one-breather solution of the MT model (2): (a) and (b) The 3D and 2D density profiles of the learned one-breather, respectively; (c) The 2D density profiles of the point-wise absolute errors between exact and learned solutions, \(\mathrm{Error}_{u}=|u-\hat{u}|\) and \(\mathrm{Error}_{v}=|v-\hat{v}|\). ### Data-driven first- and second-order rogue wave solutions In contrast to the soliton and breather solutions shown above, the rogue wave solutions are expressed in terms of rational polynomials. According to the derivation in [85], the first-order rogue wave solutions read \[u=\rho_{1}e^{\mathrm{i}\phi}\frac{(X_{1}^{*}-\tilde{\rho})(Y_{1}^{*}+\tilde{\rho}^{*})}{X_{1}^{*}Y_{1}^{*}+\frac{1}{4}},\qquad v=\rho_{2}e^{\mathrm{i}\phi}\frac{(X_{1}-1)(Y_{1}+1)}{X_{1}Y_{1}+\frac{1}{4}},\qquad\phi=\rho\left(\frac{\rho_{2}}{\rho_{1}}x+\frac{\rho_{1}}{\rho_{2}}t\right), \tag{19}\] with \[X_{1}=\frac{\rho_{2}\hat{\rho}}{\rho_{1}}x+\frac{\rho_{1}\rho\hat{\rho}}{\rho_{2}(\hat{\rho}+\mathrm{i}\rho)^{2}}t+\frac{1}{2}\frac{\hat{\rho}}{\hat{\rho}+\mathrm{i}\rho},\qquad Y_{1}=\frac{\rho_{2}\hat{\rho}}{\rho_{1}}x+\frac{\rho_{1}\rho\hat{\rho}}{\rho_{2}(\hat{\rho}-\mathrm{i}\rho)^{2}}t-\frac{1}{2}\frac{\hat{\rho}}{\hat{\rho}-\mathrm{i}\rho},\] \[\tilde{\rho}=\frac{\hat{\rho}}{\hat{\rho}-\mathrm{i}\rho_{1}\rho_{2}},\ \ \hat{\rho}=\sqrt{-\rho_{1}\rho_{2}\rho},\ \ \rho=1+\rho_{1}\rho_{2}.\] Here \(\rho_{1}\) and \(\rho_{2}\) are arbitrary real parameters which must satisfy the condition: \(-1<\rho_{1}\rho_{2}<0\). In the first-order rogue wave solutions, we take the parameters \(\rho_{1}=1\) and \(\rho_{2}=-\frac{4}{5}\) to perform the data-driven experiment. For the computational domain, we take the intervals \([L_{0},L_{1}]\) and \([T_{0},T_{1}]\) as \([-2.8,2.8]\) and \([-1.8,1.8]\), respectively. 
Then the initial and Dirichlet boundary conditions in (2) are written explicitly, and the corresponding discretized data sets are generated by imposing \(600\times 400\) grid points with equal step lengths in the space-time domain. Similar to the bright and dark one-soliton solutions, here we utilize the original PINN algorithm to simulate the simple first-order rogue wave solutions. In the initial-boundary data sets and the whole domain, we randomly select \(N_{IB}=1000\) training points and \(N_{R}=20000\) collocation points, respectively, by using the LHS method. By optimising the learnable parameters of the NN through the iterative training, the approximated first-order rogue wave solutions \(\hat{u}\) and \(\hat{v}\) are successfully obtained. In this case, the relative \(L^{2}\)-norm errors for \((|\hat{u}|,|\hat{v}|)\) are (4.283e\(-04\), 5.764e\(-04\)). Fig. 11 exhibits the 3D structure and 2D density profiles [Figs. 11(a) and 11(b)] of the learned first-order rogue wave solutions \((|\hat{u}|,|\hat{v}|)\), and the 2D density profiles of the point-wise absolute errors [Fig. 11(c)], respectively. \begin{table} \begin{tabular}{c c c c c c c c} \hline \multirow{2}{*}{\(L^{2}\) error} & \(t\)-domain & \([-6,-4.13]\) & \([-4.13,-2.25]\) & \([-2.25,0.25]\) & \([0.25,2.33]\) & \([2.33,4]\) & \([-6,4]\) \\ \cline{2-8} & \(N_{IB}+(N_{PI})\) & 1000 & 2000 & 2500 & 3000 & 3500 & \\ \hline \(|\hat{u}|\) & & 4.724e-04 & 1.713e-03 & 4.448e-03 & 5.553e-03 & 4.872e-03 & 4.011e-03 \\ \(|\hat{v}|\) & & 4.275e-04 & 7.680e-04 & 2.944e-03 & 5.393e-03 & 4.644e-03 & 3.458e-03 \\ \hline \end{tabular} \end{table} Table 9: Relative \(L^{2}\)-norm errors of the data-driven Kuznetsov-Ma breather solution in each subdomain and the whole domain. Figure 10: The data-driven Kuznetsov-Ma breather solution of the MT model (2): (a) and (b) The 3D and 2D density profiles of the learned Kuznetsov-Ma breather, respectively; (c) The 2D density profiles of the point-wise absolute errors between exact and learned solutions, \(\text{Error}_{u}=|u-\hat{u}|\) and \(\text{Error}_{v}=|v-\hat{v}|\). It can be clearly seen that the predicted solutions agree well with the exact ones, except for the maximum absolute errors (5.316e\(-\)03, 3.203e\(-\)03), which appear at the central peaks of the rogue wave, where the gradients are high. 
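The solution network in these single-domain experiments is a plain fully connected map from \((x,t)\) to the four real fields \((p,q,r,s)\), i.e. the real and imaginary parts of \((u,v)\). Below is a minimal PyTorch sketch; the class name is ours, and the depth and width are chosen to match a representative setting from Table 4.

```python
import torch
import torch.nn as nn

class PINN(nn.Module):
    """Fully connected network (x, t) -> (p, q, r, s) with tanh activations."""
    def __init__(self, hidden_layers=7, neurons=40):
        super().__init__()
        sizes = [2] + [neurons] * hidden_layers + [4]
        blocks = []
        for fan_in, fan_out in zip(sizes[:-1], sizes[1:]):
            blocks += [nn.Linear(fan_in, fan_out), nn.Tanh()]
        self.net = nn.Sequential(*blocks[:-1])  # no activation on the output

    def forward(self, xt):
        # xt: tensor of shape (N, 2) holding (x, t) pairs.
        return self.net(xt)
```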
The second-order rogue wave solutions are given by [85] \[u=\rho_{1}\mathrm{e}^{\mathrm{i}\phi}\frac{g}{f^{*}},\qquad v=\rho_{2}\mathrm{e}^{\mathrm{i}\phi}\frac{h}{f},\qquad\phi=\rho\left(\frac{\rho_{2}}{\rho_{1}}x+\frac{\rho_{1}}{\rho_{2}}t\right), \tag{20}\] where tau functions \(f\), \(g\) and \(h\) are determined by \[f=\left|\begin{array}{cc}m_{11}^{(0,0,0)}&m_{13}^{(0,0,0)}\\ m_{31}^{(0,0,0)}&m_{33}^{(0,0,0)}\end{array}\right|,\quad g=\left|\begin{array}{cc}m_{11}^{(-1,1,0)}&m_{13}^{(-1,1,0)}\\ m_{31}^{(-1,1,0)}&m_{33}^{(-1,1,0)}\end{array}\right|,\quad h=\left|\begin{array}{cc}m_{11}^{(-1,0,1)}&m_{13}^{(-1,0,1)}\\ m_{31}^{(-1,0,1)}&m_{33}^{(-1,0,1)}\end{array}\right|, \tag{21}\] with the elements \[m_{11}^{(n,k,l)}=X_{1}Y_{1}+\frac{1}{4},\quad m_{13}^{(n,k,l)}=X_{1}(Y_{3}+\frac{1}{6}Y_{1}^{3})+\frac{1}{8}Y_{1}^{2}-\frac{1}{48},\quad m_{31}^{(n,k,l)}=Y_{1}(X_{3}+\frac{1}{6}X_{1}^{3})+\frac{1}{8}X_{1}^{2}-\frac{1}{48},\] \[m_{33}^{(n,k,l)}=(X_{3}+\frac{1}{6}X_{1}^{3})(Y_{3}+\frac{1}{6}Y_{1}^{3})+\frac{1}{16}(X_{1}^{2}-\frac{1}{6})(Y_{1}^{2}-\frac{1}{6})+\frac{1}{16}X_{1}Y_{1}+\frac{1}{64}.\] The variables \(X_{i}\) and \(Y_{i}\) \((i=1,3)\) are as follows: \[X_{i}=\alpha_{i}x+\beta_{i}t+(n+\frac{1}{2})\theta_{i}+k\vartheta_{i}+l\zeta_{i}+a_{i},\ \ Y_{i}=\alpha_{i}x+\beta_{i}^{*}t-(n+\frac{1}{2})\theta_{i}^{*}-k\vartheta_{i}^{*}-l\zeta_{i}+a_{i}^{*},\] with the coefficients: \[a_{1}=\zeta_{3}=0,\ \ \zeta_{1}=1,\ \ \alpha_{1}=\frac{\rho_{2}\hat{\rho}}{\rho_{1}},\ \ \alpha_{3}=\frac{\rho_{2}\hat{\rho}}{6\rho_{1}},\quad\beta_{1}=\frac{\rho_{1}\rho\hat{\rho}}{\rho_{2}(\hat{\rho}+\mathrm{i}\rho)^{2}},\] \[\beta_{3}=\frac{\rho_{1}\rho(4\mathrm{i}\rho_{1}\rho_{2}\rho-2\hat{\rho}\rho_{1}\rho_{2}-\hat{\rho})}{6\rho_{2}(\hat{\rho}+\mathrm{i}\rho)^{4}},\ \ \theta_{1}=\frac{\hat{\rho}}{\hat{\rho}+\mathrm{i}\rho},\ \ \theta_{3}=\frac{\mathrm{i}\rho^{2}(\rho-1+\mathrm{i}\hat{\rho})}{6(\hat{\rho}+\mathrm{i}\rho)^{3}},\] \[\vartheta_{1}=\frac{\hat{\rho}}{\mathrm{i}(\rho-1)+\hat{\rho}},\ \ \vartheta_{3}=\frac{(\mathrm{i}\rho-\hat{\rho})(\rho-1)^{2}}{6[\mathrm{i}(\rho-1)+\hat{\rho}]^{3}},\ \ \hat{\rho}=\sqrt{-\rho_{1}\rho_{2}\rho},\ \ \rho=1+\rho_{1}\rho_{2}.\] Here \(a_{3}\) is an arbitrary complex parameter, and \(\rho_{1}\), \(\rho_{2}\) are arbitrary real parameters which need to satisfy the condition \(-1<\rho_{1}\rho_{2}<0\). Figure 11: The data-driven first-order rogue wave solution of the MT model (2): (a) and (b) The 3D and 2D density profiles of the learned first-order rogue wave, respectively; (c) The 2D density profiles of the point-wise absolute errors between exact and learned solutions, \(\mathrm{Error}_{u}=|u-\hat{u}|\) and \(\mathrm{Error}_{v}=|v-\hat{v}|\). According to Ref. [85], the second-order rogue wave solutions allow the wave patterns of a single huge peak with \(a_{3}=0\) and of three peaks with \(a_{3}\neq 0\). Here, we only perform the data-driven simulation for the second-order rogue wave with three peaks, where we take the parameters \(\rho_{1}=1\), \(\rho_{2}=-\frac{4}{5}\) and \(a_{3}=\frac{1}{2}\). For the computational domain, the intervals \([L_{0},L_{1}]\) and \([T_{0},T_{1}]\) are chosen to be \([-3,2]\) and \([-6,4]\), respectively. As defined in (2), the initial and Dirichlet boundary conditions of the second-order rogue wave solutions can be written immediately. It is straightforward to obtain the data sets of the discretized initial-boundary values by imposing \(600\times 1200\) grid points in the space-time domain with equal step lengths. 
In a similar way to the two-soliton and breather solutions, the complicated local features of the second-order rogue wave solutions force us to use the XPINN algorithm for conducting the numerical experiments. First, the whole domain is divided into 5 subdomains as shown in Table 10, and \([\epsilon_{k}^{-},\epsilon_{k}^{+}]=[5,3]\times\frac{T_{1}-T_{0}}{1200}\) with \(k=1,2,3,4\) are defined uniformly for the small interface zones. At each stage, the numbers of training points \(N_{PIB}^{(k)}\) are provided in Table 10, and the number of collocation points \(N_{R}^{(k)}\) is equally fixed at 20000. Here we employ the LHS method to generate these random training points. In each interface zone, residual points and gradient points are selected as the corresponding part of the grid points in the discretisation of the exact solution to ensure smooth stitching at the line \(t=t_{k}\). Hence the numbers of the two types of sample points can be obtained as \(N_{IR}^{(k)}=N_{IG}^{(k)}=4800\). By training the learnable parameters of each NN in the five subdomains and stitching these predicted solutions together, the learned second-order rogue wave solutions \(\hat{u}\) and \(\hat{v}\) can be constructed in the whole domain. The relative \(L^{2}\)-norm errors for \((|\hat{u}|,|\hat{v}|)\) in each subdomain and the whole domain are presented in Table 10. In Fig. 12, we display the 3D structure and 2D density profiles of the learned solutions as well as the 2D density profiles of the point-wise absolute errors for the second-order rogue wave. From Fig. 12(c), the maximum absolute errors are measured as \((4.988\mathrm{e}{-02},1.327\mathrm{e}{-01})\), which appear at the peaks and valleys of the rogue wave. In particular, unlike the two-soliton and breather cases, the relatively larger errors at the peaks and valleys make the mismatches along the interface lines unnoticeable. This is due to the fact that the small regions with higher gradients, such as the sharp peaks in Fig. 12(a), are difficult for the NN to capture. \begin{table} \begin{tabular}{c c c c c c c c} \hline \hline \multirow{2}{*}{\(L^{2}\) error} & \(t\)-domain & \([-3,-1.69]\) & \([-1.69,-0.38]\) & \([-0.38,1.38]\) & \([1.38,2.83]\) & \([2.83,4]\) & \([-3,4]\) \\ \cline{2-8} & \(N_{IB}+(N_{PI})\) & 1000 & 2000 & 2500 & 3000 & 3500 & \\ \hline \(|\hat{u}|\) & & 1.182e-03 & 3.587e-03 & 6.940e-03 & 4.241e-03 & 3.249e-03 & 4.739e-03 \\ \(|\hat{v}|\) & & 4.739e-04 & 2.813e-03 & 4.359e-03 & 4.114e-03 & 4.281e-03 & 3.573e-03 \\ \hline \hline \end{tabular} \end{table} Table 10: Relative \(L^{2}\)-norm errors of the data-driven second-order rogue wave solution in each subdomain and the whole domain. Figure 12: The data-driven second-order rogue wave solution of the MT model (2): (a) and (b) The 3D and 2D density profiles of the learned second-order rogue wave, respectively; (c) The 2D density profiles of the point-wise absolute errors between exact and learned solutions, \(\mathrm{Error}_{u}=|u-\hat{u}|\) and \(\mathrm{Error}_{v}=|v-\hat{v}|\). This type of absolute error can be reduced by secondary or repeated training with more collocation points selected in these sharp regions. 
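One simple way to realize such residual-guided refinement, assuming access to a callable that evaluates the trained network's PDE residual, is sketched below; the function and parameter names are ours.

```python
import numpy as np

def refine_collocation(residual_fn, x_range, t_range,
                       n_candidates=100000, n_add=5000, seed=0):
    # Evaluate the residual on a dense random candidate set and keep the
    # points where it is largest, so a secondary training round concentrates
    # on the sharp, high-gradient regions.
    rng = np.random.default_rng(seed)
    cand = rng.uniform([x_range[0], t_range[0]],
                       [x_range[1], t_range[1]], size=(n_candidates, 2))
    res = np.abs(residual_fn(cand))      # e.g. |f_u| + |f_v| per point
    worst = np.argsort(res)[-n_add:]     # indices of the largest residuals
    return cand[worst]
```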
## 3 Data-driven parameter discovery of the MT model In this section, we consider the inverse problem of the MT model, where both equations in Eq. (2) are replaced by \[\begin{split}&\mathrm{i}u_{x}+\lambda_{1}v+\lambda_{2}u|v|^{2}=0,\\ &\mathrm{i}v_{t}+\lambda_{3}u+\lambda_{4}v|u|^{2}=0,\end{split} \tag{22}\] with unknown real coefficients \(\lambda_{i}\) for \(i=1,2,3,4\). Correspondingly, the associated complex-valued PINNs for the MT model (2) are changed into the following form \[\begin{split}& f_{u}:=\mathrm{i}\hat{u}_{x}+\lambda_{1}\hat{v}+\lambda_{2}\hat{u}|\hat{v}|^{2},\\ & f_{v}:=\mathrm{i}\hat{v}_{t}+\lambda_{3}\hat{u}+\lambda_{4}\hat{v}|\hat{u}|^{2},\end{split} \tag{23}\] which are rewritten as the real-valued PINNs as follows: \[\begin{split}& f_{p}:=\hat{p}_{x}+\lambda_{1}\hat{s}+\lambda_{2}\hat{q}(\hat{r}^{2}+\hat{s}^{2}),\;\;f_{q}:=-\hat{q}_{x}+\lambda_{1}\hat{r}+\lambda_{2}\hat{p}(\hat{r}^{2}+\hat{s}^{2}),\\ & f_{r}:=\hat{r}_{t}+\lambda_{3}\hat{q}+\lambda_{4}\hat{s}(\hat{p}^{2}+\hat{q}^{2}),\;\;f_{s}:=-\hat{s}_{t}+\lambda_{3}\hat{p}+\lambda_{4}\hat{r}(\hat{p}^{2}+\hat{q}^{2}),\end{split} \tag{24}\] by taking the real-valued functions \((\hat{p},\hat{q})\) and \((\hat{r},\hat{s})\) as the real and imaginary parts of \((\hat{u},\hat{v})\), respectively. Since the parameters \(\lambda_{i}\) with \(i=1,2,3,4\) in Eqs. (22) are unknown in the inverse problem, some extra information from the interior region can be introduced into the PINNs of the MT model. This implementation leads to the additional loss term defined as \[MSE_{IN}=\frac{1}{N_{IN}}\sum_{i=1}^{N_{IN}}\left(\left|\hat{p}(x^{i}_{IN},t^{i}_{IN})-p^{i}_{IN}\right|^{2}+\left|\hat{q}(x^{i}_{IN},t^{i}_{IN})-q^{i}_{IN}\right|^{2}+\left|\hat{r}(x^{i}_{IN},t^{i}_{IN})-r^{i}_{IN}\right|^{2}+\left|\hat{s}(x^{i}_{IN},t^{i}_{IN})-s^{i}_{IN}\right|^{2}\right), \tag{25}\] where \(\{x^{i}_{IN},t^{i}_{IN}\}_{i=1}^{N_{IN}}\) is the set of \(N_{IN}\) random interior points, and the discretized solutions \((p^{i}_{IN},q^{i}_{IN},r^{i}_{IN},s^{i}_{IN})\) correspond to \([p(x^{i}_{IN},t^{i}_{IN}),q(x^{i}_{IN},t^{i}_{IN}),r(x^{i}_{IN},t^{i}_{IN}),s(x^{i}_{IN},t^{i}_{IN})]\). Then the new loss function in (7) is modified as \[\mathcal{L}(\Theta)=Loss=\mathbf{W}_{R}MSE_{R}+\mathbf{W}_{IB}MSE_{IB}+\mathbf{W}_{IN}MSE_{IN}, \tag{26}\] with the weight \(\mathbf{W}_{IN}\). Consequently, the redesigned PINNs of the MT model are used to learn the unknown parameters \(\lambda_{i}\) (\(i=1,2,3,4\)) simultaneously with the solutions \((\hat{p},\hat{q},\hat{r},\hat{s})\). It is noted that here the loss term of the initial-boundary conditions can be eliminated due to the introduction of the loss term for interior information [6; 7; 86]. 
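The real-valued residual system (24) with trainable coefficients maps directly onto automatic differentiation. Below is a PyTorch sketch under our own naming, where `net` is assumed to map \((x,t)\) to \((p,q,r,s)\); the coefficients are registered as learnable parameters initialized to zero, matching the initialization described in the next paragraph.

```python
import torch

class MTInverse(torch.nn.Module):
    """Solution network plus the four trainable coefficients of Eqs. (22)."""
    def __init__(self, net):
        super().__init__()
        self.net = net
        self.lam = torch.nn.Parameter(torch.zeros(4))  # lambda_1..lambda_4

    def residuals(self, x, t):
        x, t = x.requires_grad_(True), t.requires_grad_(True)
        p, q, r, s = self.net(torch.stack([x, t], dim=-1)).unbind(-1)
        d = lambda y, z: torch.autograd.grad(y.sum(), z, create_graph=True)[0]
        l1, l2, l3, l4 = self.lam
        # Real-valued residuals following Eqs. (24).
        f_p = d(p, x) + l1 * s + l2 * q * (r**2 + s**2)
        f_q = -d(q, x) + l1 * r + l2 * p * (r**2 + s**2)
        f_r = d(r, t) + l3 * q + l4 * s * (p**2 + q**2)
        f_s = -d(s, t) + l3 * p + l4 * r * (p**2 + q**2)
        return f_p, f_q, f_r, f_s
```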
Based on any given solutions of the MT model (1), the data-driven experiment for the parameter discovery can in principle be performed. However, it is found from the last section on forward problems that although complicated localized solutions such as the two-soliton, one-breather and second-order rogue wave can be learned with the aid of the XPINN algorithm, simple solutions such as the one-soliton and first-order rogue wave predicted by means of the PINN framework show better accuracy with smaller absolute errors. Hence, here we only choose three types of simple solutions, namely the bright one-soliton, the dark one-soliton and the first-order rogue wave, to discover the unknown parameters by using the classical PINN algorithm. For the three cases, we establish the network architecture consisting of 7 hidden layers with 40 neurons per layer and fix the weights \(\mathbf{W}_{R}=\mathbf{W}_{IB}=\mathbf{W}_{IN}=1\) uniformly. In these trainings, we first perform 5000 steps of Adam optimization with the default learning rate \(10^{-3}\) and then use the L-BFGS optimization with a maximum of 50000 iterations. In the numerical experiments driven by the three types of solutions, the unknown parameters \(\lambda_{i}\) are initialized to \(\lambda_{i}=0\), and the relative error for these unknown parameters is defined as \(\frac{|\hat{\lambda}_{i}-\lambda_{i}|}{\lambda_{i}}\), where \(\hat{\lambda}_{i}\) and \(\lambda_{i}\) denote the predicted and true values, respectively, for \(i=1,2,3,4\). The parameters and the computational domains in the bright one-soliton, dark one-soliton and first-order rogue wave solutions are still taken to be the same as those for the forward problems in the last section. By imposing \(400\times 600\) grid points with equal step lengths in each space-time region, we obtain the discretized data sets of the initial-boundary values and the interior true solution. Furthermore, \(N_{R}=20000\) collocation points, \(N_{IB}=1000\) initial-boundary points and \(N_{IN}=2000\) interior points are generated by using the LHS method. After optimizing the learnable parameters in the NN, the identified parameters of the MT model under the three types of reference solutions are shown in Tables 11, 12 and 13. \begin{table} \begin{tabular}{c c c} \hline \hline \multirow{2}{*}{MT model} & \multicolumn{2}{c}{Item} \\ \cline{2-3} & Parameters & Relative errors \\ \hline Correct & \(\lambda_{1}=1,\;\lambda_{2}=1\) & \(\lambda_{1}:0\%,\;\lambda_{2}:0\%\) \\ & \(\lambda_{3}=1,\;\lambda_{4}=1\) & \(\lambda_{3}:0\%,\;\lambda_{4}:0\%\) \\ \hline Identified & \(\lambda_{1}=0.9999868,\;\lambda_{2}=1.000026\) & \(\lambda_{1}:0.00132\%,\;\lambda_{2}:0.00262\%\) \\ (without noise) & \(\lambda_{3}=1.000012,\;\lambda_{4}=1.000082\) & \(\lambda_{3}:0.00118\%,\;\lambda_{4}:0.00819\%\) \\ \hline Identified & \(\lambda_{1}=0.9994288,\;\lambda_{2}=0.9972788\) & \(\lambda_{1}:0.05712\%,\;\lambda_{2}:0.27212\%\) \\ (5 \% noise) & \(\lambda_{3}=1.002426,\;\lambda_{4}=1.000761\) & \(\lambda_{3}:0.24258\%,\;\lambda_{4}:0.07610\%\) \\ \hline Identified & \(\lambda_{1}=0.9981983,\;\lambda_{2}=0.9947988\) & \(\lambda_{1}:0.18017\%,\;\lambda_{2}:0.52012\%\) \\ (10 \% noise) & \(\lambda_{3}=1.005139,\;\lambda_{4}=1.002173\) & \(\lambda_{3}:0.51393\%,\;\lambda_{4}:0.21727\%\) \\ \hline \hline \end{tabular} \end{table} Table 11: The correct MT model and the identified one with different noises driven by the bright one-soliton solutions. 
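The noisy-data experiments perturb the clean training data before optimization. The exact noise convention is not spelled out in this excerpt, so the sketch below uses one common choice (zero-mean Gaussian noise scaled by the standard deviation of the data) together with the relative-error metric defined above; all names are ours.

```python
import numpy as np

def add_noise(data, level, seed=0):
    # level = 0.05 or 0.10 for the 5% and 10% noise experiments (assumed
    # convention: Gaussian noise scaled by the data's standard deviation).
    rng = np.random.default_rng(seed)
    return data + level * data.std() * rng.standard_normal(data.shape)

def parameter_relative_error(lam_pred, lam_true):
    # Relative error |lambda_hat - lambda| / lambda per coefficient.
    lam_pred, lam_true = np.asarray(lam_pred), np.asarray(lam_true)
    return np.abs(lam_pred - lam_true) / lam_true
```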
In all three cases, it can be seen that the unknown parameters are correctly identified by the network model, and that the relative errors increase observably with the noise intensity. Nevertheless, whether or not the training data are affected by noise, these parameter errors remain very small. This observation is evidence of the robustness and efficiency of the PINN approach for the MT model. \begin{table} \begin{tabular}{c c c} \hline \hline \multirow{2}{*}{MT model} & \multicolumn{2}{c}{Item} \\ \cline{2-3} & Parameters & Relative errors \\ \hline Correct & \(\lambda_{1}=1,\ \lambda_{2}=1\) & \(\lambda_{1}:0\%,\ \lambda_{2}:0\%\) \\ & \(\lambda_{3}=1,\ \lambda_{4}=1\) & \(\lambda_{3}:0\%,\ \lambda_{4}:0\%\) \\ \hline Identified & \(\lambda_{1}=1.000048,\ \lambda_{2}=1.000005\) & \(\lambda_{1}:0.00475\%,\ \lambda_{2}:0.00048\%\) \\ (without noise) & \(\lambda_{3}=0.9999870,\ \lambda_{4}=0.9999554\) & \(\lambda_{3}:0.00130\%,\ \lambda_{4}:0.00446\%\) \\ \hline Identified & \(\lambda_{1}=0.9991396,\ \lambda_{2}=0.9976323\) & \(\lambda_{1}:0.08604\%,\ \lambda_{2}:0.23677\%\) \\ (5 \% noise) & \(\lambda_{3}=0.9989458,\ \lambda_{4}=0.9981177\) & \(\lambda_{3}:0.10542\%,\ \lambda_{4}:0.18823\%\) \\ \hline Identified & \(\lambda_{1}=0.9976596,\ \lambda_{2}=0.9953481\) & \(\lambda_{1}:0.23404\%,\ \lambda_{2}:0.46519\%\) \\ (10 \% noise) & \(\lambda_{3}=0.9975830,\ \lambda_{4}=0.9962552\) & \(\lambda_{3}:0.24170\%,\ \lambda_{4}:0.37448\%\) \\ \hline \hline \end{tabular} \end{table} Table 13: The correct MT model and the identified one with different noises driven by the first-order rogue wave solutions. \begin{table} \begin{tabular}{c c c} \hline \hline \multirow{2}{*}{MT model} & \multicolumn{2}{c}{Item} \\ \cline{2-3} & Parameters & Relative errors \\ \hline Correct & \(\lambda_{1}=1,\ \lambda_{2}=1\) & \(\lambda_{1}:0\%,\ \lambda_{2}:0\%\) \\ & \(\lambda_{3}=1,\ \lambda_{4}=1\) & \(\lambda_{3}:0\%,\ \lambda_{4}:0\%\) \\ \hline Identified & \(\lambda_{1}=1.000278,\ \lambda_{2}=0.9995433\) & \(\lambda_{1}:0.02784\%,\ \lambda_{2}:0.04567\%\) \\ (without noise) & \(\lambda_{3}=1.000106,\ \lambda_{4}=0.9999900\) & \(\lambda_{3}:0.01057\%,\ \lambda_{4}:0.00100\%\) \\ \hline Identified & \(\lambda_{1}=1.003280,\ \lambda_{2}=0.9984017\) & \(\lambda_{1}:0.32800\%,\ \lambda_{2}:0.15983\%\) \\ (5 \% noise) & \(\lambda_{3}=1.002683,\ \lambda_{4}=0.9964039\) & \(\lambda_{3}:0.26831\%,\ \lambda_{4}:0.35961\%\) \\ \hline Identified & \(\lambda_{1}=1.006322,\ \lambda_{2}=0.9959229\) & \(\lambda_{1}:0.63217\%,\ \lambda_{2}:0.40771\%\) \\ (10 \% noise) & \(\lambda_{3}=1.005789,\ \lambda_{4}=0.9930986\) & \(\lambda_{3}:0.57889\%,\ \lambda_{4}:0.69014\%\) \\ \hline \hline \end{tabular} \end{table} Table 12: The correct MT model and the identified one with different noises driven by the dark one-soliton solutions. ## 4 Conclusions and discussions In conclusion, we investigate data-driven localized wave solutions and parameter discovery in the MT model via the PINN and modified XPINN algorithms. For the forward problems, abundant localized wave solutions including solitons, breathers and rogue waves are learned and compared with the exact ones in terms of relative \(L^{2}\)-norm and point-wise absolute errors. In particular, we have modified the interface lines in the domain decomposition of XPINNs into small interface zones shared by two adjacent subdomains. Then three types of interface conditions, including the pseudo initial, residual and gradient ones, are imposed in the small interface zones. 
This modified XPINN algorithm is applied to learn data-driven solutions with relatively complicated structures, such as the bright two-soliton, dark-dark soliton, dark-antidark soliton, general breather, Kuznetsov-Ma breather and second-order rogue wave. Numerical experiments show that the improved version of XPINNs can achieve a better stitching effect and faster convergence in predicting the dynamical behaviors of localized waves in the MT model. For the inverse problems, the coefficient parameters of the linear and nonlinear terms in the MT model are learned with clean and noisy data by using the classical PINN algorithm. The training results reveal that for all three types of data-driven solutions, these unknown parameters can be recovered accurately even for sample data with a certain noise intensity. ## Acknowledgement The work was supported by the National Natural Science Foundation of China (Grant Nos. 12226332, 119251080 and 12375003) and the Zhejiang Province Natural Science Foundation of China (Grant No. 2022SJ-GYZC01).
2309.10275
Optimizing Crowd-Aware Multi-Agent Path Finding through Local Communication with Graph Neural Networks
Multi-Agent Path Finding (MAPF) in crowded environments presents a challenging problem in motion planning, aiming to find collision-free paths for all agents in the system. MAPF finds a wide range of applications in various domains, including aerial swarms, autonomous warehouse robotics, and self-driving vehicles. Current approaches to MAPF generally fall into two main categories: centralized and decentralized planning. Centralized planning suffers from the curse of dimensionality when the number of agents or states increases and thus does not scale well in large and complex environments. On the other hand, decentralized planning enables agents to engage in real-time path planning within a partially observable environment, demonstrating implicit coordination. However, they suffer from slow convergence and performance degradation in dense environments. In this paper, we introduce CRAMP, a novel crowd-aware decentralized reinforcement learning approach to address this problem by enabling efficient local communication among agents via Graph Neural Networks (GNNs), facilitating situational awareness and decision-making capabilities in congested environments. We test CRAMP on simulated environments and demonstrate that our method outperforms the state-of-the-art decentralized methods for MAPF on various metrics. CRAMP improves the solution quality up to 59% measured in makespan and collision count, and up to 35% improvement in success rate in comparison to previous methods.
Phu Pham, Aniket Bera
2023-09-19T03:02:43Z
http://arxiv.org/abs/2309.10275v3
# Crowd-Aware Multi-Agent Pathfinding With Boosted Curriculum Reinforcement Learning ###### Abstract Multi-Agent Path Finding (MAPF) in crowded environments presents a challenging problem in motion planning, aiming to find collision-free paths for all agents in the system. MAPF finds a wide range of applications in various domains, including aerial swarms, autonomous warehouse robotics, and self-driving vehicles. The current approaches for MAPF can be broadly divided into two main categories: centralized and decentralized planning. Centralized planning suffers from the curse of dimensionality and thus does not scale well in large and complex environments. On the other hand, decentralized planning enables agents to engage in real-time path planning within a partially observable environment, demonstrating implicit coordination. However, such methods suffer from slow convergence and performance degradation in dense environments. In this paper, we introduce CRAMP, a crowd-aware decentralized approach to address this problem by leveraging reinforcement learning guided by a boosted curriculum-based training strategy. We test CRAMP on simulated environments and demonstrate that our method outperforms the state-of-the-art decentralized methods for MAPF on various metrics. CRAMP improves the solution quality up to 58% measured in makespan and collision count, and up to 5% in success rate in comparison to previous methods. ## I Introduction Multi-Agent Path Finding (MAPF) is a challenging problem with broad applications in autonomous warehouses, robotics, aerial swarms, and self-driving vehicles [1, 2, 3, 4, 5, 6]. The objective is to plan paths for multiple agents to navigate from start to goal positions in obstacle-laden environments. A critical constraint of such systems is to guarantee that agents can navigate concurrently without collisions. The two main categories of approaches are centralized and decentralized planning. Centralized planners [7, 8, 9, 10] aim to find optimal solutions. They are effective in small and sparse environments but face limitations in real-time performance and scalability in dense and complicated environments [11, 12]. These methods require complete knowledge of the environment and full replanning when a change occurs, leading to exponential computation times with increased agents, obstacles, and world size. Recent studies [13, 10] have sought to discover real-time solutions. However, these solutions remain sub-optimal and still necessitate access to global information about the world. In contrast, decentralized methods [14, 15, 16, 17, 18, 1] seek to tackle these challenges by allowing each agent to acquire local policies. In these approaches, agents have the capacity to reactively plan paths within partially observable environments. Such methods prove beneficial in situations where agents lack comprehensive knowledge of the world, as is often the case in the domain of autonomous vehicles. Rather than pursuing an optimal solution for all agents, decentralized planners train local policies that rapidly generate sub-optimal solutions as a tradeoff between speed and solution quality. Given that agents make their decisions based on local information, decentralized approaches often face challenges in achieving effective global coordination among agents. 
In cluttered, dynamic environments characterized by congestion or rapid changes, agents may tend to prioritize their individual goals, potentially resulting in conflicts and inefficiencies that affect the overall system's performance. To tackle these challenges, we introduce a novel approach that extends the capabilities of PRIMAL [14] and incorporates the presence of dense crowds into the policy learning process, guided by a boosted curriculum training strategy. Our proposed methodology, referred to as CRAMP, focuses on training intelligent agents to navigate through dense and dynamic environments efficiently. We formulate the MAPF problem as a sequential decision-making task and employ deep reinforcement learning techniques to teach agents to make optimal decisions while considering the presence of other agents and the constantly changing environment. The key contribution of CRAMP lies in its curriculum-driven training strategy, which progressively exposes agents to increasingly complex scenarios. By starting with simple environments and gradually increasing the difficulty, our approach facilitates smoother learning convergence and improved generalization to real-world MAPF scenarios. Furthermore, we introduce crowd-awareness mechanisms that enable agents to adapt their policies dynamically based on

Fig. 1: _Path comparison between PRIMAL and CRAMP, our novel crowd-aware approach for the multi-agent pathfinding problem in densely populated environments. The solution proposed by CRAMP is much shorter compared to PRIMAL._
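The paper's exact curriculum schedule is not given in this excerpt; the following is a minimal sketch, under our own assumptions, of how a boosted curriculum might ramp up scenario difficulty once the policy's recent success rate clears a threshold.

```python
import random

def sample_scenario(difficulty):
    # Illustrative ranges only: world size, obstacle density, and agent
    # count all grow with difficulty in [0, 1].
    return {
        "size": int(10 + 30 * difficulty),
        "obstacle_density": 0.05 + 0.25 * difficulty,
        "num_agents": int(2 + 30 * difficulty),
        "seed": random.randrange(2**32),
    }

def update_difficulty(difficulty, recent_success_rate,
                      step=0.05, threshold=0.9):
    # Boost the curriculum once the agent masters the current level.
    if recent_success_rate >= threshold:
        difficulty = min(1.0, difficulty + step)
    return difficulty
```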
2309.11341
Article Classification with Graph Neural Networks and Multigraphs
Classifying research output into context-specific label taxonomies is a challenging and relevant downstream task, given the volume of existing and newly published articles. We propose a method to enhance the performance of article classification by enriching simple Graph Neural Network (GNN) pipelines with multi-graph representations that simultaneously encode multiple signals of article relatedness, e.g. references, co-authorship, shared publication source, shared subject headings, as distinct edge types. Fully supervised transductive node classification experiments are conducted on the Open Graph Benchmark OGBN-arXiv dataset and the PubMed diabetes dataset, augmented with additional metadata from Microsoft Academic Graph and PubMed Central, respectively. The results demonstrate that multi-graphs consistently improve the performance of a variety of GNN models compared to the default graphs. When deployed with SOTA textual node embedding methods, the transformed multi-graphs enable simple and shallow 2-layer GNN pipelines to achieve results on par with more complex architectures.
Khang Ly, Yury Kashnitsky, Savvas Chamezopoulos, Valeria Krzhizhanovskaya
2023-09-20T14:18:04Z
http://arxiv.org/abs/2309.11341v2
# Improving Article Classification with Edge-Heterogeneous Graph Neural Networks ###### Abstract Classifying research output into context-specific label taxonomies is a challenging and relevant downstream task, given the volume of existing and newly published articles. We propose a method to enhance the performance of article classification by enriching simple Graph Neural Network (GNN) pipelines with edge-heterogeneous graph representations. SciBERT is used for node feature generation to capture higher-order semantics within the articles' textual metadata. Fully supervised transductive node classification experiments are conducted on the Open Graph Benchmark (OGB) ogbn-arxiv dataset and the PubMed diabetes dataset, augmented with additional metadata from Microsoft Academic Graph (MAG) and PubMed Central, respectively. The results demonstrate that edge-heterogeneous graphs consistently improve the performance of all GNN models compared to the edge-homogeneous graphs. The transformed data enable simple and shallow GNN pipelines to achieve results on par with more complex architectures. On ogbn-arxiv, we achieve a top-15 result in the OGB competition with a 2-layer GCN (accuracy 74.61%), being the highest-scoring solution with sub-1 million parameters. On PubMed, we closely trail SOTA GNN architectures using a 2-layer GraphSAGE by including additional co-authorship edges in the graph (accuracy 89.88%). The implementation is available at: [https://github.com/lyvykhang/edgehetero-nodeproppred](https://github.com/lyvykhang/edgehetero-nodeproppred). Heterogeneous Graph Learning · Graph Neural Networks · Article Classification · Document Relatedness ## 1 Introduction Article classification is a challenging downstream task within natural language processing (NLP) [16]. An important practical application is classifying existing or newly-published articles according to specific research taxonomies. The task can be approached as a graph node classification problem, where nodes represent articles with corresponding feature mappings, and edges are defined by a strong signal of article relatedness, e.g. citations/references. Conventionally, graph representation learning is handled in two phases: unsupervised node feature generation, followed by supervised learning on said features using the graph structure. Graph neural networks (GNNs) can be successfully employed for the second phase of such problems, being capable of preserving the rich structural information encoded by graphs. In recent years, prominent GNN architectures have achieved strong performance on citation network benchmarks (Kipf and Welling, 2017; Hamilton et al., 2017; Velickovic et al., 2018; Frasca et al., 2020; Li et al., 2021). We focus on combining textual information from articles with various indicators of article relatedness (citation data, co-authorship, subject fields, and publication sources) to create a graph with multiple edge types; further referred to as an _edge-heterogeneous_ graph, also known as multi-graphs or multipartite networks (Barabasi and Posfai, 2017). We use two established node classification benchmarks - the citation graphs ogbn-arxiv and PubMed - and leverage their connection to large citation databases - Microsoft Academic Graph (MAG) and PubMed Central - to retrieve the metadata fields and enrich the graph structure with additional edge types (Hu et al., 2020; Sen et al., 2008). 
For feature generation, SciBERT is used to infer embeddings based on articles' titles and abstracts (Beltagy et al., 2019), with the intention of capturing higher-order semantics compared to the defaults, i.e. Skip-Gram or bag-of-words. We test our transformed graphs with a variety of GNN backbone models, converted to support heterogeneous input using the relational graph convolutional network (R-GCN) framework (Schlichtkrull et al., 2018). In essence, we approach a typically homogeneous task using heterogeneous techniques. The method is intuitively simple and interpretable; we do not utilize complex model architectures and training frameworks, focusing primarily on data retrieval and preprocessing to boost the performance of simpler models, thus maintaining a reasonably low computational cost and small number of fitted parameters. A considerable volume of research is devoted to article classification, graph representation learning with respect to citation networks, and the adaptation of these practices to heterogeneous graphs (Wu et al., 2019; Bing et al., 2022). However, the application of _heterogeneous_ graph enrichment techniques to article classification is not well-studied and presents a research opportunity. Existing works on heterogeneous graphs often consider multiple node types, expanding from article to entity classification; we exclusively investigate the heterogeneity of paper-to-paper relationships to remain consistent with the single-node type problem setting. The emergence of rich metadata repositories for papers, e.g. OpenAlex, illustrates the relevance of our research (Priem et al., 2022).

Figure 1: Illustration of the proposed edge heterogeneity, which enables the neighboring feature aggregation for a node \(X_{1}\) to be performed across a variety of subgraphs, leveraging multiple signals of article relatedness (References, Authorship, and Journal depicted here).

Scalability is often a concern with GNN architectures. For this reason, numerous approaches simplify typical GNN architectures with varying strategies, e.g. pre-computation or linearization, without sacrificing significant performance in most downstream tasks (Frasca et al., 2020; Wu et al., 2019; Prieto et al., 2023). Other solutions avoid GNNs altogether, opting for simpler approaches based on early graph-based techniques like label propagation, which outperform GNNs in several node classification benchmarks (Huang et al., 2021). The success of these simple approaches raises questions about the potential impracticality of deep GNN architectures on large real-world networks with a strong notion of locality, and whether or not such architectures are actually necessary to achieve satisfactory performance. Compared to simple homogeneous graphs, heterogeneous graphs encode rich structural and semantic information, and are more representative of real-world information networks and entity relationships (Bing et al., 2022). For example, networks constructed from citation databases can feature relations between papers, their authors, and shared keywords, often expressed as RDF triples, e.g. "_paper_ \(\xrightarrow{\text{(co-)authored by}}\) _author_," "_paper_ \(\xrightarrow{\text{includes}}\) _keyword_," "_paper_ \(\xrightarrow{\text{cites}}\) _paper_." 
Heterogeneous GNN architectures share many similarities with their homogeneous counterparts; a common approach is to aggregate feature information from local neighborhoods, while using additional modules to account for varying node and/or edge types (Yang et al., 2022). Notably, the relational graph convolutional network approach (R-GCN) by Schlichtkrull et al. (2018) shows that GCN-based frameworks can be effectively applied to modeling relational data, specifically for the task of node classification. The authors propose a modeling technique where the message passing functions are duplicated and applied individually to each relationship type. This transformation can be generalized to a variety of GNN convolutional operators in order to convert them into their relational (heterogeneous) counterparts. ## 2 Methodology We propose an approach focusing on dataset provenance, leveraging their linkage to large citation and metadata repositories, e.g. MAG and PubMed Central, to retrieve additional features and enrich their graph representations. The proposed method is GNN-agnostic, compatible with a variety of model pipelines (provided they can function with heterogeneous input graphs) and node embedding techniques (results are presented with the provided features, plus the SciBERT embeddings). Figure 1 provides a high-level overview of the method. The tested GNN backbones (see section 3) are converted to support heterogeneous input using the aforementioned R-GCN transformation defined by Schlichtkrull et al. (2018), involving the duplication of the message passing functions at each convolutional layer per relationship type; we employ the PyTorch Geometric (PyG) implementation of this technique, using the mean as the aggregation operator (Fey and Lenssen, 2019). ### Datasets Our experiments are conducted on two datasets: the Open Graph Benchmark (OGB) ogbn-arxiv dataset and the PubMed diabetes dataset. The OGB ogbn-arxiv dataset consists of 169,343 Computer Science papers from arXiv, hand-labeled into 40 subject areas by paper authors and arXiv moderators, with 1,166,243 reference links (Hu et al., 2020). Node features are constructed from textual information by averaging the embeddings of words (which are generated with the Skip-Gram model) in the articles' titles and abstracts. The dataset provides the mapping used between papers' node IDs and their original MAG IDs, which can be used to retrieve additional metadata. The PubMed diabetes dataset consists of 19,717 papers from the National Library of Medicine's (NLM) PubMed database labeled into one of three categories: "Diabetes Mellitus, Experimental," "Diabetes Mellitus Type 1," and "Diabetes Mellitus Type 2," with 44,338 references links (Sen et al., 2008). Bag-of-words vectors from a dictionary of 500 unique words are provided as node features. Similarly, the papers' original PubMed IDs can be used to fetch relevant metadata. ### Data Augmentation We used a July-2020 snapshot of the complete Microsoft Academic Graph (MAG) index (240M papers) - since MAG (and the associated API) was discontinued later - to obtain additional metadata (Zhang et al., 2022).1 Potential indicators of paper relatedness include: authors, venue, and fields of study. Other metadata (DOI, volume, page numbers, etc.) are not useful for our purposes. 
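The R-GCN-style conversion described above is available out of the box in PyTorch Geometric. Below is a minimal sketch; `data` is assumed to be a `HeteroData` object holding the single "paper" node type and one edge type per relatedness signal, and the 40-class ogbn-arxiv setting is used for the output size.

```python
import torch
from torch_geometric.nn import SAGEConv, to_hetero

class GNN(torch.nn.Module):
    def __init__(self, hidden, out):
        super().__init__()
        # Lazy in-channels (-1, -1) are resolved on the first forward pass.
        self.conv1 = SAGEConv((-1, -1), hidden)
        self.conv2 = SAGEConv((-1, -1), out)

    def forward(self, x, edge_index):
        x = self.conv1(x, edge_index).relu()
        return self.conv2(x, edge_index)

num_classes = 40  # ogbn-arxiv subject areas
model = GNN(hidden=256, out=num_classes)
# Duplicate the message passing per edge type and aggregate with the mean,
# following the R-GCN transformation applied in this work.
model = to_hetero(model, data.metadata(), aggr="mean")
out = model(data.x_dict, data.edge_index_dict)["paper"]
```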
Hence, we "heterogenize" the dataset with the following additional edge types (in addition to the References edges): * (Co)-Authorship: Two papers are connected if they share an author, with a corresponding edge weight indicating the number of shared authors. This is based on the assumption that a given author tends to perform research on similar disciplines. * Venue: Two papers are connected if they were published at the same venue (conference or journal), assuming that specific conferences contribute to relevant research areas. * Fields of Study (FoS): Two papers are connected if they share at least one field, with an edge weight based on the number of shared fields. Fields of study, e.g. "computer science," "neural networks," etc. are automatically assigned with an associated confidence score (which we do not use), and each paper can have multiple fields of study, making them functionally similar to keywords. An unprocessed version of the PubMed dataset preserving the original paper IDs was used [Namata et al., 2012].2 A January-2023 snapshot of the complete PubMed citation database (35M papers) was accessed for additional metadata. Relevant features include: title, abstract, authors, published journal (indicated by unique NLM journal IDs), and Medical Subject Headings (MeSH@). The latter is an NLM-controlled hierarchical vocabulary used to characterize biomedical article content. As with the Fields of Study, they are functionally similar to curated keywords. As before, we use three additional edge types: Footnote 2: This version of the dataset is hosted by the LINQS Statistical Relational Learning Group. The 2023 annual baseline on the NLM FTP server is accessed to retrieve metadata. All files were downloaded locally and metadata of matching IDs were extracted (19,716 records matched, 1 missing). * (Co)-Authorship: Two papers are connected if they share an author, with a corresponding edge weight indicating the number of shared authors. Unlike MAG, PubMed Central does not provide unique identifiers for authors, so exact author names are used, which can lead to some ambiguity in a minority of cases, e.g. two distinct authors with the same name. * Journal: Two papers are connected if they were published in the same journal. The intuition is that journals publish papers on similar topics. * MeSH: Two papers are connected if they share at least one MeSH subject heading, with an edge weight based on the number of shared subjects. Since the ogbn-arxiv venue and fields and PubMed MeSH relationships result in massive edge lists, posing out-of-memory issues on the utilized hardware, we only create edges between up to \(k\) nodes per unique venue/field/heading, where \(k\) is the mean number of papers per venue/field/heading, in order to reduce the subgraphs' sizes. All edge weights are normalized using logistic sigmoid. In a traditional citation network, the links are typically _directed_, but in our experiments, they are _undirected_ to strengthen the connections of communities in the graph. The graph includes only one node type, "paper." Other approaches, notably in the citation recommendation domain, leverage node types to represent authors and journals [Guo et al., 2017]. However, this work strictly concerns relationships between papers and not between papers and other entities, in order to apply the homogeneous problem settings. 
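As an illustration of the edge construction, the co-authorship relation can be built as follows. This is a sketch under our own naming, with the logistic-sigmoid weight normalization mentioned above and an optional per-author cap mirroring the per-group sampling used for the larger relations.

```python
from collections import defaultdict
from itertools import combinations
import numpy as np

def coauthor_edges(paper_authors, k_cap=None):
    # paper_authors: dict mapping paper id -> iterable of author ids.
    author_papers = defaultdict(list)
    for pid, authors in paper_authors.items():
        for a in authors:
            author_papers[a].append(pid)

    shared = defaultdict(int)  # (paper1, paper2) -> number of shared authors
    for pids in author_papers.values():
        if k_cap is not None:
            pids = pids[:k_cap]
        for p1, p2 in combinations(sorted(pids), 2):
            shared[(p1, p2)] += 1

    edges = np.array(list(shared.keys()))
    counts = np.array(list(shared.values()), dtype=float)
    weights = 1.0 / (1.0 + np.exp(-counts))  # logistic-sigmoid normalization
    return edges, weights
```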
In practice, representing such entities as additional node types would make the resultant graph contain too many nodes, while the available features and metadata are insufficient to generate informative representations of these other node types, limiting their usefulness in the feature aggregation step.

For textual node feature representation, embeddings of the concatenated paper titles and abstracts are inferred using SciBERT (Beltagy et al., 2019); this is inspired by the work of Cohan et al. (2020), in which SciBERT was pretrained on a citation graph and used to generate high-quality document-level embeddings for node classification purposes. Here, we utilize the base model (scibert-scivocab-uncased) without additional fine-tuning.

### Subgraph Properties

Some insights on the characteristics of the defined subgraphs can be derived from Table 1, which lists the following: number of nodes and edges (post-conversion to undirected) in the largest connected component (LCC), total number of edges, average degree, network average clustering coefficient (Schank and Wagner, 2005), and edge homophily ratio (fraction of edges connecting nodes with the same label) (Ma et al., 2022). While the References graphs do not exhibit the tight clustering typical of real-world information networks, the strong signal of relatedness in the edges has nonetheless ensured their compatibility with message passing GNN paradigms (Wu et al., 2019a). This relatedness is also evident in the Authorship graphs, and the high level of clustering confirms the initial hypothesis that researchers co-author papers within similar topics. The topic-based relationships (FoS and MeSH) include many edges formed between shared generic keywords, e.g. "computer science," leading to rather average homophily. The publication source subgraphs (Venue and Journal) consist of isolated fully-connected clusters per unique source, with no inter-cluster connections, as each paper belongs to only one journal or venue. As with the topic-based relationships, the research area covered by a given publication conference or journal can be quite broad with respect to the paper labels.

Figure 2 shows the degree distribution of all edge type subgraphs in both datasets, which gives a clear view of the subgraphs' structures when interpreted with the above metrics. The high frequency of large node degrees in the PubMed Journal subgraph corresponds to large journals; the size of its LCC (2,213) is the number of papers in the largest journal. While not visible for the ogbn-arxiv Venue subgraph due to the aforementioned sampling, a similar distribution would occur for large venues if all possible edges had been included. In contrast, the lower occurrence of high-degree nodes and the low clustering in the References subgraphs of both datasets indicate a greater average _distance_ across the LCC compared to the other subgraphs; such a structure stands to benefit the most from the multi-hop neighborhood feature aggregation performed by GNNs.

\begin{table}
\begin{tabular}{c|l|l|l|l|l|l|l}
\hline \hline
 & & Nodes in LCC & Edges in LCC & Edges total & Avg. degree & Avg. clustering coeff. & Homophily \\
\hline
\multirow{4}{*}{ogbn-arxiv} & References & 169,343 & 2,315,598 & 2,315,598 & 13.7 & 0.247 & 0.654 \\
\cline{2-8} & Authorship & 145,973 & 6,697,998 & 6,749,335 & 39.9 & 0.702 & 0.580 \\
\cline{2-8} & Venue & 63 & 3,906 & 600,930 & 3.5 & 0.112 & 0.077 \\
\cline{2-8} & FoS & 144,529 & 8,279,492 & 8,279,687 & 48.9 & 0.505 & 0.319 \\
\hline
\multirow{4}{*}{PubMed} & References & 19,716 & 88,649 & 88,649 & 4.5 & 0.057 & 0.802 \\
\cline{2-8} & Authorship & 17,683 & 729,468 & 731,376 & 37.1 & 0.623 & 0.721 \\
\cline{2-8} & Journal & 2,213 & 4,895,156 & 11,426,930 & 579.6 & 0.940 & 0.414 \\
\cline{2-8} & MeSH & 18,345 & 1,578,526 & 1,578,530 & 80.1 & 0.456 & 0.550 \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Properties of constructed subgraphs. Note that the References subgraphs are the only ones without isolated nodes, and that the network average clustering coefficient computation accounts for isolated nodes (which are treated as having "zero" local clustering); hence the value for sparser subgraphs, e.g. ogbn-arxiv Venue, is notably lower than the intuitively expected value (1, in the absence of isolated nodes).
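The following is a minimal sketch (on a toy graph, not the actual subgraphs) of how two of the Table 1 statistics can be computed with standard tooling:

```python
import networkx as nx
import torch
from torch_geometric.utils import homophily

G = nx.karate_club_graph()  # toy stand-in for one of the subgraphs

# Network average clustering coefficient; isolated nodes count as zero.
print("avg. clustering:", nx.average_clustering(G, count_zeros=True))

# Edge homophily ratio: fraction of edges whose endpoints share a label.
edge_index = torch.tensor(list(G.edges)).t().contiguous()
y = torch.tensor([0 if G.nodes[n]["club"] == "Mr. Hi" else 1 for n in G.nodes])
print("edge homophily:", homophily(edge_index, y, method="edge"))
```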
Figure 2: Degree distribution, i.e. frequency of each degree value, of all subgraphs for ogbn-arxiv (left) and PubMed (right), plotted on a log-log scale. Points indicate the unique degree values.

Relative to the References, the Authorship and topic-based subgraphs (FoS and MeSH) exhibit increased skewness in the distribution and higher average clustering, which indicates the presence of more (near-)cliques, i.e. subsections of the graph wherein (almost) any two papers share an author or topic. Hence, these subgraphs bear the closest structural resemblance to small-world networks (Watts and Strogatz, 1998). The impact of these degree distributions on classification performance is further investigated in section 3.1.

## 3 Experiments and Results

We evaluate model performance on the task of _fully supervised transductive node classification_. The metric is multi-class accuracy on the test set. The proposed data preparation scheme is tested with several GNN architectures commonly deployed in benchmarks. We consider two GCN setups (a base one and one with a jumping knowledge module using concatenation as the aggregation scheme), as well as GraphSAGE (Kipf and Welling, 2017; Xu et al., 2018; Hamilton et al., 2017). We also run experiments with the simplified graph convolutional operator (SGC) (Wu et al., 2019). The increased graph footprint can lead to scalability concerns, hence the performance of such lightweight and parameter-efficient methods is of interest. For ogbn-arxiv, the provided time-based split is used: train on papers published until 2017, validate on those published in 2018, test on those published since 2019. For PubMed diabetes, nodes of each class are randomly split into 60%/20%/20% for training/validation/testing for each run (as performed by Pei et al. (2020) and Chen et al. (2020)). Ablation experiments are also performed to examine the impact of the different edge types (averaged across 3 runs) and to identify the optimal configuration for both datasets, on which we then report final results (averaged across 10 runs). Experiments were conducted on a g4dn.2xlarge EC2 instance (32 GB RAM, 1 NVIDIA Tesla T4 16 GB VRAM). Models are trained with negative log-likelihood loss and early stopping based on validation accuracy (patience of 50 epochs, with an upper limit of 500 epochs).
We also scale down the learning rate as the validation loss plateaus. The SciBERT embeddings are pre-computed using multi-GPU distributed inference.

### Ablation Study

Ablation results for both datasets are presented in Table 2, separated by node embedding type. First, all possible homogeneous subgraphs are inspected, as this is the conventional input data for this task (see the first 4 rows). The best performance is achieved on the References graphs, followed by the Authorship graphs. Then we build upon the References graph by adding different combinations of other subgraphs. The results demonstrate that transitioning to edge-heterogeneous graphs can yield up to a 2.87% performance improvement on ogbn-arxiv and 2.05% on PubMed (see the difference between the yellow- and green-highlighted cells). These results were obtained with a 2-layer GCN base, using the following fixed hyperparameter values: optimizer weight decay of 0.001, initial learning rate of 0.01, hidden layer dropout probability of 0.5, and hidden feature dimensionality of 128 (ogbn-arxiv) or 64 (PubMed). Cross-checking with the metrics in Table 1 suggests that the improvements from edge-heterogeneity roughly correspond to the edge homophily ratio of the utilized subgraphs, as strong homophily is implicitly assumed by the neighborhood aggregation mechanism of GCN-based models. Consequently, their performance can be erratic and unpredictable on graphs with comparatively low homophily (Kipf and Welling, 2017; Ma et al., 2022). Since the R-GCN transformation collects neighborhoods from input subgraphs with equal weighting, including a comparatively noisy subgraph (like ogbn-arxiv Venue) can worsen predictive performance. Changing the R-GCN aggregation operator, e.g. from mean to concatenation, does not alleviate this. This study suggests publication sources do not yield beneficial subgraphs with their current definition; while the aforementioned tight clustering could provide a strong signal for classification with sufficient homophily, any potential benefit is absent because these clusters are noisy. The topic-based subgraphs are structurally preferable, but noisy edges (from keywords tied to concepts that are higher-level than the paper labels) reduce their usefulness in classification. In all experimental settings, the largest gain from heterogeneity comes from the co-authorship graph, often outperforming configurations that use more subgraphs. These trends are expected, given the characteristics discussed in section 2.3. Replacing the PubMed bag-of-words vectors with SciBERT embeddings does not improve raw accuracy in most cases. However, the increased feature dimensionality of the SciBERT embeddings does stabilize convergence behavior in configurations using multiple subgraphs. A paper might possess only a few non-zero feature dimensions when using bag-of-words; combined with the additional feature averaging step from the R-GCN transformation, the risk of oversmoothing is increased even on shallow 2-layer networks in edge-heterogeneous configurations (however, note that the tested single-layer models underfit and thus do not improve performance). In accordance with the findings of Chen et al. (2020), these hypothesized oversmoothing effects are more pronounced when using graphs with high average degree, i.e. the publication source and topic-based subgraphs; nodes with high degree aggregate more information from their neighbors, increasing the likelihood of homogenization as network depth increases.
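Before moving to the optimal configurations, the training protocol described in section 3 can be summarized in a sketch; this is a hedged illustration assuming the `model`, `data`, label vector `y`, and split masks from the earlier snippets, not the authors' released code:

```python
import copy
import torch
import torch.nn.functional as F

# Assumed to exist from the earlier sketches: `model`, `data` (HeteroData),
# labels `y`, and boolean `train_mask` / `val_mask` over "paper" nodes.
optimizer = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=0.001)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, factor=0.5, patience=10)

best_acc, best_state, stall = 0.0, None, 0
for epoch in range(500):                       # upper limit of 500 epochs
    model.train()
    optimizer.zero_grad()
    out = model(data.x_dict, data.edge_index_dict)["paper"]
    loss = F.nll_loss(F.log_softmax(out[train_mask], dim=-1), y[train_mask])
    loss.backward()
    optimizer.step()

    model.eval()
    with torch.no_grad():
        out = model(data.x_dict, data.edge_index_dict)["paper"]
        val_loss = F.nll_loss(F.log_softmax(out[val_mask], dim=-1), y[val_mask])
        val_acc = (out[val_mask].argmax(dim=-1) == y[val_mask]).float().mean().item()
    scheduler.step(val_loss)                   # scale down LR on plateau

    if val_acc > best_acc:
        best_acc, best_state, stall = val_acc, copy.deepcopy(model.state_dict()), 0
    else:
        stall += 1
        if stall >= 50:                        # early stopping, patience 50
            break

model.load_state_dict(best_state)
```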
### Optimal Configuration

Results with the optimal configuration identified from the ablation study on ogbn-arxiv and PubMed are listed in Tables 3 and 4, respectively. Note that the actual performance gains may vary slightly, since the third-party benchmarks were run with _directed_ References graphs and differently-tuned hyperparameters. The results demonstrate that the additional structural information provided by edge heterogeneity consistently improves the final performance of a variety of hetero-transformed GNN frameworks compared to their homogeneous counterparts on both datasets, when making optimal subgraph choices (though suboptimal choices can still situationally improve performance). These improvements are independent of the tested textual embedding methods, and can occur even when the added subgraphs possess suboptimal graph properties, e.g. a lower edge homophily ratio and the presence of isolated nodes, compared to the starting References graph. SGC with ogbn-arxiv shows the strongest improvement over the baseline; most likely, the linear classifier relies less on graph structure, and hence benefits more from the deeper textual semantics captured by SciBERT. Notably, the best results are competitive with the SOTA, while operating on a limited compute budget and low level of complexity (simple 2-layer GNN model pipelines with comparatively few tunable parameters). On ogbn-arxiv, we achieve a top-15 result with the GCN backbone, being the highest-scoring solution with sub-1 million parameters. On PubMed with the aforementioned splitting strategy, we closely trail the best

\begin{table}
\begin{tabular}{l|c|c|c|c|c|c|c}
\hline \hline
References & Authorship & Venue / Journal & FoS / MeSH & \multicolumn{2}{c|}{ogbn-arxiv} & \multicolumn{2}{c}{PubMed} \\
\cline{5-8}
 & & & & Skip-Gram & SciBERT & Bag-of-Words & SciBERT \\
\hline
✓ & & & & 0.6858 \(\pm\) 0.0008 & 0.7177 \(\pm\) 0.0024 & 0.8646 \(\pm\) 0.0094 & 0.8635 \(\pm\) 0.0043 \\
\hline
 & ✓ & & & 0.6099 \(\pm\) 0.0008 & 0.6456 \(\pm\) 0.0009 & 0.8026 \(\pm\) 0.0054 & 0.7921 \(\pm\) 0.0121 \\
\hline
 & & ✓ & & 0.4665 \(\pm\) 0.0003 & 0.5998 \(\pm\) 0.0043 & 0.5440 \(\pm\) 0.0357* & 0.6137 \(\pm\) 0.0090 \\
\hline
 & & & ✓ & 0.4828 \(\pm\) 0.0088 & 0.5270 \(\pm\) 0.0043 & 0.7499 \(\pm\) 0.0047 & 0.7274 \(\pm\) 0.0158 \\
\hline \hline
✓ & ✓ & & & 0.7079 \(\pm\) 0.0023 & 0.7330 \(\pm\) 0.0010 & **0.8851 \(\pm\) 0.0050** & **0.8747 \(\pm\) 0.0055** \\
\hline
✓ & & ✓ & & 0.6795 \(\pm\) 0.0035 & 0.7136 \(\pm\) 0.0036 & 0.8745 \(\pm\) 0.0103 & 0.8661 \(\pm\) 0.0076 \\
\hline
✓ & & & ✓ & 0.6914 \(\pm\) 0.0007 & 0.7221 \(\pm\) 0.0002 & 0.8737 \(\pm\) 0.0106 & 0.8643 \(\pm\) 0.0091 \\
\hline
✓ & ✓ & ✓ & & 0.6995 \(\pm\) 0.0009 & 0.7334 \(\pm\) 0.0008 & 0.8430 \(\pm\) 0.0389* & 0.8696 \(\pm\) 0.0046 \\
\hline
✓ & ✓ & & ✓ & **0.7145 \(\pm\) 0.0024** & **0.7383 \(\pm\) 0.0003** & 0.8301 \(\pm\) 0.1035* & 0.8724 \(\pm\) 0.0050 \\
\hline
✓ & & ✓ & ✓ & 0.6828 \(\pm\) 0.0015 & 0.7189 \(\pm\) 0.0001 & 0.8306 \(\pm\) 0.0563* & 0.8656 \(\pm\) 0.0104 \\
\hline
✓ & ✓ & ✓ & ✓ & 0.7049 \(\pm\) 0.0011 & 0.7327 \(\pm\) 0.0004 & 0.8563 \(\pm\) 0.0120* & 0.8726 \(\pm\) 0.0038 \\
\hline \hline
\end{tabular}
* Indicates (significant) oversmoothing.
\end{table}
Table 2: Ablation study for both datasets, 3-run average test accuracy with a 2-layer GCN. The best results are highlighted in bold. Note the feature differences between the datasets (Venue, FoS, and Skip-Gram for ogbn-arxiv; Journal, MeSH, and Bag-of-Words for PubMed).
\begin{table}
\begin{tabular}{l|c|c|c|c|c}
\hline \hline
Model & Validation Accuracy & Test Accuracy & \# Params & Baseline Accuracy & Improvement \\
\hline
GCN & 0.7586 \(\pm\) 0.0012 & 0.7461 \(\pm\) 0.0006 & 621,944 & 0.7174 \(\pm\) 0.0029 & +2.87\% \\
\hline
GCN+JK & 0.7629 \(\pm\) 0.0007 & 0.7472 \(\pm\) 0.0024 & 809,512 & 0.7219 \(\pm\) 0.0021 & +2.53\% \\
\hline
SAGE & 0.7605 \(\pm\) 0.0007 & 0.7461 \(\pm\) 0.0013 & 1,242,488 & 0.7149 \(\pm\) 0.0027 & +3.12\% \\
\hline
SGC & 0.7515 \(\pm\) 0.0005 & 0.7419 \(\pm\) 0.0004 & 92,280 & 0.6855 \(\pm\) 0.0002 & +5.64\% \\
\hline \hline
\end{tabular}
* [https://ogb.stanford.edu/docs/leader_nodeprop/#ogbn-arxiv](https://ogb.stanford.edu/docs/leader_nodeprop/#ogbn-arxiv).
\end{table}
Table 3: Results on the best ogbn-arxiv configuration (References, Authorship, and Fields of Study subgraphs, with SciBERT embeddings) with hyperparameters described in Appendix 4. The baseline results on the unmodified dataset and the improvement over the baseline (test acc. minus baseline acc.) are also displayed. For GCN, GCN+JK, and SAGE, these are taken from the official leaderboard.* The SGC baseline was obtained by using the (undirected) References subgraph and the provided Skip-Gram features.

performance reported by GCNII (90.30%) and Geom-GCN (90.05%) (Chen et al., 2020; Pei et al., 2020), by using a GraphSAGE backbone with just the added co-authorship subgraph as input (89.88%).

## 4 Conclusions and Future Work

In this paper, we propose a data transformation methodology leveraging metadata retrieved from citation databases to create enriched edge-heterogeneous graph representations based on various additional signals of document relatedness: co-authorship, publication source, fields of study, and subject headings. We also test the substitution of default node features with SciBERT embeddings to capture higher-dimensionality textual semantics. By nature, the methodology is GNN- and embedding-agnostic. Deploying optimal configurations of the transformed graph with a variety of simple GNN pipelines leads to consistent improvements over the starting data, and enables results on par with the SOTA in fully supervised node classification. Overall, the results show that our methodology can be an effective strategy to achieve respectable performance on datasets with readily-available article metadata, without necessitating complex GNN architectures and lengthy (pre-)training procedures. As the methodology is compatible with any hetero-transformable GNN backbone and node embedding technique, we expect that deploying the transformed data with SOTA GNN frameworks, e.g. RevGAT by Li et al. (2021) on ogbn-arxiv, will lead to greater raw performance, though the larger memory footprint of the graph may complicate the application of such frameworks. Refining the edge type definitions, e.g. connecting papers that share at least two fields of study and/or removing "generic" fields applicable to a majority of papers in the set, can help de-noise and improve the properties of the respective subgraphs. A custom aggregation scheme could be implemented for the heterogeneous transformation dependent on individual subgraph properties, such as a weighted average based on some metric of subgraph "quality," e.g. homophily. To mitigate the increased risk of heterogeneity-induced oversmoothing, additional regularization techniques, e.g. DropEdge by Rong et al. (2020), could be considered.
Finally, applying parameter-efficient fine-tuning (PEFT) techniques to the SciBERT model can improve feature separability and thus classification performance (Duan et al., 2023); the effectiveness of different transformer-based language models can also be investigated.

## Acknowledgement

The authors would like to thank prof. Paul Groth for his supervision and consultation throughout the project.
2309.09240
High-dimensional manifold of solutions in neural networks: insights from statistical physics
In these pedagogic notes I review the statistical mechanics approach to neural networks, focusing on the paradigmatic example of the perceptron architecture with binary and continuous weights, in the classification setting. I will review Gardner's approach based on the replica method and the derivation of the SAT/UNSAT transition in the storage setting. Then, I discuss some recent works that unveiled how the zero training error configurations are geometrically arranged, and how this arrangement changes as the size of the training set increases. I also illustrate how different regions of solution space can be explored analytically and how the landscape in the vicinity of a solution can be characterized. I give evidence of how, in binary weight models, algorithmic hardness is a consequence of the disappearance of a clustered region of solutions that extends to very large distances. Finally, I demonstrate how the study of linear mode connectivity between solutions can give insights into the average shape of the solution manifold.
Enrico M. Malatesta
2023-09-17T11:10:25Z
http://arxiv.org/abs/2309.09240v1
**High-dimensional manifold of solutions in neural networks: insights from statistical physics**

## Abstract

**In these pedagogic notes I review the statistical mechanics approach to neural networks, focusing on the paradigmatic example of the perceptron architecture with binary and continuous weights, in the classification setting. I will review Gardner's approach based on the replica method and the derivation of the SAT/UNSAT transition in the storage setting. Then, I discuss some recent works that unveiled how the zero training error configurations are geometrically arranged, and how this arrangement changes as the size of the training set increases. I also illustrate how different regions of solution space can be explored analytically and how the landscape in the vicinity of a solution can be characterized. I give evidence of how, in binary weight models, algorithmic hardness is a consequence of the disappearance of a clustered region of solutions that extends to very large distances. Finally, I demonstrate how the study of linear mode connectivity between solutions can give insights into the average shape of the solution manifold.**

###### Contents

* 1 Introduction
* 2 Gardner's computation
  * 2.1 Statistical mechanics representation
  * 2.2 Simple geometric properties of the solution space
  * 2.3 The typical Gardner volume
  * 2.4 Replica Method
  * 2.5 Replica-Symmetric ansatz
  * 2.6 SAT/UNSAT transition
* 3 Landscape geometry
  * 3.1 Local Entropy
  * 3.2 Algorithmic hardness
* 4 Linear mode connectivity

## 1 Introduction

Suppose we are given a dataset \(\mathcal{D}=\left\{\xi^{\mu},y^{\mu}\right\}_{\mu=1}^{P}\) composed of a set of \(P\), \(N\)-dimensional "patterns" \(\xi^{\mu}_{i}\), \(i=1,\ldots,N\) and the corresponding labels \(y^{\mu}\). The patterns can represent any type of data, for example an image, text or audio. In the binary classification setting that we will consider, \(y^{\mu}=\pm 1\); it will represent some particular property of the data that we want to be able to predict, e.g. if in the image there is a cat or a dog. The goal is to learn a function \(f:\mathbb{R}^{N}\to\pm 1\) that is able to associate to each input \(\xi\in\mathbb{R}^{N}\) the corresponding label. This function should be able to _generalize_, i.e. to _predict_ the label corresponding to a pattern not in the training set. In the following we will consider, in order to fit the training set, the so-called _perceptron_ model, as it is the simplest, yet non-trivial, one-layer neural network that we can study using statistical mechanics tools. Given an input pattern \(\xi^{\mu}\) the perceptron predicts a label \(\hat{y}^{\mu}\)

\[\hat{y}^{\mu}=\text{sign}\left(\frac{1}{\sqrt{N}}\sum_{i=1}^{N}w_{i}\,\xi^{\mu}_{i}\right) \tag{1}\]

where \(w_{i}\) are the \(N\) parameters that need to be adjusted in order to fit the training set. If the possible values of \(\mathbf{w}\) lie on the vertices of the hypercube, i.e. \(w_{i}=\pm 1\), \(\forall i\), the model is called the _binary_ perceptron. If instead the weights lie on the \(N\)-dimensional sphere of radius \(\sqrt{N}\), i.e. \(\sum_{i=1}^{N}w_{i}^{2}=\mathbf{w}\cdot\mathbf{w}=N\), the model is called the _spherical_ perceptron. As usual in statistical mechanics, we consider the case of a _synthetic_ dataset, where the patterns are formed by random i.i.d. \(N\)-dimensional Gaussian patterns \(\xi^{\mu}_{i}\sim\mathcal{N}(0,1)\), \(i=1,\ldots,N\).
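As a concrete (toy) illustration of this setup, the following sketch, which is not part of the original notes, instantiates eq. (1) on random Gaussian patterns with a spherically normalized weight vector:

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 1000, 500                        # load alpha = P / N = 0.5

xi = rng.standard_normal((P, N))        # patterns xi^mu_i ~ N(0, 1)
y = rng.choice([-1, 1], size=P)         # labels

w = rng.standard_normal(N)
w *= np.sqrt(N) / np.linalg.norm(w)     # spherical normalization: w . w = N

y_hat = np.sign(xi @ w / np.sqrt(N))    # perceptron prediction, eq. (1)
print("training accuracy of a random w:", np.mean(y_hat == y))  # ~0.5
```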
Depending on the choice of the label \(y^{\mu}\), we can define two different scenarios:

* In the so-called _teacher-student_ scenario, \(y^{\mu}\) is generated by a network, called the "teacher". The simplest case corresponds to a perceptron architecture, i.e. \[y^{\mu}=\text{sign}\left(\frac{1}{\sqrt{N}}\sum_{i=1}^{N}w_{i}^{T}\xi^{\mu}_{i}\right)\] (2) with random i.i.d. Gaussian or binary weights: \(w_{i}^{T}\sim\mathcal{N}(0,1)\) or \(w_{i}^{T}=\pm 1\) with equal probability. This model has been studied by Gardner and Derrida [1] and, in the binary case, by Gyorgyi [2].
* In real scenarios, data can sometimes be corrupted, or the underlying rule that generates the label for a given input is noisy. This sometimes makes the problem _unrealizable_, since, especially for large datasets, no student exists that is able to learn the training set. The _storage problem_ describes the case of extreme noise in the dataset: \(y^{\mu}\) is chosen to be \(\pm 1\) with equal probability. As a matter of fact, this makes the label completely independent of the input data.

In the following we will focus on the storage problem setting. Even if this problem does not present the notion of generalization error, it is still a scenario where we can try to answer very simple questions using the same statistical mechanics tools that can also be applied to teacher-student settings (see for example [3]). For example: what is the maximum value of the size of the training set \(P\) that we are able to fit? It was discovered in the pioneering work by Gardner [4, 5] and Krauth and Mezard [6] that, in both the binary and continuous settings, in the large \(P\) and \(N\) limit, with the ratio \(\alpha\equiv P/N\) fixed, there exists a _sharp_ phase transition \(\alpha_{c}\) separating a satisfiable (SAT) phase \(\alpha<\alpha_{c}\), where solutions to the problem exist, from an unsatisfiable (UNSAT) phase where the set of solutions is empty. The goal of the next Section 2 is to introduce the statistical mechanics tools through which we can answer such a question. The same tools, in turn, will provide valuable insights into how the solutions are arranged geometrically.

## 2 Gardner's computation

### Statistical mechanics representation

Every good statistical mechanics computation starts by writing the partition function of the model under consideration. This is the goal of this section. Fitting the training set means that we need to satisfy the following set of constraints

\[\Delta^{\mu}\equiv\frac{y^{\mu}}{\sqrt{N}}\sum_{i}w_{i}\xi_{i}^{\mu}\geq 0\,,\qquad\mu=1,\dots,P \tag{3}\]

indeed, if the _stability_ \(\Delta^{\mu}\) of example \(\mu\) in the training set is positive, it means that the predicted label is equal to the true one. In general, we can require the fit of the training set to be _robust_ with respect to noisy perturbations of the inputs; this can be done by requiring that the stability of each pattern is larger than a given threshold \(\kappa\geq 0\) called the _margin_

\[\Delta^{\mu}\equiv\frac{y^{\mu}}{\sqrt{N}}\sum_{i}w_{i}\xi_{i}^{\mu}\geq\kappa\,,\qquad\mu=1,\dots,P \tag{4}\]

The total amount of noise that we can inject in \(\xi^{\mu}\) without changing the corresponding label \(y^{\mu}\) depends on \(\kappa\). Notice that if \(\kappa\geq 0\) the solutions to (4) are also solutions to (3). Sometimes satisfying all the constraints (3) can be hard or even impossible, for example when the problem is not linearly separable.
In that case, one can relax the problem by counting as "satisfied" certain violated constraints: one way to do that is to impose a _negative_ margin \(\kappa\). The corresponding model is usually called the _negative perceptron_. As we will see in the next subsection, this dramatically changes the nature of the problem. We are now ready to write down the partition function, as

\[Z_{\mathcal{D}}=\int d\mu(\boldsymbol{w})\,\mathbb{X}_{\mathcal{D}}(\boldsymbol{w};\kappa) \tag{5}\]

where

\[\mathbb{X}_{\mathcal{D}}(\boldsymbol{w};\kappa)\equiv\prod_{\mu=1}^{P}\Theta\left(\frac{y^{\mu}}{\sqrt{N}}\sum_{i}w_{i}\xi_{i}^{\mu}-\kappa\right) \tag{6}\]

is an indicator function that selects a solution to the learning problem. Indeed, \(\Theta(x)\) is the Heaviside Theta function that gives \(1\) if \(x>0\) and zero otherwise. \(d\mu(\boldsymbol{w})\) is a measure over the weights that depends on the spherical/binary nature of the problem under consideration

\[d\mu(\boldsymbol{w})=\left\{\begin{array}{ll}\prod_{i=1}^{N}dw_{i}\,\delta\left(\boldsymbol{w}\cdot\boldsymbol{w}-N\right),&\text{\emph{spherical case}}\\ \prod_{i=1}^{N}dw_{i}\,\frac{1}{2^{N}}\prod_{i}\left[\delta\left(w_{i}-1\right)+\delta\left(w_{i}+1\right)\right],&\text{\emph{binary case}}\end{array}\right. \tag{7}\]

The partition function is also called the _Gardner_ volume [5], since it measures the total volume (or number, in the binary case) of networks satisfying all the constraints imposed by the training set. Notice that sending \(\xi_{i}^{\mu}\to y^{\mu}\xi_{i}^{\mu}\) does not change the probability measure over the patterns \(\xi^{\mu}\); therefore we can simply set \(y^{\mu}=1\) for all \(\mu=1,\dots,P\) without loss of generality.

### Simple geometric properties of the solution space

We want here to give a simple geometrical interpretation of the space of solutions corresponding to imposing the constraints of equation (4). By those simple arguments, we will be able to unravel the convexity/non-convexity properties of the space of solutions. Let's start from the spherical case. For simplicity consider also \(\kappa=0\). Initially, when no pattern has been presented, the whole volume of the \(N\)-dimensional sphere is a solution. When we present the first pattern \(\xi^{1}\), uniformly generated on the sphere, the allowed solutions lie on the half-sphere with a positive dot product with \(\xi^{1}\). Of course, the same would happen if we had presented any other pattern \(\xi^{\mu}\) of the training set. The intersection of those half-spheres forms the space of solutions. Since the intersection of half-spheres is a convex set, it turns out that the manifold of solutions is always convex; see the left panel of Fig. 1 for an example. Notice that this is also true if one is interested in looking at the subset of solutions having \(\kappa>0\). If one keeps adding constraints one can obtain an empty set. The minimal density of constraints for which this happens is the SAT/UNSAT transition. The middle panel of Fig. 1 refers to the \(\kappa<0\) case, i.e. the spherical negative perceptron [7]. In this problem the space of solutions can be obtained by _removing_ from the sphere a convex spherical cap, one for each pattern (blue and red regions). As a result the manifold of solutions is non-convex and, if one adds too many constraints, the space of solutions can also become _disconnected_, before the SAT/UNSAT transition. In the right panel of Fig. 1 we show the binary case.
Since \(w_{i}=\pm 1\), the hypercube (in red) is inscribed in the sphere in \(N\) dimensions, i.e. the vertices of the hypercube are contained in the space of solutions of the corresponding spherical problem (green region). As can be readily seen from the example in the figure, the binary weight case is a non-convex problem (even for \(\kappa\geq 0\)), since in order to go from one solution to another, it can happen that one has to pass through a set of vertices not in the solution space.

Figure 1: **Left and middle panels**: space of solutions (green shaded area) in the spherical perceptron with \(\kappa=0\) (left panel) and \(\kappa<0\) (middle). The green shaded area is obtained by taking the intersection of two spherical caps corresponding to patterns \(\xi^{1}\) and \(\xi^{2}\). **Right panel**: binary weight case. The cube (red) is inscribed in the sphere. The green shaded area represents the (convex) space of solutions of the corresponding spherical problem. In this simple example only two vertices of the cube are inside the green region.

### The typical Gardner volume

The Gardner volume \(Z_{\mathcal{D}}\) introduced in equation (5) is a random variable, since it explicitly depends on the dataset. The goal of statistical mechanics is to characterize the _typical_, i.e. the most probable, value of this random quantity. A first guess would be to compute the averaged volume \(\langle Z_{\mathcal{D}}\rangle_{\mathcal{D}}\). However, since (5) involves a product of many random contributions, the most probable value of \(Z_{\mathcal{D}}\) and its average are expected to not coincide (in the spin glass literature one says that \(Z_{\mathcal{D}}\) is _not a self-averaging_ quantity for large \(N\)). On the other hand, the log of the product of independent random variables is equivalent to a large sum of independent terms, which, because of the central limit theorem, is Gaussian distributed; in that case we expect that the most probable value coincides with the average. Therefore we expect that for large \(N\)

\[Z_{\mathcal{D}}\sim e^{N\phi} \tag{8}\]

where

\[\phi=\lim_{N\to\infty}\frac{1}{N}\langle\ln Z_{\mathcal{D}}\rangle_{\mathcal{D}}\,. \tag{9}\]

is the averaged log-volume. Since we are at zero training error, \(\phi\) coincides with the _entropy_ of solutions. Performing the average over the log is usually referred to as a _quenched_ average in spin glass theory, to distinguish it from the log of the average, which is instead called _annealed_. Annealed averages are much easier than quenched ones; even if they do not give information about the typical value of a random variable, they can still be useful, since they give an upper bound to the quenched entropy. Indeed, due to Jensen's inequality

\[\phi\leq\phi_{ann}=\lim_{N\to\infty}\frac{1}{N}\ln\langle Z_{\mathcal{D}}\rangle_{\mathcal{D}}\,. \tag{10}\]

### Replica Method

In order to compute the average of the log we use the _replica trick_

\[\langle\ln Z_{\mathcal{D}}\rangle_{\mathcal{D}}=\lim_{n\to 0}\frac{\langle Z_{\mathcal{D}}^{n}\rangle_{\mathcal{D}}-1}{n}=\lim_{n\to 0}\frac{1}{n}\ln\langle Z_{\mathcal{D}}^{n}\rangle_{\mathcal{D}}\,. \tag{11}\]

The _replica method_ consists in performing the average over the disorder of \(Z_{\mathcal{D}}^{n}\) considering \(n\) integer (a much easier task with respect to averaging the log of \(Z_{\mathcal{D}}\)), and then performing an analytic continuation of the result to \(n\to 0\) [8]. It was first used in spin glass models such as the Sherrington-Kirkpatrick model [9] and then applied by E.
Gardner to neural networks [10, 5]. Replicating the partition function and introducing an auxiliary variable \(v_{a}^{\mu}\equiv\frac{1}{\sqrt{N}}\sum_{i}w_{i}^{a}\xi_{i}^{\mu}\) using a Dirac delta function, we have

\[\begin{split} Z_{\mathcal{D}}^{n}&=\int\prod_{a=1}^{n}d\mu(w^{a})\prod_{\mu a}\Theta\left(\frac{1}{\sqrt{N}}\sum_{i}w_{i}^{a}\xi_{i}^{\mu}-\kappa\right)\\ &=\int\prod_{a\mu}\frac{dv_{a}^{\mu}d\hat{v}_{a}^{\mu}}{2\pi}\prod_{\mu a}\Theta\left(v_{a}^{\mu}-\kappa\right)e^{i\sum_{\mu a}v_{a}^{\mu}\hat{v}_{a}^{\mu}}\int\prod_{a=1}^{n}d\mu(w^{a})\,e^{-i\sum_{\mu,a}\hat{v}_{a}^{\mu}\frac{1}{\sqrt{N}}\sum_{i}w_{i}^{a}\xi_{i}^{\mu}}\end{split} \tag{12}\]

where we have used the integral representation of the Dirac delta function

\[\delta(v)=\int\frac{d\hat{v}}{2\pi}e^{iv\hat{v}}\,. \tag{13}\]

Now we can perform the average over the patterns in the limit of large \(N\), obtaining

\[\begin{split}\prod_{\mu i}\left\langle e^{-i\frac{\xi_{i}^{\mu}}{\sqrt{N}}\sum_{a}w_{i}^{a}\hat{v}_{a}^{\mu}}\right\rangle_{\xi_{i}^{\mu}}&=\prod_{\mu i}e^{-\frac{1}{2N}\left(\sum_{a}w_{i}^{a}\hat{v}_{a}^{\mu}\right)^{2}}\\ &=\prod_{\mu}e^{-\sum_{a<b}\hat{v}_{a}^{\mu}\hat{v}_{b}^{\mu}\left(\frac{1}{N}\sum_{i}w_{i}^{a}w_{i}^{b}\right)-\frac{1}{2}\sum_{a}\left(\hat{v}_{a}^{\mu}\right)^{2}}\,.\end{split} \tag{14}\]

Next we can enforce the definition of the \(n\times n\) matrix of order parameters

\[q_{ab}\equiv\frac{1}{N}\sum_{i=1}^{N}w_{i}^{a}w_{i}^{b} \tag{15}\]

by using a delta function and its integral representation. \(q_{ab}\) represents the typical overlap between two replicas \(a\) and \(b\), sampled from the Gibbs measure corresponding to (5) and having the same realization of the training set. Due to the binary and spherical normalization of the weights this quantity is bounded, \(-1\leq q_{ab}\leq 1\). We have

\[\begin{split}\left\langle Z_{\mathcal{D}}^{n}\right\rangle_{\mathcal{D}}&=\int\prod_{a<b}\frac{dq_{ab}d\hat{q}_{ab}}{2\pi/N}e^{-N\sum_{a<b}q_{ab}\hat{q}_{ab}}\int\prod_{a}d\mu(w^{a})e^{\sum_{a<b}\hat{q}_{ab}\sum_{i}w_{i}^{a}w_{i}^{b}}\\ &\times\int\prod_{a\mu}\frac{dv_{a}^{\mu}d\hat{v}_{a}^{\mu}}{2\pi}\prod_{\mu a}\Theta\left(v_{a}^{\mu}-\kappa\right)e^{i\sum_{\mu a}v_{a}^{\mu}\hat{v}_{a}^{\mu}-\frac{1}{2}\sum_{\mu a}\left(\hat{v}_{a}^{\mu}\right)^{2}-\sum_{a<b,\mu}q_{ab}\hat{v}_{a}^{\mu}\hat{v}_{b}^{\mu}}\,.\end{split} \tag{16}\]

The next step of every replica computation is to notice that the terms depending on the patterns \(\mu=1,\dots,P\) and on the index of the weights \(i=1,\dots,N\) have been decoupled (initially they were not), at the price of coupling the replicas (which initially were uncoupled). So far the computation was the same independently of the nature of the weights. From now on, however, the computation differs, since we need to make explicit the form of the measure \(d\mu(\boldsymbol{w})\) in order to factorize the expression over the index \(i\). In the next paragraph we therefore focus on the binary case first, moving then to the spherical case.
**Binary case.** Setting \(P=\alpha N\), in the binary weight case we reach the following integral representation of the averaged replicated partition function

\[\left\langle Z_{\mathcal{D}}^{n}\right\rangle_{\mathcal{D}}\propto\int\prod_{a<b}\frac{dq_{ab}d\hat{q}_{ab}}{2\pi}\,e^{NS(q,\hat{q})} \tag{17}\]

where we have defined

\[S^{\text{bin}}(q,\hat{q})=G_{S}^{\text{bin}}(q,\hat{q})+\alpha G_{E}(q) \tag{18a}\]
\[G_{S}^{\text{bin}}=-\frac{1}{2}\sum_{a\neq b}q_{ab}\hat{q}_{ab}+\ln\sum_{\{w^{a}\}}e^{\frac{1}{2}\sum_{a\neq b}\hat{q}_{ab}w^{a}w^{b}} \tag{18b}\]
\[G_{E}=\ln\int\prod_{a}\frac{dv_{a}d\hat{v}_{a}}{2\pi}\prod_{a}\Theta(v_{a}-\kappa)e^{i\sum_{a}v_{a}\hat{v}_{a}-\frac{1}{2}\sum_{ab}q_{ab}\hat{v}_{a}\hat{v}_{b}}\,. \tag{18c}\]

\(G_{S}\) is the so-called "entropic" term, since it represents the logarithm of the volume at \(\alpha=0\), where there are no constraints induced by the training set. \(G_{E}\) is instead called the "energetic" term and it represents the logarithm of the fraction of solutions. In the energetic term we have also used the fact that, because of equation (15), \(q_{aa}=1\).

**Spherical case.** In the spherical case, one needs to do a little more work in order to decouple the \(i\) index in the spherical constraints of equation (7):

\[\begin{split}\int\prod_{a}d\mu(w^{a})\,e^{\sum_{a<b}\hat{q}_{ab}\sum_{i}w_{i}^{a}w_{i}^{b}}&\propto\int\prod_{a}\frac{d\hat{q}_{aa}}{2\pi}\int\prod_{ai}dw_{i}^{a}\,e^{-\frac{N}{2}\sum_{a}\hat{q}_{aa}+\frac{1}{2}\sum_{ab}\hat{q}_{ab}\sum_{i}w_{i}^{a}w_{i}^{b}}\\ &=\int\prod_{a}\frac{d\hat{q}_{aa}}{2\pi}e^{-\frac{N}{2}\sum_{a}\hat{q}_{aa}}\left[\int\prod_{a}dw^{a}\,e^{\frac{1}{2}\sum_{ab}\hat{q}_{ab}w^{a}w^{b}}\right]^{N}\end{split} \tag{19}\]

Therefore the replicated averaged partition function in the spherical case is equal to

\[\left\langle Z_{\mathcal{D}}^{n}\right\rangle_{\mathcal{D}}\propto\int\prod_{a<b}\frac{dq_{ab}d\hat{q}_{ab}}{2\pi}\int\frac{d\hat{q}_{aa}}{2\pi}\,e^{NS(q,\hat{q})} \tag{20}\]

where we have defined

\[S^{\mathrm{sph}}(q,\hat{q})=G_{S}^{\mathrm{sph}}(\hat{q})+\alpha G_{E}(q) \tag{21a}\]
\[\begin{split}G_{S}^{\mathrm{sph}}&=-\frac{1}{2}\sum_{ab}q_{ab}\hat{q}_{ab}+\ln\int\prod_{a}dw^{a}\,e^{\frac{1}{2}\sum_{ab}\hat{q}_{ab}w^{a}w^{b}}\\ &=-\frac{1}{2}\sum_{ab}q_{ab}\hat{q}_{ab}-\frac{1}{2}\ln\det(-\hat{q})\end{split} \tag{21b}\]

and we have used again the definition \(q_{aa}=1\). Notice how only the entropic term changes with respect to the binary case, whereas the energetic term is the same defined in (18c).

**Saddle points.** In either case, be the model binary or spherical, the corresponding expressions can be evaluated by using a saddle point approximation, since we are interested in a regime where \(N\) is large. The saddle point has to be found by finding two \(n\times n\) matrices \(q_{ab}\) and \(\hat{q}_{ab}\) that maximize the action \(S\). This gives access to the entropy \(\phi\) of solutions

\[\phi=\lim_{N\to\infty}\frac{1}{N}\langle\ln Z_{\mathcal{D}}\rangle_{\mathcal{D}}=\lim_{n\to 0}\frac{1}{n}\max_{\{q,\hat{q}\}}S(q,\hat{q})\,. \tag{22}\]

### Replica-Symmetric ansatz

Finding the solution to the maximization procedure is not a trivial task. Therefore one proceeds by formulating a simple parameterization, or _ansatz_, on the structure of the saddle points. The simplest ansatz one can formulate is the Replica-Symmetric (RS) one.
Considering the binary case, this reads

\[q_{ab}=\delta_{ab}+q(1-\delta_{ab})\,, \tag{23a}\]
\[\hat{q}_{ab}=\hat{q}(1-\delta_{ab}) \tag{23b}\]

In the spherical case, instead, one has (\(q_{ab}\) is the same as in the binary case)

\[\hat{q}_{ab}=-\hat{Q}\delta_{ab}+\hat{q}(1-\delta_{ab}) \tag{24}\]

The entropy in both cases can be written as

\[\phi=\mathcal{G}_{S}+\alpha\mathcal{G}_{E} \tag{25}\]

where we remind that the entropic term depends on the binary or spherical nature of the weights:

\[\mathcal{G}_{S}^{\mathrm{bin}}\equiv\lim_{n\to 0}\frac{G_{S}^{\mathrm{bin}}}{n}=-\frac{\hat{q}}{2}(1-q)+\int Dx\ln 2\cosh(\sqrt{\hat{q}}x)\,, \tag{26a}\]
\[\mathcal{G}_{S}^{\mathrm{sph}}\equiv\lim_{n\to 0}\frac{G_{S}^{\mathrm{sph}}}{n}=\frac{1}{2}\hat{Q}+\frac{q\hat{q}}{2}+\frac{1}{2}\ln\frac{2\pi}{\hat{Q}+\hat{q}}+\frac{1}{2}\frac{\hat{q}}{\hat{Q}+\hat{q}}\,. \tag{26b}\]

The energetic term is instead common to both models

\[\mathcal{G}_{E}\equiv\lim_{n\to 0}\frac{G_{E}}{n}=\int Dx\ln H\left(\frac{\kappa+\sqrt{q}x}{\sqrt{1-q}}\right)\,. \tag{27}\]

In the previous equations we denoted by \(Dx\equiv\frac{dx}{\sqrt{2\pi}}e^{-x^{2}/2}\) the standard normal measure and we have defined the function

\[H(x)\equiv\int_{x}^{\infty}Dy=\frac{1}{2}\text{erfc}\left(\frac{x}{\sqrt{2}}\right) \tag{28}\]

The values of the order parameters \(\hat{q}\), \(\hat{Q}\) and \(q\) for the spherical case, and \(\hat{q}\), \(q\) for the binary case, can be obtained by differentiating the entropy and equating it to zero.

**Saddle point equations: spherical perceptron.** The saddle point equations for the spherical case read

\[q=\frac{\hat{q}}{(\hat{Q}+\hat{q})^{2}} \tag{29a}\]
\[1=\frac{\hat{Q}+2\hat{q}}{(\hat{Q}+\hat{q})^{2}} \tag{29b}\]
\[\hat{q}=-\alpha\frac{\partial\mathcal{G}_{E}}{\partial q}=\frac{\alpha}{1-q}\int Dx\left[GH\left(\frac{\kappa+\sqrt{q}x}{\sqrt{1-q}}\right)\right]^{2} \tag{29c}\]

where we have defined the function \(GH(x)\equiv\frac{G(x)}{H(x)}\), \(G(x)\) being the standard normal density. The saddle point equations can be simplified by explicitly expressing \(\hat{q}\) and \(\hat{Q}\) in terms of \(q\). Indeed, equations (29a) and (29b) are simple algebraic expressions for \(\hat{q}\) and \(\hat{Q}\) in terms of \(q\); they can be explicitly inverted as \(\hat{q}=\frac{q}{(1-q)^{2}}\), \(\hat{Q}=(1-2q)/(1-q)^{2}\). Inserting the expression of \(\hat{q}\) inside (29c) we get an equation for \(q\) only

\[q=\alpha(1-q)\int Dx\left[GH\left(\frac{\kappa+\sqrt{q}x}{\sqrt{1-q}}\right)\right]^{2}\,, \tag{30}\]

**Saddle point equations: binary perceptron.** In the binary case only the equation involving derivatives of the entropic term changes. The saddle point equations are therefore

\[q=\int Dx\,\tanh^{2}\left(\sqrt{\hat{q}}x\right) \tag{31a}\]
\[\hat{q}=-\alpha\frac{\partial\mathcal{G}_{E}}{\partial q}=\frac{\alpha}{1-q}\int Dx\left[GH\left(\frac{\kappa+\sqrt{q}x}{\sqrt{1-q}}\right)\right]^{2} \tag{31b}\]

The saddle point equations (equation (30) for the spherical and (31) for the binary case) can be easily solved numerically by simple recursion, with \(\alpha\) and \(\kappa\) being two external parameters.
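As an illustration of the "simple recursion" just mentioned, the following sketch (not part of the original notes; the integration cutoffs and damping are arbitrary numerical choices) solves the spherical saddle point equation (30) by damped fixed-point iteration:

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def GH(x):
    # GH(x) = G(x) / H(x), with G the Gaussian density and H(x) = int_x^inf Dy
    return norm.pdf(x) / norm.sf(x)

def solve_q(alpha, kappa=0.0, q0=0.5, tol=1e-10, max_iter=5000):
    q = q0
    for _ in range(max_iter):
        f = lambda x: norm.pdf(x) * GH((kappa + np.sqrt(q) * x) / np.sqrt(1 - q)) ** 2
        q_new = alpha * (1 - q) * quad(f, -10, 10)[0]   # rhs of eq. (30)
        if abs(q_new - q) < tol:
            break
        q = 0.5 * q + 0.5 * q_new                       # damping for stability
    return q

# typical overlap q (and distance d = (1 - q) / 2) between sampled solutions
for alpha in (0.5, 1.0, 1.5):
    q = solve_q(alpha, kappa=0.0)
    print(f"alpha = {alpha}: q = {q:.4f}, d = {(1 - q) / 2:.4f}")
```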
It is important to notice that the value of the order parameter \(q\) has a physical meaning. Indeed, one can show that \(q\) is the _typical_ (i.e. the most probable) overlap between two solutions \(\boldsymbol{w}^{1}\) and \(\boldsymbol{w}^{2}\) extracted from the Gibbs measure (6), i.e.

\[q=\left\langle\frac{\int d\mu(\boldsymbol{w}^{1})\,d\mu(\boldsymbol{w}^{2})\left(\frac{1}{N}\sum_{i}w_{i}^{1}w_{i}^{2}\right)\mathbb{X}_{\mathcal{D}}(\boldsymbol{w}^{1};\kappa)\mathbb{X}_{\mathcal{D}}(\boldsymbol{w}^{2};\kappa)}{Z_{\mathcal{D}}^{2}}\right\rangle_{\mathcal{D}} \tag{32}\]

Therefore solving the saddle point equations gives access to interesting geometrical information: it tells us how distant the solutions extracted from the Gibbs measure (6) are from each other. The distance between solutions can be obtained from the overlap using the relation

\[d=\frac{1-q}{2}\in[0,1] \tag{33}\]

In the binary setting this definition coincides with the _Hamming distance_ between \(\mathbf{w}^{1}\) and \(\mathbf{w}^{2}\), since in that case \(d\) is equal to the fraction of indexes \(i\) at which the corresponding \(w^{1}_{i}\) and \(w^{2}_{i}\) are different. The distance between solutions extracted from the Gibbs measure with a given margin \(\kappa\) is shown as a function of \(\alpha\) in Fig. 2 for the spherical (left panel) and the binary (right panel) case. In both cases one can clearly see that the distance is monotonically decreasing with \(\alpha\); there also exists a value of \(\alpha\) for which the distance goes to zero. In addition, it is interesting to notice that, fixing the value of \(\alpha\), the distance decreases as the margin is increased, meaning that more robust solutions are located in a smaller region of the solution space [11].

### SAT/UNSAT transition

Once the saddle point equations are solved numerically, we can compute the value of the entropy (i.e. the total number/volume of solutions) using (25) for a given \(\alpha\) and margin \(\kappa\). The entropy is plotted in the left panel of Fig. 3 and Fig. 4 for the binary and spherical cases respectively. It is interesting to notice that in both models, for a fixed value of \(\kappa\), there is a critical value of \(\alpha\) such that the typical distance between solutions goes to zero, i.e. the solution space shrinks to a point as we approach it. The corresponding entropy diverges to \(-\infty\) at this value of \(\alpha\). This defines what we have called the SAT/UNSAT transition \(\alpha_{c}(\kappa)\) in the spherical case. This _cannot_ be the true value of \(\alpha_{c}(\kappa)\) in the binary case: in binary models the entropy cannot be negative, since we are _counting_ solutions (not measuring volumes as in the spherical counterpart). This means that the analytical results obtained are wrong whenever \(\phi<0\); for this reason in Figs. 2 and 3 the unphysical parts of the curves are dashed. Where does the computation go wrong in the binary case? As shown by [6], it is the RS ansatz that, although stable, is wrong; at variance with the spherical case for \(\kappa\geq 0\), in the binary case the space of solutions is non-convex and can therefore be disconnected; in those cases it is well known that the RS ansatz might fail. As shown by Krauth and Mezard [6] (see also [3] for a nice discussion), by using a _one-step replica symmetry breaking_ (1RSB) ansatz [12], in order to compute the SAT/UNSAT transition we should compute the value of \(\alpha\) for which the _RS entropy_ vanishes

\[\phi^{\text{\tiny RS}}(\alpha_{c})=0\,. \tag{34}\]

This is known as the _zero entropy condition_; at this value of \(\alpha_{c}\) the distance between solutions does not go to \(0\): for \(\kappa=0\), for example, \(d\simeq 0.218\).
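The zero entropy condition can likewise be checked numerically; the following sketch (again not part of the original notes; tolerances and iteration counts are arbitrary) iterates the binary saddle point equations (31), evaluates the RS entropy (25), and bisects on \(\alpha\):

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

gauss_avg = lambda f: quad(lambda x: norm.pdf(x) * f(x), -10, 10)[0]
GH = lambda x: norm.pdf(x) / norm.sf(x)

def solve_saddle(alpha, kappa=0.0, iters=200):
    q = 0.5
    for _ in range(iters):
        qh = alpha / (1 - q) * gauss_avg(
            lambda x: GH((kappa + np.sqrt(q) * x) / np.sqrt(1 - q)) ** 2)
        q_new = gauss_avg(lambda x: np.tanh(np.sqrt(qh) * x) ** 2)
        q = 0.5 * q + 0.5 * q_new            # damped update of eqs. (31)
    return q, qh

def rs_entropy(alpha, kappa=0.0):
    q, qh = solve_saddle(alpha, kappa)
    gs = -0.5 * qh * (1 - q) + gauss_avg(lambda x: np.log(2 * np.cosh(np.sqrt(qh) * x)))
    ge = gauss_avg(lambda x: np.log(norm.sf((kappa + np.sqrt(q) * x) / np.sqrt(1 - q))))
    return gs + alpha * ge                   # phi = G_S + alpha * G_E, eq. (25)

lo, hi = 0.5, 1.0                            # phi > 0 at lo, phi < 0 at hi
for _ in range(30):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if rs_entropy(mid) > 0 else (lo, mid)
print(f"alpha_c ~ {0.5 * (lo + hi):.3f}")    # Krauth-Mezard: alpha_c = 0.833...
```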
In the right panel of Fig. 3 we plot \(\alpha_{c}\) as a function of \(\kappa\). In particular, for \(\kappa=0\) the value \(\alpha_{c}=0.833\dots\) can be obtained numerically. This value is still not rigorously proved, although in recent years it was proved [13] that the value obtained by the replica formula is a lower bound with positive probability.

Figure 2: Distance between solutions sampled with a given margin \(\kappa\) as a function of the constraint density \(\alpha\) for the spherical case (**left panel**) and the binary perceptron (**right panel**). In the binary case, the lines change from solid to dashed when the entropy of solutions becomes negative.

In the spherical perceptron with \(\kappa\geq 0\) the space of solutions is convex and the RS ansatz gives correct results. The critical capacity can therefore be obtained by sending \(q\to 1\). In order to get an explicit expression that can be evaluated numerically, one has to perform the limit explicitly, so a little bit of work still has to be done. The entropy is, in both cases, a monotonically decreasing function of \(\alpha\). This should be expected: when we increase the number of constraints, we should expect that the solution space shrinks. Moreover, for a fixed value of \(\alpha\) the entropy is a monotonically decreasing function of the margin: this means that solutions with larger margin are _exponentially_ fewer in \(N\) (recall (8)).

**\(q\to 1\) limit in the spherical perceptron.** In order to perform the limit analytically [4, 14] it is convenient to use the following change of variables in the entropy

\[q=1-\delta q \tag{35}\]

and then send \(\delta q\to 0\). We now insert this into the RS energetic term (27); using the fact that \(\ln H(x)\simeq-\frac{1}{2}\ln(2\pi)-\ln x-\frac{x^{2}}{2}\) as \(x\to\infty\), and retaining only the diverging terms, we get

\[\int Dx\ln H\left(\frac{\kappa+\sqrt{q}x}{\sqrt{1-q}}\right)\simeq\int_{-\kappa}^{+\infty}Dx\left[\frac{1}{2}\ln\delta q-\frac{(\kappa+x)^{2}}{2\delta q}\right]=\frac{1}{2}\ln(\delta q)\,H\left(-\kappa\right)-\frac{B(\kappa)}{2\delta q} \tag{36}\]

with

\[B(\kappa)=\int_{-\kappa}^{\infty}Dz_{0}(\kappa+z_{0})^{2}=\kappa G(\kappa)+\left(\kappa^{2}+1\right)H\left(-\kappa\right)\,. \tag{37}\]

The free entropy is therefore

\[\phi=\frac{1}{2\delta q}+\frac{1}{2}\ln\delta q+\frac{\alpha}{2}\left(\ln(\delta q)\,H\left(-\kappa\right)-\frac{B(\kappa)}{\delta q}\right) \tag{38}\]

The derivative with respect to \(\delta q\) gives the saddle point condition that \(\delta q\) itself must satisfy

\[2\frac{\partial\,\phi}{\partial\delta q}=\frac{1}{\delta q}-\frac{1}{\delta q^{2}}+\alpha\left(\frac{H\left(-\kappa\right)}{\delta q}+\frac{B(\kappa)}{\delta q^{2}}\right)=0\,. \tag{39}\]

Figure 3: **Binary perceptron. Left: RS entropy as a function of \(\alpha\) for several values of the margin \(\kappa\). Dashed lines show the nonphysical parts of the curves, where the entropy is negative. The value of \(\alpha\) where the entropy vanishes corresponds to the SAT/UNSAT transition \(\alpha_{c}(\kappa)\). Right: SAT/UNSAT transition \(\alpha_{c}(\kappa)\) obtained using the zero entropy condition (34).**

When we send \(\delta q\to 0\), for a fixed value of \(\kappa\), we can impose that we are near the critical capacity, \(\alpha=\alpha_{c}-\delta\alpha\) and \(\delta q=C\delta\alpha\).
We get

\[\begin{split} 2\frac{\partial\phi}{\partial\delta q}&=\frac{1}{C\delta\alpha}-\frac{1}{C^{2}\delta\alpha^{2}}+(\alpha_{c}-\delta\alpha)\left[\frac{H(-\kappa)}{C\delta\alpha}+\frac{B(\kappa)}{C^{2}\delta\alpha^{2}}\right]\\ &=\frac{1}{C\delta\alpha}\left[1+\alpha_{c}H(-\kappa)-\frac{B(\kappa)}{C}\right]+\frac{1}{C^{2}\delta\alpha^{2}}[\alpha_{c}B(\kappa)-1]=0\,.\end{split} \tag{40}\]

The first term gives the scaling of \(\delta q\), the second gives us the critical capacity in terms of the margin [5, 10]:

\[\alpha_{c}(\kappa)=\frac{1}{B(\kappa)}=\frac{1}{\kappa G\left(\kappa\right)+(\kappa^{2}+1)H\left(-\kappa\right)} \tag{41}\]

Notice that \(\alpha_{c}=\frac{1}{B(\kappa)}\) is equivalent to imposing that the divergence \(1/\delta q\) in the free entropy (38) is eliminated at the critical capacity (so that it correctly goes to \(-\infty\) in that limit). In particular for \(\kappa=0\) we get

\[\alpha_{c}(\kappa=0)=2\,, \tag{42}\]

a result that was derived rigorously by Cover in 1965 [15]. In the right panel of Fig. 4 we plot \(\alpha_{c}\) as a function of \(\kappa\), obtained from (41). It is important to mention that, since in the case \(\kappa\leq 0\) the model becomes non-convex, the RS estimate of the critical capacity (41) gives incorrect results, even if it is an upper bound to the true value. We refer to [16] for a rigorous upper bound to the critical capacity and to [14] for the evaluation of \(\alpha_{c}(\kappa)\) based on a 1RSB ansatz. However, the correct result should be obtained by performing a full-RSB ansatz [7, 12].

## 3 Landscape geometry

One of the most important open puzzles in deep learning is to understand what the error and loss landscapes look like [17], especially as a function of the number of parameters, and how the shape of the learning landscape impacts the learning dynamics. So far there has been growing empirical evidence that, especially in the _overparameterized regime_, where the number of parameters is much larger than the size of the training set, the landscape presents a region at low values of the loss with a large number of flat directions. For example, studies of the spectrum of the Hessian [18, 19] at local minima or saddles found by Stochastic Gradient Descent (SGD) optimization show the presence of a large number of zero and near-zero eigenvalues. This region seems to be attractive for gradient-based algorithms: the authors of [20] show that different runs of SGD end in the same basin by explicitly finding a path connecting them; in [21, 22] it was shown that very likely even the whole straight path between two different solutions lies at zero training error, i.e. they are _linear mode connected_. Also, empirical evidence of an implicit bias of SGD towards flat regions has been shown in [23]. Studies of the limiting dynamics of SGD [24, 25] unveil the presence of a diffusive behaviour in weight space, once the training set has been fitted.

Figure 4: **Spherical perceptron. Left: RS entropy as a function of \(\alpha\) for several values of the margin \(\kappa\). The point in \(\alpha\) where the entropy goes to \(-\infty\) (indicated by the dashed vertical lines) corresponds to the SAT/UNSAT transition \(\alpha_{c}(\kappa)\). Right: RS SAT/UNSAT transition \(\alpha_{c}(\kappa)\) as given by (41). For \(\kappa<0\) the line is dashed, as a reminder that the RS prediction is only an upper bound to the true one, since the model becomes non-convex.**
Simple 2D visualizations of the loss landscape [26] provided insights into how commonly employed machine learning techniques for improving generalization also tend to smooth the landscape. One of the most recent algorithms, Sharpness-Aware Minimization (SAM) [27], explicitly designed to target flat regions within the loss landscape, consistently demonstrates improved generalization across a broad spectrum of datasets. In [28] it was suggested numerically that, moving from the over- to the underparameterized regime, gradient-based dynamics suddenly becomes glassy. This observation raises the intriguing possibility of a phase transition occurring between these two regimes. In this section we review some simple results obtained on one-layer models concerning the characterization of the flatness of different classes of solutions. We will consider the paradigmatic case of the binary perceptron, but, as we will see, the definitions of the quantities are general, and can also be used in the case of continuous weights. We will see that in the overparameterized regime (i.e. \(N\gg P\)) solutions located in a very wide and flat region exist. However, as the number of constraints is increased, this region starts to shrink and at a certain point it breaks down into multiple pieces. This, in binary models, is responsible for algorithmic hardness and glassy dynamics.

### Local Entropy

In order to quantify the flatness of a given solution, several measures can be employed. One such obvious measure is the spectrum of the Hessian; however, this quantity is not trivial to study analytically. Here we will employ the so-called _local entropy_ [29] measure. Given a configuration \(\tilde{\mathbf{w}}\) (in general it may not be a solution), its local entropy is defined as

\[\mathcal{S}_{\mathcal{D}}(\tilde{\mathbf{w}},d;\kappa)=\frac{1}{N}\ln\mathcal{N}_{\mathcal{D}}(\tilde{\mathbf{w}},d;\kappa) \tag{43}\]

where \(\mathcal{N}_{\mathcal{D}}(\tilde{\mathbf{w}},d;\kappa)\) is a _local Gardner volume_

\[\mathcal{N}_{\mathcal{D}}(\tilde{\mathbf{w}},d;\kappa)\equiv\int d\mu(\mathbf{w})\,\mathbb{X}_{\mathcal{D}}(\mathbf{w};\kappa)\,\delta(N(1-2d)-\mathbf{w}\cdot\tilde{\mathbf{w}}) \tag{44}\]

which counts (in the spherical case, measures the volume of) the solutions having margin at least \(\kappa\) at a given distance \(d\) from \(\tilde{\mathbf{w}}\). In the binary perceptron, we will set for simplicity \(\kappa=0\) in the above quantity, since for \(\kappa=0\) the problem is already non-convex. In (44) we have used the relation between overlap and distance of (33) to impose a hard constraint between \(\tilde{\mathbf{w}}\) and \(\mathbf{w}\). For any distance, the local entropy is bounded from above by the _total_ number of _configurations_ at distance \(d\). This value, which we will call \(\mathcal{S}_{\text{max}}\), is attained at \(\alpha=0\), i.e. when we do not impose any constraints. Moreover, since every point on the sphere is equivalent, \(\mathcal{S}_{\text{max}}\) does not depend on \(\tilde{\mathbf{w}}\) and \(\kappa\), and it reads (in the spherical case an equivalent analytical formula can be derived, see [14])

\[\mathcal{S}_{\text{max}}(d)=-d\ln d-(1-d)\ln(1-d) \tag{45}\]

which is of course always non-negative. In full generality, we are interested in evaluating the local entropy of solutions \(\tilde{\mathbf{w}}\) that have margin \(\tilde{\kappa}\) and that are sampled from a probability distribution \(P_{\mathcal{D}}(\tilde{\mathbf{w}};\tilde{\kappa})\).
We are interested in computing the typical local entropy of this class of solutions, which is obtained by averaging \(\mathcal{S}_{\mathcal{D}}\) over \(P_{\mathcal{D}}\) and over the dataset, i.e.

\[\phi_{\text{FP}}(d;\tilde{\kappa},\kappa)=\left\langle\int d\mu(\tilde{\mathbf{w}})P_{\mathcal{D}}(\tilde{\mathbf{w}};\tilde{\kappa})\,\mathcal{S}_{\mathcal{D}}(\tilde{\mathbf{w}},d;\kappa)\right\rangle_{\mathcal{D}} \tag{46}\]

This "averaged local entropy" is usually called the _Franz-Parisi entropy_ in the context of mean field spin glasses [30]. In the following we will consider \(P_{\mathcal{D}}\) as the flat measure over solutions with margin \(\tilde{\kappa}\)

\[P_{\mathcal{D}}(\tilde{\mathbf{w}};\tilde{\kappa})=\frac{\mathbb{X}_{\mathcal{D}}(\tilde{\mathbf{w}},\tilde{\kappa})}{\int d\mu(\tilde{\mathbf{w}})\mathbb{X}_{\mathcal{D}}(\tilde{\mathbf{w}},\tilde{\kappa})} \tag{47}\]

The first analytical computation of (46) was performed by Huang and Kabashima [31] in the binary perceptron by using the replica method (using steps similar to the ones done in Section 2.4, even if they are much more involved). They considered the case of typical solutions, i.e. \(\tilde{\kappa}=0\). A plot of \(\phi_{\text{FP}}\) as a function of distance is shown in the left panel of Fig. 5 for several values of \(\alpha\). There exists a neighborhood of distances \(d\in[0,d_{\text{min}}]\) around \(\tilde{\mathbf{w}}\) for which the local entropy is negative, which is unphysical in the binary case. The authors of [31] also showed analytically that \(\phi_{\text{FP}}(d=0)=0\) (this should be expected: if you are at zero distance from \(\tilde{\mathbf{w}}\), the only solution to be counted is \(\tilde{\mathbf{w}}\) itself) and that for \(d\to 0\)

\[\frac{\partial\,\phi_{\text{FP}}}{\partial\,d}=\alpha Cd^{-1/2}+O(\ln d) \tag{48}\]

where \(C\) is a _negative_ constant. This tells us that \(d_{\text{min}}>0\) _for any \(\alpha>0\)_. This suggests that typical solutions are always _isolated_, meaning that there is a value \(d_{\text{min}}\) below which no solution can be found, no matter what the value of the constraint density is. In principle, the fact that the local entropy is negative suggests that only a _subextensive_ number of solutions can be found below \(d_{\text{min}}\); Abbe and Sly [32] have also proved that in a slight variation of the model (the so-called _symmetric binary perceptron_) actually _no_ solution can be found, with probability one, within a distance \(d_{\text{min}}\) (see also [33]). Notice that, since the overlap is normalized by \(N\) as in (32), in order to go from \(\tilde{\mathbf{w}}\) to the closest solution one should flip an _extensive_ number of weights. The plot of \(d_{\text{min}}\) is shown, as a function of \(\alpha\), in the right panel of Fig. 5.

The picture, however, is far from being complete. This kind of landscape with point-like solutions suggests that finding such solutions should be a hard optimization problem. Indeed, from the rigorous point of view, similar hardness properties have been shown to exist if the problem at hand verifies the so-called _Overlap Gap Property_ (OGP) [34, 35]. An optimization problem possesses the OGP if, picking _any_ two solutions, the overlap distribution between them exhibits a gap, i.e. they can be either close to or far away from each other, but can't be in some interval in between.
This, however, is contrary to the numerical evidence given by simple algorithms such as the ones based on message passing [36, 37] or gradient-based methods [14, 38], which find solutions easily. In [29] it was shown that non-isolated solutions indeed exist, but they are exponentially rarer ("subdominant"). In order to target those solutions, one should give a larger statistical weight to those \(\tilde{\mathbf{w}}\) that are surrounded by a larger number of solutions. This led [29] to choose the measure

\[P_{\mathcal{D}}(\tilde{\mathbf{w}}\,;d)=\frac{e^{yN\mathcal{S}_{\mathcal{D}}(\tilde{\mathbf{w}};d)}}{\int d\mu(\tilde{\mathbf{w}}\,)e^{yN\mathcal{S}_{\mathcal{D}}(\tilde{\mathbf{w}};d)}}\,. \tag{49}\]

where \(y\) is a parameter (analogous to an inverse temperature): the larger it is, the larger the statistical weight assigned to solutions with high local entropy. In the \(y\to\infty\) limit, this measure focuses on solutions with the _highest_ local entropy for a given value of the distance \(d\); notice, indeed, that this probability measure depends explicitly on \(d\), meaning that the particular type of solution \(\tilde{\mathbf{w}}\) sampled changes depending on the value of \(d\) chosen. In the same work it was shown not only that those high local entropy regions exist, but also that, in the teacher-student setting, they have better generalization properties with respect to typical, isolated solutions. For some rigorous results concerning the existence of those regions in similar binary perceptron models, see [39]. It was then shown that similar atypical clustered regions play a similar algorithmic role in other optimization problems, such as coloring [40, 41]. Those results suggested the design of new algorithms based on message passing that explicitly exploit local entropy maximization in order to find very well-generalizing solutions [42]. One such algorithm is called focusing Belief-Propagation (fBP).

A simpler way of finding the high local entropy regions is by using (47) with \(\tilde{\kappa}\) strictly larger than zero [11]. Indeed, the property of being robust to small noise perturbations in the input is related to the flatness of the energy landscape in the neighborhood of the solution. This type of approach not only produces a phenomenology similar to [29], but it also helps to unravel the structure of high local entropy regions in neural networks, as we shall see. Increasing the amount of robustness \(\tilde{\kappa}\), we therefore intuitively expect to target larger local entropy solutions. As shown in Fig. 6, this is indeed what one finds: as one imposes \(\tilde{\kappa}>0\), there always exists a neighborhood of \(d=0\) with positive local entropy (i.e. those solutions are surrounded by an _exponential_ number of solutions). As shown in the inset of the same figure, the cluster is also very dense: for small \(d\), the local entropy curve is indistinguishable from the total log-number of configurations at that distance, \(\mathcal{S}_{\text{max}}\).

As one increases \(\tilde{\kappa}\) from \(0\), one starts to sample different kinds of regions in the solution space. Firstly, if \(0<\tilde{\kappa}<\tilde{\kappa}_{\text{min}}(\alpha)\) the local entropy is negative in an interval of distances \(d\in[d_{1},d_{2}]\) with \(d_{1}>0\): no solutions can be found in a spherical shell of radius \(d\in[d_{1},d_{2}]\).
Secondly, if \(\tilde{\kappa}_{\text{min}}(\alpha)<\tilde{\kappa}<\tilde{\kappa}_{u}(\alpha)\) the local entropy is positive but non-monotonic. This means that typical solutions with such \(\tilde{\kappa}\) are immersed within small regions that have a characteristic size: they can be considered as isolated (for \(\tilde{\kappa}<\tilde{\kappa}_{\text{min}}\)) or nearly isolated (for \(\tilde{\kappa}>\tilde{\kappa}_{\text{min}}\)) balls. Finally, for \(\tilde{\kappa}>\tilde{\kappa}_{u}\), the local entropy is monotonic: this suggests that typical solutions with large enough \(\tilde{\kappa}\) are immersed in dense regions that do not seem to have a characteristic size and may extend to very large scales. The local entropy curve having the highest local entropy at a given value of \(\alpha\) is obtained by imposing the _maximum_ possible margin \(\kappa_{\text{max}}(\alpha)\), i.e. the margin for which the entropy computed in section 2.5 vanishes6.

Footnote 6: The \(\kappa_{\max}(\alpha)\) curve is the inverse of \(\alpha_{c}(\kappa)\).

Figure 5: **Binary perceptron.** Left: averaged local entropy of typical solutions as a function of distance, for several values of \(\alpha\). Right: distance \(d_{\text{min}}\) at which the Franz-Parisi entropy \(\phi_{\text{FP}}\) vanishes, as a function of \(\alpha\). At the SAT/UNSAT transition (dashed vertical line) the minimal distance to the closest solutions coincides with the typical distance between solutions, \(d\simeq 0.218\).

Therefore high-margin solutions are not only rarer and closer to each other with respect to typical solutions (as examined in section 2), but tend to focus on regions surrounded by solutions of lower margin, which in turn are surrounded by many other solutions having even lower margin, and so on and so forth. The flat regions in the landscape can be thought of as being formed by the coalescence of minima corresponding to high-margin classifications.

It is worth mentioning that in [14] the same type of approach was applied to the simplest non-convex problem with continuous weights: the negative spherical perceptron. Although in the spherical case the most probable, typical solutions are not completely isolated7, a similar phenomenology is valid: higher margin solutions always have a larger local entropy. Evidence of the existence of these large local entropy regions has also been established in the context of one-hidden-layer neural networks [43, 44]. These studies also reveal that the use of the cross-entropy loss [44], ReLU activations [43], and even regularization of the weights [45] influence the learning landscape by inducing wider and flatter minima.

Footnote 7: In the context of a spherical model, a solution would be isolated if the Franz-Parisi entropy went to \(-\infty\) for a \(d_{\min}>0\).

Figure 6: Averaged local entropy of solutions sampled with the flat measure over solutions with margin \(\tilde{\kappa}\). The dashed line represents \(\mathcal{S}_{\max}(d)\) as in (45). Notice how the local entropy changes from being non-monotonic to being monotonic as one keeps increasing \(\tilde{\kappa}\). The grey points correspond to the local entropy computed numerically from solutions found by the fBP algorithm, which has been explicitly constructed to target the _flattest_ regions in the landscape. Especially at small distances, there is good agreement with the local entropy of maximum margin solutions obtained from theory (red line). Reprinted from [11].

### Algorithmic hardness

One can then explore how the highest local entropy curves evolve as a function of \(\alpha\). The procedure works as follows. Firstly, for a given value of \(\alpha\), we compute the maximum margin \(\kappa_{\max}\). Secondly, we plot the corresponding local entropy curve \(\phi_{\text{FP}}\) as a function of \(d\). Finally, we repeat the process for other values of \(\alpha\). The outcome is plotted for the binary perceptron in the left panel of Fig. 7. As expected from the previous section, for low values of \(\alpha\) the Franz-Parisi entropy is monotonic. However, as we keep increasing \(\alpha\), the curve becomes non-monotonic: as shown in the right panel of Fig. 7, the derivative of \(\phi_{\text{FP}}\) with respect to \(d\) develops a zero at small distances. This critical value of the constraint density has been called the "local entropy" transition \(\alpha_{\text{LE}}\), and it separates a phase (\(\alpha<\alpha_{\text{LE}}\)) where we can find a solution \(\tilde{\mathbf{w}}\) located inside a region that extends to very large distances, from one (\(\alpha>\alpha_{\text{LE}}\)) where such a solution cannot be found. Above another critical value \(\alpha_{\text{OGP}}\) of the constraint density, only the "completely isolated ball" phase exists: all the high-margin solutions, even if they remain surrounded by an exponential number of lower margin solutions up to the SAT/UNSAT transition, are completely isolated from each other.

Figure 7: Averaged local entropy profiles of typical maximum margin solutions (left panel) and their derivative (right panel) as a function of the distance. The dashed line again represents \(\mathcal{S}_{\text{max}}(d)\). Different values of \(\alpha\) are displayed: for \(\alpha=0.71\) and \(0.727\) the entropy is monotonic, i.e. it has a unique maximum at large distances (not visible). For \(\alpha=\alpha_{\text{LE}}\simeq 0.729\) the Franz-Parisi entropy becomes non-monotonic (its derivative with respect to the distance develops a new zero). The entropy becomes negative in a range of distances not containing the origin for \(\alpha>\alpha_{\text{OGP}}\simeq 0.748\). Reprinted from [11].

The local entropy transition has been shown in the binary perceptron [11, 29] to be connected with the onset of _algorithmic hardness_: no algorithm has currently been found that is able to reach zero training error for \(\alpha>\alpha_{\text{LE}}\). Surprisingly, \(\alpha_{\text{LE}}\), which has been derived as a threshold marking a profound change in the geometry of regions of higher local entropy, also acquires the role of an insurmountable _algorithmic_ barrier. Similar algorithmic thresholds have been found to exist in other binary neural network models [38]. Notice that the OGP is expected to hold for \(\alpha>\alpha_{\text{OGP}}\); indeed, if the Franz-Parisi entropy displays a gap for the \(\kappa_{\text{max}}\) curve, it will also exhibit an even larger one for every \(\tilde{\kappa}\in[0,\kappa_{\text{max}})\)8. In the binary perceptron model, \(\alpha_{\text{LE}}\) and \(\alpha_{\text{OGP}}\) are very close, so it is really difficult to understand which of the two prevents algorithms from finding solutions in the infinite-size limit.

Footnote 8: To be precise, the correct value of \(\alpha_{\text{OGP}}\) (and of \(\alpha_{\text{LE}}\)) can be obtained by sampling the reference with the _largest_ local entropy at any distance, i.e. by using (49).

In spherical non-convex models, such as the negative margin perceptron, even if the local entropy transition can be identified similarly, it has been shown not to be predictive of the behaviour of algorithms; rather, there is numerical evidence showing that it demarcates a region where the solution space is connected from one where it is not [14]. In the same work, evidence has been given that smart algorithms are able to reach the SAT/UNSAT transition. In [46] the authors, interestingly, developed an algorithm that is proved to be able to reach the SAT/UNSAT transition, provided that the OGP does not hold. A proof of the lack of the OGP in the spherical negative perceptron, however, is still lacking.

## 4 Linear mode connectivity

Another important line of research that has recently emerged in machine learning is the characterization of the connectivity between different solutions, i.e. the existence of a path lying at zero training error that connects them. Numerous studies [47, 48, 49, 50, 51, 20] have analyzed the so-called _linear mode connectivity_, i.e. the particular case of a simple straight path. The first statistical mechanics study of the behavior of the training error on the _geodesic path_ connecting two solutions was done in [52] for the negative spherical perceptron problem. Interestingly, a complete characterization of the _shape_ of the solution space of the spherical negative perceptron has been produced. It is the aim of this section to briefly explain the analytical technique and the main results of that work.

Suppose that we are given a value of \(\kappa<0\) and we need to satisfy the constraints (4) using spherical weights. We sample two solutions \(\boldsymbol{w}_{1}\), \(\boldsymbol{w}_{2}\) from the probability distribution (47) using as margins respectively \(\kappa_{1}\), \(\kappa_{2}\geq\kappa\). Since the model is defined on the sphere, the straight path between \(\boldsymbol{w}_{1}\) and \(\boldsymbol{w}_{2}\) lies outside of the sphere itself; therefore we project it onto the sphere, obtaining a set of weights \(\boldsymbol{w}_{\gamma}\) that lie on the minimum length (i.e. geodesic) path joining \(\boldsymbol{w}_{1}\) and \(\boldsymbol{w}_{2}\). We can thus define the _geodesic path_ between \(\boldsymbol{w}_{1}\) and \(\boldsymbol{w}_{2}\):

\[\boldsymbol{w}_{\gamma}=\frac{\sqrt{N}\left(\gamma\boldsymbol{w}_{1}+(1-\gamma)\boldsymbol{w}_{2}\right)}{\|\gamma\boldsymbol{w}_{1}+(1-\gamma)\boldsymbol{w}_{2}\|}\,,\qquad\gamma\in[0,1] \tag{50}\]

Finally we compute the average training error of \(\boldsymbol{w}_{\gamma}\), i.e. the fraction of errors on the training set, averaged over the sampled \(\boldsymbol{w}_{1}\), \(\boldsymbol{w}_{2}\) and over the realization of the dataset

\[E_{\gamma}=\lim_{N\rightarrow+\infty}\mathbb{E}_{\mathcal{D}}\left\langle\frac{1}{P}\sum_{\mu=1}^{P}\Theta\big{(}-\boldsymbol{w}_{\gamma}\cdot\xi^{\mu}+\kappa\sqrt{N}\big{)}\right\rangle_{\kappa_{1},\kappa_{2}}. \tag{51}\]

In the previous expression the average \(\langle\bullet\rangle_{\kappa_{1},\kappa_{2}}\) is over the product of the two Gibbs ensembles (47) from which \(\boldsymbol{w}_{1}\) and \(\boldsymbol{w}_{2}\) are sampled. \(E_{\gamma}\) can be computed by using the replica method.
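At finite \(N\), the quantity inside the brackets of (51) is straightforward to evaluate numerically along the projected path (50). A minimal sketch follows (our own illustration: the random vectors below merely stand in for the solutions \(\boldsymbol{w}_{1},\boldsymbol{w}_{2}\), which in practice would be obtained by an algorithm solving the constraints at margins \(\kappa_{1},\kappa_{2}\)):

```python
import numpy as np

def geodesic_error(w1, w2, xi, kappa, n_pts=51):
    """Training error along the geodesic path (50) between two spherical
    weight vectors with |w|^2 = N; one point per value of gamma."""
    N = len(w1)
    errs = []
    for g in np.linspace(0.0, 1.0, n_pts):
        w = g * w1 + (1.0 - g) * w2
        w = np.sqrt(N) * w / np.linalg.norm(w)   # project back onto the sphere
        margins = (xi @ w) / np.sqrt(N)
        errs.append(np.mean(margins < kappa))    # violated constraints, cf. (51)
    return np.array(errs)

rng = np.random.default_rng(0)
N, P, kappa = 200, 100, -0.5
xi = rng.standard_normal((P, N))
w1, w2 = rng.standard_normal(N), rng.standard_normal(N)
w1 *= np.sqrt(N) / np.linalg.norm(w1)
w2 *= np.sqrt(N) / np.linalg.norm(w2)
print(geodesic_error(w1, w2, xi, kappa)[::10])
```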
Depending on the values of the margins \(\kappa_{1}\), \(\kappa_{2}\) of \(\boldsymbol{w}_{1}\) and \(\boldsymbol{w}_{2}\), we can sample different regions of the solution space for a fixed value of \(\alpha\). We mainly have three cases:

* \(\kappa_{1}=\kappa_{2}=\kappa\): the two solutions are typical. In this case, for every \(\gamma\in[0,1]\), \(E_{\gamma}>0\); see the red line in the left panel of Fig. 8.
* \(\kappa_{1}=\kappa_{2}=\tilde{\kappa}>\kappa\): the two solutions are atypical and have the same margin. As can be seen in the left panel of Fig. 8, if \(\tilde{\kappa}\) is slightly above \(\kappa\), the maximum energy barrier is still non-zero, but in the neighborhood of \(\boldsymbol{w}_{1}\) and \(\boldsymbol{w}_{2}\) a region at zero training error appears. The size of this region increases with \(\tilde{\kappa}\). If \(\tilde{\kappa}>\kappa_{*}(\alpha)\) the maximum barrier is zero, i.e. \(\boldsymbol{w}_{1}\) and \(\boldsymbol{w}_{2}\) are _linear mode connected_.
* \(\kappa_{1}=\kappa\) and \(\kappa_{2}=\tilde{\kappa}>\kappa\): one solution is typical and the other is atypical, and \(E_{\gamma}\) is asymmetric with respect to \(\gamma=1/2\). \(E_{\gamma}\) is shown in the right panel of Fig. 8 for several values of \(\tilde{\kappa}\). As one keeps increasing \(\tilde{\kappa}\), the maximum of the barrier decreases, and for \(\tilde{\kappa}>\kappa_{\text{km}}(\alpha)>\kappa_{*}(\alpha)\) the two solutions become linear mode connected. It can also be shown analytically that if \(\tilde{\kappa}>\kappa_{\text{km}}(\alpha)\) then \(\boldsymbol{w}_{2}\) is linear mode connected to any other solution having margin \(\kappa_{1}\geq\kappa\).

The picture described above holds in the overparameterized regime. In particular, the result obtained in the last point tells us that the solution space \(\mathcal{A}\) of the spherical negative perceptron is _star-shaped_: there exists a subset \(\mathcal{C}\subset\mathcal{A}\) such that, for any \(\boldsymbol{w}_{*}\in\mathcal{C}\), the geodesic path from \(\boldsymbol{w}_{*}\) to _any_ other solution \(\boldsymbol{w}\) lies entirely in \(\mathcal{A}\), i.e.

\[\{\gamma\boldsymbol{w}+(1-\gamma)\boldsymbol{w}_{*};\ \gamma\in[0,1]\}\subset\mathcal{A}\,. \tag{52}\]

The subset \(\mathcal{C}\) is called the _kernel_ of the star-shaped manifold. In Fig. 9 we show an intuitive 2D picture of the space of solutions, with a schematic interpretation of the results exposed above. If we sample typical solutions we are basically sampling the tips of the star (see left panel), so that the geodesic path connecting them lies entirely at positive training error. Recall that those solutions are the largest in number and are located at a larger typical distance with respect to higher margin ones. As one increases the margin of \(\boldsymbol{w}_{1}\), \(\boldsymbol{w}_{2}\), they come closer together, following the arms of the star to which they belong. If the two margins are large enough, i.e. \(\kappa_{1},\kappa_{2}>\kappa_{*}(\alpha)\), they are linear mode connected and they lie in the blue region of the middle panel. If the margin of one of the two solutions is even larger, \(\kappa_{1}>\kappa_{\rm km}(\alpha)\), then it will be located in the kernel, which is depicted in green in the right panel of Fig. 9. We refer to [52] for a more in-depth discussion of the attractiveness of the kernel region to gradient-based algorithms.

Figure 9: A two-dimensional view of the star-shaped manifold of solutions of the spherical negative perceptron in the overparameterized regime. When one samples typical solutions (left panel), the training error along the geodesic path (dashed black line) is strictly positive. Increasing the margin of both solutions, they get closer (see left panel of Fig. 2), and on the geodesic path a region at zero training error appears in their neighborhood. If the two solutions have margin larger than \(\kappa_{*}\), the maximal barrier goes to zero, and the two solutions are located in the blue area. If the margin of one solution \(\boldsymbol{w}_{*}\) is larger than \(\kappa_{\text{km}}>\kappa_{*}\), then it will be located in the _kernel_ region (green area, right panel); from there it can "view" any other solution, since the geodesic path is entirely at zero training error.
2310.20148
Decision-Making for Autonomous Vehicles with Interaction-Aware Behavioral Prediction and Social-Attention Neural Network
Autonomous vehicles need to accomplish their tasks while interacting with human drivers in traffic. It is thus crucial to equip autonomous vehicles with artificial reasoning to better comprehend the intentions of the surrounding traffic, thereby facilitating the accomplishments of the tasks. In this work, we propose a behavioral model that encodes drivers' interacting intentions into latent social-psychological parameters. Leveraging a Bayesian filter, we develop a receding-horizon optimization-based controller for autonomous vehicle decision-making which accounts for the uncertainties in the interacting drivers' intentions. For online deployment, we design a neural network architecture based on the attention mechanism which imitates the behavioral model with online estimated parameter priors. We also propose a decision tree search algorithm to solve the decision-making problem online. The proposed behavioral model is then evaluated in terms of its capabilities for real-world trajectory prediction. We further conduct extensive evaluations of the proposed decision-making module, in forced highway merging scenarios, using both simulated environments and real-world traffic datasets. The results demonstrate that our algorithms can complete the forced merging tasks in various traffic conditions while ensuring driving safety.
Xiao Li, Kaiwen Liu, H. Eric Tseng, Anouck Girard, Ilya Kolmanovsky
2023-10-31T03:31:09Z
http://arxiv.org/abs/2310.20148v2
Decision-Making for Autonomous Vehicles with Interaction-Aware Behavioral Prediction and Social-Attention Neural Network

###### Abstract

Autonomous vehicles need to accomplish their tasks while interacting with human drivers in traffic. It is thus crucial to equip autonomous vehicles with artificial reasoning to better comprehend the intentions of the surrounding traffic, thereby facilitating the accomplishments of the tasks. In this work, we propose a behavioral model that encodes drivers' interacting intentions into latent social-psychological parameters. Leveraging a Bayesian filter, we develop a receding-horizon optimization-based controller for autonomous vehicle decision-making which accounts for the uncertainties in the interacting drivers' intentions. For online deployment, we design a neural network architecture based on the attention mechanism which imitates the behavioral model with online estimated parameter priors. We also propose a decision tree search algorithm to solve the decision-making problem online. The proposed behavioral model is then evaluated in terms of its capabilities for real-world trajectory prediction. We further conduct extensive evaluations of the proposed decision-making module, in forced highway merging scenarios, using both simulated environments and real-world traffic datasets. The results demonstrate that our algorithms can complete the forced merging tasks in various traffic conditions while ensuring driving safety.

Autonomous Vehicles, Interaction-Aware Driving, Imitation Learning, Neural Networks, Traffic Modeling

## I Introduction

One of the challenges in autonomous driving is interpreting the driving intentions of other human drivers. The communication between on-road participants is typically non-verbal, and relies heavily on turn/brake signals, postures, eye contact, and subsequent behaviors. In uncontrolled traffic scenarios, e.g., roundabouts [1], unsignalized intersections [2], and highway ramps [3], drivers need to negotiate their order of proceeding. Fig. 1 illustrates a forced merging scenario at a highway entrance, where the ego vehicle in red attempts to merge into the highway before the end of the ramp. This merging action affects the vehicle behind in the lane being merged, and different social traits of its driver can result in different responses to the merging intent. A cooperative driver may choose a lane change to promote the merging process, while an egoistic driver may maintain a constant speed and disregard the merging vehicle. Therefore, understanding the latent intentions of other drivers can help the ego vehicle resolve conflicts and accomplish its task.

In this paper, we specifically focus on the highway forced merging scenario illustrated in Fig. 1 and the objective of transitioning the ego vehicle onto the highway from the ramp in a timely and safe manner. The difficulty of developing suitable automated driving algorithms for such scenarios is exacerbated by the fact that stopping on the ramp in non-congested highway traffic could be dangerous.

Forced merging has been addressed in the automated driving literature from multiple directions. In particular, learning-based methods have been investigated to synthesize controllers for such interactive scenarios. End-to-end planning methods [4] have been proposed to generate control inputs from Lidar point clouds [5] and RGB images [6]. Reinforcement Learning (RL) algorithms have also been considered to learn end-to-end driving policies [7, 8].
A comprehensive survey of RL methods for autonomous driving applications is presented in [9]. Meanwhile, Imitation Learning-based methods have been exploited to emulate expert driving behaviors [10, 11]. However, end-to-end learning-based controllers lack interpretability and are limited in providing safety guarantees in unseen situations. To address these concerns, researchers have explored the integration of learning-based methods with planning and control techniques. Along these lines, Model Predictive Control (MPC) algorithms have been integrated with the Social-GAN [12] for trajectory prediction and planning [13]. Meanwhile, Inverse RL methods have also been explored to predict drivers' behavior for planning purposes [14, 15, 16]. However, the learning-based modules in these systems may have limited capability to generalize and transfer to unobserved scenarios or behaviors.

Fig. 1: Schematic diagram of the highway forced merging scenario: an on-ramp ego vehicle in red is merging onto the highway while interacting with the highway vehicles in grey.

There also exists extensive literature on modeling the interactive behaviors between drivers using model-based approaches. Assuming drivers maximize their rewards [14], game-theoretic approaches have been proposed to model traffic interactions, such as the level-\(k\) hierarchical reasoning framework [17], potential games [18], and Stackelberg games [15, 19]. In the setting of the level-\(k\) game-theoretic models, approaches to estimating drivers' reasoning levels have been proposed [20]. A novel Leader-Follower Game-theoretic Controller (LFGC) has been developed for decision-making in forced merging scenarios [21]. However, solving game-theoretic problems can be computationally demanding, with limited scalability to larger numbers of interacting drivers or longer prediction horizons. To account for the uncertainty in the interactions, probabilistic methods, leveraging either a Bayesian filter [21, 22] or a particle filter [23] with a Partially Observable Markov Decision Process (POMDP) [21, 24], have also been implemented to encode and estimate the uncertain intent of other drivers as hidden variables.

In this paper, we consider the Social Value Orientation (SVO) from social psychology studies [25] to model drivers' interactions. The SVO quantifies subjects' tendencies toward social cooperation [26] and has been previously used to model drivers' cooperativeness during autonomous vehicle decision-making in [15, 27]. In addition, researchers have combined SVO-based rewards with RL to generate pro-social autonomous driving behaviors [28, 29] or synthesize realistic traffic simulation with SVO agents [30]. In our proposed behavioral model, we consider both the social cooperativeness and the personal objectives of the interacting drivers. Leveraging a Bayesian filter, we propose a decision-making module that accounts for pairwise interactions with other drivers, and computes a reference trajectory for the ego vehicle under the uncertain cooperative intent of other drivers.

The method proposed in this paper differs from the previous work [31] in the following aspects: 1) Instead of using an action space with a few coarse action primitives, we synthesize a state-dependent set of smooth and realistic trajectories as our action space. 2) We design a Social-Attention Neural Network (SANN) architecture to imitate the behavioral model that structurally incorporates the model-based priors and can be transferred to various traffic conditions.
3) We develop a decision-tree search algorithm for the ego vehicle's decision-making, which guarantees safety and scalability. 4) We conduct an extensive evaluation of the behavioral model in predicting real-world trajectories and demonstrate the decision-making module's capabilities in forced merging scenarios on both simulations and real-world datasets, which is not done in [31].

The proposed algorithm has several distinguishing features:

1. The behavioral model incorporates both the driver's social cooperativeness and personal driving objectives, which produces rich and interpretable behaviors.
2. The proposed decision-making module handles the uncertainties in the driving intent using a Bayesian filter and generates smooth and realistic reference trajectories for the downstream low-level vehicle controller.
3. Differing from pure learning-based methods, the designed SANN incorporates social-psychological model-based priors. It imitates the behavioral model and is transferable across different traffic conditions while providing better online computation efficiency.
4. The decision-making module utilizes an interaction-guided decision tree search algorithm, which ensures probabilistic safety and scales linearly with the number of interacting drivers and prediction horizons.
5. The behavioral model is evaluated in predicting real-world trajectories. This model demonstrates good quantitative accuracy in short-term prediction and provides qualitative long-term behavioral prediction.
6. The decision-making module is tested in forced merging scenarios on a comprehensive set of environments without re-tuning the model hyperparameters. The proposed method can safely merge the ego vehicle into the real-world traffic dataset [32] faster than the human drivers, and into diverse Carla [33] simulated traffic with different traffic conditions.

This paper is organized as follows: In Sec. II, we describe the model preliminaries, including the vehicle kinematics model, the action space with the lane-change modeling, and the choice of model hyperparameters. In Sec. III, we present the behavioral model, which can be utilized to predict interacting drivers' trajectories given their latent driving intentions. In Sec. IV, we discuss the SANN architecture that imitates the behavioral model and is suitable for online deployment. In Sec. V, we introduce the decision-making module of our ego vehicle together with a decision tree search algorithm that incorporates the SANN and improves computation efficiency. In Sec. VI, we report the results of real-world trajectory prediction using the behavioral model, as well as forced merging tests on a real-world dataset and in simulations. Finally, conclusions are given in Sec. VII.

## II System and Model Preliminaries

In this paper, we design a modularized algorithm architecture for decision-making and control of the autonomous (ego) vehicle in the forced merging scenario. In this framework (see Fig. 2), we develop a parameterized behavioral model for modeling the behavior of interacting drivers. Leveraging this model and observed traffic interactions, we can estimate the latent driving intentions of interacting drivers as model parameters. Thereby, we can predict their future trajectories in response to the action of the ego vehicle. Based on the observations and the predictions, a high-level decision-making module optimizes a reference trajectory for the ego vehicle merging into the target highway lane while ensuring its safety in the traffic. A low-level controller controls the vehicle throttle and steering angle to track the reference trajectory.

Fig. 2: Proposed algorithm architecture for autonomous vehicle decision-making and control.

This section first introduces the vehicle kinematics model in Sec. II-A. Then, we present a state-dependent action space (in Sec. II-B) of the vehicle, which is a set of trajectories synthesized from the kinematics model. We discuss the detailed lane change trajectory modeling in Sec. II-C, together with model hyperparameter identification from a naturalistic dataset [32].

### _Vehicle Kinematics_

We use the following continuous-time bicycle model [34] to represent the vehicle kinematics,

\[\left[\begin{array}{c}\dot{x}\\ \dot{y}\\ \dot{\varphi}\\ \dot{v}\end{array}\right]=\left[\begin{array}{c}v\cos(\varphi+\beta)\\ v\sin(\varphi+\beta)\\ \frac{v}{l_{r}}\sin(\beta)\\ a\end{array}\right]+\tilde{w}, \tag{1}\]

\[\beta=\arctan\left(\frac{l_{r}}{l_{r}+l_{f}}\tan\delta\right),\]

where \(x\), \(v\), and \(a\) are the longitudinal position, velocity, and acceleration of the vehicle center of gravity (CoG), respectively; \(y\) is the lateral position of the CoG; \(\varphi\) is the heading angle of the vehicle; \(\beta\) is the sideslip angle; \(\delta\) is the front wheel steering angle; \(l_{r}\) and \(l_{f}\) denote the distances from the vehicle CoG to the rear and front wheel axles, respectively; \(\tilde{w}\in\mathbb{R}^{4}\) is a disturbance representing unmodeled dynamics. We assume that all the highway vehicles, together with the ego vehicle, follow this kinematics model.

We then derive discrete-time kinematics from (1) assuming a zero-order hold with a sampling period of \(\Delta T\) sec. This leads to the discrete-time kinematics model,

\[s_{k+1}^{i}=f(s_{k}^{i},u_{k}^{i})+\tilde{w}_{k}^{i},\;i=0,1,2,\ldots, \tag{2}\]

where the subscript \(k\) denotes the discrete time instance \(t_{k}=k\Delta T\) sec; the superscript \(i\) designates a specific vehicle, where we use \(i=0\) to label the ego vehicle and \(i=1,2,\ldots\) for other interacting vehicles; \(s_{k}^{i}=[x_{k}^{i},y_{k}^{i},\varphi_{k}^{i},v_{k}^{i}]^{T}\) and \(u_{k}^{i}=[a_{k}^{i},\delta_{k}^{i}]^{T}\) are the state and control vectors of vehicle \(i\) at time instance \(t_{k}\). We can then use this discrete kinematics model to synthesize vehicle trajectories. Note that there are other vehicle kinematics and dynamics models that could potentially represent vehicle behaviors more realistically [34]. The model (2) was chosen as it provides adequate accuracy for the purpose of decision-making (planning) of the ego vehicle while being simple and computationally efficient [35].

### _Trajectory Set as Action Space_

Given the initial vehicle state and the kinematics model (2), neglecting the disturbance \(\tilde{w}_{k}^{i}\) and using different control signal profiles, we can synthesize various trajectories with duration \(N\Delta T\) sec for trajectory prediction and planning. For the \(i\)th vehicle at time \(t_{k}\), we assume that the vehicle's action space is \(\Gamma(s_{k}^{i})=\left\{\gamma^{(m)}(s_{k}^{i})\right\}_{m=1}^{M}\), where each individual element \(\gamma^{(m)}(s_{k}^{i})=\left\{s_{n}^{i}\right\}_{n=k}^{k+N+1}\) is a trajectory of time duration \(N\Delta T\) sec synthesized using a distinct control sequence \(\left\{u_{n}^{i}\right\}_{n=k,\ldots,k+N}\) via the kinematics model (2).
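As an illustration of how such trajectories can be synthesized, here is a minimal sketch of the discrete-time update (2) using a simple forward-Euler discretization of (1); the paper's exact zero-order-hold discretization may differ, and the function names and the wheelbase values in the usage line are our own assumptions:

```python
import numpy as np

def bicycle_step(s, u, lf, lr, dt):
    """One forward-Euler step of the kinematic bicycle model (1)-(2).
    s = [x, y, phi, v], u = [a, delta]; the disturbance term is omitted."""
    x, y, phi, v = s
    a, delta = u
    beta = np.arctan(lr / (lf + lr) * np.tan(delta))   # sideslip angle
    return np.array([
        x + dt * v * np.cos(phi + beta),
        y + dt * v * np.sin(phi + beta),
        phi + dt * (v / lr) * np.sin(beta),
        v + dt * a,
    ])

def rollout(s0, controls, lf, lr, dt):
    """Synthesize a trajectory {s_n} from a control sequence {u_n} via (2)."""
    traj = [np.asarray(s0, dtype=float)]
    for u in controls:
        traj.append(bicycle_step(traj[-1], u, lf, lr, dt))
    return np.array(traj)

# e.g., a constant-speed, constant-steer rollout (lf = lr = 1.5 m assumed):
traj = rollout([0.0, 0.0, 0.0, 20.0], [(0.0, 0.02)] * 10, 1.5, 1.5, 0.2)
print(traj[-1])
```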
We select \(225\) different control sequences such that the number of considered trajectories is finite, i.e., \(M\leq 225\) for all \(\Gamma(s_{k}^{i})\) and all initial states \(s_{k}^{i}\). As shown in Fig. 3, the action space \(\Gamma(s_{k}^{i})\) depends on the current vehicle state \(s_{k}^{i}\) for two reasons: with a fixed control sequence \(\left\{u_{n}^{i}\right\}_{n}\), the resulting trajectory from (2) varies with the initial condition \(s_{k}^{i}\); and a safety filter is implemented for \(\Gamma(s_{k}^{i})\) such that all trajectories that intersect with the road boundaries are removed, which is also dependent on \(s_{k}^{i}\). The chosen \(225\) control sequences generate trajectories that encompass a set of plausible driving behaviors (see Fig. 3) and suffice for the tasks of trajectory prediction and planning. Note that the trajectory set can be easily enlarged with more diverse control sequences if necessary.

Fig. 3: Examples of the trajectory set \(\Gamma(s_{k}^{i})\) with duration \(6\) sec: (a) A trajectory set with \(M=109\) encompasses behaviors of lane keeping, lane change, and coupled longitudinal and lateral behavior (e.g., lane change with longitudinal acceleration/deceleration). (b) A trajectory set with \(M=129\) contains actions of lane change abortion and re-merging after aborting the previous lane change. Other normal lane change trajectories are shown as semi-transparent lines. Likewise, the lane change abortion behaviors are also coupled with various longitudinal acceleration/deceleration profiles.

Meanwhile, we assume a complete lane change takes \(T_{\text{lane}}=N_{\text{lane}}\Delta T\) sec to move \(w_{\text{lane}}\) meters from the center line of the current lane to that of an adjacent lane. We set \(N_{\text{lane}}<N\) to allow trajectory sets to contain complete lane change trajectories. As shown in Fig. 3(a), the trajectory set comprises 109 regular driving trajectories for an on-ramp vehicle that intends to merge. This trajectory set considers a variety of driver actions such as lane keeping with longitudinal acceleration/deceleration, merging at constant longitudinal speed, merging with longitudinal acceleration/deceleration, accelerating/decelerating before or after merging, etc. Moreover, we also include the behavior of aborting a lane change (see Fig. 3(b)). For a lane-changing vehicle, this is a regular "change-of-mind" behavior to avoid collision with nearby highway vehicles.

For longitudinal behaviors, we also assume speed and acceleration/deceleration limits of \([v_{\min},v_{\max}]\) and \([a_{\min},a_{\max}]\) for all vehicles at all times. Namely, the trajectory sets and control sequences satisfy \(a_{n}^{i}\in[a_{\min},a_{\max}]\) and \(v_{n}^{i}\in[v_{\min},v_{\max}]\) for all \(s_{n}^{i}\in\gamma^{(m)}(s_{k}^{i})\), \(n=k,\ldots,k+N+1\), and for all \(\gamma^{(m)}(s_{k}^{i})\in\Gamma(s_{k}^{i})\), \(m=1,\ldots,M(s_{k}^{i})\). The speed limits are commonly known quantities on highways, and the longitudinal acceleration/deceleration is typically limited by the vehicle's performance limits.
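Building on the `rollout` sketch above, the assembly of \(\Gamma(s_{k}^{i})\) with the safety filter and the speed/acceleration limits can be outlined as follows. This is a simplified stand-in for the paper's construction: the straight-road boundary check `road_y_bounds` is our own assumption, as are the function names.

```python
import numpy as np

V_MIN, V_MAX = 2.0, 34.0     # speed limits identified from High-D (Sec. II-C)
A_MIN, A_MAX = -6.0, 6.0     # acceleration/deceleration limits

def build_action_space(s0, control_sequences, road_y_bounds, lf, lr, dt):
    """Assemble Gamma(s_k): roll out each candidate control sequence with
    the kinematics sketch above and keep only trajectories that pass the
    safety filter (road boundaries) and the speed/acceleration limits."""
    y_lo, y_hi = road_y_bounds
    gamma = []
    for controls in control_sequences:
        accels = np.array([a for a, _ in controls])
        if accels.min() < A_MIN or accels.max() > A_MAX:
            continue                         # control limits violated
        traj = rollout(s0, controls, lf, lr, dt)
        if ((traj[:, 1] < y_lo) | (traj[:, 1] > y_hi)).any():
            continue                         # safety filter: leaves the road
        if ((traj[:, 3] < V_MIN) | (traj[:, 3] > V_MAX)).any():
            continue                         # speed limits violated
        gamma.append(traj)
    return gamma
```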
### _Trajectory Hyperparameters and Lane Change Behavior_

We use the naturalistic highway driving High-D [32] dataset to identify the model hyperparameters, i.e., \(v_{\min}\), \(v_{\max}\), \(a_{\min}\), \(a_{\max}\), \(w_{\text{lane}}\), and \(T_{\text{lane}}\). The statistics visualized in Fig. 4 are obtained from data of 110,500 vehicles driven over 44,500 kilometers. The minimum speed is set to \(v_{\min}=2\;\mathrm{m}/\mathrm{s}\) since most of the vehicles have speeds higher than that, and the maximum speed limit of the dataset is \(v_{\max}=34\;\mathrm{m}/\mathrm{s}\). The majority of longitudinal accelerations and decelerations of High-D vehicles are within the range of \([a_{\min},a_{\max}]=[-6,6]\;\mathrm{m}/\mathrm{s}^{2}\). The lane width is \(w_{\text{lane}}=3.5\;\mathrm{m}\) as in the High-D dataset. We select \(T_{\text{lane}}=N_{\text{lane}}\Delta T=4\;\mathrm{sec}\) since most of the High-D vehicles take between \(4\) and \(6\;\mathrm{sec}\) to change lanes. We keep these hyperparameters fixed for the following discussion and experiments. Note that these parameters can be identified similarly, possibly with different values, in other scenarios if necessary.

Fig. 4: Histogram of vehicle driving statistics in the High-D dataset [32]: (a) Time duration for a complete lane change. (b) Longitudinal velocity. (c) Longitudinal acceleration/deceleration (y-axis in log scale).

In terms of lane change behaviors, given an acceleration sequence \(\left\{a_{k}^{i}\right\}_{k}\), we can derive the steering profile \(\left\{\delta_{k}^{i}\right\}_{k}\) of a lane change trajectory from 5th order polynomials [36],

\[x(t|\{p_{j}\})=p_{0}+p_{1}t+p_{2}t^{2}+p_{3}t^{3}+p_{4}t^{4}+p_{5}t^{5}, \tag{3}\]
\[y(t|\{q_{j}\})=q_{0}+q_{1}t+q_{2}t^{2}+q_{3}t^{3}+q_{4}t^{4}+q_{5}t^{5},\]

which represent a vehicle lane change between time \(t=0\) and \(t=T_{\text{lane}}\;\mathrm{sec}\). Such lane change trajectories are commonly used in vehicle trajectory planning and control [37, 38]. Suppose, without loss of generality, that the lane change starts and ends at \(t_{0}=0\) and \(t_{N_{\text{lane}}}=T_{\text{lane}}\;\mathrm{sec}\), respectively. The following procedure is utilized to determine the trajectory and steering profile during a lane change: at time \(t_{k}=k\Delta T\;\mathrm{sec}\), given the vehicle state \(s_{k}^{i}=[x_{k}^{i},y_{k}^{i},\varphi_{k}^{i},v_{k}^{i}]^{T}\) and the lateral target lane center \(y_{\text{target}}\), we first solve for the coefficients \(\{p_{k,j}\}\) and \(\{q_{k,j}\}\) in (3) from the following two sets of boundary conditions,

\[\left\{\begin{aligned} &x(t_{k})=x_{k}^{i},\quad\dot{x}(t_{k})=v_{k}^{i},\quad\ddot{x}(t_{k})=a_{k}^{i},\\ &y(t_{k})=y_{k}^{i},\quad\dot{y}(t_{k})=\dot{y}_{k}^{i},\quad\ddot{y}(t_{k})=\ddot{y}_{k}^{i},\end{aligned}\right. \tag{4}\]
\[\left\{\begin{aligned} &x(T_{\text{lane}})=x_{k}^{i}+v_{k}^{i}(T_{\text{lane}}-t_{k})+\tfrac{1}{2}a_{k}^{i}(T_{\text{lane}}-t_{k})^{2},\\ &\dot{x}(T_{\text{lane}})=v_{k}^{i}+a_{k}^{i}(T_{\text{lane}}-t_{k}),\quad\ddot{x}(T_{\text{lane}})=a_{k}^{i},\\ &y(T_{\text{lane}})=y_{\text{target}},\quad\dot{y}(T_{\text{lane}})=0,\quad\ddot{y}(T_{\text{lane}})=0,\end{aligned}\right.\]

where we assume the initial/terminal conditions \(\dot{y}(0)=\dot{y}(T_{\text{lane}})=0\) and \(\ddot{y}(0)=\ddot{y}(T_{\text{lane}})=0\), i.e., zero lateral velocity and acceleration at the beginning and the end of a lane change. Recursively, the initial conditions \(\dot{y}(t_{k})=\dot{y}_{k}^{i}\) and \(\ddot{y}(t_{k})=\ddot{y}_{k}^{i}\) at step \(k\) can be computed using (3) with the coefficients \(\{q_{k-1,j}\}\) of the previous step \(k-1\); and we assume a constant longitudinal acceleration \(a_{k}^{i}\) throughout the lane change process. Then, we can compute \(s_{k+1}^{i}=[x_{k+1}^{i},y_{k+1}^{i},\varphi_{k+1}^{i},v_{k+1}^{i}]^{T}\) from (3) using the following equations,

\[x_{k+1}^{i}=x(t_{k+1}|\{p_{k,j}\}_{j}),\quad y_{k+1}^{i}=y(t_{k+1}|\{q_{k,j}\}_{j}), \tag{5}\]
\[\varphi_{k+1}^{i}=\arctan\left(\dot{y}(t_{k+1}|\{q_{k,j}\}_{j})/\dot{x}(t_{k+1}|\{p_{k,j}\}_{j})\right),\]
\[v_{k+1}^{i}=\dot{x}(t_{k+1}|\{p_{k,j}\}_{j}).\]

Repeating this procedure for \(k=0,1,\ldots,N_{\text{lane}}-1\), we can synthesize a smooth lane change trajectory \(\left\{s_{k}^{i}\right\}_{k=0,\ldots,N_{\text{lane}}}\) with the corresponding acceleration sequence \(\left\{a_{k}^{i}\right\}_{k=0,\ldots,N_{\text{lane}}-1}\). Fig. 5 illustrates this approach to lane change modeling. Given an acceleration sequence \(\left\{a_{k}^{i}\right\}_{k}\) (see Fig. 5(b)) from one of the \(225\) control sequences \(\left\{u_{k}^{i}\right\}_{k}\), we leverage (3) and produce a smooth lane change trajectory that qualitatively matches a real-world (High-D) lane change trajectory. Meanwhile, the resulting steering angle profile \(\left\{\delta_{k}^{i}\right\}_{k}\) (see Fig. 5(c)) is similar to those from human driving [39, 40].

Fig. 5: A lane change trajectory synthesized using a given acceleration sequence: (a) Synthesized trajectory using our algorithm compared with a real-world lane change trajectory in the High-D dataset. (b) Designed acceleration sequence \(\left\{a_{k}^{i}\right\}_{k}\). (c) Derived steering sequence \(\left\{\delta_{k}^{i}\right\}_{k}\) from (2).
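The boundary-value computation behind (3)-(5) reduces to solving a 6x6 linear system per axis. A minimal sketch for the lateral polynomial (our own illustration; only the first lane-change step \(t_{k}=0\) is shown, and the function names are ours):

```python
import math
import numpy as np

def quintic_coeffs(t0, T, p0, v0, a0, pT, vT, aT):
    """Solve for the six coefficients of Eq. (3) from the boundary
    conditions (4) on position, velocity, and acceleration at t0 and T."""
    def row(t, order):
        # order-th derivative of the monomial basis [1, t, ..., t^5] at time t
        return [0.0] * order + [
            math.factorial(j) / math.factorial(j - order) * t ** (j - order)
            for j in range(order, 6)
        ]
    A = np.array([row(t0, 0), row(t0, 1), row(t0, 2),
                  row(T, 0), row(T, 1), row(T, 2)])
    return np.linalg.solve(A, np.array([p0, v0, a0, pT, vT, aT]))

# Lateral boundary conditions for a full lane change (Eq. (4) with t_k = 0):
T_lane, y0, y_target = 4.0, 0.0, 3.5
q = quintic_coeffs(0.0, T_lane, y0, 0.0, 0.0, y_target, 0.0, 0.0)
t = np.linspace(0.0, T_lane, 9)
y = sum(q[j] * t ** j for j in range(6))
print(np.round(y, 2))   # smooth S-shaped lateral profile from 0.0 to 3.5 m
```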
Collision avoidance \(c\in\{0,1\}\): \(c(\gamma_{n_{1}}^{n_{2}}(s_{k}^{i}),\gamma_{n_{1}}^{n_{2}}(s_{k}^{j}))=1\) implies vehicle \(i\) following trajectory \(\gamma_{n_{1}}^{n_{2}}(s_{k}^{i})\) collides with vehicle \(j\) which follows trajectory \(\gamma_{n_{1}}^{n_{2}}(s_{k}^{j})\), and \(c=0\) indicates that two vehicles' trajectories are free of collision with each other. This is used to penalize collisions between trajectories. 2. Safety consciousness \(h\in[0,1]\): If vehicle \(j\) is the leading vehicle of vehicle \(i\), \(h(s_{k+n_{2}}^{i},s_{k+n_{2}}^{i})\) computes a normalized Time-to-Collision (\(TTC\)) at the end of their corresponding trajectories \(\gamma_{n_{1}}^{n_{2}}(s_{k}^{i}),\gamma_{n_{1}}^{n_{2}}(s_{k}^{j})\). A larger \(h\) implies a larger TTC with the leading vehicle. The safety consciousness can encourage vehicles to keep an appropriate headway distance and be conscious of potential collisions. 3. Travelling time \(\tau\in[0,1]\): \(\tau(s_{k+n_{2}}^{i})\) measures the closeness between the vehicle's final state in the trajectory \(\gamma_{n_{1}}^{n_{2}}(s_{k}^{i})\) with its destination, where a larger value implies shorter distance to the goal. Including \(\tau\) in the reward reflects the objective of shortening the traveling time, e.g., merging to the highway as soon as possible for the on-ramp vehicles. 4. Control effort \(e\in[0,1]\): \(e\left(\gamma_{n_{1}}^{n_{2}}(s_{k}^{i})\right)\) takes a lower value if \(\gamma_{n_{1}}^{n_{2}}(s_{k}^{i})\) is a lane-changing trajectory segment or generated with longitudinal acceleration/deceleration. The control effort captures drivers' desire to keep the lane and constant speed to avoid both longitudinal and lateral maneuvers. We refer the readers to our previous work [31] for more detailed descriptions of the four functions. Similar methods that model the driver's driving objectives have also been reported in [41, 42, 43, 21, 44]. The weight \(w^{i}\) is the latent model parameter in the reward function \(r(\cdot|w^{i})\). Different weights reflect distinct personal goals and, therefore, embed various driving behaviors. For example, a driver considering personal reward with \(w^{i}=[0,0,1]^{T}\) may keep the lane and drive at a constant speed, thereby maximizing the reward via minimizing the control effort. Another driver with weights \(w^{i}=[1,0,0]^{T}\) tries to maximize the headway distance and might change lanes to overtake a leading vehicle if there is one. ### _Social Value Orientation and Multi-modal Reward_ The personal reward function \(r(\cdot|w^{i})\) captures drivers' decision-making as maximizing their own gain in the traffic interaction. However, this model does not encode the behaviors of cooperation and competition. For example, a highway driver observing the merging intention of an on-ramp vehicle might slow down to yield. In social psychology studies [25, 26], the notion of Social Value Orientation (SVO) was proposed to model this cooperative/competitive behavior in experimental games, and it has more recently been applied to autonomous driving [15]. Taking inspiration from this work, we use the driver's SVO to incorporate the personal reward with the driver's tendency toward social cooperation. To this end, we assume each vehicle \(i\) interacts with the adjacent vehicle \(j\in A(i)\), where \(A(i)\) contains indices of all the adjacent vehicles around vehicle \(i\). 
We model driver \(i\)'s intention using a multi-modal reward function of the form,

\[\begin{split} R\Big{(}\gamma_{n_{1}}^{n_{2}}(s_{k}^{i}),\mathbf{\gamma}_{n_{1}}^{n_{2}}(\mathbf{s}_{k}^{-i})|\sigma^{i},w^{i}\Big{)}&=\alpha(\sigma^{i})\cdot r\Big{(}\gamma_{n_{1}}^{n_{2}}(s_{k}^{i}),\gamma_{n_{1}}^{n_{2}}(s_{k}^{j})|w^{i}\Big{)}\\ &+\beta(\sigma^{i})\cdot\mathbb{E}_{j\in A(i)}\left[r\Big{(}\gamma_{n_{1}}^{n_{2}}(s_{k}^{j}),\gamma_{n_{1}}^{n_{2}}(s_{k}^{i})|w^{j}\Big{)}\right],\end{split} \tag{7}\]

where \(\mathbf{s}_{k}^{-i}=[s_{k}^{0},s_{k}^{1},s_{k}^{2},\dots]\) is the aggregated state of all the adjacent vehicles of vehicle \(i\); and \(\mathbf{\gamma}_{n_{1}}^{n_{2}}(\mathbf{s}_{k}^{-i})=[\gamma_{n_{1}}^{n_{2}}(s_{k}^{0}),\gamma_{n_{1}}^{n_{2}}(s_{k}^{1}),\gamma_{n_{1}}^{n_{2}}(s_{k}^{2}),\dots]\) concatenates one possible trajectory segment \(\gamma_{n_{1}}^{n_{2}}(s_{k}^{j})\) for each vehicle \(j\in A(i)\). The SVO \(\sigma^{i}\) is another latent model parameter. It takes one of four values corresponding to four SVO categories, and the values of \(\alpha(\sigma^{i})\) and \(\beta(\sigma^{i})\) are specified as follows

\[(\alpha,\beta)=\left\{\begin{array}{cl}(0,1)&\text{if $\sigma^{i}=$ ``altruistic'',}\\ (1/2,1/2)&\text{if $\sigma^{i}=$ ``prosocial'',}\\ (1,0)&\text{if $\sigma^{i}=$ ``egoistic'',}\\ (1/2,-1/2)&\text{if $\sigma^{i}=$ ``competitive''.}\end{array}\right. \tag{8}\]

In (7), \(\alpha(\sigma^{i})\) weighs the self-reward while \(\beta(\sigma^{i})\) weighs an averaged reward to the other vehicles. We also note that the weight \(w^{j}\) is an internal parameter of vehicle \(j\) and is a latent variable affecting the decision of vehicle \(i\). Similar to [31], we assume \(w^{j}=[1/3,1/3,1/3]^{T}\) in Eq. (7) for \(j\in A(i)\). The rationale behind this assumption is that an altruistic or prosocial (or competitive) driver of vehicle \(i\) is likely to cooperate (or compete) with other drivers in all three objectives if they do not know the others' actual intentions.

Using this multi-modal reward, we can model each driver's intention to achieve their personal objectives (reflected in \(w^{i}\)) and, to a certain extent, cooperate with others (encoded in \(\sigma^{i}\)). For example, suppose two highway drivers with the same personal weights \(w^{i}=[0,0,1]^{T}\) encounter a merging on-ramp vehicle. An "egoistic" highway driver values the control effort heavily and is therefore likely to keep the lane at a constant speed and ignore the merging vehicle. On the contrary, a "prosocial" highway driver might consider changing lanes or slowing down to promote the on-ramp merging action, such that the net reward in (7) is larger.
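The mapping (8) and the reward combination (7) are easy to state in code. A minimal sketch, with the per-pair rewards `r_self` and `r_others` assumed to be precomputed by the four objective functions of Sec. III-A (function and variable names are ours):

```python
import numpy as np

SVO_WEIGHTS = {                      # (alpha, beta) pairs from Eq. (8)
    "altruistic":  (0.0,  1.0),
    "prosocial":   (0.5,  0.5),
    "egoistic":    (1.0,  0.0),
    "competitive": (0.5, -0.5),
}

def multimodal_reward(r_self, r_others, svo):
    """Multi-modal reward of Eq. (7): combine the driver's own reward with
    the average reward of the adjacent drivers, weighted by the SVO."""
    alpha, beta = SVO_WEIGHTS[svo]
    return alpha * r_self + beta * float(np.mean(r_others))

# e.g., a prosocial driver weighting a merging vehicle's gain:
print(multimodal_reward(0.4, [0.9], "prosocial"))   # 0.65
```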
### _Driving Behavior Model_

Using the multi-modal reward, we can decode/infer drivers' intentions from their actions/trajectories, represented by the model parameters \(w^{i},\sigma^{i}\). We formalize this process into a behavioral model,

\[\gamma^{*}(s_{k}^{i})=\operatorname*{argmax}_{\gamma(s_{k}^{i})\in\Gamma(s_{k}^{i})}\;Q\left(\mathbf{s}_{k}^{-i},\gamma(s_{k}^{i})|\sigma^{i},w^{i}\right), \tag{9}\]

where \(\gamma^{*}(s_{k}^{i})\) is the resulting reference trajectory for vehicle \(i\) and \(Q\) denotes the corresponding cumulative reward function. This cumulative reward admits the following form,

\[Q\left(\mathbf{s}_{k}^{-i},\gamma(s_{k}^{i})|\sigma^{i},w^{i}\right)=\mathbb{E}_{\mathbf{\gamma}(\mathbf{s}_{k}^{-i})\in\mathbf{\Gamma}(\mathbf{s}_{k}^{-i})}\left[\sum_{n=0}^{\lfloor N/N^{\prime}\rfloor}\lambda^{n}R\Big{(}\gamma_{nN^{\prime}}^{(n+1)N^{\prime}}(s_{k}^{i}),\mathbf{\gamma}_{nN^{\prime}}^{(n+1)N^{\prime}}(\mathbf{s}_{k}^{-i})|\sigma^{i},w^{i}\Big{)}\right], \tag{10}\]

where \(\lambda\in(0,1)\) is a discount factor; the summation is a cumulative reward of vehicle \(i\) over an \(N\Delta T\) sec look-ahead/prediction horizon, obtained according to (7); \(N^{\prime}\Delta T\) sec (\(N^{\prime}<N\)) denotes the sampling period at which the driver updates its decision; \(\lfloor x\rfloor\) denotes the largest integer lower bound of \(x\in\mathbb{R}\); and the expectation averages the reward over all possible aggregated trajectories in the set

\[\mathbf{\Gamma}(\mathbf{s}_{k}^{-i})=\left\{\mathbf{\gamma}(\mathbf{s}_{k}^{-i}):\gamma(s_{k}^{j})\in\Gamma(s_{k}^{j}),j\in A(i)\right\}. \tag{11}\]

After obtaining the optimal \(\gamma^{*}(s_{k}^{i})\) using (9), we compute the control signal \(u_{n}^{i}=[a_{n}^{i},\delta_{n}^{i}]^{T}\) at each time step \(n=k,\dots,k+N^{\prime}\) to track this reference trajectory \(\gamma^{*}(s_{k}^{i})\) for one sampling period of \(N^{\prime}\Delta T\) sec. Then, we update the reference trajectory using (9) after \(N^{\prime}\Delta T\) sec. Eventually, using the behavioral model (9), we formalize the decision-making process of a highway driver motivated by the reward (10) and a combination of social-psychological model parameters \(\sigma^{i},w^{i}\), while the driving behaviors are realized by a receding-horizon optimization-based trajectory-tracking controller with a horizon of \(\lfloor N/N^{\prime}\rfloor\). However, solving this problem online can be computationally demanding. We can instead use a neural network to learn the solutions of (9) offline from a dataset, thereby imitating this behavioral model for online deployment.

## IV Interaction-Aware Imitation Learning with Attention Mechanism

Based on (9), the decision-making process of vehicle \(i\) is deterministic given \(s_{k}^{i}\), \(\mathbf{s}_{k}^{-i}\). Instead of learning a one-hot encoding, and to include stochasticity in the decision-making process, we prescribe a policy distribution from (10) using a softmax decision rule [45] according to,

\[\pi\left(\gamma(s_{k}^{i})|\sigma^{i},w^{i},s_{k}^{i},\mathbf{s}_{k}^{-i}\right)\propto\exp\left[Q\left(\mathbf{s}_{k}^{-i},\gamma(s_{k}^{i})|\sigma^{i},w^{i}\right)\right]. \tag{12}\]

Note that \(\pi\) takes values in \(\mathbb{R}^{225}\), where we assign zero probabilities in \(\pi\) to the unsafe trajectories \(\gamma_{\text{unsafe}}(s_{k}^{i})\notin\Gamma(s_{k}^{i})\) filtered out in Sec. II-B.
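A small sketch of the decision rule (12), including the zero-probability assignment to trajectories discarded by the safety filter (array shapes and names are our own assumptions):

```python
import numpy as np

def softmax_policy(q_values, unsafe_mask):
    """Policy of Eq. (12): softmax over the cumulative rewards Q, with zero
    probability assigned to trajectories removed by the safety filter."""
    q = np.asarray(q_values, dtype=float)
    safe = ~np.asarray(unsafe_mask)
    z = q - q[safe].max()                 # numerical stabilization
    p = np.where(safe, np.exp(z), 0.0)
    return p / p.sum()

# e.g., 5 candidate trajectories, the last one filtered out as unsafe:
print(softmax_policy([0.1, 0.5, 0.2, 0.4, 9.9],
                     [False, False, False, False, True]))
```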
Then, we can learn a neural network mapping \(\pi_{NN}\) that imitates the actual behavioral model \(\pi\) by minimizing a modified Kullback-Leibler divergence as the loss function,

\[\mathcal{L}\left(\pi,\pi_{NN}|\sigma^{i},w^{i},s_{k}^{i},\mathbf{s}_{k}^{-i}\right)=\sum_{m=1}^{225}\Big{\{}\pi\left(\gamma^{(m)}(s_{k}^{i})\right)\cdot\log\Big{[}\pi\left(\gamma^{(m)}(s_{k}^{i})\right)+\epsilon\Big{]}-\pi\left(\gamma^{(m)}(s_{k}^{i})\right)\cdot\log\Big{[}\pi_{NN}\left(\gamma^{(m)}(s_{k}^{i})\right)+\epsilon\Big{]}\Big{\}}, \tag{13}\]

where a positive constant \(\epsilon\ll 1\) is chosen to avoid zero probabilities inside the logarithms for numerical stability, and, for simplicity, we omit the terms \(\sigma^{i},w^{i},s_{k}^{i},\mathbf{s}_{k}^{-i}\) in the notation of \(\pi\), \(\pi_{NN}\). This loss function \(\mathcal{L}(\cdot)\geq 0\) measures the similarity between two discrete probability distributions, where a smaller loss implies more similar distributions.

We design a Social-Attention Neural Network (SANN) architecture (see Fig. 6) that comprises three components: the input normalization (Sec. IV-A) derives a set of normalized vectors from the inputs \(\sigma^{i},w^{i},s_{k}^{i},\mathbf{s}_{k}^{-i}\) using the highway structural information. Then, we generate a set of feature vectors via an interaction-aware learning process using the attention backbone in Sec. IV-B, whose attention mechanism is presented in Sec. IV-C. Finally, using the learned feature vectors, the policy head imitates the policy distribution \(\pi\) of the behavioral model (Sec. IV-D).

Fig. 6: Schematic diagram of our SANN architecture: the attention backbone takes the normalized input vectors and produces their corresponding feature vectors via the attention mechanism [46]. The policy head fits a policy distribution \(\pi_{NN}\) from the resulting feature vectors, incorporating driver \(i\)'s social value orientation \(\sigma^{i}\).

### _Input Normalization_

Given different lane dimensions as labeled in Fig. 1, we aim to normalize the inputs \(\sigma^{i},w^{i},s_{k}^{i},\mathbf{s}_{k}^{-i}\) accordingly to facilitate the neural network training. The normalization produces an input vector \(\bar{s}_{k}^{i}\) for vehicle \(i\) and a set of vectors \(\{\bar{s}_{k}^{j}\}\) for \(j\in A(i)\) according to,

\[\bar{s}_{k}^{\iota}=\left[\begin{array}{c}\left(x_{k}^{\iota}-l^{\iota}/2-x_{\text{ramp}}\right)/l_{\text{ramp}}\\ \left(x_{k}^{\iota}+l^{\iota}/2-x_{\text{ramp}}\right)/l_{\text{ramp}}\\ \left(y_{k}^{\iota}-y_{\text{ramp}}\right)/w_{\text{ramp}}\\ \left(v_{k}^{\iota}-v_{\text{min}}\right)/(v_{\text{max}}-v_{\text{min}})\\ w^{\iota}\end{array}\right],\quad\iota\in\{i\}\cup A(i), \tag{14}\]

where \(l^{\iota}\) is the wheelbase length of vehicle \(\iota\); the first two elements of the feature vector (14) are the normalized longitudinal coordinates of the vehicle rear end and front end; and \(w^{\iota}=[1/3,1/3,1/3]^{T}\) for all \(\iota\in A(i)\) per Sec. III-B.
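The two ingredients just introduced, the normalization (14) and the imitation loss (13), can be sketched as follows. The `ramp` geometry dictionary and the hard-coded speed limits from Sec. II-C are our illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def normalize_state(x, y, v, wheelbase, w_obj, ramp):
    """Input normalization of Eq. (14); `ramp` holds the lane geometry
    x_ramp, l_ramp, y_ramp, w_ramp (key names are ours)."""
    return np.concatenate([[
        (x - wheelbase / 2 - ramp["x_ramp"]) / ramp["l_ramp"],
        (x + wheelbase / 2 - ramp["x_ramp"]) / ramp["l_ramp"],
        (y - ramp["y_ramp"]) / ramp["w_ramp"],
        (v - 2.0) / (34.0 - 2.0),          # (v - v_min) / (v_max - v_min)
    ], w_obj])                             # append the 3 objective weights

def imitation_loss(pi, pi_nn, eps=1e-8):
    """Modified KL divergence of Eq. (13) between the behavioral-model
    policy pi and the network output pi_nn (both length-225 vectors)."""
    pi, pi_nn = np.asarray(pi), np.asarray(pi_nn)
    return float(np.sum(pi * (np.log(pi + eps) - np.log(pi_nn + eps))))
```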
### _Attention Backbone_

We first use a multi-layer perceptron (MLP), i.e., a fully connected neural network, to expand the dimension of the inputs \(\{\bar{s}_{k}^{\iota}\}_{\iota}\) individually from \(\mathbb{R}^{7}\) to \(\mathbb{R}^{225}\) according to,

\[z_{\ell}=\sigma_{\text{ReLU}}\left(W_{\ell}z_{\ell-1}+b_{\ell}\right),\ \ \ \ell=1,\ldots,L, \tag{15}\]

where \(W_{\ell}\) and \(b_{\ell}\) are the network parameters of the \(\ell\)th layer; \(\sigma_{\text{ReLU}}(z)=\max\left\{0,z\right\}\) is an element-wise ReLU activation function; and \(L\in\mathbb{Z}\) is the number of layers. The inputs are \(z_{0}=\bar{s}_{k}^{\iota}\in\mathbb{R}^{7}\), \(\iota\in\{i\}\cup A(i)\), and the outputs of the MLP are the learned vectors \(z_{k}^{\iota}=z_{L}\in\mathbb{R}^{225}\), \(\iota\in\{i\}\cup A(i)\).

Then, we stack the learned vectors \(\{z_{k}^{\iota}\}_{\iota}\) into one matrix \(Z=[z_{k}^{i},\ldots,z_{k}^{j},\ldots]^{T}\), where each row corresponds to a learned feature vector \((z_{k}^{\iota})^{T}\). The row dimension of \(Z\) can vary with the number of interacting vehicles in \(A(i)\), which is undesirable for forming a batch tensor in neural network training [47]. Thus, we consider a maximum of \(N_{z}-1\) adjacent vehicles in \(A(i)\) such that we can append zero rows to \(Z\) and construct \(Z\in\mathbb{R}^{N_{z}\times 225}\). Moreover, we use a mask matrix \(H\in\mathbb{R}^{N_{z}\times N_{z}}\) to mark down the indices of the appended rows for the later masked self-attention process. The element of \(H\) in the \(i\)th row and \(j\)th column attains \(H_{i,j}=-\infty\) if the \(i\)th or \(j\)th row vector in \(Z\) is an appended zero vector, and \(H_{i,j}=0\) otherwise.

Subsequently, we pass \(Z\) through three identical cascaded blocks (see Fig. 6) using the following formula,

\[\begin{split}\bar{Z}_{\ell}&=\texttt{Attention}(Z_{\ell-1}|W_{Q,\ell},W_{K,\ell},W_{V,\ell}),\\ Z_{\ell}&=\sigma_{\text{ReLU}}(\bar{Z}_{\ell})+Z_{\ell-1},\ \ \ \ell=1,2,3,\end{split} \tag{16}\]

where \(\texttt{Attention}(\cdot)\) denotes the masked self-attention block [46]; \(W_{Q,\ell}\in\mathbb{R}^{225\times|Q|}\), \(W_{K,\ell}\in\mathbb{R}^{225\times|Q|}\), and \(W_{V,\ell}\in\mathbb{R}^{225\times|V|}\) are the parameters named the query, key, and value matrices of the \(\ell\)th masked self-attention; the inputs are \(Z_{0}=Z\), and each layer \(\ell=1,2,3\) produces \(Z_{\ell}\in\mathbb{R}^{N_{z}\times 225}\); the summation outside \(\sigma_{\text{ReLU}}\) corresponds to the bypass connection in Fig. 6 from the beginning of a masked self-attention block to the summation symbol \(\oplus\). This bypass connection is called a residual connection [48] and is designed to mitigate the vanishing gradient issue in the back-propagation of deep neural networks. We chose to cascade three such blocks by trading off empirical prediction performance against computational cost, in comparison with variants using different numbers of blocks.

### _Attention Mechanism_

In the \(\ell\)th attention block (16), we leverage the attention mechanism to interchange information between the row vectors of the matrix \(Z_{\ell-1}\) in the learning process.
### _Attention Mechanism_ In the \(\ell\)th attention block (16), we leverage the attention mechanism to interchange information between the row vectors of the matrix \(Z_{\ell-1}\) in the learning process. Specifically, the \(\ell\)th masked self-attention block can be represented using the following set of equations, \[Q_{\ell}=Z_{\ell-1}W_{Q,\ell}, \tag{17a}\] \[K_{\ell}=Z_{\ell-1}W_{K,\ell},\] (17b) \[V_{\ell}=Z_{\ell-1}W_{V,\ell},\] (17c) \[E_{\ell}=(Q_{\ell}K_{\ell}^{T})\circ\left[1/\sqrt{|Q|}\right]_{N_{z}\times N_{z}}+H,\] (17d) \[P_{\ell}=\texttt{Softmax}(E_{\ell},\text{dim=1}),\] (17e) \[\bar{Z}_{\ell}=P_{\ell}V_{\ell}, \tag{17f}\] where the row vectors of the matrices \(Q_{\ell}\), \(K_{\ell}\), \(V_{\ell}\) are called query, key, and value vectors, respectively, learned from the corresponding row vectors in matrix \(Z_{\ell-1}\); \(|Q|\) and \(|V|\) are the dimensions of the query and value vectors; \([x]_{a\times b}\) is a matrix of size \(a\times b\) with all entries equal to \(x\); \(\circ\) is the element-wise Hadamard product; the element \(e_{i,j}\) in the \(i\)th row and \(j\)th column of \(E_{\ell}\) represents a normalized similarity score induced by the dot-product between the \(i\)th query vector in \(Q_{\ell}\) and the \(j\)th key vector in \(K_{\ell}\); \(E_{\ell}\) essentially encodes how similar two row vectors in \(Z_{\ell-1}\) are to each other; (17e) applies \(\texttt{Softmax}(\cdot)\) to each row of \(E_{\ell}\); the element \(p_{i,j}\) in the \(i\)th row and \(j\)th column of \(P_{\ell}\) is equal to \(\exp(e_{i,j})/\sum_{k}\exp(e_{i,k})\), such that each row vector of \(P_{\ell}\) is a weight vector; each row vector in \(\bar{Z}_{\ell}\) is a weighted summation of the value vectors in \(V_{\ell}\) using the weights learned in \(P_{\ell}\). Notably, in the first layer \(\ell=1\), the mask \(H\) in (17d) sets the similarities between appended zero row vectors and other row vectors in \(Z_{0}=Z\) to \(-\infty\). Subsequently, if the \(i\)th or \(j\)th row vector in \(Z\) is an appended zero vector, (17e) results in weights \(p_{i,j}=0\) in \(P_{1}\), and (17f) yields the \(i\)th or \(j\)th row also a zero vector in \(\bar{Z}_{1}\). Furthermore, the computation in (16) inherits the zero row vectors from \(\bar{Z}_{1}\) to \(Z_{1}\). Inductively, through \(\ell=1,2,3\), the attention backbone preserves the zero rows in \(Z_{0}=Z\). Eventually, the attention backbone outputs \(Z_{3}=[f_{k}^{i},\ldots,f_{k}^{j},\ldots]^{T}\), where each row vector \((f_{k}^{\iota})^{T}\) is a learned feature vector corresponding to the input row vector \((z_{k}^{\iota})^{T}\) in \(Z=[z_{k}^{i},\ldots,z_{k}^{j},\ldots]^{T}\). ### _Policy Head_ Similar to (7), we use the SVO category \(\sigma^{i}\) of the \(i\)th driver to combine the learned information \(\{f_{k}^{j}\}_{j\in A(i)}\) from the adjacent vehicles \(j\in A(i)\) with \(f_{k}^{i}\) of the driver \(i\) and attain a single vector \(\bar{f}_{k}^{i}\in\mathbb{R}^{225}\) according to, \[\bar{f}_{k}^{i}=\alpha(\sigma^{i})f_{k}^{i}+\beta(\sigma^{i})\Sigma_{j\in A(i)}f_{k}^{j}. \tag{18}\] We use another MLP similar to (15) with a bypass residual connection. This is followed by a final \(\texttt{Softmax}\) activation function whose \(i\)th output element admits the following form, \[\left[\texttt{Softmax}(z)\right]_{i}=\exp(z_{i})/\Sigma_{j}\exp(z_{j}), \tag{19}\] where \(z\) is a column vector and \(z_{i}\) is the \(i\)th element in \(z\). The \(\texttt{Softmax}\) calculates a probability distribution as the policy distribution output \(\pi_{NN}\in\mathbb{R}^{225}\).
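For concreteness, the following PyTorch sketch implements one masked self-attention block of Eqs. (16)-(17); the class name and the nan_to_num handling of fully masked (appended) rows are our own additions.

```
import math
import torch
import torch.nn as nn

class MaskedSelfAttentionBlock(nn.Module):
    """One attention block of Eqs. (16)-(17): masked self-attention + ReLU + residual.

    Dimensions follow the paper: feature size 225, |Q| = 32, |V| = 225.
    """

    def __init__(self, d: int = 225, dq: int = 32):
        super().__init__()
        self.w_q = nn.Linear(d, dq, bias=False)   # W_Q
        self.w_k = nn.Linear(d, dq, bias=False)   # W_K
        self.w_v = nn.Linear(d, d, bias=False)    # W_V (|V| = 225)
        self.dq = dq

    def forward(self, z: torch.Tensor, h_mask: torch.Tensor) -> torch.Tensor:
        # z: (N_z, 225); h_mask: (N_z, N_z) with 0 or -inf entries.
        q, k, v = self.w_q(z), self.w_k(z), self.w_v(z)
        e = q @ k.T / math.sqrt(self.dq) + h_mask   # Eq. (17d)
        p = torch.softmax(e, dim=-1)                # Eq. (17e), rows sum to 1
        p = torch.nan_to_num(p)                     # appended rows: all -inf -> 0
        z_bar = p @ v                               # Eq. (17f)
        return torch.relu(z_bar) + z                # Eq. (16), residual connection
```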
The SANN architecture provides several advantages: 1. The normalization process normalizes the input information using lane and vehicle dimensions, which improves the prediction robustness to different highway structural dimensions and vehicle model types. 2. The learning process is interaction-aware. The attention backbone interchanges information between the feature vectors corresponding to the interacting drivers, which captures the inter-traffic dependencies in the personal reward (6). 3. The learning process is cooperation-aware. The policy head fuses the learned features using the driver's SVO \(\sigma^{i}\). This process emulates (7) and introduces the notion of cooperation/competition into learning. 4. The SANN incorporates the behavioral model priors \(\sigma^{i},w^{i}\) that are later estimated by a Bayesian filter. This offers better online transferability to different drivers. 5. The SANN is permutation invariant, namely, interchanging the order of the inputs in \(\{\bar{s}_{k}^{j}\}_{j\in A(i)}\) will not alter the value of the learned features \(\{f_{k}^{j}\}_{j\in A(i)}\) or affect the final policies \(\pi_{NN}\). Given that the interacting drivers \(j\in A(i)\) are geographically located in a 2D plane, the SANN-learned policy should not be affected by the artificial information carried in the input order. 6. The SANN can handle variable numbers of inputs up to a maximum number \(N_{z}\). This offers better transferability to different traffic conditions/densities. These properties might not necessarily be preserved by other networks, e.g., Graph Neural Networks [49] or LSTMs [50]. Then, we use this neural network behavioral model to predict the trajectories of interacting vehicles for the decision-making of our ego vehicle in the forced merging scenario. ## V Decision-Making Under Cooperation Intent Uncertainty We use the Bayesian filter to infer drivers' latent model parameters from observed traffic interactions (Sec. V-A). Then, based on the behavioral model, we incorporate the predictions of interacting drivers' behavior into a receding-horizon optimization-based controller to generate reference trajectories for the ego vehicle (Sec. V-B). In Sec. V-C, we use an interaction-guided decision tree search algorithm to solve this optimization problem and integrate it with the SANN for prediction. The learned SANN improves online prediction efficiency, while the tree-search algorithm offers a probabilistic safety guarantee and good scalability. ### _Bayesian Inference of Latent Driving Intentions_ At each time step \(k\), we assume the ego vehicle \(0\) interacts with each adjacent vehicle \(i\in A(0)\) and can observe its nearby traffic state \(s_{k}^{i},\mathbf{s}_{k}^{-i}\). Assuming that the \(i\)th driver's decision-making process follows the policy (12), we need to infer the latent social psychological parameters \(\sigma^{i},w^{i}\) in order to predict the future behavior/trajectory of vehicle \(i\). We assume the observation history of the traffic around vehicle \(i\) is available, \[\xi_{k}^{i}=[s_{0}^{i},\mathbf{s}_{0}^{-i},s_{1}^{i},\mathbf{s}_{1}^{-i},\ldots,s_{k}^{i},\mathbf{s}_{k}^{-i}], \tag{20}\] which contains the state \(s_{n}^{i}\) of vehicle \(i\) and the aggregated state \(\mathbf{s}_{n}^{-i}\) of the adjacent vehicles around vehicle \(i\) for all time steps \(n=0,1,\ldots,k\). Then, we use the following Bayesian filter to recursively estimate a posterior distribution of the latent parameters \(\sigma^{i},w^{i}\) from \(\xi_{k}^{i}\).
**Proposition 1**.: _Given a prior distribution \(\mathbb{P}\left(\sigma^{i},w^{i}|\xi_{k}^{i}\right)\) and assuming the unmodeled disturbance \(\tilde{w}_{k}^{i}\sim\mathcal{N}(0,\Sigma)\) in (2) follows a zero-mean Gaussian distribution, the posterior distribution \(\mathbb{P}\left(\sigma^{i},w^{i}|\xi_{k+1}^{i}\right)\) admits the following form,_ \[\mathbb{P}\left(\sigma^{i},w^{i}|\xi_{k+1}^{i}\right)=\frac{1}{N\left(\xi_{k+1}^{i}\right)}\cdot D(s_{k+1}^{i},\sigma^{i},w^{i},s_{k}^{i},\mathbf{s}_{k}^{-i})\cdot\mathbb{P}\left(\sigma^{i},w^{i}|\xi_{k}^{i}\right), \tag{21}\] _where \(N\left(\xi_{k+1}^{i}\right)\) is a normalization factor, and_ \[D(s_{k+1}^{i},\sigma^{i},w^{i},s_{k}^{i},\mathbf{s}_{k}^{-i})=\sum_{\gamma(s_{k}^{i})\in\Gamma(s_{k}^{i})}\Big[\mathbb{P}\left(\tilde{w}_{k}^{i}=s_{k+1}^{i}-\gamma_{1}^{1}(s_{k}^{i})\right)\cdot\pi\left(\gamma(s_{k}^{i})|\sigma^{i},w^{i},s_{k}^{i},\mathbf{s}_{k}^{-i}\right)\Big]. \tag{22}\] We note that (22) represents the transition probability of a driver moving from \(s_{k}^{i}\) to \(s_{k+1}^{i}\) following the kinematics (2) and policy (12). We initialize the Bayesian filter with a uniform distribution. Meanwhile, we can replace \(\pi\) in (22) with \(\pi_{NN}\) for faster online Bayesian inference. We also provide a proof of the proposition in the following: Proof.: We apply the Bayes rule to rewrite the posterior distribution according to, \[\begin{split}\mathbb{P}\left(\sigma^{i},w^{i}|\xi_{k+1}^{i}\right)&=\mathbb{P}\left(\sigma^{i},w^{i}|s_{k+1}^{i},\mathbf{s}_{k+1}^{-i},\xi_{k}^{i}\right)\\ &=\frac{\mathbb{P}\left(s_{k+1}^{i}|\sigma^{i},w^{i},\xi_{k}^{i}\right)\cdot\mathbb{P}\left(\mathbf{s}_{k+1}^{-i}|\xi_{k}^{i}\right)}{\mathbb{P}\left(s_{k+1}^{i},\mathbf{s}_{k+1}^{-i}|\xi_{k}^{i}\right)}\cdot\mathbb{P}\left(\sigma^{i},w^{i}|\xi_{k}^{i}\right)\\ &=\frac{1}{N\left(\xi_{k+1}^{i}\right)}\cdot D(s_{k+1}^{i},\sigma^{i},w^{i},s_{k}^{i},\mathbf{s}_{k}^{-i})\cdot\mathbb{P}\left(\sigma^{i},w^{i}|\xi_{k}^{i}\right),\end{split}\] where the aggregated state \(\mathbf{s}_{k+1}^{-i}\) is assumed conditionally independent of \(\sigma^{i},w^{i}\) and \(s_{k+1}^{i}\) given \(\xi_{k}^{i}\); we define \(N\left(\xi_{k+1}^{i}\right)=\frac{\mathbb{P}\left(s_{k+1}^{i},\mathbf{s}_{k+1}^{-i}|\xi_{k}^{i}\right)}{\mathbb{P}\left(\mathbf{s}_{k+1}^{-i}|\xi_{k}^{i}\right)}\) and rewrite the transition probability, \[D(s_{k+1}^{i},\sigma^{i},w^{i},s_{k}^{i},\mathbf{s}_{k}^{-i})=\mathbb{P}\left(s_{k+1}^{i}|\sigma^{i},w^{i},\xi_{k}^{i}\right)=\sum_{\gamma(s_{k}^{i})\in\Gamma(s_{k}^{i})}\Big[\mathbb{P}\left(\tilde{w}_{k}^{i}=s_{k+1}^{i}-\gamma_{1}^{1}(s_{k}^{i})\right)\cdot\pi\left(\gamma(s_{k}^{i})|\sigma^{i},w^{i},s_{k}^{i},\mathbf{s}_{k}^{-i}\right)\Big].\]
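The recursion in Proposition 1 reduces to an elementwise multiplication and normalization over a discretized parameter grid; the following is a minimal PyTorch sketch under our own tensor-layout assumptions, using the 22 \((\sigma^{i},w^{i})\) combinations introduced in Sec. VI-A.

```
import torch

def bayes_update(prior: torch.Tensor,
                 lik_next_state: torch.Tensor,
                 pi_nn: torch.Tensor) -> torch.Tensor:
    """One recursion of Eq. (21) over a discretized parameter grid.

    prior:          (22,) P(sigma, w | xi_k) over the 22 (sigma, w) combinations.
    lik_next_state: (225,) Gaussian likelihood of the observed next state under
                    each candidate trajectory, i.e., P(w_k = s_{k+1} - gamma_1(s_k)).
    pi_nn:          (22, 225) policy pi_NN(gamma | sigma, w, s_k, s_k^{-i})
                    evaluated for every parameter combination.
    """
    d = pi_nn @ lik_next_state          # Eq. (22): sum over candidate trajectories
    posterior = d * prior               # unnormalized Bayes update
    return posterior / posterior.sum()  # normalization factor N(xi_{k+1})
```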
### _Receding-horizon Optimization-based Control_ Leveraging the posterior from (21), we use a receding-horizon optimization-based controller to incorporate the trajectory predictions (12) of the interacting vehicles \(i\in A(0)\) and plan a reference trajectory for the ego vehicle according to \[\gamma^{*}(s_{k}^{0})=\operatorname*{argmax}_{\gamma(s_{k}^{0})\in\Gamma(s_{k}^{0})}Q_{0}\left(\mathbf{s}_{k}^{-0},\gamma(s_{k}^{0})\right). \tag{23}\] Similar to Sec. III-C, we compute the control signal \(u_{n}^{0}=[a_{n}^{0},\delta_{n}^{0}]^{T}\) at each time step \(n=k,\ldots,k+N^{\prime}\) to track this reference trajectory \(\gamma^{*}(s_{k}^{0})\) for one control sampling period of \(N^{\prime}\Delta T\) sec. Then, we update the reference trajectory using (23) after \(N^{\prime}\Delta T\) sec. Meanwhile, the cumulative reward function \(Q_{0}\left(\mathbf{s}_{k}^{-0},\gamma(s_{k}^{0})\right)\) admits the following form, \[Q_{0}\Big(\mathbf{s}_{k}^{-0},\gamma(s_{k}^{0})\Big)=\frac{1}{|A(0)|}\sum_{i\in A(0)}\Big[\mathbb{E}_{\sigma^{i},w^{i}\sim\mathbb{P}\left(\sigma^{i},w^{i}|\xi_{k}^{i}\right)}\,Q_{0}^{\prime}\Big(\mathbf{s}_{k}^{-0},\gamma(s_{k}^{0})|\sigma^{i},w^{i}\Big)\Big], \tag{24}\] where the function value \(Q_{0}^{\prime}\) is computed according to, \[Q_{0}^{\prime}\Big(\mathbf{s}_{k}^{-0},\gamma(s_{k}^{0})|\sigma^{i},w^{i}\Big)=\mathbb{E}_{\gamma(s_{k}^{i})\sim\pi\left(\cdot|\sigma^{i},w^{i},s_{k}^{i},\mathbf{s}_{k}^{-i}\right)}\Big[\sum\limits_{n=0}^{\lfloor N/N^{\prime}\rfloor}r_{0}\Big(\gamma_{nN^{\prime}}^{(n+1)N^{\prime}}(s_{k}^{0}),\gamma_{nN^{\prime}}^{(n+1)N^{\prime}}(s_{k}^{i})\Big)\Big], \tag{25}\] and the ego vehicle acts to minimize its traveling time and avoid collisions; thereby, the ego reward function \(r_{0}\) attains the following form, \[r_{0}\left(\gamma_{n_{1}}^{n_{2}}(s_{k}^{0}),\gamma_{n_{1}}^{n_{2}}(s_{k}^{i})\right)=-\mathbb{C}\left(\gamma_{n_{1}}^{n_{2}}(s_{k}^{0}),\gamma_{n_{1}}^{n_{2}}(s_{k}^{i})\right)\cdot\tau\left(\gamma_{n_{1}}^{n_{2}}(s_{k}^{0})\right).\] Note that (24) takes the expectation of the reward function with respect to the behavioral model parameters \(\sigma^{i},w^{i}\), while (25) samples trajectory predictions \(\gamma(s_{k}^{i})\) of vehicle \(i\) from the policy \(\pi\) conditioned on \(\sigma^{i},w^{i}\). In (25), \(\pi\) can be replaced by \(\pi_{NN}\) learned by the SANN to speed up the computations. Nonetheless, solving the problem (23) requires an exhaustive search over the trajectory set \(\Gamma(s_{k}^{0})\) that can be computationally demanding. Instead, we can treat the trajectory set \(\Gamma(s_{k}^{0})\) as a decision tree (see Fig. 3), and a tree-search-based algorithm can be developed to improve the computational efficiency. ### _Interaction-Guided Decision Tree Search_ ``` 0: \(\Gamma(s^{0}_{k})\), \(A(0)\), \(\pi_{NN}\), and \(\xi^{i}_{k}\), \(\Gamma(s^{i}_{k})\), \(\mathbb{P}\left(\sigma^{i},w^{i}|\xi^{i}_{k-1}\right)\) for all \(i\in A(0)\) 1: initialize \(Q_{0}\left(\mathbf{s}^{-0}_{k},\gamma(s^{0}_{k})\right)=0\) for all \(\gamma(s^{0}_{k})\in\Gamma(s^{0}_{k})\) 2: \(\mathbb{P}\left(\sigma^{i},w^{i}|\xi^{i}_{k}\right)\propto\left[\sum_{\gamma(s^{i}_{k-1})\in\Gamma(s^{i}_{k-1})}\mathbb{P}\left(\tilde{w}^{i}_{k-1}=s^{i}_{k}-\gamma^{1}_{1}(s^{i}_{k-1})\right)\cdot\pi_{NN}\left(\gamma(s^{i}_{k-1})|\sigma^{i},w^{i},s^{i}_{k-1},\mathbf{s}^{-i}_{k-1}\right)\right]\cdot\mathbb{P}\left(\sigma^{i},w^{i}|\xi^{i}_{k-1}\right)\) for all \(i\in A(0)\) \(\triangleright\) Bayesian filter using the SANN-imitated behavioral model 3: \(\mathbb{P}\left(\gamma(s^{i}_{k})|\xi^{i}_{k}\right)=\sum_{\sigma^{i},w^{i}}\pi_{NN}\left(\gamma(s^{i}_{k})|\sigma^{i},w^{i},s^{i}_{k},\mathbf{s}^{-i}_{k}\right)\cdot\mathbb{P}\left(\sigma^{i},w^{i}|\xi^{i}_{k}\right)\) for all \(i\in A(0)\) \(\triangleright\) interacting behavior prediction using the SANN-imitated behavioral model 4: sort \(A(0)\) w.r.t. \(\left((x^{0}_{k}-x^{i}_{k})^{2}+(y^{0}_{k}-y^{i}_{k})^{2}\right)^{1/2}\), \(i\in A(0)\), in descending order 5: for \(i\in A(0)\) do \(\triangleright\) prioritize search over closer interactions 6: for \(\gamma(s^{0}_{k})\in\Gamma(s^{0}_{k})\) do \(\triangleright\) parallel search 7: for \(n=0,\ldots,\lfloor N/N^{\prime}\rfloor\) do \(\triangleright\) over prediction horizons 8: \(c_{n}=\sum_{\gamma(s^{i}_{k})\in\Gamma(s^{i}_{k})}\mathbb{P}\left(\gamma(s^{i}_{k})|\xi^{i}_{k}\right)\cdot c\left(\gamma^{(n+1)N^{\prime}}_{nN^{\prime}}(s^{0}_{k}),\gamma^{(n+1)N^{\prime}}_{nN^{\prime}}(s^{i}_{k})\right)\) \(\triangleright\) probability of collision with \(i\) 9: if \(c_{n}>0.5\) then \(\triangleright\) trim unsafe decision tree branch 10: \(\Gamma(s^{0}_{k})\leftarrow\Gamma(s^{0}_{k})\setminus\Gamma_{\text{unsafe}}(s^{0}_{k})\), where \(\Gamma_{\text{unsafe}}(s^{0}_{k}):=\left\{\gamma\in\Gamma(s^{0}_{k})\,|\,\gamma^{(n+1)N^{\prime}}_{nN^{\prime}}=\gamma^{(n+1)N^{\prime}}_{nN^{\prime}}(s^{0}_{k})\right\}\), and terminate all parallel search branches in \(\Gamma_{\text{unsafe}}(s^{0}_{k})\) 11: else 12: \(Q_{0}\left(\mathbf{s}^{-0}_{k},\gamma(s^{0}_{k})\right)\gets Q_{0}\left(\mathbf{s}^{-0}_{k},\gamma(s^{0}_{k})\right)+\lambda^{n}\cdot\sum_{\gamma(s^{i}_{k})\in\Gamma(s^{i}_{k})}r_{0}\left(\gamma^{(n+1)N^{\prime}}_{nN^{\prime}}(s^{0}_{k}),\gamma^{(n+1)N^{\prime}}_{nN^{\prime}}(s^{i}_{k})\right)\cdot\mathbb{P}\left(\gamma(s^{i}_{k})|\xi^{i}_{k}\right)\) \(\triangleright\) update discounted cumulative reward 13: end if 14: end for 15: end for 16: end for 17: \(\gamma^{*}(s^{0}_{k})=\operatorname*{argmax}_{\gamma(s^{0}_{k})\in\Gamma(s^{0}_{k})}Q_{0}\left(\mathbf{s}^{-0}_{k},\gamma(s^{0}_{k})\right)\) 18: return \(\gamma^{*}(s^{0}_{k})\) ``` **Algorithm 1** Interaction-Guided Decision Tree Search
For online deployment, we incorporate the SANN-imitated behavioral model \(\pi_{NN}\) into the decision tree search of Algorithm 1 to facilitate the Bayesian filtering, the trajectory prediction of interacting vehicles, and the decision-making of the ego vehicle. As shown in Fig. 3, the trajectory set \(\Gamma(s^{0}_{k})\) can be viewed as a decision tree that is initiated from the current state \(s^{0}_{k}\) as the tree root. Each trajectory \(\gamma(s^{0}_{k})\) is a branch in the decision tree \(\Gamma(s^{0}_{k})\), each state in a trajectory is a node, and each (directed) edge encodes reachability from one node to another in a certain trajectory/branch. Meanwhile, two distinct branches \(\gamma^{(m_{1})}(s^{0}_{k})\), \(\gamma^{(m_{2})}(s^{0}_{k})\) can share the same trajectory segment, i.e., \((\gamma^{(m_{1})})^{n_{1}}_{0}(s^{0}_{k})=(\gamma^{(m_{2})})^{n_{1}}_{0}(s^{0}_{k})=\gamma^{n_{1}}_{0}(s^{0}_{k})\). Algorithm 1 searches over this decision tree, updates the cumulative reward (24) for each branch, and trims unsafe branches, thereby improving the search efficiency. For example, in Fig. 3b), the on-ramp ego vehicle is trying to merge while there is a following highway vehicle (in grey). The normal lane-changing trajectories/branches share the same subset of initial trajectory segments \(\gamma^{n_{1}}_{0}(s^{0}_{k})\) and are highly likely to cause a collision with the highway vehicle. Therefore, we can trim all the lane-change trajectories from the decision tree (shown by semi-transparent lines in Fig. 3b) and terminate further searches along these unsafe branches. This process is formalized in Algorithm 1. In lines 2 and 3, we use the SANN behavioral model \(\pi_{NN}\) to update the posterior distribution in the Bayesian filter (see Proposition 1) and to predict the trajectory distributions that are used to compute the reward (24). Since a closer interacting vehicle is more likely to collide with the ego vehicle in the near future, we rank the indices of the adjacent vehicles in \(A(0)\) in descending order with respect to their Euclidean distances to our ego vehicle. The three for-loops in lines 5, 6, and 7 of Algorithm 1 search over interactions with different vehicles, branches, and prediction horizons of a branch, respectively. In lines 8-13, the sorted set \(A(0)\) prioritizes collision checks with closer interacting vehicles and trims unsafe branches as early as possible. We trim all branches sharing the common set of nodes \(\gamma^{(n+1)N^{\prime}}_{nN^{\prime}}(s^{0}_{k})\) if the ego vehicle trajectory segment \(\gamma^{(n+1)N^{\prime}}_{nN^{\prime}}(s^{0}_{k})\) has a probability of collision with the \(i\)th vehicle higher than a threshold of 0.5. Otherwise, we update the cumulative reward according to line 12 in Algorithm 1. Eventually, in line 17, we solve (23) by simply choosing the branch with the maximum cumulative reward. We also note that the three for-loops enable linear scalability of this algorithm with respect to both the number of interacting drivers in \(A(0)\) and the number of prediction horizons \(\lfloor N/N^{\prime}\rfloor\).
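A compact Python sketch of the search loops in lines 5-17 follows; the data structures, the discount factor value, and the helper names (collide, r0) are our own assumptions, with ⌊N/N'⌋ = 12 corresponding to NΔT = 6 sec and N'ΔT = 0.5 sec.

```
def tree_search(ego_branches, neighbors, traj_prob, collide, r0,
                lam=0.9, n_horizons=12):
    """Sketch of Algorithm 1, lines 5-17.

    ego_branches: list of ego trajectories, each a list of per-horizon segments.
    neighbors:    interacting vehicle ids, pre-sorted by distance (line 4).
    traj_prob[i]: list of (probability, predicted segments) for vehicle i (line 3).
    collide(a,b): 1 if ego segment a collides with predicted segment b, else 0.
    r0(a, b):     ego reward of a segment pair.
    lam:          discount factor in line 12 (an assumed value).
    """
    q = {id(b): 0.0 for b in ego_branches}
    alive = set(q)                          # branches not yet trimmed
    for i in neighbors:                     # closer vehicles checked first
        for branch in ego_branches:         # "parallel search" in the paper
            if id(branch) not in alive:
                continue
            for n in range(n_horizons + 1):
                seg = branch[n]
                c_n = sum(p * collide(seg, pred[n]) for p, pred in traj_prob[i])
                if c_n > 0.5:               # line 9: trim unsafe branches
                    alive -= {id(b) for b in ego_branches if b[n] == seg}
                    break
                q[id(branch)] += lam**n * sum(
                    p * r0(seg, pred[n]) for p, pred in traj_prob[i])
    survivors = [b for b in ego_branches if id(b) in alive]
    return max(survivors, key=lambda b: q[id(b)])   # line 17: argmax
```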
## VI Simulation and Experimental Results We first present both qualitative and quantitative results of real-world trajectory prediction using our behavioral model and the Bayesian filter in Sec. VI-A. We report the performance of the SANN imitation learning in Sec. VI-B. We provide extensive evaluations of the proposed decision-making framework in forced merging tasks using our simulation environment (Sec. VI-C), the real-world High-D traffic dataset [32] (Sec. VI-D), and the Carla simulator [33] (Sec. VI-E). We note that we **do not** need to re-tune the model hyperparameters or re-train the SANN for the different forced merging evaluation environments. The demonstration videos are available at [https://xiaolisean.github.io/publication/2023-10-31-TCST2024](https://xiaolisean.github.io/publication/2023-10-31-TCST2024) ### _Real-world Trajectory Prediction_ We present qualitative (Fig. 8) and quantitative (Fig. 7) results of predicting real-world trajectories using our behavioral model and the Bayesian filter. In our first task, we aim to reproduce a real-world trajectory from the naturalistic High-D dataset [32]. In a High-D traffic segment (Fig. 8) of 12 sec, we identify a target vehicle \(i\) (in green) that is overtaking its leading vehicle \(2\) (in orange). We initialize a virtual vehicle (in red) at \(t_{k}=0\) using the actual vehicle state \(s^{i}_{k}\), and we set \(\sigma^{i}=\) "competitive", \(w^{i}=[0,1,0]^{T}\) in the behavioral model (9). Afterwards, we control this virtual vehicle using (9), assuming no tracking error, and we set the control sampling period to \(N^{\prime}\Delta T=0.5\) sec. We compare the trajectories synthesized by our behavioral model with the actual ones in Fig. 8. Our behavioral model demonstrates good prediction accuracy from 0-6 sec, which is adequate for a predictive controller with a sampling period \(N^{\prime}\Delta T\leq 6\) sec. We also note that the prediction error is large at \(t_{k}=12\) sec because the error accumulates over longer prediction windows.
Our prediction captures the overtaking trajectories and qualitatively demonstrates the effectiveness of our method in modeling real-world driving behavior. Fig. 7 summarizes the error statistics of the trajectory prediction. In online application scenarios, given an observed interaction history \(\xi^{i}_{k}\), we recursively infer the latent behavioral model parameters \(\sigma^{i},w^{i}\) using the Bayesian filter (21) as a posterior distribution \(\mathbb{P}\left(\sigma^{i},w^{i}|\xi^{i}_{k}\right)\). Subsequently, the interacting vehicles' trajectories are predicted as a distribution using policy (12) according to \(\mathbb{P}\left(\gamma(s^{i}_{k})|\xi^{i}_{k}\right)=\sum_{\sigma^{i},w^{i}}\pi\left(\gamma(s^{i}_{k})|\sigma^{i},w^{i},s^{i}_{k},\mathbf{s}^{-i}_{k}\right)\cdot\mathbb{P}\left(\sigma^{i},w^{i}|\xi^{i}_{k}\right)\). We quantify the prediction error between the actual trajectory \(\hat{\gamma}(s^{i}_{k})=\left\{\hat{s}^{i}_{n}\right\}_{n=k}^{k+k^{\prime}}\) and the predicted trajectory distribution \(\mathbb{P}\left(\gamma(s^{i}_{k})|\xi^{i}_{k}\right)\) using the following metric \[\operatorname{\mathbf{err}}\left(\hat{\gamma}(s^{i}_{k}),\xi^{i}_{k}\right)=\mathbb{E}_{\gamma(s^{i}_{k})\sim\mathbb{P}\left(\gamma(s^{i}_{k})|\xi^{i}_{k}\right)}\left[\frac{1}{k^{\prime}+1}\sum\limits_{s^{i}_{n}\in\gamma(s^{i}_{k}),\,n=k}^{n=k+k^{\prime}}\left\|\begin{bmatrix}x^{i}_{n}-\hat{x}^{i}_{n}\\ y^{i}_{n}-\hat{y}^{i}_{n}\end{bmatrix}\right\|_{2}\right], \tag{26}\] where \(k^{\prime}\Delta T\) (in seconds) is the duration of the actual trajectory \(\hat{\gamma}(s^{i}_{k})\). This metric computes the expected \(\ell_{2}\)-norm error in position prediction, averaged over time steps. We sample traffic segments of different durations from the High-D dataset; each traffic segment is bisected at the time instance \(t_{k}\) into a training segment \(\xi^{i}_{k}\) and a prediction segment \(\hat{\gamma}(s^{i}_{k})\) corresponding to a sampled vehicle \(i\). We apply the aforementioned procedure to each training segment \(\xi^{i}_{k}\) and calculate the prediction error (26) using the corresponding prediction segment \(\hat{\gamma}(s^{i}_{k})\). Fig. 8: Reproducing a real-world overtaking trajectory: The trajectories of the virtual vehicle \(i\) (in red) are synthesized using our behavioral model (9), considering a driver with model parameters \(\sigma^{i}=\) "competitive", \(w^{i}=[0,1,0]^{T}\). This virtual "competitive" driver overtakes the vehicle \(2\in A(i)\), minimizing the traveling time \(\tau\). The resulting trajectories in red match the actual trajectories in green. Fig. 7: Error statistics of trajectory prediction over 6,774 High-D trajectories comprising in total 28,078 driving seconds: (a) Each grid cell reports the mean of the prediction errors using \(\xi^{i}_{k}\), \(\hat{\gamma}(s^{i}_{k})\) of the same lengths. (b) Each subplot visualizes the mean prediction errors (red lines) corresponding to \(\hat{\gamma}(s^{i}_{k})\) of the same duration versus variable durations of \(\xi^{i}_{k}\). We use red shaded areas to denote one standard deviation and blue shaded areas to represent the minimum/maximum error bounds.
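The error metric (26) amounts to a probability-weighted average of per-step position errors; below is a small NumPy sketch under our own array-layout assumptions.

```
import numpy as np

def expected_position_error(actual_xy: np.ndarray, predicted: list) -> float:
    """Prediction error metric of Eq. (26).

    actual_xy: (k'+1, 2) observed positions (x_n, y_n) for n = k, ..., k + k'.
    predicted: list of (probability, (k'+1, 2) positions), one entry per
               trajectory in the predicted distribution P(gamma | xi_k).
    """
    err = 0.0
    for prob, traj_xy in predicted:
        l2 = np.linalg.norm(traj_xy - actual_xy, axis=1)  # per-step position error
        err += prob * l2.mean()                           # average over time steps
    return err  # expectation over the trajectory distribution
```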
Meanwhile, in the sequel, we assume \(w^{i}\) lies in the finite set \[W=\left\{[0,0,1],\,[0,1,1]/2,\,[0,1,0],\,[1,1,1]/3,\,[1,0,1]/2,\,[1,1,0]/2,\,[1,0,0]\right\} \tag{27}\] to reduce the dimension of the parameter space \((\sigma^{i},w^{i})\). Subsequently, we have 22 possible combinations of \((\sigma^{i},w^{i})\): we assign the 7 different \(w^{i}\in W\) to each of the three SVO categories \(\sigma^{i}\neq\) "altruistic" (21 combinations); if \(\sigma^{i}=\) "altruistic", the weights \(w^{i}\) do not matter for the altruistic driver as defined in (7), (8), which yields the 22nd combination. As shown in Fig. 7a, the mean prediction errors are below \(4\ \mathrm{m}\) for predictions 1-9 sec ahead when using training segments of 1-18 sec. We also note that longer training segments \(\xi_{k}^{i}\) (see Fig. 7b) reduce both the standard deviation of the prediction error and its maximum values. However, for longer prediction windows of 7-9 sec (see Fig. 7b), the error accumulates and leads to larger standard deviations and mean errors, which are also observed in Fig. 8. For shorter prediction windows of 1-6 sec, we have a maximum error below \(5\ \mathrm{m}\) in most cases, and the standard deviation is smaller than \(1.5\ \mathrm{m}\) (see Fig. 7b). The results in Fig. 8 and Fig. 7 provide evidence that our algorithms have good quantitative accuracy over shorter prediction windows and good qualitative accuracy over longer prediction windows. Based on these considerations, we set the trajectory length to \(N\Delta T=6\ \mathrm{sec}\), so that we have good prediction performance over a shorter prediction window of 6 sec. This duration covers a complete lane change of \(T_{\text{lane}}=4\ \mathrm{sec}\) and suffices for the task of predicting interacting vehicles' trajectories for ego vehicle control. ### _Imitation Learning with SANN_ The goal of imitation learning is to train the SANN to mimic the behavioral model; namely, the predicted policy \(\pi_{NN}\) should accurately match the actual one \(\pi\) from the behavioral model (12). We leverage the High-D dataset [32] to synthesize realistic traffic \(s_{k}^{i},\mathbf{s}_{k}^{-i}\) for training the SANN. We randomly sample a frame from the High-D traffic together with a target vehicle \(i\), so that we can extract the states \(s_{k}^{i},\mathbf{s}_{k}^{-i}\) of vehicle \(i\) and its interacting vehicles. Together with sampled parameters \(\sigma^{i},w^{i}\), we can compute the actual policy distribution \(\pi\left(\cdot|\sigma^{i},w^{i},s_{k}^{i},\mathbf{s}_{k}^{-i}\right)\) using (12). We repeat this procedure to collect a dataset of 183,679 data points. We decompose the dataset into training (70%), validation (15%), and test (15%) sets. The MLP in the attention backbone has three layers of sizes 7, 32, and 225, respectively. The MLP in the policy head has two layers of sizes 225 and 225. Furthermore, we set \(|Q|=32\), \(|V|=225\), and the maximum number of feature vectors to \(N_{z}=9\). We use Python with PyTorch [47] to train the SANN using the training dataset and the loss function (13). We train the SANN for 1000 epochs using a batch stochastic gradient descent algorithm with momentum (batch size of 200). We set the initial learning rate and momentum to \(0.01\) and \(0.9\), respectively. We also adopt a learning rate scheduler that decreases the learning rate by half every 200 epochs. The training process takes in total 7 hours (25 sec per epoch) on a computer with 16 GB RAM and an Intel Xeon CPU E3-1264 V3.
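The training recipe above can be reproduced in a few lines of PyTorch; the stand-in model and data below are placeholders of our own so that the sketch runs, and imitation_loss is the Eq. (13) sketch shown earlier.

```
import torch
import torch.nn as nn

# Placeholder stand-ins (our own) for the SANN and the 183,679-point dataset.
model = nn.Sequential(nn.Linear(7, 225), nn.ReLU(),
                      nn.Linear(225, 225), nn.Softmax(dim=-1))
batches = [(torch.randn(200, 7), torch.softmax(torch.randn(200, 225), dim=-1))
           for _ in range(10)]

optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=200, gamma=0.5)

for epoch in range(1000):
    for x, pi_target in batches:             # batch size 200, as in the text
        optimizer.zero_grad()
        loss = imitation_loss(pi_target, model(x))
        loss.backward()
        optimizer.step()
    scheduler.step()                          # halve the learning rate every 200 epochs
```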
We evaluate the performance of the SANN in the task of predicting the policy distributions in the test dataset, and the results are reported in Fig. 9. For the majority of the test cases, the SANN achieves losses smaller than 0.4, which corresponds to good statistical performance. Moreover, in the example (see Fig. 9) with a relatively larger loss of 0.8514, the learned policy (in blue) also qualitatively matches the actual one (in red). To conclude, the SANN imitates the proposed behavioral model with good accuracy. Hence, as presented in Algorithm 1, we use this SANN-learned policy to perform Bayesian filtering and trajectory prediction to facilitate the online decision-making for our ego vehicle. ### _Forced Merging in Simulations_ We set up a simulation environment (see Fig. 10) where the interacting vehicle is controlled by the behavioral model (9) with \(\sigma^{1}=\) "altruistic". We use the proposed Algorithm 1 to control the ego vehicle. For this and the following experiments, we set the control sampling period to \(N^{\prime}\Delta T=0.5\ \mathrm{sec}\) for both Algorithm 1 and the behavioral model. Instead of passively inferring the driver's intention from its behavior, Algorithm 1 controls the ego vehicle and exhibits a merging strategy with proactive interaction to test the interacting driver's intention. Specifically, in Fig. 10b), the interacting driver 1 first drives at a constant speed from \(0\) to \(0.5\) sec. Meanwhile, from \(0\) to \(1\) sec, the ego vehicle 0 is not certain whether driver 1 will yield the right of way for its merging attempt and, therefore, it tentatively merges after vehicle 1 with longitudinal deceleration. Meanwhile, in the time interval between \(0.5\) and \(1.5\) sec, the "altruistic" driver of vehicle 1 notices the merging intention of the ego vehicle and decelerates to promote merging. In the aforementioned interactions, the ego vehicle gradually updates its belief of the latent model parameters \(\sigma^{1},w^{1}\) of driver 1. As shown in Fig. 10a), our algorithm correctly infers the "altruistic" identity of driver 1. Subsequently, the ego vehicle aborts the merging action due to safety concerns, with longitudinal acceleration to build up a speed advantage for future merging. Then, in the time interval between \(2.0\) and \(5.5\) sec, being aware of the yielding behavior, the ego vehicle re-merges before vehicle 1 with longitudinal acceleration. This simulation example provides evidence that our method can effectively interpret the driving intentions of other drivers. Moreover, our decision-making module can leverage this information to facilitate the forced merging task while ensuring the safety of the ego vehicle. Fig. 9: Test statistics and examples of imitation learning on the test dataset. The histogram presents the loss statistics between the SANN-learned policy distributions \(\pi_{NN}\) and the actual ones \(\pi\) computed from the behavioral model (12). Five qualitative examples are visualized in call-out boxes: the \(y\)-axis shows the probability of driver \(i\) taking a certain trajectory \(\gamma(s_{k}^{i})\). For comparison, policies \(\pi\) and \(\pi_{NN}\) are plotted above and below the dashed line, respectively, in mirror reflection. ### _Forced Merging in Real-world Traffic Dataset_ We further evaluate the performance of our method on the High-D [32] real-world traffic dataset. There are 60 recordings in the High-D dataset, where recordings 58-60 correspond to highways with ramps. In recordings 58-60, we identify in total 75 on-ramp vehicles (High-D target vehicles) that merge into highways. For each one of the 75 vehicles, we extract its initial state at the recording frame when it appears on the ramp. Then, we initialize a virtual ego vehicle using this state and control this virtual ego vehicle using our decision-making module.
Other vehicles are simulated using the actual traffic recordings, where we neglect the interaction between the virtual ego vehicle and the High-D target vehicle. Eventually, we generate 75 forced merging test cases by repeating this procedure over all target vehicles. We visualize two test cases out of the 75 as examples in Figs. 11 and 12. The two interacting vehicles 1 and 2 in Fig. 11 are driving at approximately constant speeds. Therefore, our decision-making module accelerates the virtual ego vehicle to a comparable speed and merges the virtual ego vehicle in between the two interacting vehicles. In the second example (see Fig. 12), our decision-making module can interpret the yielding behavior of vehicle 2 and thereby merges the ego vehicle into the highway in a timely manner. In both examples, our algorithm is able to complete the forced-merging task faster than the real-world human drivers (in green boxes). Meanwhile, we identify a test case as a failure if our method fails to merge the virtual ego vehicle into the highway before the end of the ramp, or if the ego vehicle collides with other vehicles or road boundaries. Otherwise, we consider it a success. In the three recordings, we have 18, 21, and 36 test cases, respectively. Our method achieves a success rate of 100% in all recordings. We also report the time it took to merge the virtual ego vehicle in Fig. 13. Compared to the actual human driver of the target vehicle (green bars), our decision-making module can merge the ego vehicle into the highway in a shorter time. Fig. 11: A forced merging evaluation example on the High-D dataset: Our ego vehicle (in red) accelerates first to create sufficient gaps between highway vehicles 1 and 2, which are driving approximately at constant speeds. The ego vehicle successfully merges into the highway in \(5.2\) sec, which is faster than the actual human driver (in green). Fig. 10: A forced merging example of proactive interaction in a simulation: (a) Each subplot reports a posterior distribution \(\mathbb{P}\left(\sigma^{i},w^{i}|\xi_{k}^{i}\right)\) at a certain time instance \(t_{k}\) from the Bayesian filter (21). The \(x\)-axis shows the 7 cases of \(w^{1}\in W\), and the \(y\)-axis shows three different SVO categories, with \(\sigma^{i}=\) "altruistic" standing alone; (b) Each subplot visualizes a frame of the highway interaction between the ego vehicle and vehicle 1, controlled by Algorithm 1 and the behavioral model (9), respectively. Meanwhile, for 64 out of the 75 test cases, our method completes the task within \(5\ \mathrm{sec}\), while we model a complete lane change with \(T_{\text{lane}}=4\ \mathrm{sec}\) in Sec. II-C. The results demonstrate that our method can successfully complete the forced merging task with traffic in real-world recordings in a timely and safe manner. Though the traffic is realistic in the dataset, we also note that there is no actual interaction between the virtual ego vehicle and other vehicles. Namely, the recorded vehicles in the dataset will not respond to the actions of our virtual ego vehicle. ### _Forced Merging in Diverse Carla Traffics_ We set up a forced merging scenario in the Carla simulator [33] (see Fig. 15) and test our decision-making module in diverse and reactive simulated traffic. The vehicles in the traffic are controlled by the default Carla Traffic Manager. In the previous experiments, we assumed that the ego vehicle was able to accurately track the reference trajectories \(\gamma^{*}(s_{k}^{0})\) from Algorithm 1.
Fig. 12: Another forced merging evaluation example on the High-D dataset: The interacting vehicle 2 changes its lane in order to promote the merging action of the on-ramp vehicle. Our ego vehicle (in red) merges into the highway once it observes the lane change behavior of vehicle 2. The ego vehicle successfully merges into the highway in \(5.72\) sec, which is faster than the actual human driver (in green). To mimic the real application scenario of our algorithm, we developed a PID controller for reference trajectory tracking. As visualized in the system diagram of Fig. 2, Algorithm 1 in the high-level decision-making module updates the optimal reference trajectories \(\gamma^{*}(s_{k}^{0})\) every \(N^{\prime}\Delta T=0.5\ \mathrm{sec}\). Meanwhile, to track this reference trajectory \(\gamma^{*}(s_{k}^{0})\), the low-level PID controller computes the steering and throttle signals of the actual vehicle plant (simulated by Carla using Unreal Engine) at 20 Hz. Fig. 16 illustrates our method's capability to merge the ego vehicle into a dense highway platoon in Carla. From \(0\) to \(4.60\ \mathrm{sec}\), the ego vehicle attempts to merge between vehicles 1 and 2 and eventually aborts the lane change due to safety concerns. From \(4.60\) to \(6.12\ \mathrm{sec}\), vehicle 3 is decelerating due to a small headway distance, and vehicle 2 is accelerating, seeing an enlarged headway distance as a result of the acceleration of vehicle 1 before \(4.60\ \mathrm{sec}\). Consequently, the gap between vehicles 2 and 3 is enlarged due to this series of interactions from \(4.60\) to \(6.12\ \mathrm{sec}\). Taking advantage of these interactions, our ego vehicle decides to accelerate and merge before vehicle 3 at \(4.60\ \mathrm{sec}\). Moreover, we are interested in quantifying the state of the traffic that the ego vehicle directly interacts with, i.e., the traffic in the lane near the ramp. We first define the traffic Region Of Interest (ROI) as the traffic in the near-ramp lane between the longitudinal positions \(x_{\text{ramp}}-l_{\text{ramp}}\) and \(x_{\text{ramp}}\) (see Fig. 1). Subsequently, we define the following two variables: we compute the average traffic flow as the number of vehicles entering this traffic ROI divided by the entire simulation time, and we calculate the traffic density as the number of vehicles located in this traffic ROI divided by the length of this region \(l_{\text{ramp}}\) at a certain time instance. Then, we average the traffic density over measurements taken every \(\Delta T=0.05\ \mathrm{sec}\), which yields the average traffic density. The two variables quantify the average flow and density of the near-ramp traffic that the ego vehicle directly interacts with. For the example in Fig. 16, the traffic in the ROI has an average traffic density of \(0.055\ \mathrm{vehicle/meter}\) and an average traffic flow of \(0.3646\ \mathrm{vehicle/sec}\).
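As a sketch (our own construction) of the two ROI metrics just defined, assuming per-step logs of longitudinal positions in the near-ramp lane and illustrative values for \(l_{\text{ramp}}\) and \(x_{\text{ramp}}\):

```
DT = 0.05        # measurement period Delta T in seconds
L_RAMP = 200.0   # ramp length l_ramp in meters (assumed value)
X_RAMP = 400.0   # ramp-end longitudinal coordinate x_ramp (assumed value)

def roi_metrics(position_log: list) -> tuple:
    """position_log[t] maps vehicle id -> longitudinal position at step t."""
    in_roi_prev = set()
    entered = 0
    densities = []
    for frame in position_log:
        in_roi = {vid for vid, x in frame.items() if X_RAMP - L_RAMP <= x <= X_RAMP}
        entered += len(in_roi - in_roi_prev)        # vehicles newly entering the ROI
        densities.append(len(in_roi) / L_RAMP)      # vehicles per meter at this step
        in_roi_prev = in_roi
    avg_flow = entered / (len(position_log) * DT)   # vehicles per second
    avg_density = sum(densities) / len(densities)   # averaged over measurements
    return avg_flow, avg_density
```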
Fig. 13: Forced merging test statistics in High-D: The results are reported separately for the three recordings, and our method achieves a 100% success rate. (Upper) We visualize the time-to-merge of the virtual ego vehicles (red bars) in comparison with those of the actual High-D vehicles (green bars). (Lower) Three histograms visualize the number of cases in which our method takes a certain number of seconds to merge the virtual ego vehicle. We also note that the vehicles controlled by the default Carla Traffic Manager have no interpretation of other drivers' intentions and are, therefore, unlikely to yield to the ego vehicle. This introduces additional difficulty in merging into a dense vehicle platoon (such as the example in Fig. 16), even for human drivers. Therefore, we introduce another variable, the probability of yielding, to obtain more diversified evaluations of our method. Specifically, for traffic with zero probability of yielding, we control all vehicles in the traffic using the default Carla Traffic Manager. When it is set to a non-zero value of \(X\%\), the vehicles in the traffic ROI have an \(X\%\) probability of yielding to the on-ramp ego vehicle. The yielding behavior is generated by the Cooperate Forced Controller [51]. Similar to the Intelligent Driver Model (IDM) [52] that controls a vehicle to follow the leading vehicle ahead, the Cooperate Forced Controller is a car-following controller derived from the IDM, but assuming the on-ramp ego vehicle is the leading vehicle. We test our algorithm under various traffic conditions with different settings. As shown in Fig. 14, we achieve a 100% success rate among 200 test cases with different traffic densities and different probabilities of yielding for interacting vehicles. In the majority of the test cases, the ego vehicle takes less than \(9\) sec to merge. Generally, in denser traffic, our algorithm needs more time to interact with more vehicles and, thus, more time to merge. With a higher probability of yielding, the algorithm takes less time to complete the merging tasks on average. Notably, our algorithm can achieve a 100% success rate even in traffic with zero probability of yielding. ## VII Conclusions In this paper, we proposed an interaction-aware decision-making module for autonomous vehicle motion planning and control. In our approach, interacting drivers' intentions are modeled as hidden parameters in the proposed behavioral model and are estimated using a Bayesian filter. Subsequently, interacting vehicles' behaviors are predicted using the proposed behavioral model. For the online implementation, the Social-Attention Neural Network (SANN) was designed and utilized to imitate the behavioral model in predicting the interacting drivers' behavior. The proposed approach is easily transferable to different traffic conditions. In addition, the decision tree search algorithm was proposed for faster online computation; this algorithm leads to linear scalability with respect to the number of interacting drivers and prediction horizons. Finally, a series of studies, based on naturalistic traffic data and simulations, were performed to demonstrate the capability of the proposed decision-making module. In particular, we have shown that the behavioral model has good prediction accuracy, and that the proposed algorithm is successful in merging the ego vehicle in various simulations and real-world traffic scenarios, without the need for re-tuning the model hyperparameters or re-training the SANN. Fig. 14: Forced merging test statistics in the Carla simulator with different traffic conditions: The \(x\)-axis shows four different traffic settings where vehicles have different probabilities of being willing to yield to the ego vehicle. We have 50 test cases for each of the four traffic settings. Each dot is a single test case, where the color and the size of the dot reflect the time needed to merge the ego vehicle into the highway. The \(y\) and \(z\) axes report the average density and flow of the traffic in the ROI.
Fig. 15: A forced merging example in the Carla simulator: the ego vehicle is the black vehicle in the middle lower side of each subplot, with its trajectories in green lines and reference trajectories (from our decision-making module) in red lines.
2306.00230
Predictive Limitations of Physics-Informed Neural Networks in Vortex Shedding
The recent surge of interest in physics-informed neural network (PINN) methods has led to a wave of studies that attest to their potential for solving partial differential equations (PDEs) and predicting the dynamics of physical systems. However, the predictive limitations of PINNs have not been thoroughly investigated. We look at the flow around a 2D cylinder and find that data-free PINNs are unable to predict vortex shedding. Data-driven PINN exhibits vortex shedding only while the training data (from a traditional CFD solver) is available, but reverts to the steady state solution when the data flow stops. We conducted dynamic mode decomposition and analyzed the Koopman modes in the solutions obtained with PINNs versus a traditional fluid solver (PetIBM). The distribution of the Koopman eigenvalues on the complex plane suggests that PINN is numerically dispersive and diffusive. The PINN method reverts to the steady solution possibly as a consequence of spectral bias. This case study raises concerns about the ability of PINNs to predict flows with instabilities, specifically vortex shedding. Our computational study supports the need for more theoretical work to analyze the numerical properties of PINN methods. The results in this paper are transparent and reproducible, with all data and code available in public repositories and persistent archives; links are provided in the paper repository at \url{https://github.com/barbagroup/jcs_paper_pinn}, and a Reproducibility Statement within the paper.
Pi-Yueh Chuang, Lorena A. Barba
2023-05-31T22:59:52Z
http://arxiv.org/abs/2306.00230v1
# Predictive Limitations of Physics-Informed Neural Networks in Vortex Shedding ###### Abstract The recent surge of interest in physics-informed neural network (PINN) methods has led to a wave of studies that attest to their potential for solving partial differential equations (PDEs) and predicting the dynamics of physical systems. However, the predictive limitations of PINNs have not been thoroughly investigated. We look at the flow around a 2D cylinder and find that data-free PINNs are unable to predict vortex shedding. Data-driven PINN exhibits vortex shedding only while the training data (from a traditional CFD solver) is available, but reverts to the steady state solution when the data flow stops. We conducted dynamic mode decomposition and analyzed the Koopman modes in the solutions obtained with PINNs versus a traditional fluid solver (PetIBM). The distribution of the Koopman eigenvalues on the complex plane suggests that PINN is numerically dispersive and diffusive. The PINN method reverts to the steady solution possibly as a consequence of spectral bias. This case study raises concerns about the ability of PINNs to predict flows with instabilities, specifically vortex shedding. Our computational study supports the need for more theoretical work to analyze the numerical properties of PINN methods. The results in this paper are transparent and reproducible, with all data and code available in public repositories and persistent archives; links are provided in the paper repository at [https://github.com/barbagroup/jcs_paper_pinn](https://github.com/barbagroup/jcs_paper_pinn), and a Reproducibility Statement within the paper. keywords: computational fluid dynamics, physics-informed neural networks, dynamic mode analysis, Koopman analysis, vortex shedding ## 1 Introduction In recent years, research interest in using Physics-Informed Neural Networks (PINNs) has surged. The idea of using neural networks to represent solutions of ordinary and partial differential equations goes back to the 1990s [1; 2], but since the term PINN was coined about five years ago, the field has exploded. Partly, it reflects the immense popularity of all things machine learning and artificial intelligence (ML/AI). It also seems very attractive to be able to solve differential equations without meshing the domain, and without having to discretize the equations in space and time. PINN methods incorporate the differential equations as constraints in the loss function, and obtain the solution by minimizing the loss function using standard ML techniques. They are easily implemented in a few lines of code, taking advantage of the ML frameworks that have become available in recent years, such as PyTorch. In contrast, traditional numerical solvers for PDEs such as the Navier-Stokes equations can require years of expertise and thousands of lines of code to develop, test and maintain. The general optimism in this field has perhaps held back critical examinations of the limitations of PINNs, and the challenges of using them in practical applications. This is compounded by the well-known fact that the academic literature is biased toward positive results, and negative results are rarely published. We agree with a recent perspective article that calls for a view of "cautious optimism" in these emerging methods [3], for which discussion in the published literature of both successes and failures is needed.
In this paper, we examine the solution of Navier-Stokes equations using PINNs in flows with instabilities, particularly vortex shedding. Fluid dynamic instabilities are ubiquitous in nature and engineering applications, and any method competing with traditional CFD should be able to handle them. In a previous conference paper, we already reported on our observations of the limitations of PINNs in this context [4]. Although the solution of a laminar flow with vorticity, the classical Taylor-Green vortex, was well represented by a PINN solver, the same network architecture failed to give the expected solution in a flow with vortex shedding. The PINN solver accurately represented the steady solution at a lower Reynolds number of \(Re=40\), but reverted to the steady state solution in two-dimensional flow past a circular cylinder at \(Re=200\), which is known to exhibit vortex shedding. Here, we investigate this failure in more detail, comparing with a traditional CFD solver and with a data-driven PINN that receives as training data the solution of the CFD solver. We look at various fluid diagnostics, and also use dynamic mode decomposition (DMD) to analyze the flow and help explain the difficulty of the PINN solver in capturing oscillatory solutions. Other works have called attention to possible failure modes for PINN methods. Krishnapriyan et al. [5] studied PINN models of simple problems of convection, reaction, and reaction-diffusion, and found that the PINN method only works for the simplest, slowly varying problems. They suggested that the neural network architecture is expressive enough to represent a good solution, but the landscape of the loss function is too complex for the optimization to find it. Fuks and Tchelepi [6] studied the limitations of PINNs in solving the Buckley-Leverett equation, a nonlinear hyperbolic equation that models two-phase flow in porous media. They found that the neural network model was unable to represent the solution of the 1D hyperbolic PDE when shocks were present, and also concluded that the problem was the optimization process, or the loss function. The failure to capture the vortex shedding of cylinder flow is also highlighted in a recent work by Rohrhofer et al. [7], who cite our previous conference paper. Our PINN solvers were built using the NVIDIA _Modulus_ toolkit,1 a high-level package built on PyTorch for building, training, and fine-tuning physics-informed machine learning models. For the traditional CFD solver, we used our own code, _PetIBM_, which is open-source and available on GitHub, and has also been peer reviewed [8]. A Reproducibility Statement gives more details regarding all the open research objects to accompany the paper, and how the interested reader can reuse them. Footnote 1: [https://developer.nvidia.com/modulus](https://developer.nvidia.com/modulus) ## 2 Method We will be solving the 2D incompressible Navier-Stokes equations in primitive-variable form: \[\left\{\begin{aligned} &\nabla\cdot\mathbf{u}=0\\ &\frac{\partial\mathbf{u}}{\partial t}+\left(\mathbf{u}\cdot\nabla\right)\mathbf{u}=-\frac{1}{\rho}\nabla p+\nu\nabla^{2}\mathbf{u}\end{aligned}\right. \tag{1}\] Here, \(\mathbf{u}\equiv\left[u\;\;v\right]^{\mathsf{T}}\), \(p\), \(\nu\), and \(\rho\) denote the velocity vector, pressure, kinematic viscosity, and the density, respectively. Let \(\mathbf{x}\equiv\left[x\;\;y\right]^{\mathsf{T}}\in\Omega\) and \(t\in\left[0,\;T\right]\) denote the spatial and temporal domains.
The velocity \(\mathbf{u}\) and pressure \(p\) are functions of \(\mathbf{x}\) and \(t\) for given fluid properties \(\rho\) and \(\nu\). The solution to the Navier-Stokes equations is subject to initial conditions \(\mathbf{u}(\mathbf{x},t)=\left[u_{0}(\mathbf{x})\;\;v_{0}(\mathbf{x})\right]^{\mathsf{T}}\) and \(p(\mathbf{x},t)=p_{0}(\mathbf{x})\) for \(\mathbf{x}\in\Omega\) and \(t=0\). The Dirichlet boundary conditions are \(u(\mathbf{x},t)=u_{D}(\mathbf{x},t)\) and \(v(\mathbf{x},t)=v_{D}(\mathbf{x},t)\), on domain boundaries \(\mathbf{x}\in\Gamma_{uD}\) and \(\Gamma_{vD}\), respectively. The Neumann boundary conditions are \(\frac{\partial u}{\partial n}(\mathbf{x},t)=u_{N}(\mathbf{x},t)\) and \(\frac{\partial v}{\partial n}(\mathbf{x},t)=v_{N}(\mathbf{x},t)\), defined on boundaries \(\mathbf{x}\in\Gamma_{uN}\) and \(\Gamma_{vN}\), correspondingly. Note that in incompressible flow the pressure is a Lagrange multiplier that enforces the divergence-free condition, and theoretically does not need boundary conditions. When using physics-informed neural networks, PINNs, we approximate the solutions to equation (1) with a neural network model \(G(\mathbf{x},t;\theta)\): \[\left[\begin{matrix}u(\mathbf{x},t)\\ v(\mathbf{x},t)\\ p(\mathbf{x},t)\end{matrix}\right]\approx G(\mathbf{x},t;\theta), \tag{2}\] where \(\theta\) represents a set of free model parameters we need to determine later. A common choice of \(G\) is an MLP (multilayer perceptron) network, which can be represented as follows: \[\mathbf{h}^{0}\equiv\begin{bmatrix}x&y&t\end{bmatrix}^{\mathsf{T}} \tag{3}\] \[\mathbf{h}^{k}=\sigma_{k-1}\left(A^{k-1}\mathbf{h}^{k-1}+\mathbf{b}^{k-1}\right)\text{, for }1\leq k\leq\ell\] (4) \[\left[u\;\;\;v\;\;\;p\right]^{\mathsf{T}}\approx\mathbf{h}^{\ell+1}=\sigma_{\ell}\left(A^{\ell}\mathbf{h}^{\ell}+\mathbf{b}^{\ell}\right) \tag{5}\] The vectors \(\mathbf{h}^{k}\) for \(1\leq k\leq\ell\) are called hidden layers, and carry intermediate evaluations of the transformations that take the input (spatial and temporal variables) to the output (velocity and pressure values). \(\ell\) denotes the number of hidden layers. The elements in these vectors are called neurons, and \(N_{k}\) for \(1\leq k\leq\ell\) represents the number of neurons in each hidden layer. To have a consistent notation, we use \(\mathbf{h}^{0}\) to denote the vector of the inputs to the model \(G\), which contains spatial-temporal coordinates. Similarly, \(\mathbf{h}^{\ell+1}\) denotes the outputs of \(G\), corresponding to the approximate solutions \(u\), \(v\), and \(p\) at every spatial point and time instant. \(A^{k}\) and \(\mathbf{b}^{k}\) for \(0\leq k\leq\ell\) are parameter matrices and vectors holding the free model parameters that will be found via optimization, \(\theta=\left\{A^{0},\mathbf{b}^{0},\cdots,A^{\ell},\mathbf{b}^{\ell}\right\}\). Finally, \(\sigma_{k}\) for \(0\leq k\leq\ell\) are vector-valued functions, called activation functions, that are applied element-wise to the vectors \(\mathbf{h}^{k}\). In neural networks, the activation functions are responsible for providing the non-linearity in an MLP model. Throughout this work, we use \(\sigma_{0}=\cdots=\sigma_{\ell}=\sigma(\mathbf{x})=\frac{\mathbf{x}}{1+\exp(-\mathbf{x})}\), the sigmoid linear unit (SiLU). The parameters \(\ell\), \(N_{k}\), and the choices of \(\sigma_{k}\) together control the model complexity of the PINNs that use MLP networks.
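A minimal PyTorch sketch of the MLP model \(G\) in Eqs. (3)-(5) follows; the depth and width are illustrative choices of ours, and we leave the output layer linear (a common variant), whereas Eq. (5) applies the activation to the output layer as well.

```
import torch
import torch.nn as nn

class PINNModel(nn.Module):
    """MLP mapping (x, y, t) -> (u, v, p), as in Eqs. (3)-(5)."""

    def __init__(self, n_hidden: int = 6, width: int = 256):
        super().__init__()
        layers = [nn.Linear(3, width), nn.SiLU()]           # h^1 from h^0 = [x, y, t]
        for _ in range(n_hidden - 1):
            layers += [nn.Linear(width, width), nn.SiLU()]  # hidden layers h^2 ... h^l
        layers.append(nn.Linear(width, 3))                  # outputs u, v, p
        self.net = nn.Sequential(*layers)

    def forward(self, x, y, t):
        return self.net(torch.stack([x, y, t], dim=-1))
```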
As with all other numerical methods for PDEs, the calculations of spatial and temporal derivatives of velocity and pressure play a crucial role. While a numerical approximation (e.g., finite difference) may be a more robust choice--as seen in early-day literature on neural networks for differential equations [1; 2]--, it is common to see the use of automatic differentiation nowadays. Automatic differentiation is a general technique to find derivatives of a function by decomposing it into elementary functions with known derivatives, and then applying the chain rule of calculus to get the exact derivative of the more complex function. Note that the word _exact_ here refers to being exact in terms of the model \(G\), rather than to the true solution of the Navier-Stokes equations. A detailed review of automatic differentiation can be found in reference [9]. Major deep learning programming libraries, such as TensorFlow and PyTorch, offer the user automatic differentiation features. Once we have obtained derivatives, we are able to calculate residuals, also called losses in the terminology of machine learning. As shown in figure 1, given a spatial-temporal coordinate \((x,y,t)\), we can calculate up to 10 loss terms, depending on where in the domain this spatial-temporal point is located. Figure 1 is only shown as an illustration of the PINN methodology using the solution workflow specifically for the Navier-Stokes equations (1). The number and definitions of loss terms may change, for example, when we have some boundary segments with Robin conditions or when we are solving 3D problems. Finally, we determine the free model parameters using a least-squares optimization, as shown in the last block of figure 1. To be specific, in this work we used the Adam stochastic optimization method for this process. We first randomly sampled some spatial-temporal collocation points from the computational domain, including all boundaries. These points are called _training points_ in the terminology of machine learning. Depending on where a training point is located in the domain, it may result in multiple loss terms, as described in the previous paragraph. An aggregated squared loss is obtained over all loss terms of all training points. In this work, all loss terms were taken to have the same weights. The Adam optimization then finds the optimal model parameters, i.e., \(\theta=\left\{A^{0},\mathbf{b}^{0},\cdots,A^{\ell},\mathbf{b}^{\ell}\right\}\), based on the gradients of the aggregated loss with respect to the model parameters. In other words, the desired model parameters are those giving the minimal aggregated squared loss. Note that in figure 1 we consider that if-conditions determine the loss terms to calculate on a training point. In practice, however, we sample points in subgroups separately from within the domain, on the boundaries, and at \(t=0\). Each subgroup of training points is only responsible for specific loss terms. We also use a batched approach for the optimization, meaning that not all training points are used during each individual optimization iteration. The batched approach only uses a sample of the training points to calculate the losses and the gradients of the aggregated loss in each optimization iteration. Hereafter, the term _training_ will be used interchangeably with the optimization process. In this section, we only introduce the specific details of PINNs required for our work. References [1; 2; 10] provide more details of these methods in general.
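To illustrate how the PDE loss terms in figure 1 can be assembled with automatic differentiation, here is a minimal PyTorch sketch of our own (the solvers in this paper were built with NVIDIA Modulus, not this code):

```
import torch

nu, rho = 0.01, 1.0  # fluid properties used in the verification case below

def grad(f: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
    """First derivative of f w.r.t. x via automatic differentiation."""
    return torch.autograd.grad(f, x, grad_outputs=torch.ones_like(f),
                               create_graph=True)[0]

def pde_losses(model, x, y, t):
    """Continuity and momentum residuals of Eq. (1) at collocation points."""
    out = model(x, y, t)
    u, v, p = out[..., 0], out[..., 1], out[..., 2]
    u_x, u_y, u_t = grad(u, x), grad(u, y), grad(u, t)
    v_x, v_y, v_t = grad(v, x), grad(v, y), grad(v, t)
    p_x, p_y = grad(p, x), grad(p, y)
    continuity = u_x + v_y
    mom_x = u_t + u * u_x + v * u_y + p_x / rho - nu * (grad(u_x, x) + grad(u_y, y))
    mom_y = v_t + u * v_x + v * v_y + p_y / rho - nu * (grad(v_x, x) + grad(v_y, y))
    return (continuity**2).mean() + (mom_x**2).mean() + (mom_y**2).mean()

# Usage sketch: collocation points must require gradients for autograd to work, e.g.
# x = torch.rand(1000, requires_grad=True), and similarly for y and t.
```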
Figure 1: A graphical illustration of the workflow in PINNs: \(\mathbf{x}\equiv[x\ y]^{\mathsf{T}}\in\Omega\) and \(t\in[0,\ T]\) denote the spatial and temporal domains. \(\mathbf{u}\equiv[u\ v]^{\mathsf{T}}\), \(p\), \(\nu\), and \(\rho\) represent the velocity vector, pressure, kinematic viscosity, and density, respectively. \(G(\mathbf{x},t;\theta)\) is a neural network model that approximates the solution to the Navier-Stokes equations with a set of free model parameters denoted by \(\theta\). The \(\left[h_{1}^{1},\cdots,h_{N_{1}}^{1},\cdots,h_{1}^{\ell},\cdots,h_{N_{\ell}}^{\ell}\right]\), called hidden layers in neural networks, can be deemed intermediate values or temporary results during the calculation of the approximate solutions. Given spatial-temporal coordinates \((x,y,t)\), the neural network returns an approximate solution \((u,v,p)\) at this coordinate. We then apply automatic differentiation to obtain the required derivatives. With the approximate solutions and the derivatives, we are able to calculate the residuals (also called losses, denoted by the symbol \(L\)) against the PDEs, as well as against the initial and boundary conditions. Using the aggregated squared losses, we can determine the free model parameters \(\theta\) by a least-squares method.

## 3 Verification and Validation

This section presents the verification and validation (V&V) of our PINN solvers and PetIBM, an in-house CFD solver [8]. V&V results are necessary to build confidence in our case study described later in section 4. For verification, we solved a 2D Taylor-Green vortex (TGV) at Reynolds number \(Re=100\), which has a known analytical solution. For validation, on the other hand, we use 2D cylinder flow at \(Re=40\), which exhibits a well-known steady-state solution with plenty of experimental data available in the published literature.

### Verification: 2D Taylor-Green Vortex (TGV), \(Re=100\)

Two-dimensional Taylor-Green vortices with periodic boundary conditions have closed-form analytical solutions, and serve as standard verification cases for CFD solvers. We used the following 2D TGV configuration, with \(Re=100\), to verify both the PINN solvers and PetIBM: \[\begin{cases}u(x,y,t)=\cos(x)\sin(y)\exp(-2\nu t)\\ v(x,y,t)=-\sin(x)\cos(y)\exp(-2\nu t)\\ p(x,y,t)=-\frac{\rho}{4}\left(\cos(2x)+\cos(2y)\right)\exp(-4\nu t)\end{cases} \tag{6}\] where \(\nu=0.01\) and \(\rho=1\) are the kinematic viscosity and the density, respectively. The spatial and temporal domains are \(x,y\in[-\pi,\pi]\) and \(t\in[0,100]\). Periodic conditions are applied at all boundaries. In PetIBM, we used the Adams-Bashforth and the Crank-Nicolson schemes for the temporal discretization of the convection and diffusion terms, respectively. The spatial discretization is central difference for all terms. Theoretically, we expect to see second-order convergence in both time and space for this 2D TGV problem in PetIBM.
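In the convergence study below, equation (6) serves as the reference solution. As a small illustration (our own transcription of equation (6), not solver code), it can be evaluated as:

```python
import numpy as np

def taylor_green(x, y, t, nu=0.01, rho=1.0):
    """Analytical 2D Taylor-Green vortex solution of equation (6)."""
    decay = np.exp(-2.0 * nu * t)
    u = np.cos(x) * np.sin(y) * decay
    v = -np.sin(x) * np.cos(y) * decay
    p = -0.25 * rho * (np.cos(2.0 * x) + np.cos(2.0 * y)) * decay**2
    return u, v, p
```

Note that `decay**2` equals \(\exp(-4\nu t)\), the pressure's decay factor.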
We used the following \(L_{2}\) spatial-temporal error to examine the convergence: \[\begin{split} L_{2,sp\text{-}t}&\equiv\sqrt{\frac{1}{L_{x}L_{y}T}\iiint\limits_{x,y,t}\lVert f-f_{ref}\rVert^{2}\,\mathrm{d}x\,\mathrm{d}y\,\mathrm{d}t}\\ &=\sqrt{\frac{1}{N_{x}N_{y}N_{t}}\sum_{i=1}^{N_{x}}\sum_{j=1}^{N_{y}}\sum_{k=1}^{N_{t}}\left(f^{(i,j,k)}-f_{ref}^{(i,j,k)}\right)^{2}}\end{split} \tag{7}\] Here, \(N_{x}\), \(N_{y}\), and \(N_{t}\) represent the numbers of solution points in \(x\), \(y\), and \(t\); \(L_{x}\) and \(L_{y}\) are the domain lengths in \(x\) and \(y\); \(T\) is the total simulation time; \(f\) is the flow quantity of interest, while \(f_{ref}\) is the corresponding analytical solution. The superscript \((i,j,k)\) denotes the value at the \((i,j,k)\) solution point in the discretized spatial-temporal space. We used Cartesian grids with \(2^{n}\times 2^{n}\) cells for \(n=4\), \(5\), \(\dots\), \(10\). The time step size \(\Delta t\) does not follow a fixed refinement ratio, and takes the values \(\Delta t=1.25\times 10^{-1}\), \(8\times 10^{-2}\), \(4\times 10^{-2}\), \(2\times 10^{-2}\), \(1\times 10^{-2}\), \(5\times 10^{-3}\), and \(1.25\times 10^{-3}\), respectively. Each \(\Delta t\) was determined by the maximum allowed CFL number and by the requirement that it divide 2 evenly, so that transient results could be output every 2 simulation seconds. The velocity and pressure linear systems were both solved with BiCGSTAB (bi-conjugate gradient stabilized method). The preconditioners of the two systems are the block Jacobi preconditioner and the algebraic multigrid preconditioner from NVIDIA's AmgX library. At each time step, both linear solvers stop when the preconditioned residual reaches \(10^{-14}\). The hardware used for the PetIBM simulations contains 5 physical cores of an Intel E5-2698 v4 and 1 NVIDIA V100 GPU. Figure 2 shows the spatial-temporal convergence results of PetIBM. Both \(u\) and \(v\) follow the expected second-order convergence before machine round-off errors start to dominate on the \(1024\times 1024\) grid. The pressure follows the expected convergence rate with some fluctuations. Further scrutiny revealed that the AmgX library was not solving the pressure system to the desired tolerance. The AmgX library has a hard-coded stop mechanism when the relative residual (relative to the initial residual) reaches machine precision. So while we configured the absolute tolerance to be \(10^{-14}\), the final preconditioned residuals of the pressure systems did not reach this value. On the other hand, the velocity systems were solved to the desired tolerance. With this minor caveat, we consider the verification of PetIBM successful, as the minor issue in the convergence of pressure is unrelated to the code implementation in PetIBM. Next, we solved this same TGV problem using an unsteady PINN solver. For the optimization, we used PyTorch's Adam optimizer with the following parameters: \(\beta_{1}=0.9\), \(\beta_{2}=0.999\), and \(\epsilon=10^{-8}\). The total number of optimization iterations is 400,000. Two learning-rate schedulers were tested: an exponential schedule and a cyclical schedule. Both schedulers are from PyTorch and were tried merely to satisfy our curiosity. The exponential scheduler has only one parameter in PyTorch: \(\gamma=0.95\frac{\pi}{\alpha n}\). The cyclical scheduler has the following parameters: \(\eta_{low}=1.5\times 10^{-5}\), \(\eta_{high}=1.5\times 10^{-3}\), \(N_{c}=5000\), and \(\gamma=9.999989\times 10^{-1}\).
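For reference, the cyclical schedule with the parameters above could be configured in PyTorch roughly as follows. This is a sketch under our own assumptions, not the exact solver configuration: we take \(N_{c}\) to be the half-cycle length in iterations, and `ExponentialLR` would be the analogous choice for the exponential schedule.

```python
import torch

params = [torch.nn.Parameter(torch.zeros(1))]   # stand-in for the MLP parameters
opt = torch.optim.Adam(params, lr=1.5e-3, betas=(0.9, 0.999), eps=1e-8)

# Cycle between eta_low and eta_high; "exp_range" shrinks the peak by a
# factor gamma at every iteration.
sched = torch.optim.lr_scheduler.CyclicLR(
    opt, base_lr=1.5e-5, max_lr=1.5e-3,
    step_size_up=5000, mode="exp_range", gamma=0.9999989,
    cycle_momentum=False,   # required when the optimizer is Adam
)

for step in range(400_000):
    # ... evaluate the batched losses and backpropagate here ...
    opt.step()
    sched.step()            # advance the learning rate once per iteration
```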
These values were chosen so that the peak learning rate at each cycle is slightly higher than the exponential rates. Figure 3 shows a comparison of the two schedulers. The MLP network used consisted of 3 hidden layers and 128 neurons per layer. \(8192\times 10^{4}\) spatial-temporal points were used to evaluate the PDE losses (i.e., the \(L_{1}\), \(L_{2}\), and \(L_{3}\) in figure 1). We randomly sampled these spatial-temporal points from the spatial-temporal domain \([-\pi,\pi]\times[-\pi,\pi]\times(0,100]\). During each optimization iteration, however, we only used 8192 points to evaluate the PDE losses. This means the optimizer sees each point 40 times on average, because we ran a total of \(4\times 10^{5}\) iterations. Similarly, \(8192\times 10^{4}\) spatial-temporal points were sampled from \((x,y)\in[-\pi,\pi]\times[-\pi,\pi]\) at \(t=0\) for the initial-condition losses (i.e., \(L_{4}\) to \(L_{6}\)), and the same number of points were sampled from each domain boundary (\(x=\pm\pi\) and \(y=\pm\pi\)) with \(t\in(0,100]\) for the boundary-condition losses (\(L_{7}\) to \(L_{10}\)). A total of 8192 points were used in each iteration for these losses as well. We used one NVIDIA V100 GPU to run the unsteady PINN solver for the TGV problem. Note that the PINN solver used single-precision floats, which is the default in PyTorch.

Figure 3: Learning-rate history of 2D TGV \(Re=100\) w/ PINN. The exponential learning-rate scheduler is denoted as _Exponential_, and the cyclical scheduler is denoted as _Cyclical_.

Figure 2: Grid-convergence test and verification of PetIBM using 2D TGV at \(Re=100\). The spatial-temporal \(L_{2}\) error is defined in equation (7). Taking the cube root of the total number of spatial-temporal solution points gives the characteristic cell size. Both the \(u\) and \(v\) velocities follow second-order convergence, while the pressure \(p\) follows the trend with some fluctuation.

After training, we evaluated the PINN solver's errors at the cell centers of a \(512\times 512\) Cartesian grid and at \(t=0,2,4,\cdots,100\). Figure 4 shows the histories of the optimization loss and of the \(L_{2}\) errors of the \(u\) velocity at \(t=0\) and \(t=40\) on the left vertical axis. The same figure also shows the run time (wall time) on the right vertical axis. The total loss converges to an order of magnitude of \(10^{-6}\), which may reflect the fact that PyTorch uses single-precision floats. The errors at \(t=0\) and \(t=40\) converge to the orders of \(10^{-4}\) and \(10^{-2}\), respectively. This observation is reasonable because the net error over the whole temporal domain is, by definition, similar to the square root of the total loss, which is \(10^{-3}\). The PINN solver got exact initial conditions for training (i.e., \(L_{4}\) to \(L_{6}\)), so it is reasonable to see a better prediction accuracy at \(t=0\) than at later times. Finally, though computational performance is not the focus of this paper, for the interested reader's benefit we would like to point out that the PINN solver took about 6 hours to converge with a V100 GPU, while PetIBM needed less than 10 seconds to reach an error level of \(10^{-2}\) using a very dated K40 GPU (and most of that time was overhead to initialize the solver). In sum, we determined the PINN solution to be verified, although the accuracy and the computational cost were not satisfying. The relatively low accuracy is likely a consequence of the use of single-precision floats and the intrinsic properties of PINNs, rather than of implementation errors.
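For concreteness, the error evaluation just described amounts to the following sketch, which reuses the illustrative `taylor_green` routine above; the PINN prediction is represented here by a placeholder.

```python
import numpy as np

# Cell centers of a 512 x 512 grid on [-pi, pi]^2, and times t = 0, 2, ..., 100.
x = np.linspace(-np.pi, np.pi, 513)[:-1] + np.pi / 512
y = x.copy()
t = np.arange(0, 101, 2)
X, Y, T = np.meshgrid(x, y, t, indexing="ij")

u_ref, v_ref, p_ref = taylor_green(X, Y, T)
u_pinn = u_ref  # placeholder: evaluate the trained model here instead

l2_u = np.sqrt(np.mean((u_pinn - u_ref) ** 2))  # equation (7) applied to u
```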
Figure 5 shows the contours of the PINN solver's predictions at \(t=40\) for reference.

### Validation: 2D Cylinder, \(Re=40\)

We used 2D cylinder flow at \(Re=40\) to validate the solvers because it has a configuration similar to the \(Re=200\) case that we will study later. The \(Re=40\) flow, however, does not exhibit vortex shedding and reaches a steady-state solution, making it suitable for validating the core functionality of the code. Experimental data for this flow configuration are also widely available. The spatial and temporal computational domains are \([-10,\ 30]\times[-10,\ 10]\) and \(t\in[0,20]\). A cylinder with a nondimensional radius of 0.5 sits at \(x=y=0\). The density is \(\rho=1\), and the kinematic viscosity is \(\nu=0.025\). The initial conditions are \(u=1\) and \(v=0\) everywhere in the spatial domain. The boundary conditions are \(u=1\) and \(v=0\) on \(x=-10\) and \(y=\pm 10\). At the outlet, i.e., \(x=30\), the boundary conditions are set to 1D convective conditions: \[\frac{\partial}{\partial t}\begin{bmatrix}u\\ v\end{bmatrix}+c\frac{\partial}{\partial\mathbf{n}}\begin{bmatrix}u\\ v\end{bmatrix}=0, \tag{8}\] where \(\mathbf{n}\) is the normal vector of the boundary (pointing outward), and \(c=1\) is the convection speed.

Figure 4: Histories with respect to optimization iterations for the total loss and the \(L_{2}\) errors of \(u\) at \(t=0\) and \(40\) in the TGV verification of the unsteady PINN solver. The left vertical axis corresponds to the total loss and the errors. The right vertical axis corresponds to the run time. The cyclical scheduler has slightly better accuracy at \(t=40\) at a slightly higher time cost, though its total loss is higher.

Figure 5: Contours at \(t=40\) of the 2D TGV at \(Re=100\): primitive variables and errors using the unsteady PINN solver. Roughly speaking, the absolute errors are at the level of \(10^{-3}\) for the primitive variables (\(u\), \(v\), and \(p\)), which corresponds to the square root of the total loss. The vorticity was obtained from post-processing, and hence its errors were amplified.

We ran the PetIBM validation case on a workstation with one (very old) NVIDIA K40 GPU and 6 CPU cores of an Intel i7-5930K processor. The grid resolution is \(562\times 447\) with \(\Delta t=10^{-2}\). The tolerance for all linear solvers in PetIBM was \(10^{-14}\). We used the same linear-solver configurations as in the TGV verification case. We validated two implementations of the PINN method with this cylinder flow because both codes were used in the \(Re=200\) case (section 4). The first implementation is an unsteady PINN solver, the same piece of code used in the verification case (section 3.1). It solves the unsteady Navier-Stokes equations as shown in figure 1. The second one is a steady PINN solver, which solves the steady Navier-Stokes equations. The workflow of the steady PINN solver is similar to that in figure 1 except that all time-related terms and losses are dropped. Both PINN solvers used MLP networks with 6 hidden layers and 512 neurons each. The Adam optimizer configuration is the same as in section 3.1. The learning-rate scheduler is a cyclical learning rate with \(\eta_{low}=10^{-6}\), \(\eta_{high}=10^{-2}\), \(N_{c}=5000\), and \(\gamma=9.9998\times 10^{-1}\). We ran all PINN-related validations with one NVIDIA A100 GPU, all using single-precision floats.
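As an illustration of how the convective outlet condition (8) enters the loss, the residuals at sampled outlet points could be computed as below. This is a sketch under our assumptions, not the actual solver code; at \(x=30\) the outward normal is \(+x\), so \(\partial/\partial\mathbf{n}\) reduces to \(\partial/\partial x\).

```python
import torch

def outlet_residuals(model, xyt, c=1.0):
    """Residuals of eq. (8) at outlet points xyt (N, 3) on x = 30."""
    xyt = xyt.clone().requires_grad_(True)
    u, v, _ = model(xyt).unbind(dim=1)
    du = torch.autograd.grad(u, xyt, torch.ones_like(u), create_graph=True)[0]
    dv = torch.autograd.grad(v, xyt, torch.ones_like(v), create_graph=True)[0]
    r_u = du[:, 2] + c * du[:, 0]    # u_t + c * u_x
    r_v = dv[:, 2] + c * dv[:, 0]    # v_t + c * v_x
    return r_u, r_v
```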
To evaluate the PDE losses, 256,000,000 spatial-temporal points were randomly sampled from the computational domain and the desired simulation time range. The PDE losses were evaluated on 25,600 points in each iteration, so the Adam optimizer would see each point 40 times on average during the 400,000-iteration optimization. On the boundaries, 25,600,000 points were sampled at \(y=\pm 10\), and 12,800,000 at \(x=-10\) and \(x=30\). On the cylinder surface, the number of spatial-temporal points was 5,120,000. In each iteration, 2560, 1280, and 512 of these points were used, respectively. Figure 6 shows the training history of the PINN solvers. The total loss of the steady PINN solver converged to around \(10^{-4}\), while that of the unsteady PINN solver converged to around \(10^{-2}\) after about 26 hours of training. Readers should be aware that the configuration of the PINN solvers might not be optimal, so the accuracy and the computational cost shown in this figure should not be treated as an indication of PINNs' general performance. In our experience, it is possible to cut the run time in half and obtain the same level of accuracy by adjusting the number of spatial-temporal points used per iteration. Figure 7 gives the drag and lift coefficients (\(C_{D}\) and \(C_{L}\)) with respect to simulation time, where the PINN and PetIBM results visually agree. Table 1 compares the values of \(C_{D}\) against experimental data and simulation data from the literature. Values from different works in the literature do not closely agree with each other. Though there is not a single value to compare against, the \(C_{D}\) from the PINN solvers and PetIBM at least falls within the range of the other published works. We consider the results of \(C_{D}\) validated for the PINN solvers and PetIBM. Figure 8 shows the pressure distribution on the cylinder surface. Again, though there is no single solution that all works agree upon, the results from PetIBM and the PINN solvers visually agree with the published literature. We consider PetIBM and both PINN solvers validated. Finally, figure 9 compares the steady-state flow fields (i.e., the snapshots at \(t=20\) for PetIBM and the unsteady PINN solver). The PINN solvers' results visually agree with PetIBM's. The variation in the vorticity of the PINNs only happens at the contour line of 0, so it is likely caused by trivial rounding errors. Note that vorticity is obtained by post-processing for all solvers. PetIBM used central differences to calculate the vorticity, while the PINN solvers used automatic differentiation to obtain it.

Figure 6: Training convergence history of 2D cylinder flow at \(Re=40\) for both steady and unsteady PINN solvers.

Figure 7: Drag and lift coefficients of 2D cylinder flow at \(Re=40\) w/ PINNs.

Figure 8: Surface pressure distribution of 2D cylinder flow at \(Re=40\).

Figure 9: Contour plots for 2D cylinder flow at \(Re=40\).

## 4 Case Study: 2D Cylinder Flow at \(Re=200\)

The previous section presented successful verification with a Taylor-Green vortex having an analytical solution, and validation of the solvers with the \(Re=40\) cylinder flow, which exhibits a steady-state solution. Those results give confidence that the solvers are correctly solving the Navier-Stokes equations and are able to model vortical flow. In this section, we study the case of cylinder flow at \(Re=200\), which exhibits vortex shedding.

### Case configurations

The computational domain is \([-8,\,25]\times[-8,\,8]\) for \(x\) and \(y\), and \(t\in[0,\,200]\).
Other boundary conditions, initial conditions, and density were the same as those in section 3.2. The nondimensional kinematic viscosity is set to 0.005 to make the Reynolds number 200. The PetIBM simulation was run with a grid resolution of \(1485\times 720\) and \(\Delta t=5\times 10^{-3}\). The hardware used and the configurations of the linear solvers were the same as described in section 3.2. As for the PINN solvers, in addition to the steady and unsteady solvers, a third PINN solver was used: a data-driven PINN. The data-driven PINN solver is the same as the unsteady PINN solver but replaces the three initial-condition losses (\(L_{4}\) to \(L_{6}\)) with: \[\left\{\begin{array}{l}L_{4}=u-u_{data}\\ L_{5}=v-v_{data}\\ L_{6}=p-p_{data}\end{array}\right.,\;\;\text{if}\;\;\mathbf{x}\in\Omega\;\text{and}\;t\in T_{data} \tag{9}\] where the subscript \(data\) denotes data from a PetIBM simulation, and \(T_{data}\) denotes the time range for which we feed the PetIBM simulation data to the data-driven PINN solver. In this case, \(T_{data}\equiv[125,140]\). The PetIBM simulation output transient snapshots every 1 second of simulation time; hence, the data fed to the data-driven PINN solver consisted of 16 snapshots. These snapshots contain around 3 full periods of vortex shedding. The total number of spatial-temporal points in these snapshots is around 17,000,000, of which we only used 6400 per iteration, meaning each data batch was repeated approximately every 2650 iterations. Except for replacing the IC losses with this data-driven approach, all other loss terms and the code in the data-driven PINN solver remain the same as in the unsteady PINN solver. Note that for the data-driven PINN solver, the PDE and boundary-condition losses were evaluated only in \(t\in[125,\,200]\) because we treated the PetIBM snapshots as if they were initial conditions. Another note concerns the use of the steady PINN solver. The \(Re=200\) cylinder flow is not expected to have a steady-state solution. However, it is not uncommon to see a steady-state flow solver used for unsteady flows for engineering purposes, especially two or three decades ago when computing power was much more restricted. The MLP network used in all PINN solvers has 6 hidden layers and 512 neurons per layer. The configurations of the spatial-temporal points are the same as those in section 3.2. The Adam optimizer is also the same, except that we now ran 1,000,000 optimization iterations. The parameters of the cyclical learning-rate scheduler are now: \(\eta_{low}=1\times 10^{-6}\), \(\eta_{high}=1\times 10^{-2}\), \(N_{c}=5000\), and \(\gamma=9.999915\times 10^{-1}\). The hardware used was one NVIDIA A100 GPU for all PINN runs.
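A minimal sketch of the data losses in equation (9) follows; it is our illustration only, not the actual implementation (which is built on the NVIDIA Modulus toolkit described in section 7).

```python
import torch

def data_losses(model, xyt_data, uvp_data):
    """Mean-squared misfits replacing the IC losses L4-L6, per equation (9).

    xyt_data: (N, 3) coordinates drawn from the PetIBM snapshots with
    t in [125, 140]; uvp_data: (N, 3) matching u, v, p values.
    """
    residuals = model(xyt_data) - uvp_data   # columns: L4, L5, L6
    return (residuals ** 2).mean(dim=0)      # one scalar loss per field
```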
### Results

The overall run times for the steady, unsteady, and data-driven PINN solvers were about 28 hours, 31 hours, and 33.5 hours using one A100 GPU. The PetIBM simulation, on the other hand, took around 1.7 hours with a K40 GPU, which is five generations behind in terms of computing technology. Figure 10 shows the aggregated loss convergence history of all cases. It shows both the losses and the run times of the three PINN solvers. As seen in section 3.2, the unsteady PINN solver converges to a higher total loss than the steady PINN solver does. Also, the data-driven PINN solver converges to an even higher total loss. However, it is unclear at this point whether a higher loss means a higher prediction error in the data-driven PINN, because we replaced the initial-condition losses with 16 snapshots from PetIBM and only ran the data-driven PINN solver for \(t\in[125,200]\). Figure 11 shows the drag and lift coefficients versus simulation time. The coefficients from the steady case are shown as just a horizontal line since there is no time variable in this case. The unsteady case, to our surprise, does not exhibit oscillations, meaning it produces no vortex shedding, even though it fits well with the PetIBM result before vortex shedding starts (at about \(t=75\)). Comparing the coefficients between the steady case, the unsteady case, and PetIBM's values before shedding, we believe the unsteady PINN in this particular case behaves just like a steady solver.

| | \(C_{D}\) | \(C_{D_{p}}\) | \(C_{D_{f}}\) |
| --- | --- | --- | --- |
| Steady PINN | 1.62 | 1.06 | 0.55 |
| Unsteady PINN | 1.60 | 1.06 | 0.55 |
| PetIBM | 1.63 | 1.02 | 0.61 |
| Rosetti et al., 2012 [11]\({}^{1}\) | 1.74(9) | n/a | n/a |
| Rosetti et al., 2012 [11]\({}^{2}\) | 1.61 | n/a | n/a |
| Sen et al., 2009 [12]\({}^{2}\) | 1.51 | n/a | n/a |
| Park et al., 1988 [13]\({}^{2}\) | 1.51 | 0.99 | 0.53 |
| Tritton, 1959 [14]\({}^{1}\) | 1.48–1.65 | n/a | n/a |
| Grove et al., 1964 [15]\({}^{1}\) | n/a | 0.94 | n/a |

Table 1: Validation of drag coefficients for 2D cylinder flow at \(Re=40\). \(C_{D}\), \(C_{D_{p}}\), and \(C_{D_{f}}\) denote the coefficients of total drag, pressure drag, and friction drag, respectively. \({}^{1}\)Experimental result. \({}^{2}\)Simulation result.

This is supported by the values in table 2, in which we compare \(C_{D}\) against published values in the literature from both unsteady and steady CFD simulations. The \(C_{D}\) obtained from the unsteady PINN is the same as that from the steady PINN and close to those of the steady CFD simulations. As for the data-driven case, its temporal domain is \(t\in[125,200]\), so the coefficients' trajectories start from \(t=125\). The result, again unexpected to us, only exhibits shedding in the timeframe with PetIBM data, i.e., \(t\in[125,140]\). This result also implies that data-driven PINNs may be more difficult to train, compared to data-free PINNs and regular data-only model fitting. Even in the time range with PetIBM data, the data-driven PINN solver is not able to reach the given maximal \(C_{L}\), and the \(C_{D}\) is obviously off from the given data. After \(t=140\), the trajectories quickly fall back to the no-shedding pattern, though they still deviate from the trajectories of the steady and unsteady PINNs. Considering the loss magnitudes shown in figure 10, the deviation of \(C_{D}\) and \(C_{L}\) from the data-driven PINN may be caused by insufficient training. As figure 10 shows that the data-driven PINN had already converged, other optimization techniques or hyperparameter tuning may be required to further reduce the loss. Insufficient training only explains why the data-driven case deviates from PetIBM's data in \(t\in[125,140]\) and from the other two PINNs for \(t>140\). Even with better optimization and eventually a lower loss, based on the trajectories, we do not believe the shedding would continue after \(t=140\). To examine how the transient flow develops, we visually compared several snapshots of the flow fields from PetIBM, the unsteady PINN, and the data-driven PINN, shown in figures 12, 13, 14, and 15. We also present the flow contours from the steady PINN in figure 16 for reference.
At \(t=10\), we can see the wake is still developing, and the unsteady PINN visually matches PetIBM. At \(t=50\), the contours again show the unsteady PINN matching the PetIBM simulation before shedding. These observations verify that the unsteady PINN is indeed solving the unsteady governing equations. The data-driven PINN does not show meaningful results because \(t=10\) and \(t=50\) are outside the data-driven PINN's temporal domain. These results also indicate that the data-driven PINN is not capable of extrapolating backward in time in dynamical systems. At \(t=140\), vortex shedding has already happened. However, the unsteady PINN solution does not show any shedding. Moreover, the unsteady PINN's contour plot is similar to the steady case in figure 16. \(t=140\) is also the time of the last snapshot we fed into the data-driven PINN for training. The contour of the data-driven PINN at this time shows that it could at least qualitatively capture the shedding, which is expected. At \(t=144\), just 4 time units after the last snapshot fed to the data-driven PINN, the data-driven PINN has already stopped generating new vortices. The existing vortex can be seen moving toward the boundary, and the wake is gradually restoring to the steady-state wake. The flow at \(t=190\) further confirms that the data-driven PINN's behavior tends toward that of the unsteady PINN, which behaves like a steady-state solver. On the other hand, the solutions from the unsteady PINN at these times remain steady.

| | \(C_{D}\) |
| --- | --- |
| PetIBM | 1.38 |
| Steady PINN | 0.95 |
| Unsteady PINN | 0.95 |
| Deng et al., 2007 [16]\({}^{1}\) | 1.25 |
| Rajani et al., 2009 [17]\({}^{1}\) | 1.34 |
| Gushchin & Shchemnikov, 1974 [18]\({}^{2}\) | 0.97 |
| Fornberg, 1980 [19]\({}^{2}\) | 0.83 |

Table 2: PINNs, 2D cylinder, \(Re=200\): validation of drag coefficients. The data-driven case is excluded because it has neither an obvious periodic state nor a steady-state solution. \({}^{1}\)Unsteady simulations. \({}^{2}\)Steady simulations.

Figure 10: Training convergence history of 2D cylinder flow at \(Re=200\) w/ PINNs.

Figure 11: Drag and lift coefficients of 2D cylinder flow at \(Re=200\) w/ PINNs.

Figure 12: \(u\)-velocity comparison of 2D cylinder flow at \(Re=200\) between PetIBM, unsteady PINN, and data-driven PINN.

Figure 13: \(v\)-velocity comparison of 2D cylinder flow at \(Re=200\) between PetIBM, unsteady PINN, and data-driven PINN.

Figure 14: Pressure comparison of 2D cylinder flow at \(Re=200\) between PetIBM, unsteady PINN, and data-driven PINN.

Figure 15: Vorticity (\(\omega_{z}\)) comparison of 2D cylinder flow at \(Re=200\) between PetIBM, unsteady PINN, and data-driven PINN.

Figure 17 shows the vorticity from PetIBM and the data-driven PINN in the vicinity of the cylinder in \(t\in[140,142.5]\), which contains a half cycle of vortex shedding. These contours compare how vorticity was generated right after we stopped feeding PetIBM data into the data-driven PINN. These comparisons might shed some light on why the data-free PINN cannot generate vortex shedding and why the data-driven PINN stops doing so after \(t=140\). At \(t=140\), PetIBM and the data-driven PINN show visually indistinguishable vorticity contours. This is expected, as the data-driven PINN has training data from PetIBM at this time. At \(t=141\) in PetIBM's results, the main clockwise vortex (the blue circular pattern in the region \([1,2]\times[-0.5,0.5]\)) moves downstream.
It slows down the downstream \(u\) velocity and accelerates the \(v\) velocity in \(y<0\). Intuitively, we can treat the main clockwise vortex as a blocker that blocks the flow in \(y<0\) and forces the flow to move upward. The net effect is the generation of a counterclockwise vortex at around \(x\approx 1\) and \(y\in[-0.5,0]\). This new counterclockwise vortex further generates a small but strong secondary clockwise vortex on the cylinder surface in \(y\in[-0.5,0]\). On the other hand, the result of the data-driven PINN at \(t=141\) shows that the main clockwise vortex becomes more diffused and weaker, compared to that in PetIBM. It is possible that the main clockwise vortex is not strong enough to slow down the flow in \(y<0\) or to bring the flow upward. The downstream flow in \(y<0\) (the red arm-like pattern below the cylinder) thus does not change its direction and keeps going straight downstream in the \(x\) direction. In the results at \(t=142\) and \(t=142.5\) from PetIBM, the flow completes a half cycle. That is, the flow pattern at \(t=142.5\) is an upside-down version of that at \(t=140\). The results from the data-driven PINN, however, do not show any new vortices, and the wake becomes more like a steady flow. These observations might indicate that the PINN is either diffusive or dissipative (or both).

Figure 16: Contours of 2D cylinder flow at \(Re=200\) w/ steady PINN.

Figure 17: Vorticity generation near the cylinder for 2D cylinder flow of \(Re=200\) at \(t=140\), \(141\), \(142\), and \(142.5\) w/ data-driven PINNs.

Next, we examined the Q-criterion in the same vicinity of the cylinder in \(t\in[140,142.5]\), shown in figure 18. The Q-criterion is defined as follows [20]: \[Q\equiv\frac{1}{2}\left(\|\Omega\|^{2}-\|S\|^{2}\right), \tag{10}\] where \(\Omega\equiv\frac{1}{2}\left(\nabla\mathbf{u}-\nabla\mathbf{u}^{\mathsf{T}}\right)\) is the vorticity tensor, \(S\equiv\frac{1}{2}\left(\nabla\mathbf{u}+\nabla\mathbf{u}^{\mathsf{T}}\right)\) is the strain-rate tensor, and \(\nabla\mathbf{u}\) is the velocity-gradient tensor. The criterion \(Q>0\) identifies a vortex structure in the fluid flow, that is, a region where the rotation rate is greater than the strain rate. We observe that vortices in the data-driven PINN are diffusive and could be dissipative. Moreover, judging by the locations of the vortex centers, vortices also move more slowly in the PINN solution than in PetIBM's. The edges of the vortices move at a different speed from that of the vortex centers in the PINN case. This might hint at the existence of numerical dispersion in the PINN solver.

### Dynamical Modes and Koopman Analysis

We conducted spectral analysis on the cylinder flow to extract the frequencies embedded in the simulation results. Fluid flow is a dynamical system, and how information (or signals) propagates in time plays an important role. Information with different frequencies advances at different speeds in the temporal direction, and the superposition of information forms complicated flow patterns over time. Spectral analysis reveals a set of modes, each associated with a fixed oscillation frequency and a decay or growth rate; these are called _dynamic modes_ in fluid dynamics. By comparing the dynamic modes in the solutions obtained with PINNs and PetIBM, we may examine how well or how badly the data-driven PINN learned information at different frequencies. Koopman analysis is a method to achieve such spectral analysis for dynamical systems.
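For readers who want a concrete starting point, a standard exact-DMD implementation (closely related to the method of snapshots cited below, and shown purely as an illustration, not our production analysis code) operating on a snapshot matrix might look like:

```python
import numpy as np

def dmd(snapshots, dt):
    """Exact DMD of a snapshot matrix (n_dof x n_snapshots).

    With 76 snapshots spaced dt apart this yields 75 modes; each
    discrete-time eigenvalue encodes a growth rate and a frequency.
    """
    X, Y = snapshots[:, :-1], snapshots[:, 1:]
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    A_tilde = U.conj().T @ Y @ Vh.conj().T @ np.diag(1.0 / s)
    lam, W = np.linalg.eig(A_tilde)               # eigenvalues on the complex plane
    modes = Y @ Vh.conj().T @ np.diag(1.0 / s) @ W
    growth = np.log(np.abs(lam)) / dt             # zero for neutrally stable modes
    freq = np.angle(lam) / (2.0 * np.pi * dt)     # physical frequency
    amps = np.linalg.lstsq(modes, X[:, 0], rcond=None)[0]   # mode amplitudes
    return lam, modes, growth, freq, amps
```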
Please refer to _the method of snapshots_ in reference [21] and to reference [22] for the theory and the algorithms used in this work. We analyzed the results from PetIBM and the data-driven PINN in \(t\in[125,140]\), which contains about three full cycles of vortex shedding. A total of 76 solution snapshots were used from each solver. The time spacing is \(\Delta t=0.2\). The Koopman analysis thus results in 75 modes. Since the snapshots cover three full cycles, we would expect only 25 distinct frequencies and 25 nontrivial modes: only 25 out of the 76 snapshots are distinct. However, this expectation only holds when the data are free from noise and numerical errors and when the snapshots span exactly three periods. We would see more than 25 distinct frequencies and modes if the data were not ideal.

Figure 18: Q-criterion generation near the cylinder for 2D cylinder flow of \(Re=200\) at \(t=140\), \(141\), \(142\), and \(142.5\) w/ data-driven PINNs.

In \(t\in[125,140]\), the data-driven PINN was trained against PetIBM's data, so we expected to see similar spectral results between the two solvers. To put it simply, each dynamic mode is identified by a complex number. Taking the logarithm of the complex number's absolute value gives the mode's growth rate, and the angle of the complex number corresponds to the mode's frequency. Figure 19 shows the distributions of the dynamic modes on the complex plane. The color of each dot represents the normalized strength of the corresponding mode, which is also obtained from the Koopman analysis. The star marker denotes the mode with a frequency of zero, i.e., a steady or time-independent mode. This mode usually has a much higher strength than the others, so we excluded it from the color map and annotated its strength directly. Koopman analysis delivers dynamic modes in complex-conjugate pairs, so the modes are symmetric with respect to the real axis. A conjugate pair has mathematically opposite signs in frequency but the same physical frequency. We also plotted a circle with a radius of one on each plot. As the flow has already reached a fully periodic regime, the growth rates should be zero because no mode becomes stronger or weaker. In other words, all modes were expected to fall on this circle on the complex plane. If a mode falls inside the circle, it has a negative growth rate, and its contribution to the solution diminishes over time. Similarly, a mode falling outside the circle has a positive growth rate and becomes stronger over time. On the complex plane, all the modes captured by PetIBM (the left plot in figure 19) fall onto the circle or very close to it. The plot shows 75 (rather than 25) distinct \(\lambda_{j}\) and modes, but the modes are evenly clustered into 25 groups. Each group has three modes, among which one or two fall on the circle, while the remaining one(s) fall inside but very close to the circle. Modes within each group have similar frequencies, and the one precisely on the circle has a significantly higher strength than the others (if not all modes in the group are weak). Due to the numerical errors in PetIBM's solutions, data in one vortex period are similar to, but not exactly the same as, those in another period. The strong modes falling precisely on the circle may represent the period-averaged flow patterns and are the 25 modes we expected earlier. The effect of numerical errors was filtered out from these modes. We call these 25 modes primary modes and all other modes secondary modes.
Secondary modes are mostly weak and may come from the numerical errors in the PetIBM simulation. The plot shows that these secondary modes are slightly dispersive but non-increasing over time, which is reasonable because the numerical schemes in PetIBM are stable. As for the PINN result (the right plot in figure 19), the mode distribution is not as structured as with PetIBM. It is hard to tell whether all 25 expected modes also exist in this plot. However, we observe that at least the top 7 primary modes (the steady mode, plus two purple and four orange dots on the circle) also exist in the PINN case. Secondary modes spread out more widely on and inside the circle, compared to the clustered modes in PetIBM. We believe this means that the PINN is more numerically dispersive and noisy. The frequencies of many of these secondary modes do not exist in PetIBM. So one possible source of these additional frequencies and modes may be the PINN method itself: it could be insufficient training, or the neural network itself may be inherently dispersive. However, the secondary modes on the circle are weak, and we suspect that their contribution to the solution is negligible. A more concerning observation is the presence of damped modes (modes that fall inside the circle). These modes have negative growth rates and hence are damped over time. We believe these modes contribute significantly to the solution because their strengths are substantial. The existence of the damped modes also means that the PINN's predictions have larger discrepancies from one vortex period to another, compared to the PetIBM simulation. In addition, the flow pattern in the PINN would keep changing after \(t=140\). These modes may be the culprits causing the PINN solution to quickly fall back to a non-oscillating flow pattern for \(t>140\). We may consider these errors as numerical dissipation. However, whether these errors came from insufficient training or are inherent in the PINN is unclear. Note that the spectral analysis was done against data in \(t\in[125,140]\). It does not mean the solutions in \(t>140\) also have the same spectral characteristics: the flow system is nonlinear, but the Koopman analysis uses linear approximations [22].

Figure 19: Distribution of the Koopman eigenvalues on the complex plane for 2D cylinder flow at \(Re=200\), obtained with PetIBM and with the data-driven PINN.

Figure 20 shows mode strengths versus frequencies. The plots use the nondimensional frequency, i.e., the Strouhal number, on the horizontal axes. We only plotted modes with positive numerical frequencies for a concise visualization. The plots in this figure also support the same observations as in the previous paragraphs: the data-driven PINN is more dispersive and dissipative. An observation that is now clearer from figure 20 is the strength distribution. In PetIBM's case, strengths decrease exponentially from the steady mode (i.e., \(St=0\)) to the high-frequency modes. One can deduce a similar conclusion from PetIBM's simulation result: the vortex shedding is dominated by a single frequency (this frequency is \(St\approx 0.2\) because \(t\in[125,140]\) contains three periods). Therefore, the flow should be dominated by the steady mode and a mode with a frequency close to \(St=0.2\). We can indeed verify this statement for PetIBM's case in figure 20: the primary modes at \(St=0\) and \(St\approx 0.2\) are much stronger than the others. The strength of the immediately next mode, i.e., \(St\approx 0.4\), drops by an order of magnitude. Note the use of a logarithmic scale.
If we re-plotted the figure using a linear scale, only \(St=0\) and \(St=0.2\) would be visible in the figure. The strength distribution in the case of the PINN also shows that \(St=0\) and \(St\approx 0.2\) are strong. However, they are not the only dominating modes. Some other modes also have strengths of around \(10^{-1}\). As discussed in the previous paragraphs, these additional strong modes are damped modes. We also observed that some damped modes have the same frequencies as primary modes. For example, the secondary modes at \(St=0\) and \(St=0.2\) are damped modes. Note that for \(St=0\), if a mode is damped, then it is not a steady mode anymore because its magnitude changes with time, though it is still non-oscillating. Table 3 summarizes the top 4 modes (ranked by their strengths) in PetIBM's spectral result. For reference, these modes' contours are provided in the appendix, as denoted in the table. The dynamic modes are complex-valued, and the contours include both the real and the imaginary parts. Note that the growth rates of these 4 modes are not exactly zero but around \(10^{-6}\) and \(10^{-7}\). We were unsure if we could treat them as zero at these orders of magnitude. If not, and if they do cause the primary modes to be slightly damped or augmented over time, then we believe they also serve as a reasonable explanation for the existence of the other 50 non-primary modes in PetIBM: they compensate for the loss or the gain in the primary modes. Table 4 lists the PINN solution's top 4 primary modes, which are the same as those in table 3. Table 5 shows the top 4 secondary modes in the PINN method's result. Corresponding contours are also included in the appendix and denoted in the tables for the readers' reference. The growth rates of the primary modes in the PINN method's result are around \(10^{-5}\) and \(10^{-6}\), slightly larger than those of PetIBM. If these orders of magnitude cannot be deemed zero, then these primary modes are slightly damped and dissipative, though the major source of the numerical dissipation may still be the secondary modes in table 5.

## 5 Discussion

This case study raises significant concerns about the ability of the PINN method to predict flows with instabilities, specifically vortex shedding. In the real world, vortex shedding is triggered by natural perturbations. In traditional numerical simulations, however, the shedding is triggered by various numerical noises, including rounding and truncation errors. These numerical noises mimic natural perturbations. Therefore, a steady solution could be physically valid for cylinder flow at \(Re=200\) in a perfect world with no numerical noise. As PINNs are also subject to numerical noise, we expected to observe vortex shedding in the simulations, but the results show that the data-free unsteady PINN instead converged to a steady-state solution. Even the data-driven PINN reverted to a steady-state solution beyond the timeframe that was fed with PetIBM's data. It is unlikely that the steady-state behavior has to do with perturbations. In traditional numerical simulations, it is sometimes challenging to induce vortex shedding, particularly in symmetrical computational domains. However, we can still trigger shedding by incorporating non-uniform initial conditions, which serve as perturbations to the steady-state solution. In the data-driven PINN, the training data from PetIBM can be considered as such non-uniform initial conditions.
The vortex shedding already exists in the training data, yet it did not continue beyond the period of data input, indicating that the perturbation is not the primary factor responsible for the steady-state behavior. This suggests that PINNs have a different reason for their inability to generate vortex shedding than traditional CFD solvers. Other results in the literature that show the two-dimensional cylinder wake [23] in fact use high-fidelity DNS data to provide boundary and initial data for the PINN model. The failure to capture vortex shedding in the data-free mode of PINNs was confirmed in recent work by Rohrhofer et al. [7].

| \(St\) | Strength | Growth Rate | Contours |
| --- | --- | --- | --- |
| 0 | 0.96 | 1.3e-7 | Figure 21 |
| 0.201 | 0.20 | -4.3e-7 | Figure 22 |
| 0.403 | 0.04 | 1.7e-6 | Figure 23 |
| 0.604 | 0.03 | 2.7e-6 | Figure 24 |

Table 3: 2D cylinder, \(Re=200\): top 4 primary dynamic modes (sorted by strength) for PetIBM.

| \(St\) | Strength | Growth Rate | Contours |
| --- | --- | --- | --- |
| 0 | 0.97 | -2.2e-6 | Figure 25 |
| 0.201 | 0.18 | -9.4e-6 | Figure 26 |
| 0.403 | 0.03 | 2.3e-5 | Figure 27 |
| 0.604 | 0.03 | -8.6e-5 | Figure 28 |

Table 4: 2D cylinder, \(Re=200\): top 4 primary dynamic modes (sorted by strength) for the PINN.

Figure 20: Mode strengths versus mode frequencies for 2D cylinder flow at \(Re=200\). Note that we use a log scale for the vertical axis.

The steady-state behavior of the PINN solutions may be attributed to spectral bias. Rahaman et al. [24] showed that neural networks exhibit spectral bias, meaning they tend to prioritize learning low-frequency patterns in the training data. For cylinder flow, the lowest frequency corresponds to Strouhal number \(St=0\). The data-free unsteady PINN may be prioritizing learning the mode at \(St=0\) (i.e., the steady mode) from the Navier-Stokes equations. The same may apply to the data-driven PINN beyond the timeframe with training data from PetIBM, resulting in a rapid restoration to the non-oscillating solution. Even within the timeframe with the PetIBM training data, the data-driven PINN may prioritize learning the \(St=0\) mode in PetIBM's data. Although the vortex shedding in PetIBM's data forces the PINN to learn higher-frequency modes to some extent, the shedding modes are generally more difficult to learn due to the spectral bias. This claim is supported by the history of the drag and lift coefficients of the data-driven PINN (the red dashed line in figure 11), which was still unable to predict the peak values in \(t\in[125,140]\), despite extensive training. The suspicion of spectral bias prompted us to conduct spectral analysis by obtaining the Koopman modes, presented in section 4.3. The Koopman analysis results are consistent with the existence of spectral bias: the data-driven PINN is not able to learn discrete frequencies well, even when trained with PetIBM's data that contain modes with discrete frequencies.
The Koopman analysis of the data-driven PINN's prediction reveals many additional frequencies that do not exist in the training data from PetIBM, and many damped modes, which reduce or prohibit oscillation. These damped modes may be the cause of the solution restoring to a steady-state flow beyond the timeframe with PetIBM's data. From a numerical-methods perspective, the Koopman analysis shows that the PINN methods in our work are dissipative and dispersive. The Q-criterion result (figure 18) also demonstrates dissipative behavior, which inhibits oscillation and instabilities. Dispersion can also contribute to the reduction of oscillation strength. However, it is unclear whether dispersion and dissipation are intrinsic numerical properties of PINNs or whether we did not train the PINNs sufficiently, even though the aggregated loss had converged (figure 10). Unfortunately, limited computing resources prevented us from continuing the training, which was already taking orders of magnitude longer than the traditional CFD solver. More theoretical work may be necessary to study the intrinsic numerical properties of PINNs beyond computational experiments. Another point worth discussing is the generalizability of data-driven PINNs. Our case study demonstrates that data-driven PINNs may not perform well when predicting data they have not seen during training, as illustrated by the unphysical predictions generated for \(t=10\) and \(t=50\) in figures 12, 13, 14, and 15. While data-driven PINNs are believed to have the advantage of performing extrapolation in a meaningful way by leveraging existing data and physical laws, our results suggest that this "extrapolation" capability may be limited. In data-driven approaches, the training data typically consist of observation data (e.g., experimental or simulation data) and pure spatial-temporal points. The "extrapolation" capability is therefore constrained to the coordinates seen during training, rather than arbitrary coordinates beyond the observation data. For example, in our case study, \(t\in[0,125]\) corresponds to spatial-temporal points that were never seen during training, \(t\in[125,140]\) contains observation data, and \(t\in[140,200]\) corresponds to spatial-temporal points seen during training but without observation data. The PINN method's prediction for \(t\in[125,140]\) is considered interpolation. Even if we accept the steady-state solution as physically valid, the data-driven PINN can only extrapolate for \(t\in[140,200]\), and it fails to extrapolate for \(t\in[0,125]\). This limitation means that the PINN method can only extrapolate on coordinates it has seen during training. If the steady-state solution is deemed unacceptable, then the data-driven PINN lacks extrapolation capability altogether and is limited to interpolation. This raises the interesting research question of how data-driven PINNs compare to traditional deep learning approaches (i.e., those not using PDEs for losses), particularly in terms of performance and accuracy benefits. It is worth noting that Cai et al. [10] argue that data-driven PINNs are useful in scenarios where only sparse observation data are available, such as when an experiment only measures flow properties at a few locations, or when a simulation only saves transient data at a coarse-grid level in space and time. In such cases, data-driven PINNs may outperform traditional deep learning approaches, which typically require more data for training.
However, as we discussed in our previous work [4], using PDEs as loss functions is computationally expensive, as it increases the size of the overall computational graph exponentially. Thus, even in the context of interpolation problems under sparse observation data, how much additional accuracy can be gained, and at what computational expense, remains an open and interesting research question. Other works have brought up concerns about the limitations of PINN methods in certain scenarios, such as flows with shocks [6] and flows with fast variations [5]. These researchers suggested that the optimization process on the complex landscape of the loss function may be the cause of the failure. And other works have also highlighted the performance penalty of PINNs compared to traditional numerical methods [25]. In comparison with finite element methods, PINNs were found to be orders of magnitude slower in terms of wall-clock time. We also observed a similar performance penalty in our case study, where the PINN method took orders of magnitude longer to train than the traditional CFD solver took to run. We purposely used a very old GPU (NVIDIA Tesla K40) with PetIBM, running on our lab-assembled workstation, while the PINN method ran on a modern GPU (NVIDIA Tesla A100) on a high-performance computing cluster. However, we did not conduct a thorough performance comparison. It is unclear what a "fair" performance comparison would look like, as the factors affecting runtime are so different between the two methods. An interesting third option was proposed recently, in which the discretized form of the differential equations is used in the loss function, rather than the differential equations themselves [26]. This approach foregoes the neural-network representation altogether, as the unknowns are the solution values themselves on a discretization grid. It shares with PINNs the features of solving a gradient-based optimization problem, taking advantage of automatic differentiation, and being easy to implement in a few lines of code thanks to modern ML frameworks. But it does not suffer from the performance penalty of PINNs, showing an advantage of several orders of magnitude in terms of wall-clock time. Given that this approach uses a completely different loss function, it supports the claims of other researchers that the loss-function landscape is the source of problems for PINNs.

## 6 Conclusion

In this study, we aimed to expand upon our previous work [4] by exploring the effectiveness of physics-informed neural networks (PINNs) in predicting vortex shedding in a 2D cylinder flow at \(Re=200\). It should be noted that our focus is limited to forward problems involving the non-parameterized, incompressible Navier-Stokes equations. To ensure the correctness of our results, we verified and validated all involved solvers. Aside from using results obtained with PetIBM as a baseline, we used three PINN solvers in the case study: a steady data-free PINN, an unsteady data-free PINN, and a data-driven PINN. Our results indicate that while both data-free PINNs produced steady-state solutions similar to those of traditional CFD solvers, they failed to predict vortex shedding in unsteady flow situations. On the other hand, the data-driven PINN predicted vortex shedding only within the timeframe where PetIBM training data were available, and beyond this timeframe the prediction quickly reverted to the steady-state solution.
Additionally, the data-driven PINN showed limited extrapolation capabilities and produced meaningless predictions at unseen coordinates. Our Koopman analysis suggests that PINN methods may be dissipative and dispersive, which inhibits oscillation and causes the computed flow to return to a steady state. This analysis is also consistent with the observation of a spectral bias inherent in neural networks [24]. One interesting research question that arises from our findings is how the cost-performance ratio of data-driven PINNs compares to that of classical deep learning approaches. While data-free PINNs are commonly considered numerical methods for solving PDEs, data-driven PINNs are more akin to supervised machine/deep learning. However, data-free PINNs have been shown to have inferior cost-performance ratios compared to traditional numerical methods for PDEs (for forward, non-parameterized problems). The literature suggests that PINNs are best utilized in a data-driven configuration, rather than in data-free settings. Therefore, it would be valuable to quantitatively compare the benefits of data-driven PINNs to those of classical deep learning approaches and understand the associated cost-performance trade-offs.

## 7 Reproducibility statement

In our work, we strive to achieve reproducibility of the results, and all the code we developed for this research is available publicly on GitHub under an open-source license, while all the data are available in open archival repositories. PetIBM is an open-source CFD library based on the immersed boundary method, and is available at [https://github.com/barbagroup/PetIBM](https://github.com/barbagroup/PetIBM) under the permissive BSD-3 license. The software was peer reviewed and published in the Journal of Open Source Software [8]. Our PINN solvers, based on the NVIDIA _Modulus_ toolkit, can be found following the links in the GitHub repository for this paper, located at [https://github.com/barbagroup/jcs_paper_pinn/](https://github.com/barbagroup/jcs_paper_pinn/). There, the folder prefixed by repro-pack corresponds to a git submodule pointing to the relevant commit on a branch of the repository for the full reproducibility package of the first author's PhD dissertation [27]. The branch named jcs-paper contains the modified plotting scripts that produce the publication-quality figures in this paper. A snapshot of the repro-pack is archived on Zenodo, with DOI 10.5281/zenodo.7988067. As described in the README of the repro-pack, readers can use pre-generated data for plotting the figures in this paper, or they can re-run the solutions using the code and data available in the repro-pack. The latter option is, of course, limited by the computational resources available to the reader. For the first option, the reader can find the raw data in a Zenodo archive, with DOI 10.5281/zenodo.7988106. To facilitate reproducibility of the computational environment, we executed all cases using Singularity/Apptainer images for both the PetIBM and PINN cases. All the container recipes are included in the repro-pack under the resources folder. The _Modulus_ toolkit was open-sourced by NVIDIA in March 2023,\({}^{2}\) under the Apache License 2.0. This is a permissive license that requires preservation of copyright and license notices and provides an express grant of patent rights. When we started this research, _Modulus_ was not yet open-source, but it was publicly available under the conditions of an End User Agreement.
Documentation of those conditions can be found via the May 21, 2022, snapshot of the _Modulus_ developer website on the Internet Archive Wayback Machine.\({}^{3}\) We are confident that following the best practices of open science described in this statement provides good conditions for the reproducibility of our results. Readers can inspect the code if any detail is unclear in the paper narrative, and they can re-analyze our data or re-run the computational experiments. We spared no effort to document, organize, and preserve all the digital artifacts of this work.

Footnote 2: [https://developer.nvidia.com/blog/physics-ml-platform-modulus-is-now-open-source/](https://developer.nvidia.com/blog/physics-ml-platform-modulus-is-now-open-source/)

Footnote 3: [https://web.archive.org/web/20220521223413/https://catalog.nvidia.com/orgs/nvidia/teams/modulus/containers/modulus](https://web.archive.org/web/20220521223413/https://catalog.nvidia.com/orgs/nvidia/teams/modulus/containers/modulus)

## Acknowledgement

We appreciate the support of NVIDIA, which sponsored access to its high-performance computing cluster.
2301.13770
Energy-Conserving Neural Network for Turbulence Closure Modeling
In turbulence modeling, we are concerned with finding closure models that represent the effect of the subgrid scales on the resolved scales. Recent approaches gravitate towards machine learning techniques to construct such models. However, the stability of machine-learned closure models and their abidance by physical structure (e.g. symmetries, conservation laws) are still open problems. To tackle both issues, we take the 'discretize first, filter next' approach. In this approach we apply a spatial averaging filter to existing fine-grid discretizations. The main novelty is that we introduce an additional set of equations which dynamically model the energy of the subgrid scales. Having an estimate of the energy of the subgrid scales, we can use the concept of energy conservation to derive stability. The subgrid energy containing variables are determined via a data-driven technique. The closure model is used to model the interaction between the filtered quantities and the subgrid energy. Therefore the total energy should be conserved. Abiding by this conservation law yields guaranteed stability of the system. In this work, we propose a novel skew-symmetric convolutional neural network architecture that satisfies this law. The result is that stability is guaranteed, independent of the weights and biases of the network. Importantly, as our framework allows for energy exchange between resolved and subgrid scales it can model backscatter. To model dissipative systems (e.g. viscous flows), the framework is extended with a diffusive component. The introduced neural network architecture is constructed such that it also satisfies momentum conservation. We apply the new methodology to both the viscous Burgers' equation and the Korteweg-De Vries equation in 1D. The novel architecture displays superior stability properties when compared to a vanilla convolutional neural network.
Toby van Gastelen, Wouter Edeling, Benjamin Sanderse
2023-01-31T17:13:17Z
http://arxiv.org/abs/2301.13770v5
# Energy-Conserving Neural Network for Turbulence Closure Modeling

###### Abstract

In turbulence modeling, and more particularly in the Large-Eddy Simulation (LES) framework, we are concerned with finding closure models that represent the effect of the unresolved subgrid scales on the resolved scales. Recent approaches gravitate towards machine learning techniques to construct such models. However, the stability of machine-learned closure models and their abidance by physical structure (e.g. symmetries, conservation laws) are still open problems. To tackle both issues, we take the 'discretize first, filter next' approach, in which we apply a spatial averaging filter to existing energy-conserving (fine-grid) discretizations. The main novelty is that we extend the system of equations describing the filtered solution with a set of equations that describe the evolution of (a compressed version of) the energy of the subgrid scales. Having an estimate of the energy of the subgrid scales, we can use the concept of energy conservation and derive stability of the discrete representation. The compressed variables are determined via a data-driven technique in such a way that the energy of the subgrid scales is matched. For the extended system, the closure model should be energy-conserving, and a new skew-symmetric convolutional neural network architecture is proposed that has this property. Stability is thus guaranteed, independent of the actual weights and biases of the network. Importantly, our framework allows energy exchange between resolved scales and compressed subgrid scales and thus enables backscatter. To model dissipative systems (e.g. viscous flows), the framework is extended with a diffusive component. The introduced neural network architecture is constructed such that it also satisfies momentum conservation. We apply the new methodology to both the viscous Burgers' equation and the Korteweg-de Vries equation in 1D and show superior stability properties when compared to a vanilla convolutional neural network.

**Keywords**: Turbulence modeling, Neural networks, Energy conservation, Structure preservation, Burgers' equation, Korteweg-de Vries equation

## 1 Introduction

Simulating turbulent flows with direct numerical simulations (DNSs) is often infeasible due to the high computational requirements. This is due to the fact that, with increasing Reynolds number, increasingly fine computational meshes are required in order to resolve all the relevant scales. Especially for applications in design and uncertainty quantification, where typically many simulations are required, this rapidly becomes computationally infeasible [1; 2]. To tackle this issue several different approaches have been proposed, such as reduced order models [3], Reynolds-averaged Navier-Stokes (RANS) [4], and Large Eddy Simulation (LES) [5]. These approaches differ in how much of the physics is simulated and how much is modelled. Here we will focus on the LES approach.

In LES the large-scale physics is modelled directly by a numerical discretization of the governing equations on a coarse grid. However, because the filter does not commute with the nonlinear terms in the equations, a commutator error arises. This prevents one from obtaining an accurate solution without knowledge of the subgrid-scale (SGS) content. This commutator error is typically referred to as the closure term, and modeling this term is the main concern of the LES community.
A major difficulty in the modeling of this closure term, by a corresponding closure model, is dealing with the exchange of energy from the small to the large scales (backscatter) [6; 7], as the SGS energy content is unknown during the time of the simulation. This makes accounting for backscatter difficult without leading to numerical instabilities [8]. Classical physics-based closure models are therefore often represented by a dissipative model, e.g. of eddy-viscosity type [9], ensuring a net decrease in energy, or clipped such that backscatter is removed [10]. Even though the assumption of a global net decrease in energy is valid [9], explicit modeling of backscatter is still important, as locally the effect of backscatter can be of great significance [11; 12]. Closure models that explicitly model the global kinetic energy present in the small scales at a given point in time, to allow for backscatter without sacrificing stability, also exist [13].

Recently, machine learning approaches, or more specifically neural networks (NNs), have also become a viable option for the modeling of this closure term, as they show potential for outperforming the classical approaches in different use cases [14; 15; 16; 17]. However, stability remains an important issue, along with abidance by physical structure such as mass, momentum, and energy conservation [18; 19; 20; 16].

In [18] the case of homogeneous isotropic turbulence for the compressible Navier-Stokes equations was investigated. A convolutional neural network (CNN) was trained to reproduce the closure term from high-resolution flow data. Although _a priori_ cross-correlation analysis on the training data showed promising results, stable models could only be achieved by projecting onto an eddy-viscosity basis. In [19] a gated recurrent NN was applied to the same test case, which showed even higher cross-correlation values with the actual closure term, but still yielded unstable models, even after employing stability training on data with artificial noise [20]. In [16] the case of incompressible turbulent channel flow was treated. Here NNs with varying dimensions of input space were employed to construct a closure model. They showed that increasing the size of the input space of the NN improves _a priori_ performance. However, _a posteriori_ analysis showed that this increased input space also led to instabilities. Even after introducing a form of backscatter clipping to stabilize these larger models, they were still outperformed by NN closure models with a small input space, for which only the solution at neighboring grid points was provided to the NN.

Two other recent promising approaches to improving the stability of NN closure models are 'trajectory fitting' [14; 15; 21; 22; 23] and reinforcement learning [24; 25]. Both of these approaches have in common that instead of fitting the NN to match the exact closure term (which is what we will refer to as 'derivative fitting'), one optimizes directly with respect to how well the solution is reproduced when carrying out a simulation with the closure model embedded into the solver. This has been shown to lead to more accurate and stable models [14; 15; 26]. The main difference between the two is that trajectory fitting requires the implementation of the spatial and temporal discretization to be differentiable with respect to the NN parameters.
In this way one can determine the gradients of the solution error with respect to the NN parameters, such that gradient-based optimizers can be applied to the corresponding optimization problem. Reinforcement learning, on the other hand, does not require these gradients, which makes it suitable for non-differentiable processes such as chess and self-driving cars [27]. However, neither of these approaches leads to a provably stable NN closure model without some form of clipping, and neither guarantees abidance by the underlying conservation laws. The latter is something that, to our knowledge, does not yet exist in the case of LES closure models.

To resolve the issues of stability and lack of physical structure, we present _a new NN closure model that satisfies both momentum and kinetic energy conservation and is therefore stable by design_, while still allowing for backscatter of energy into the resolved scales. As stated earlier, the difficulty of this task mainly lies in the fact that: (i) the kinetic energy conservation law includes terms which depend on the SGS content, which is too expensive to simulate directly, and consequently (ii) the kinetic energy of the large scales is not a conserved quantity (in the limit of vanishing viscosity). In order to tackle these issues we propose to take the 'discretize first, filter next' approach [22; 26]. This means that we start from a high-resolution solution with \(N\) degrees of freedom (on a fine computational grid), to which we apply a discrete filter (a spatial average) that projects the solution onto a coarse computational grid of dimension \(I\), with \(I\ll N\). Given the discrete filter, the exact closure term can be computed from the high-resolution simulation by calculating the commutator error. The main advantage of this approach is that the closure term now also accounts for the discretization error. Based on the filter's properties we then derive an energy conservation law that can be split into two components: one that depends solely on the large, or resolved, scales (resolved energy) and another that depends solely on the SGS content (SGS energy) [13]. As in existing works, the closure model is represented by an NN; however, we include an additional set of SGS variables that represent the SGS energy in our simulation. The key insight is that the resulting total system of equations should still conserve energy in the inviscid limit, and we choose our NN approximation such that it is consistent with this limit. In this way we still allow for backscatter without sacrificing stability.

The paper is structured in the following way. In section 2 we discuss Burgers' and the Korteweg-de Vries equation and their energy and momentum conservation properties. We introduce the discrete filter, the resulting closure problem, and derive a new energy conservation law that describes an energy exchange between the resolved energy and the SGS energy. In section 3 we introduce our new machine learning approach for modeling the closure term, satisfying the derived energy conservation law using a set of SGS variables to represent the SGS energy. In addition, we show how to also satisfy momentum conservation. In section 4 we study the convergence properties and stability of our closure model with respect to the coarse grid resolution and compare this to a vanilla CNN. We also analyze the structure-preserving properties in terms of momentum and energy conservation and the ability of the trained closure models to extrapolate in space and time.
In section 5 we conclude our work.

## 2 Governing equations, discrete filtering, and closure problem

Before constructing a machine learning closure, we formulate a description of the closure problem and the machinery required (e.g. discrete filters and reconstruction operators) at the discrete level, and we discuss the effect of filtering on the physical structure.

### Spatial discretization

We consider an initial value problem (IVP) of the following form: \[\frac{\partial u}{\partial t}=f(u), \tag{1}\] \[u(\mathbf{x},0)=u_{0}(\mathbf{x}), \tag{2}\] which describes the evolution of some quantity \(u(\mathbf{x},t)\) in space \(\mathbf{x}\in\Omega\) and time \(t\) on the spatial domain \(\Omega\subseteq\mathbb{R}^{d}\), given initial state \(u_{0}\). The dynamics of the system is governed by the right-hand side (RHS) \(f(u)\), which typically involves partial derivatives of \(u\). After spatial discretization (method of lines), we obtain the vector \(\mathbf{u}(t)\in\mathbb{R}^{N}\) which approximates the value of \(u\) at each of the \(N\) grid points \(\mathbf{x}_{i}\in\Omega\) for \(i=1,\ldots,N\), such that \(\mathrm{u}_{i}\approx u(\mathbf{x}_{i})\). The discrete analogue of the IVP is then \[\frac{\mathrm{d}\mathbf{u}}{\mathrm{d}t}=f_{h}(\mathbf{u}), \tag{3}\] \[\mathbf{u}(0)=\mathbf{u}_{0}, \tag{4}\] where \(f_{h}\) represents a spatial discretization of \(f\). It is assumed that all the physics described by equation (1) is captured in the discrete solution \(\mathbf{u}\). This means that whenever the physics involves a wide range of spatial scales, a very large number of degrees of freedom \(N\) is needed to adequately resolve all these scales. This places a heavy (or even insurmountable) burden on the computational effort that is required to numerically solve the considered equations.

### Burgers' and Korteweg-de Vries equation and physical structure

We are interested in the modeling and simulation of turbulent flows. For this purpose, we first consider Burgers' equation, a 1D simplification of the Navier-Stokes equations. Burgers' equation describes the evolution of the velocity \(u(x,t)\) according to the partial differential equation (PDE) \[\frac{\partial u}{\partial t}=-\frac{1}{2}\frac{\partial u^{2}}{\partial x}+\nu\frac{\partial^{2}u}{\partial x^{2}}, \tag{5}\] where the first term on the RHS represents non-linear convection and the second term diffusion, weighted by the viscosity parameter \(\nu\geq 0\). These processes are somewhat analogous to 3-D turbulence in that smaller scales are created by the nonlinear convective term and are then dissipated by diffusion [28]. We will be interested in two properties of Burgers' equation, which we collectively call 'structure'.
Firstly, momentum \(P\) is conserved on periodic domains: \[\frac{\mathrm{d}P}{\mathrm{d}t}=\frac{\mathrm{d}}{\mathrm{d}t}\int_{\Omega}u\,\mathrm{d}\Omega=\int_{\Omega}-\frac{1}{2}\frac{\partial u^{2}}{\partial x}+\nu\frac{\partial^{2}u}{\partial x^{2}}\,\mathrm{d}\Omega=-\frac{1}{2}[u^{2}]_{a}^{b}+\nu\left[\frac{\partial u}{\partial x}\right]_{a}^{b}=0. \tag{6}\] Secondly, on periodic domains (kinetic) energy is conserved in the absence of viscosity: \[\frac{\mathrm{d}E}{\mathrm{d}t}=\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\int_{\Omega}u^{2}\,\mathrm{d}\Omega=\int_{\Omega}-\frac{u}{2}\frac{\partial u^{2}}{\partial x}+u\nu\frac{\partial^{2}u}{\partial x^{2}}\,\mathrm{d}\Omega=-\frac{1}{3}[u^{3}]_{a}^{b}+\nu\left[u\frac{\partial u}{\partial x}\right]_{a}^{b}-\nu\int_{\Omega}\left(\frac{\partial u}{\partial x}\right)^{2}\mathrm{d}\Omega=-\underbrace{\nu\int_{\Omega}\left(\frac{\partial u}{\partial x}\right)^{2}\mathrm{d}\Omega}_{\geq 0}, \tag{7}\] where we used integration by parts.

These properties can be preserved in a discrete setting by employing a structure-preserving scheme [29] on a uniform grid with grid-spacing \(h\). The convective term is approximated by the following skew-symmetric scheme: \[\mathbf{G}(\mathbf{u})=-\frac{1}{3}\mathbf{D}_{1}\mathbf{u}^{2}-\frac{1}{3}\mathrm{diag}(\mathbf{u})\mathbf{D}_{1}\mathbf{u}, \tag{8}\] where \(\mathbf{D}_{1}\) is the central difference operator corresponding to the stencil \((\mathbf{D}_{1}\mathbf{u})_{i}=(\mathrm{u}_{i+1}-\mathrm{u}_{i-1})/(2h)\), \(\mathbf{u}^{2}\) is to be interpreted element-wise, and \(\mathbf{D}_{2}\) is the diffusion operator with stencil \((\mathbf{D}_{2}\mathbf{u})_{i}=(\mathrm{u}_{i+1}-2\mathrm{u}_{i}+\mathrm{u}_{i-1})/h^{2}\). We assume periodic boundary conditions (BCs). The spatial discretization leads to a system of ordinary differential equations (ODEs): \[\frac{\mathrm{d}\mathbf{u}}{\mathrm{d}t}=\underbrace{\mathbf{G}(\mathbf{u})+\nu\mathbf{D}_{2}\mathbf{u}}_{=f_{h}(\mathbf{u})}, \tag{9}\] which we will march forward in time using an explicit RK4 scheme [30]. The structure is preserved because the discretization conserves the discrete momentum \(P_{h}=h\mathbf{1}^{T}\mathbf{u}\) (for periodic BCs): \[\frac{\mathrm{d}P_{h}}{\mathrm{d}t}=h\mathbf{1}^{T}f_{h}(\mathbf{u})=0, \tag{10}\] where \(\mathbf{1}\) is a column vector with all entries equal to one. Furthermore, due to the skew-symmetry of the convection operator the evolution of the discrete kinetic energy \(E_{h}=\frac{h}{2}\mathbf{u}^{T}\mathbf{u}\) (which we will refer to simply as energy) is given by: \[\text{Burgers' equation:}\qquad\frac{\mathrm{d}E_{h}}{\mathrm{d}t}=h\mathbf{u}^{T}f_{h}(\mathbf{u})=h\nu\mathbf{u}^{T}\mathbf{D}_{2}\mathbf{u}=-\nu||\mathbf{Q}\mathbf{u}||_{2}^{2}. \tag{11}\] Here we used the fact that \(\mathbf{D}_{2}\) can be written as the Cholesky decomposition \(-\mathbf{Q}^{T}\mathbf{Q}\) [3], where \(\mathbf{Q}\) is a simple forward difference approximation of the first-order derivative. The norm \(\|.\|_{2}\) represents the conventional two-norm, further detailed in section 2.5. This discretization ensures net energy dissipation, and conservation in the inviscid limit; a numerical check of both properties is sketched below.
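To make this concrete, the following minimal Julia sketch (Julia is also the language of our implementation; the grid size, initial condition, and the helper `periodic_operators` are purely illustrative assumptions, not part of our code base) checks that the skew-symmetric convection operator (8) on a uniform periodic grid contributes neither to the discrete momentum (10) nor to the discrete energy (11):

```julia
using LinearAlgebra

# Illustrative periodic central-difference (D1) and diffusion (D2) operators
# on a uniform grid with spacing h.
function periodic_operators(N, h)
    D1 = zeros(N, N)
    D2 = zeros(N, N)
    for i in 1:N
        ip, im = mod1(i + 1, N), mod1(i - 1, N)
        D1[i, ip], D1[i, im] = 1 / (2h), -1 / (2h)
        D2[i, ip], D2[i, im], D2[i, i] = 1 / h^2, 1 / h^2, -2 / h^2
    end
    return D1, D2
end

# Skew-symmetric convection (8): G(u) = -1/3 D1 u^2 - 1/3 diag(u) D1 u.
G(u, D1) = -(1 / 3) .* (D1 * u .^ 2) .- (1 / 3) .* (u .* (D1 * u))

N, h = 64, 2π / 64
x = h .* (0:N-1)
u = sin.(x) .+ 0.5 .* cos.(3 .* x)
D1, _ = periodic_operators(N, h)

# Convection neither creates momentum (1ᵀG(u) = 0) nor energy (uᵀG(u) = 0);
# both quantities vanish to machine precision.
@show h * sum(G(u, D1))
@show h * dot(u, G(u, D1))
```

With the viscous term added, the energy rate reduces to \(-\nu||\mathbf{Q}\mathbf{u}||_{2}^{2}\leq 0\), as in (11).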
In addition to Burgers' equation we will consider the Korteweg-de Vries (KdV) equation: \[\frac{\partial u}{\partial t}=-\frac{\varepsilon}{2}\frac{\partial u^{2}}{\partial x}-\mu\frac{\partial^{3}u}{\partial x^{3}}, \tag{12}\] where \(\varepsilon\) and \(\mu\) are parameters. The KdV equation conserves momentum and (kinetic) energy irrespective of the values of \(\varepsilon\) and \(\mu\). We discretize the nonlinear term in the same way as for Burgers' equation, using the skew-symmetric scheme. The third-order spatial derivative is approximated by the skew-symmetric central difference operator \(\mathbf{D}_{3}\) corresponding to the stencil \((\mathbf{D}_{3}\mathbf{u})_{i}=(-\mathrm{u}_{i-2}+2\mathrm{u}_{i-1}-2\mathrm{u}_{i+1}+\mathrm{u}_{i+2})/(2h^{3})\), see [31]. The resulting discretization is then not only momentum conserving, but also energy conserving in the case of periodic BCs: \[\text{KdV equation:}\qquad\frac{\mathrm{d}E_{h}}{\mathrm{d}t}=0. \tag{13}\]

### Discrete filtering

In order to tackle the issue of high computational expenses for large \(N\), we apply a spatial averaging filter to the fine-grid solution \(\mathbf{u}\), resulting in the coarse-grid solution \(\bar{\mathbf{u}}\). The coarse grid follows from dividing \(\Omega\) into \(I\) non-overlapping cells \(\Omega_{i}\) with cell centers \(\mathbf{X}_{i}\). The coarse grid is refined into the fine grid by splitting each \(\Omega_{i}\) into \(J(i)\) subcells \(\omega_{ij}\) with cell centers \(\mathbf{x}_{ij}\). This subdivision is intuitively pictured in the upper grid of Figure 1, for a 1D grid. Given the coarse and fine grid, we define the mass matrices \(\boldsymbol{\omega}\in\mathbb{R}^{N\times N}\) and \(\boldsymbol{\Omega}\in\mathbb{R}^{I\times I}\) which contain the volumes of the fine and coarse cells on the main diagonal, respectively. To reduce the degrees of freedom of the system we apply a discrete spatial averaging filter \(\mathbf{W}\in\mathbb{R}^{I\times N}\), \(I<N\), to the fine-grid solution \(\mathbf{u}\) in order to obtain the filtered solution \(\bar{\mathbf{u}}\): \[\bar{\mathbf{u}}=\mathbf{W}\mathbf{u}. \tag{14}\] The spatial averaging filter is defined as \[\mathbf{W}:=\boldsymbol{\Omega}^{-1}\mathbf{O}, \tag{15}\] with overlap matrix \(\mathbf{O}\in\mathbb{R}^{I\times N}\): \[\mathbf{O}:=\begin{bmatrix}|\omega_{11}|&\dots&|\omega_{1J(1)}|&&&&\\ &&\ddots&\ddots&\ddots&&\\ &&&&|\omega_{I1}|&\dots&|\omega_{IJ(I)}|\end{bmatrix}. \tag{16}\] Here \(|.|\) represents the volume of the considered subcell. The overlap matrix essentially contains the volume of the overlap between coarse-grid cell \(\Omega_{i}\) and fine-grid subcell \(\omega_{ij}\) at the appropriate locations. Note that each column of \(\mathbf{W}\) and \(\mathbf{O}\) only contains one non-zero entry. The filter reduces the number of unknowns at each time step from \(N\) to \(I\). Next to the filter, we define a reconstruction operator \(\mathbf{R}\in\mathbb{R}^{N\times I}\) which relates to \(\mathbf{W}\) as \[\mathbf{R}:=\boldsymbol{\omega}^{-1}\mathbf{W}^{T}\boldsymbol{\Omega}=\boldsymbol{\omega}^{-1}\mathbf{O}^{T}. \tag{17}\] The matrix \(\mathbf{R}\) is essentially a simple approximation of the inverse of \(\mathbf{W}\) by a piece-wise constant function [32]. This is intuitively pictured in Figure 2. An important property of the filter/reconstruction pair, which will be used in subsequent derivations, is that \[\mathbf{W}\mathbf{R}=\boldsymbol{\Omega}^{-1}\mathbf{O}\boldsymbol{\omega}^{-1}\mathbf{O}^{T}=\begin{bmatrix}\ddots&&\\ &\sum_{j=1}^{J(i)}\frac{|\omega_{ij}|}{|\Omega_{i}|}&\\ &&\ddots\end{bmatrix}=\mathbf{I}. \tag{18}\] Consequently, filtering a reconstructed solution \(\mathbf{R}\bar{\mathbf{u}}\) leaves \(\bar{\mathbf{u}}\) unchanged, i.e.
\[\bar{\mathbf{u}}=\underbrace{(\mathbf{W}\mathbf{R})^{p}}_{=\mathbf{I}}\mathbf{W}\mathbf{u} \tag{19}\] for \(p\in\mathbb{N}_{0}\). We will refer to this property as the 'projection' property, as it is similar to how repeated application of a projection operator leaves a vector unchanged. By subtracting the reconstructed solution \(\mathbf{R}\bar{\mathbf{u}}\) from \(\mathbf{u}\) we can define the subgrid-scale (SGS) content \(\mathbf{u}^{\prime}\in\mathbb{R}^{N}\): \[\mathbf{u}^{\prime}:=\mathbf{u}-\mathbf{R}\bar{\mathbf{u}}. \tag{20}\]

Figure 1: Subdivision of the spatial grid, where the dots represent cell centers \(x_{ij}\) and \(X_{i}\), for \(J(1)=J(2)=3\) and \(J(3)=4\).

In addition, we will refer to the SGS content in a single coarse cell \(\Omega_{i}\) as \(\boldsymbol{\mu}_{i}\in\mathbb{R}^{J(i)}\), see Figure 2. Applying the filter to \(\mathbf{u}^{\prime}\) yields zero: \[\mathbf{W}\mathbf{u}^{\prime}=\mathbf{W}\mathbf{u}-\underbrace{\mathbf{W}\mathbf{R}}_{=\mathbf{I}}\bar{\mathbf{u}}=\bar{\mathbf{u}}-\bar{\mathbf{u}}=\mathbf{0}_{\Omega}, \tag{21}\] where \(\mathbf{0}_{\Omega}\) is a vector with all entries equal to zero, defined on the coarse grid. This can be seen as the discrete equivalent of a property of a Reynolds operator [5]. As an illustration, we show each of the introduced quantities, calculated for a 1D sinusoidal wave, in Figure 2.

### Discrete closure problem

After having defined the filter we describe the time evolution of \(\bar{\mathbf{u}}\). Since we employ a spatial filter that does not depend on time, filtering and time-differentiation commute: \(\mathbf{W}\frac{\mathrm{d}\mathbf{u}}{\mathrm{d}t}=\frac{\mathrm{d}(\mathbf{W}\mathbf{u})}{\mathrm{d}t}=\frac{\mathrm{d}\bar{\mathbf{u}}}{\mathrm{d}t}\). The closure problem arises because such a commutation property does not hold for the spatial discretization, i.e. \[\mathbf{W}f_{h}(\mathbf{u})\neq f_{H}(\mathbf{W}\mathbf{u})=f_{H}(\bar{\mathbf{u}}), \tag{22}\] where \(f_{H}\) represents the same spatial discretization scheme as \(f_{h}\), but on the coarse grid. The closure problem is that the equations for \(\bar{\mathbf{u}}\) are 'unclosed', meaning that we require the fine-grid solution \(\mathbf{u}\) to be able to evolve the coarse-grid solution \(\bar{\mathbf{u}}\) in time. The filtered system can be rewritten in closure model form as \[\frac{\mathrm{d}\bar{\mathbf{u}}}{\mathrm{d}t}=f_{H}(\bar{\mathbf{u}})+\underbrace{(\mathbf{W}f_{h}(\mathbf{u})-f_{H}(\bar{\mathbf{u}}))}_{=:\mathbf{c}(\mathbf{u})}, \tag{23}\] where \(\mathbf{c}(\mathbf{u})\in\mathbb{R}^{I}\) is the closure term; \(\mathbf{c}(\mathbf{u})\) is essentially the discrete equivalent of the commutator error in LES [5]. One advantage of having first discretized the problem is that \(\mathbf{c}(\mathbf{u})\) now also includes the discretization error. The aim in closure modeling is generally to approximate \(\mathbf{c}(\mathbf{u})\) by a closure model \(\tilde{\mathbf{c}}(\bar{\mathbf{u}};\mathbf{\Theta})\). In section 3 we choose to represent \(\tilde{\mathbf{c}}\) by a neural network (NN), whose parameters \(\mathbf{\Theta}\) are to be trained to make the approximation accurate. In constructing such approximations, we will also use the equation describing the evolution of the SGS content \(\frac{\mathrm{d}\mathbf{u}^{\prime}}{\mathrm{d}t}\): \[\frac{\mathrm{d}\mathbf{u}^{\prime}}{\mathrm{d}t}=\frac{\mathrm{d}\mathbf{u}}{\mathrm{d}t}-\mathbf{R}\frac{\mathrm{d}\bar{\mathbf{u}}}{\mathrm{d}t}. \tag{24}\]
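The filtering machinery above is easy to reproduce numerically. The following Julia sketch (assuming, purely for illustration, a uniform grid in which every fine cell has volume \(1/N\) and every coarse cell volume \(1/I\)) builds \(\mathbf{W}\) and \(\mathbf{R}\) and checks the projection property (18) and the Reynolds-like property (21):

```julia
using LinearAlgebra

# Illustrative uniform 1D setup: Ic coarse cells, each split into J subcells.
Ic, J = 5, 4
N = Ic * J
ω = Diagonal(fill(1 / N, N))    # fine-grid mass matrix
Ω = Diagonal(fill(1 / Ic, Ic))  # coarse-grid mass matrix

# Overlap matrix (16): volume of the overlap of coarse cell i and fine cell j.
O = zeros(Ic, N)
for i in 1:Ic, j in 1:J
    O[i, (i - 1) * J + j] = 1 / N
end

W = inv(Ω) * O   # spatial averaging filter (15)
R = inv(ω) * O'  # reconstruction operator (17)

u  = sin.(2π .* ((0.5:N) ./ N))  # a fine-grid snapshot
ū  = W * u                       # filtered solution (14)
u′ = u .- R * ū                  # SGS content (20)

@show norm(W * R - I)  # projection property (18): WR = I
@show norm(W * u′)     # filtered SGS content vanishes (21)
```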
### Inner products and energy decomposition

To describe the energy that is present in the system at any given time, we define the following inner products and norms: \[(\mathbf{a},\mathbf{b})_{\boldsymbol{\xi}}:=\mathbf{a}^{T}\boldsymbol{\xi}\mathbf{b}, \tag{25}\] \[||\mathbf{a}||_{\boldsymbol{\xi}}^{2}:=(\mathbf{a},\mathbf{a})_{\boldsymbol{\xi}}, \tag{26}\] for \(\boldsymbol{\xi}\in\{\boldsymbol{\omega},\boldsymbol{\Omega}\}\). With this notation we can represent the inner product on the fine grid, \(\mathbf{a},\mathbf{b}\in\mathbb{R}^{N}\), as well as on the coarse grid, \(\mathbf{a},\mathbf{b}\in\mathbb{R}^{I}\). For \(\boldsymbol{\xi}=\mathbf{I}\) we simply obtain the conventional inner product and two-norm, denoted as \((\mathbf{a},\mathbf{b})=\mathbf{a}^{T}\mathbf{b}\) and \(||\mathbf{a}||_{2}^{2}\), respectively. We also define a joint inner product as the following sum of inner products: \[\left(\begin{bmatrix}\mathbf{a}_{1}\\ \vdots\\ \mathbf{a}_{M}\end{bmatrix},\begin{bmatrix}\mathbf{b}_{1}\\ \vdots\\ \mathbf{b}_{M}\end{bmatrix}\right)_{\boldsymbol{\xi}_{M}}:=\begin{bmatrix}\mathbf{a}_{1}\\ \vdots\\ \mathbf{a}_{M}\end{bmatrix}^{T}\underbrace{\begin{bmatrix}\boldsymbol{\xi}&&\\ &\ddots&\\ &&\boldsymbol{\xi}\end{bmatrix}}_{=\boldsymbol{\xi}_{M}}\begin{bmatrix}\mathbf{b}_{1}\\ \vdots\\ \mathbf{b}_{M}\end{bmatrix}, \tag{27}\] where the vectors \(\mathbf{a}_{i}\) and \(\mathbf{b}_{i}\) (\(i=1,\ldots,M\)) have the appropriate dimensions and are concatenated into column vectors. Furthermore, \(\boldsymbol{\xi}_{M}\) is the extended mass matrix. This notation is introduced in order to later extend our system of equations with additional equations for the subgrid content.

Besides the projection property (19), an additional characteristic of the filter/reconstruction pair is that the inner product is conserved under reconstruction (see Appendix A): \[(\mathbf{R}\bar{\mathbf{a}},\mathbf{R}\bar{\mathbf{b}})_{\omega}=(\bar{\mathbf{a}},\bar{\mathbf{b}})_{\Omega}. \tag{28}\] The total energy \(E_{h}\) of the fine-grid solution in terms of inner products reads \[E_{h}:=\frac{1}{2}||\mathbf{u}||_{\omega}^{2}, \tag{29}\] which can be decomposed using (20): \[E_{h}=\frac{1}{2}||\mathbf{u}||_{\omega}^{2}=\frac{1}{2}||\mathbf{R}\bar{\mathbf{u}}+\mathbf{u}^{\prime}||_{\omega}^{2}=\frac{1}{2}||\mathbf{R}\bar{\mathbf{u}}||_{\omega}^{2}+(\mathbf{R}\bar{\mathbf{u}},\mathbf{u}^{\prime})_{\omega}+\frac{1}{2}||\mathbf{u}^{\prime}||_{\omega}^{2}.\] We can simplify this decomposition by noting that the cross-term is zero, i.e. \(\mathbf{R}\bar{\mathbf{u}}\) is orthogonal to \(\mathbf{u}^{\prime}\), see Appendix A. Combining this orthogonality property with property (28) leads to the following important energy decomposition: \[E_{h}=\underbrace{\frac{1}{2}||\bar{\mathbf{u}}||_{\Omega}^{2}}_{=:\bar{E}_{h}}+\underbrace{\frac{1}{2}||\mathbf{u}^{\prime}||_{\omega}^{2}}_{=:E_{h}^{\prime}}. \tag{30}\] In other words, our choice of filter and reconstruction operators is such that the total energy of the system can be split into one part (the resolved energy \(\bar{E}_{h}\)) that exclusively depends on the filtered solution \(\bar{\mathbf{u}}\) and another part (the SGS energy \(E_{h}^{\prime}\)) that depends only on the SGS content \(\mathbf{u}^{\prime}\).
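Continuing the filtering sketch above, the decomposition (30) can be verified directly; the cross term \((\mathbf{R}\bar{\mathbf{u}},\mathbf{u}^{\prime})_{\omega}\) vanishes to machine precision:

```julia
# Energy decomposition (30): total = resolved + SGS, since (Rū, u′)_ω = 0.
E_tot = 0.5 * dot(u, ω * u)    # E_h  = ½||u||²_ω
E_res = 0.5 * dot(ū, Ω * ū)    # Ē_h  = ½||ū||²_Ω
E_sgs = 0.5 * dot(u′, ω * u′)  # E′_h = ½||u′||²_ω
@show E_tot ≈ E_res + E_sgs    # true
@show dot(R * ū, ω * u′)       # orthogonality of Rū and u′: ≈ 0
```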
The energy conservation law can also be decomposed into a resolved and an SGS part: \[\frac{\mathrm{d}E_{h}}{\mathrm{d}t}=\frac{\mathrm{d}\bar{E}_{h}}{\mathrm{d}t}+\frac{\mathrm{d}E_{h}^{\prime}}{\mathrm{d}t}=(\bar{\mathbf{u}},\frac{\mathrm{d}\bar{\mathbf{u}}}{\mathrm{d}t})_{\Omega}+(\mathbf{u}^{\prime},\frac{\mathrm{d}\mathbf{u}^{\prime}}{\mathrm{d}t})_{\omega}=0, \tag{31}\] where we used the product rule to arrive at this relation. For Burgers' equation with \(\nu>0\), the last equality sign changes to \(\leq\). This means that even for dissipative systems the resolved energy could in principle increase (so-called 'backscatter'), as long as the total energy is decreasing. We illustrate the energy decomposition using simulations of the KdV equation. Figure 3 shows the exchange of energy between the subgrid and filtered solutions. Clearly, the energy of the filtered solution is _not_ a conserved quantity.

Figure 3: Simulation of the KdV equation (12) with periodic BCs before and after filtering (left) and the corresponding energy decomposition (right).

### Momentum conservation

Next to the energy, we formulate the total discrete momentum in terms of an inner product and investigate whether it is conserved upon filtering. The total discrete momentum is given by \[P_{h}=(\mathbf{1}_{\omega},\mathbf{u})_{\omega}, \tag{32}\] where \(\mathbf{1}_{\omega}\) is a vector with all entries equal to one, defined on the fine grid. From this definition we can show (see Appendix A) that the discrete momentum does not change upon filtering, i.e. \[P_{h}=(\mathbf{1}_{\omega},\mathbf{u})_{\omega}=(\mathbf{1}_{\Omega},\bar{\mathbf{u}})_{\Omega}. \tag{33}\] This relation allows us to derive a momentum conservation condition on the closure term: \[\frac{\mathrm{d}P_{h}}{\mathrm{d}t}=(\mathbf{1}_{\omega},f_{h}(\mathbf{u}))_{\omega}=(\mathbf{1}_{\Omega},\mathbf{W}f_{h}(\mathbf{u}))_{\Omega}=(\mathbf{1}_{\Omega},f_{H}(\bar{\mathbf{u}})+\mathbf{c}(\mathbf{u}))_{\Omega}=(\mathbf{1}_{\Omega},\mathbf{c}(\mathbf{u}))_{\Omega}=0, \tag{34}\] where we used the fact that the coarse discretization is already momentum conserving.

## 3 Structure-preserving closure modeling framework

The derived discrete energy and momentum balances, before and after filtering, will be used in this section to construct a novel structure-preserving closure model. We will also discuss how to fit the parameters of the model. The ideas will be presented for periodic BCs in 1D; other types of BCs are discussed in Appendix C.

### Framework

Many existing closure approaches aim at approximating \(\mathbf{c}(\mathbf{u})\) by a closure model \(\tilde{\mathbf{c}}(\bar{\mathbf{u}};\mathbf{\Theta})\), where \(\mathbf{\Theta}\) are parameters to be determined such that the approximation is accurate. In this work, we propose a novel formulation, in which we extend the system of equations for the \(I\) filtered variables \(\bar{\mathbf{u}}\) with a set of \(I\) auxiliary SGS variables \(\mathbf{s}\in\mathbb{R}^{I}\) that locally model the SGS energy.
This extended system of equations has the form \[\frac{\mathrm{d}}{\mathrm{d}t}\begin{bmatrix}\bar{\mathbf{u}}\\ \mathbf{s}\end{bmatrix}=\begin{bmatrix}f_{H}(\bar{\mathbf{u}})\\ \mathbf{0}\end{bmatrix}+\mathbf{\Omega}_{2}^{-1}(\mathcal{K}-\mathcal{K}^{T})\begin{bmatrix}\bar{\mathbf{u}}\\ \mathbf{s}\end{bmatrix}-\mathbf{\Omega}_{2}^{-1}\mathcal{Q}^{T}\mathcal{Q}\begin{bmatrix}\bar{\mathbf{u}}\\ \mathbf{s}\end{bmatrix}, \tag{35}\] where \(\mathcal{K}=\mathcal{K}(\bar{\mathbf{u}},\mathbf{s},\mathbf{\Theta})\in\mathbb{R}^{2I\times 2I}\) and \(\mathcal{Q}=\mathcal{Q}(\bar{\mathbf{u}},\mathbf{s},\mathbf{\Theta})\in\mathbb{R}^{2I\times 2I}\), and \(\mathbf{\Theta}\) represents the parameters. Note that this system is an approximation of the true dynamics. Next to the introduction of the SGS variables \(\mathbf{s}\), the second main novelty in this work is to formulate the closure model in terms of a skew-symmetric term and a dissipative term. The skew-symmetric term is introduced to allow for a local energy exchange between the filtered solution and the SGS variables, and the dissipative term to provide additional dissipation. These operators will be modelled in terms of neural networks (NNs) with trainable parameters (contained in \(\mathbf{\Theta}\)). So even though the notation in (35) suggests linearity of the closure model in \(\bar{\mathbf{u}}\) and \(\mathbf{s}\), the dependence of \(\mathcal{K}\) and \(\mathcal{Q}\) on \(\bar{\mathbf{u}}\) and \(\mathbf{s}\) makes the model non-linear. The construction of the introduced operators will be detailed in sections 3.3 and 3.4. Note the presence of \(\mathbf{\Omega}_{2}^{-1}\) in (35), which is due to the fact that our energy definition includes \(\mathbf{\Omega}\).

The SGS variables \(\mathbf{s}\) are used to represent the SGS energy _on the coarse grid_, such that \[\frac{1}{2}\mathbf{s}^{2}\approx\frac{1}{2}\mathbf{W}(\mathbf{u}^{\prime})^{2}, \tag{36}\] where the notation \((.)^{2}\) is again to be interpreted element-wise. In section 3.2 we present how we achieve this approximation. By adding these SGS variables as unknowns into equation (35), we are able to include an approximation of the SGS energy in the simulation, while still significantly reducing the system size (from \(N\) to \(2I\)). Our key insight is that _by explicitly including an approximation of the SGS energy we are able to satisfy the energy conservation balance, equation (31)_. The energy balance serves not only as an important constraint that restrains the possible forms that the closure model (represented by a NN) can take, but also guarantees stability of our closure model, since the (kinetic) energy is a norm of the solution which is bounded in time.

Given the extended system of equations, the total energy is approximated as \[E_{h}\approx E_{s}:=\frac{1}{2}||\bar{\mathbf{U}}||_{\Omega_{2}}^{2}=\underbrace{\frac{1}{2}(\bar{\mathbf{u}},\bar{\mathbf{u}})_{\Omega}}_{=\bar{E}_{h}}+\underbrace{\frac{1}{2}(\mathbf{s},\mathbf{s})_{\Omega}}_{=:S}, \tag{37}\] with \(S\) approximating the SGS energy, \[S\approx E_{h}^{\prime}, \tag{38}\] and with evolution \[\frac{\mathrm{d}E_{s}}{\mathrm{d}t}=\left(\bar{\mathbf{U}},\frac{\mathrm{d}\bar{\mathbf{U}}}{\mathrm{d}t}\right)_{\Omega_{2}}, \tag{39}\] where we used the joint inner product notation introduced in (27) and concatenated the filtered solution and the SGS variables into a single vector \(\bar{\mathbf{U}}\in\mathbb{R}^{2I}\): \[\bar{\mathbf{U}}:=\begin{bmatrix}\bar{\mathbf{u}}\\ \mathbf{s}\end{bmatrix}. \tag{40}\]
Upon substituting the closure model form, equation (35), the following evolution equation for the approximated total energy results: \[\frac{\mathrm{d}E_{s}}{\mathrm{d}t}=(\bar{\mathbf{u}},f_{H}(\bar{\mathbf{u}}))_{\Omega}-||\mathcal{Q}\bar{\mathbf{U}}||_{2}^{2}, \tag{41}\] as the skew-symmetric term involving \(\mathcal{K}-\mathcal{K}^{T}\) cancels. This equation can be further simplified when choosing a specific \(f_{H}\). For example, if we substitute the structure-preserving discretization of Burgers' equation (9) for \(f_{H}\) (with grid-spacing \(H\)) we obtain \[\text{Burgers' equation:}\qquad\frac{\mathrm{d}E_{s}}{\mathrm{d}t}=-H\nu||\bar{\mathbf{Q}}\bar{\mathbf{u}}||_{2}^{2}-||\mathcal{Q}\bar{\mathbf{U}}||_{2}^{2}\leq 0, \tag{42}\] i.e. energy is dissipated from the system by two terms: the coarse-grid diffusion operator and an additional (trainable) dissipation term. Here \(\bar{\mathbf{Q}}\) represents the forward difference approximation of the first-order derivative on the coarse grid. This additional dissipation term is required because the diffusion operator discretized on the fine grid is more dissipative than on the coarse grid, see Appendix B. For energy-conserving systems, such as KdV, we set \(\mathcal{Q}\) to zero and obtain: \[\text{KdV equation:}\qquad\frac{\mathrm{d}E_{s}}{\mathrm{d}t}=0. \tag{43}\] We stress again that by having added an approximation of the subgrid energy into the equation system, we are able to use the concept of energy conservation (or dissipation) in constructing a closure model. Furthermore, as energy is dissipated or conserved, the resulting model is stable by design.

### SGS variables

To represent the SGS variables we propose a data-driven linear compression of the SGS content (assuming uniform coarse and fine grids such that \(J(i)=J\)): \[\mathrm{s}_{i}=\mathbf{t}^{T}\boldsymbol{\mu}_{i},\qquad i=1,\ldots,I, \tag{44}\] where we recall that \(\boldsymbol{\mu}_{i}\in\mathbb{R}^{J}\) represents the SGS content in a single coarse cell \(\Omega_{i}\). The SGS variable \(\mathrm{s}_{i}\) is a representation of the SGS content within cell \(\Omega_{i}\), encoded by learnable compression parameters \(\mathbf{t}\in\mathbb{R}^{J}\). This linear compression can be written for all coarse-grid points as the following matrix-vector product: \[\mathbf{s}=\mathbf{T}\mathbf{u}^{\prime}, \tag{45}\] with \(\mathbf{T}(\mathbf{t})\in\mathbb{R}^{I\times N}\) being the (sparse) compression matrix fully defined by the parameters \(\mathbf{t}\). Note that \(\mathbf{T}\) has the same sparsity pattern as \(\mathbf{W}\). Using this notation, (40) can be written as \[\bar{\mathbf{U}}=\mathbf{W}_{\mathbf{T}}\mathbf{u}, \tag{46}\] where \[\mathbf{W}_{\mathbf{T}}:=\begin{bmatrix}\mathbf{W}\\ \mathbf{T}(\mathbf{I}-\mathbf{R}\mathbf{W})\end{bmatrix}. \tag{47}\] The main advantage of defining the compression as a linear operation is that, if we have reference data for \(\mathbf{u}^{\prime}\), we can easily obtain the evolution of \(\mathbf{s}\) as \[\frac{\mathrm{d}\mathbf{s}}{\mathrm{d}t}=\frac{\partial\mathbf{s}}{\partial\mathbf{u}^{\prime}}\frac{\mathrm{d}\mathbf{u}^{\prime}}{\mathrm{d}t}=\mathbf{T}\frac{\mathrm{d}\mathbf{u}^{\prime}}{\mathrm{d}t}. \tag{48}\]
Another advantage is that the Jacobian \(\frac{\partial\mathbf{s}}{\partial\mathbf{u}^{\prime}}=\mathbf{T}\) does not depend on \(\mathbf{u}^{\prime}\), such that we avoid the problem that arises when taking the 'natural' choice of \(\mathbf{s}\), which would be \(\mathbf{s}=\sqrt{\mathbf{W}(\mathbf{u}^{\prime})^{2}}\), namely that the Jacobian \[\left(\frac{\partial\mathbf{s}}{\partial\mathbf{u}^{\prime}}\right)_{ij}=\frac{\mathrm{W}_{ij}\mathrm{u}_{j}^{\prime}}{\sqrt{\sum_{k=1}^{N}\mathrm{W}_{ik}(\mathrm{u}_{k}^{\prime})^{2}}}\] becomes undefined when the denominator is zero. A third advantage is that the linear compression allows us to calculate the contribution of a forcing term to \(\frac{\mathrm{d}\mathbf{s}}{\mathrm{d}t}\) (this will be explained in section 3.5).

The parameters \(\mathbf{t}\) are chosen such that the SGS energy is accurately represented on the coarse grid, i.e. we determine the elements of \(\mathbf{t}\) such that they minimize the error made in approximation (36), leading to the loss function \[\mathcal{L}_{s}(\mathcal{D};\mathbf{t})=\frac{1}{|\mathcal{D}|}\sum_{d\in\mathcal{D}}\frac{1}{|\Omega|}||\frac{1}{2}(\mathbf{T}(\mathbf{t})\mathbf{u}_{d}^{\prime})^{2}-\frac{1}{2}\mathbf{W}(\mathbf{u}_{d}^{\prime})^{2}||_{\Omega}^{2}, \tag{49}\] where the notation \((.)^{2}\) is again to be interpreted element-wise. Here the subscript \(d\) represents a sample from the training dataset \(\mathcal{D}\) containing \(|\mathcal{D}|\) samples. Note that, due to the way \(\mathbf{t}\) appears in the loss function, negative values for \(\mathbf{s}\) are allowed. To overcome the saddle point at \(\mathbf{t}=\mathbf{0}\) we initialize the elements of \(\mathbf{t}\) with random noise (see Appendix D). For \(J=2\) this minimization problem has an exact solution (see Appendix E).

To illustrate how the compression works in practice we consider a snapshot from a simulation of Burgers' equation (\(\nu=0.01\)) with periodic BCs, see Figure 4. We observe that \(\mathbf{s}\) serves as an energy storage for the SGS content, which is mainly present near shocks.

### Skew-symmetric closure term \(\mathcal{K}\)

Having defined the SGS variables \(\mathbf{s}\), we continue to detail the construction of \(\mathcal{K}\) appearing in equation (35). We propose the following decomposition: \[\mathcal{K}=\begin{bmatrix}\mathbf{K}_{11}&\mathbf{K}_{12}\\ \mathbf{0}&\mathbf{K}_{22}\end{bmatrix}\quad\rightarrow\quad\mathcal{K}-\mathcal{K}^{T}=\begin{bmatrix}\mathbf{K}_{11}-\mathbf{K}_{11}^{T}&\mathbf{K}_{12}\\ -\mathbf{K}_{12}^{T}&\mathbf{K}_{22}-\mathbf{K}_{22}^{T}\end{bmatrix}, \tag{50}\] with submatrices \(\mathbf{K}_{ij}(\bar{\mathbf{U}};\boldsymbol{\Theta})\in\mathbb{R}^{I\times I}\), which will depend on the solution \(\bar{\mathbf{U}}\) and trainable parameters \(\boldsymbol{\Theta}\). This decomposition is chosen such that the upper-left submatrix \(\mathbf{K}_{11}\) allows for an energy exchange within the resolved scales, the upper-right submatrix \(\mathbf{K}_{12}\) for an energy exchange between the resolved scales and the SGS variables, and the final submatrix \(\mathbf{K}_{22}\) for an energy exchange within the SGS variables. If all entries of each \(\mathbf{K}_{ij}\) were taken as parameters, one would have \(\mathcal{O}(I^{2})\) parameters, which is too many for practical problems of interest.
Instead, we propose to represent each \(\mathbf{K}_{ij}\) in terms of a matrix \(\boldsymbol{\Phi}_{ij}\in\mathbb{R}^{I\times I}\) of only \(2D+1\) diagonals \(\boldsymbol{\phi}_{d}^{ij}\in\mathbb{R}^{I}\) (\(d=-D,\ldots,D\)), where each diagonal is given by an output channel of a convolutional neural network (CNN, [33]): \[\boldsymbol{\Phi}_{ij}=\begin{bmatrix}\ddots&&\ddots&\ddots&\ddots&&\ddots&&\\ &\boldsymbol{\phi}_{-D}^{ij}&\cdots&\boldsymbol{\phi}_{-1}^{ij}&\boldsymbol{\phi}_{0}^{ij}&\boldsymbol{\phi}_{1}^{ij}&\cdots&\boldsymbol{\phi}_{D}^{ij}\\ &&\ddots&&\ddots&\ddots&\ddots&&\ddots\end{bmatrix}. \tag{51}\] The hyperparameter \(D\) determines the sparsity of \(\boldsymbol{\Phi}_{ij}\) and is taken such that \(D\ll I/2\) to reduce computational costs. In this way only a local neighbourhood is included in the approximation. As the input channels of the CNN we take \(\{\bar{\mathbf{u}},\mathbf{s},f_{H}(\bar{\mathbf{u}})\}\). The dependence of \(\boldsymbol{\phi}_{d}\) on \(\bar{\mathbf{U}}\) through the CNN adds non-linearity to the closure model. Multiplying some vector \(\mathbf{v}\) by \(\boldsymbol{\Phi}_{ij}\) thus corresponds to the following non-linear stencil: \[(\boldsymbol{\Phi}_{ij}\mathbf{v})_{k}=\sum_{d=-D}^{D}\phi_{dk}^{ij}(\bar{\mathbf{U}};\mathbf{\Theta})\mathrm{v}_{k+d}. \tag{52}\] A CNN is chosen to represent the diagonals as it is invariant with respect to translations of the input channels. In this way our final closure model inherits this property. In total, the CNN thus consists of three input channels, an arbitrary number of hidden channels (to be specified in the results section), and \(3(2D+1)\) output channels: \[\mathrm{CNN}:\bar{\mathbf{u}},\mathbf{s},f_{H}(\bar{\mathbf{u}})\mapsto\boldsymbol{\phi}_{d}^{11},\boldsymbol{\phi}_{d}^{12},\boldsymbol{\phi}_{d}^{22},\qquad d=-D,\ldots,D. \tag{53}\] In the case of periodic BCs we apply circular padding to the input channels of the CNN to connect both ends of the domain. Different BC types are discussed in Appendix C.

Figure 4: Learned SGS compression applied to Burgers' equation for \(N=1000\), with \(I=20\) and \(J=50\). By filtering and applying the SGS compression, the degrees of freedom of this system are effectively reduced from \(N=1000\) to \(2I=40\).

Although in principle the matrices \(\mathbf{K}_{ij}\) could be represented directly by matrices of the form (51), such a construction is not momentum-conserving. In the next subsection we propose an approach to express \(\mathbf{K}_{ij}\) in terms of \(\boldsymbol{\Phi}_{ij}\) which _is_ momentum conserving.

#### 3.3.1 Momentum-conserving transformation

Requiring momentum conservation for the extended system (35) leads to the following condition (see also (34)): \[\left(\begin{bmatrix}\mathbf{1}_{\Omega}\\ \mathbf{0}_{\Omega}\end{bmatrix},\mathbf{\Omega}_{2}^{-1}(\mathcal{K}-\mathcal{K}^{T})\bar{\mathbf{U}}\right)_{\Omega_{2}}=\mathbf{1}_{\Omega}^{T}(\mathbf{K}_{11}-\mathbf{K}_{11}^{T})\bar{\mathbf{u}}+\mathbf{1}_{\Omega}^{T}\mathbf{K}_{12}\mathbf{s}=0, \tag{54}\] such that we impose the following constraints on the \(\mathbf{K}\) matrices: \[\mathbf{1}_{\Omega}^{T}\mathbf{K}_{11}=\mathbf{1}_{\Omega}^{T}\mathbf{K}_{11}^{T}=\mathbf{1}_{\Omega}^{T}\mathbf{K}_{12}=\mathbf{0}_{\Omega}. \tag{55}\]
To satisfy conditions (55) we first define the linear operator \(\mathbf{B}\in\mathbb{R}^{I\times I}\) corresponding to the stencil \[(\mathbf{B}\mathbf{v})_{i}=\sum_{j=-B}^{B}b_{j}\mathrm{v}_{i+j}, \tag{56}\] with \(2B+1\) parameters \(b_{j}\) (\(j=-B,\ldots,B\)), applied to some vector \(\mathbf{v}\). In addition, we define the matrix \(\bar{\mathbf{B}}\in\mathbb{R}^{I\times I}\) whose elements are given by \[\bar{b}_{j}=b_{j}-\frac{1}{2B+1}\sum_{k=-B}^{B}b_{k}, \tag{57}\] corresponding to the stencil \[(\bar{\mathbf{B}}\mathbf{v})_{i}=\sum_{j=-B}^{B}\bar{b}_{j}\mathrm{v}_{i+j}. \tag{58}\] In the periodic case this matrix satisfies \[\mathbf{1}_{\Omega}^{T}\bar{\mathbf{B}}=\mathbf{1}_{\Omega}^{T}\bar{\mathbf{B}}^{T}=\mathbf{0}_{\Omega}, \tag{59}\] independent of the choice of the underlying parameters \(b_{j}\). A simple example of a matrix \(\bar{\mathbf{B}}\) that satisfies these conditions is the second-order finite difference representation of the first-order derivative: \(B=1\), \(\bar{b}_{-1}=-1/(2H)\), \(\bar{b}_{0}=0\), \(\bar{b}_{1}=1/(2H)\). Our framework allows for more general stencils, which are trained based on fine-grid simulations.

These \(\mathbf{B}\) matrices can be used to enforce momentum conservation on the \(\boldsymbol{\Phi}\) matrices by pre- and post-multiplication. This will be denoted by a superscript, e.g. \[\mathbf{K}_{12}=\boldsymbol{\Phi}_{12}^{\bar{\mathbf{B}}\mathbf{B}}=\bar{\mathbf{B}}_{1}^{\boldsymbol{\Phi}_{12}}\boldsymbol{\Phi}_{12}\mathbf{B}_{2}^{\boldsymbol{\Phi}_{12}}, \tag{60}\] such that \(\mathbf{1}_{\Omega}^{T}\mathbf{K}_{12}=0\) is satisfied. Note that satisfying this condition only requires a \(\bar{(.)}\) over the pre-multiplying \(\mathbf{B}\) matrix. The matrices \(\bar{\mathbf{B}}_{1}^{\boldsymbol{\Phi}_{12}},\mathbf{B}_{2}^{\boldsymbol{\Phi}_{12}}\in\mathbb{R}^{I\times I}\) each contain their own unique set of \(2B+1\) underlying parameters. The hyperparameter \(B\) is taken such that \(B\ll I/2\) to enforce sparsity and thus reduce computational costs. Similarly, \[\mathbf{K}_{11}=\boldsymbol{\Phi}_{11}^{\bar{\mathbf{B}}\bar{\mathbf{B}}}=\bar{\mathbf{B}}_{1}^{\boldsymbol{\Phi}_{11}}\boldsymbol{\Phi}_{11}\bar{\mathbf{B}}_{2}^{\boldsymbol{\Phi}_{11}}, \tag{61}\] such that the constraints \(\mathbf{1}_{\Omega}^{T}\mathbf{K}_{11}=\mathbf{1}_{\Omega}^{T}\mathbf{K}_{11}^{T}=0\) are met. The additional \(\mathbf{B}\) matrices of \(\mathbf{K}_{11}\) add another set of \(2(2B+1)\) parameters to the framework. The full matrix \(\mathcal{K}\) follows as \[\mathcal{K}=\begin{bmatrix}\boldsymbol{\Phi}_{11}^{\bar{\mathbf{B}}\bar{\mathbf{B}}}&\boldsymbol{\Phi}_{12}^{\bar{\mathbf{B}}\mathbf{B}}\\ \mathbf{0}&\boldsymbol{\Phi}_{22}^{\bar{\mathbf{B}}\bar{\mathbf{B}}}\end{bmatrix}, \tag{62}\] where we used a momentum-conserving matrix \(\bar{\mathbf{B}}\) where appropriate. We thus have \(6(2B+1)\) parameters that fully describe the \(\mathbf{B}\) matrices.

### Dissipative term \(\mathcal{Q}\)

In a similar fashion as for \(\mathcal{K}\) we decompose \(\mathcal{Q}\) as \[\mathcal{Q}=\begin{bmatrix}\mathbf{Q}_{11}&\mathbf{Q}_{12}\\ \mathbf{Q}_{21}&\mathbf{Q}_{22}\end{bmatrix}. \tag{63}\] As for the \(\mathcal{K}\) matrix, we do not represent the entire matrix by parameters but instead use the output channels of the CNN to represent the diagonals of the submatrices. However, in this case we only construct the main and the \(D\) upper diagonals. The reason for this will be explained later.
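Before completing the construction of \(\mathcal{Q}\), the momentum-conserving transformation of section 3.3.1 can be made concrete with a short sketch (assuming periodic BCs, so that \(\bar{\mathbf{B}}\) can be assembled as a circulant matrix; the parameter values and the helper `banded_periodic` are arbitrary illustrations). It shifts the stencil coefficients as in (57) and verifies the properties (59), independently of the parameters:

```julia
using LinearAlgebra

# Assemble a periodic banded matrix from stencil coefficients b_{-B},…,b_B,
# as in (56); illustrative helper assuming periodic BCs.
function banded_periodic(b, n)
    B = (length(b) - 1) ÷ 2
    M = zeros(n, n)
    for i in 1:n, (k, j) in enumerate(-B:B)
        M[i, mod1(i + j, n)] = b[k]
    end
    return M
end

b  = randn(3)                  # 2B+1 = 3 "trainable" parameters (B = 1)
b̄  = b .- sum(b) / length(b)   # shifted coefficients, eq. (57)
B̄  = banded_periodic(b̄, 8)

# Properties (59): column and row sums vanish for any parameter values b.
@show norm(B̄' * ones(8))  # 1ᵀB̄  = 0
@show norm(B̄ * ones(8))   # 1ᵀB̄ᵀ = 0
```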
The diagonals are again represented by CNN output channels \(\boldsymbol{\psi}^{ij}\in\mathbb{R}^{I}\) defining the matrix \(\boldsymbol{\Psi}_{ij}\in\mathbb{R}^{I\times I}\). The CNN of section 3.3 is thus extended and represents the mapping \[\text{CNN}:\bar{\mathbf{u}},\mathbf{s},f_{H}(\bar{\mathbf{u}})\mapsto\boldsymbol{\phi}_{d_{1}}^{11},\boldsymbol{\phi}_{d_{1}}^{12},\boldsymbol{\phi}_{d_{1}}^{22},\boldsymbol{\psi}_{d_{2}}^{11},\boldsymbol{\psi}_{d_{2}}^{12},\boldsymbol{\psi}_{d_{2}}^{21},\boldsymbol{\psi}_{d_{2}}^{22},\qquad d_{1}=-D,\ldots,D,\quad d_{2}=0,\ldots,D. \tag{64}\] The underlying CNN now consists of three input channels, an arbitrary number of hidden channels, and \(3(2D+1)+4(D+1)\) output channels. Again, as in the case of \(\boldsymbol{\Phi}\), a mapping is needed to make the \(\boldsymbol{\Psi}\) matrices momentum-conserving. Substituting decomposition (63) into the momentum conservation constraint (34) results in \[-\left(\begin{bmatrix}\mathbf{1}_{\Omega}\\ \mathbf{0}_{\Omega}\end{bmatrix},\mathbf{\Omega}_{2}^{-1}(\mathcal{Q}^{T}\mathcal{Q})\bar{\mathbf{U}}\right)_{\Omega_{2}}=-\mathbf{1}_{\Omega}^{T}(\mathbf{Q}_{11}^{T}\mathbf{Q}_{11}+\mathbf{Q}_{21}^{T}\mathbf{Q}_{21})\bar{\mathbf{u}}-\mathbf{1}_{\Omega}^{T}(\mathbf{Q}_{11}^{T}\mathbf{Q}_{12}+\mathbf{Q}_{21}^{T}\mathbf{Q}_{22})\mathbf{s}=0, \tag{65}\] leading to the constraints \[\mathbf{1}_{\Omega}^{T}\mathbf{Q}_{11}^{T}=\mathbf{1}_{\Omega}^{T}\mathbf{Q}_{21}^{T}=\mathbf{0}_{\Omega}. \tag{66}\] The matrix \(\mathcal{Q}\) that satisfies these constraints follows as \[\mathcal{Q}=\begin{bmatrix}\boldsymbol{\Psi}_{11}^{\mathbf{I}\bar{\mathbf{B}}}&\boldsymbol{\Psi}_{12}^{\mathbf{I}\mathbf{B}}\\ \boldsymbol{\Psi}_{21}^{\mathbf{I}\bar{\mathbf{B}}}&\boldsymbol{\Psi}_{22}^{\mathbf{I}\mathbf{B}}\end{bmatrix}, \tag{67}\] where we used a momentum-conserving matrix \(\bar{\mathbf{B}}\) where appropriate and replaced the pre-multiplying \(\mathbf{B}\) matrix by the identity matrix. The latter, in addition to only constructing the main and upper diagonals of the \(\boldsymbol{\Psi}\) matrices, ensures that the sparsity pattern of \(\mathcal{Q}^{T}\mathcal{Q}\) matches that of \(\mathcal{K}-\mathcal{K}^{T}\). With the addition of this dissipative term, all the \(\mathbf{B}\) matrices combined contain in total \(10(2B+1)\) parameters that are to be trained.

Figure 5: Example of a simulation of Burgers' equation with periodic BCs using our trained structure-preserving closure model for \(I=20\) (left), along with the DNS solution for \(N=1000\) (right).

An example application of the framework is shown in Figure 5, where we simulate Burgers' equation using our structure-preserving closure modeling framework and compare it to a direct numerical simulation (DNS). It is again interesting to see that \(\mathbf{s}\) is largest at the shocks, indicating the presence of significant SGS content there. When comparing the magnitude of the different terms in (35) (see Figure 6), we observe that the \(\mathcal{K}\) term, which is responsible for redistributing the energy, is the most important, and in fact more important than the coarse-grid discretization operator \(f_{H}(\bar{\mathbf{u}})\). In other words, our closure model has learned dynamics that are highly significant for correctly predicting the evolution of the filtered system.

### Forcing

Our proposed closure modeling framework allows for the presence of a forcing term \(\mathrm{F}_{i}(t)\approx F(\mathbf{x}_{i},t)\) in the RHS of our discretized PDE (3), with \(\mathbf{F}\in\mathbb{R}^{N}\).
As long as this term does not depend on the solution \(\mathbf{u}\), the forcing commutes with \(\mathbf{W}\). This means we can simply add \(\bar{\mathbf{F}}=\mathbf{W}\mathbf{F}\) to the RHS of (23) without any contribution to the closure term. In addition, we can account for its contribution to the evolution of \(\mathbf{s}\) by first computing its contribution \(\mathbf{F}^{\prime}\) to the evolution of the SGS content (see (24)) as \[\mathbf{F}^{\prime}:=\mathbf{F}-\mathbf{R}\bar{\mathbf{F}}. \tag{68}\] The contribution to the evolution of \(\mathbf{s}\) is then given by \(\mathbf{T}\mathbf{F}^{\prime}\), see (48). The full closure modeling framework is thus summarized by \[\frac{\mathrm{d}\bar{\mathbf{U}}}{\mathrm{d}t}=\mathcal{G}_{\mathbf{\Theta}}(\bar{\mathbf{U}}):=\begin{bmatrix}f_{H}(\bar{\mathbf{u}})\\ \mathbf{0}\end{bmatrix}+\mathbf{\Omega}_{2}^{-1}(\mathcal{K}-\mathcal{K}^{T})\bar{\mathbf{U}}-\mathbf{\Omega}_{2}^{-1}\mathcal{Q}^{T}\mathcal{Q}\bar{\mathbf{U}}+\mathbf{W}_{\mathbf{T}}\mathbf{F}, \tag{69}\] depending on parameters \(\mathbf{\Theta}\). Note that we separated the forcing from \(f_{H}\) (the RHS of the coarse discretization). In the results section we use a forcing term in some of the Burgers' equation simulations.

### Finding the optimal parameter values

The optimal parameter values \(\mathbf{\Theta}^{*}\), where \(\mathbf{\Theta}\) includes the weights of the CNN along with the parameters of the \(\mathbf{B}\) matrices, can be obtained numerically by minimizing \[\mathcal{L}(\mathcal{D};\mathbf{\Theta}):=\frac{1}{|\mathcal{D}|}\sum_{d\in\mathcal{D}}\frac{1}{2|\Omega|}||\mathcal{G}_{\mathbf{\Theta}}(\mathbf{W}_{\mathbf{T}}\mathbf{u}_{d})-\mathbf{W}_{\mathbf{T}}f_{h}(\mathbf{u}_{d})||_{\Omega_{2}}^{2} \tag{70}\] with respect to \(\mathbf{\Theta}\) for the training set \(\mathcal{D}\) containing \(|\mathcal{D}|\) samples.

Figure 6: Magnitude of each of the different terms present in (35), corresponding to the simulation in Figure 5.

We will refer to this approach as 'derivative fitting', as we minimize the residual between the predicted and the true RHS. In (70) the true RHS is obtained by applying \(\mathbf{W_{T}}\) to the fine-grid RHS \(f_{h}(\mathbf{u}_{d})\). The subscript \(d\) indicates a sample from the training set. We will combine this method with a different approach in which we directly optimize \(\mathbf{\Theta}\) such that the solution itself is accurately reproduced. To achieve this we minimize \[\mathcal{L}_{n}(\mathcal{D};\mathbf{\Theta}):=\frac{1}{|\mathcal{D}|}\sum_{d\in\mathcal{D}}\frac{1}{n}\sum_{i=1}^{n}\frac{1}{2|\Omega|}||\mathcal{S}^{i}_{\mathbf{\Theta}}(\mathbf{W_{T}}\mathbf{u}_{d})-\mathbf{W_{T}}\mathcal{S}^{i(\overline{\Delta t}/\Delta t)}(\mathbf{u}_{d})||^{2}_{\Omega_{2}}, \tag{71}\] where \(\mathcal{S}^{i}_{\mathbf{\Theta}}(\mathbf{W_{T}}\mathbf{u}_{d})\) represents the successive application of an explicit time integration scheme for \(i\) time steps, with step size \(\overline{\Delta t}\), starting from initial condition \(\mathbf{W_{T}}\mathbf{u}_{d}\), using the introduced closure model. The fine-grid counterpart is indicated by \(\mathcal{S}^{i(\overline{\Delta t}/\Delta t)}(\mathbf{u}_{d})\), with step size \(\Delta t\), starting from initial condition \(\mathbf{u}_{d}\). Note the appearance of the ratio \(\overline{\Delta t}/\Delta t\), as the coarser grid for \(\bar{\mathbf{u}}\) allows us to take larger time steps [34]. This further reduces the required computational resources.
We will refer to this method of finding the optimal parameters as 'trajectory fitting'. This approach has been shown to yield more accurate and stable closure models [14; 15; 21; 22; 23], as it also accounts for the time discretization error. In practice, we employ a hybrid approach in which we first use derivative fitting and subsequently continue with trajectory fitting, as the latter requires more computational effort.

## 4 Results

To test our closure modeling framework we consider the previously introduced Burgers' equation with \(\nu=0.01\) on the spatial domain \(\Omega=[0,2\pi]\) for two test cases: (i) periodic BCs without forcing and (ii) inflow/outflow (I/O) BCs with time-independent forcing. The implementation of BCs is discussed in Appendix C. We also consider a third test case: (iii) the KdV equation with \(\varepsilon=6\) and \(\mu=1\) on the spatial domain \(\Omega=[0,32]\) with periodic BCs. Parameter values for Burgers' and KdV are taken from [35]. Reference simulations are carried out on a uniform grid of \(N=1000\) for Burgers' and \(N=600\) for KdV, up to time \(t=T=10\). The data generated from these reference simulations is split into a training set and a validation set. The simulation conditions (initial conditions, BCs, and forcing) for training and testing purposes are generated randomly, as described in Appendix D. In addition, the construction of the training and validation sets, the training procedure, and the chosen hyperparameters are also described in Appendix D.

For the analysis, we will compare our structure-preserving framework (SP) to a vanilla CNN that models the closure term as \(\mathbf{c}(\mathbf{u})\approx\bar{\mathbf{Q}}\text{CNN}(\bar{\mathbf{u}},f_{H}(\bar{\mathbf{u}});\boldsymbol{\theta})\) (with parameters \(\boldsymbol{\theta}\)). Multiplication of the CNN output channel by the coarse-grid forward difference operator \(\bar{\mathbf{Q}}\) takes care of the momentum conservation condition (this has been shown to yield more accurate closure models [26]). The same trick is not applied for our SP closure, as it would destroy the derived evolution of the (approximated) total energy, see (42) and (43). Instead we resort to the described pre- and post-multiplication by the parameterized \(\mathbf{B}\) matrices to satisfy momentum conservation. Furthermore, we consider the no-closure (NC) case, i.e. \(\tilde{\mathbf{c}}=\mathbf{0}_{\Omega}\), which corresponds to a coarse-grid solution of the PDEs. To make a fair comparison we compare closure models with the same number of degrees of freedom (DOF). For SP we have \(\text{DOF}=2I\), as we obtain an additional set of \(I\) degrees of freedom corresponding to the addition of the SGS variables. For the CNN and NC we simply have \(\text{DOF}=I\).

To march the solution forward in time we employ an explicit RK4 scheme [30] with \(\overline{\Delta t}=0.01\) (\(4\times\) larger than for the DNS) for use cases (i) and (ii), and \(\overline{\Delta t}=5\times 10^{-3}\) (\(50\times\) larger than for the DNS) for use case (iii). The SP closure models contain in total 7607 parameters (an underlying CNN with two hidden layers of 30 channels each and a kernel size of 5) for use cases (i) and (ii), and 3905 parameters (two hidden layers of 20 channels each and a kernel size of 5) for use case (iii). The purely CNN-based closure models consist of 3261 parameters (two hidden layers of 20 channels each and a kernel size of 7) for every use case.
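For reference, one step of the explicit RK4 scheme used here takes the following generic form (a minimal sketch, with `rhs` standing in for the assembled closure-model right-hand side \(\mathcal{G}_{\mathbf{\Theta}}\) of (69); the function name is illustrative):

```julia
# One classical RK4 step for dŪ/dt = rhs(Ū).
function rk4_step(rhs, U, Δt)
    k1 = rhs(U)
    k2 = rhs(U .+ (Δt / 2) .* k1)
    k3 = rhs(U .+ (Δt / 2) .* k2)
    k4 = rhs(U .+ Δt .* k3)
    return U .+ (Δt / 6) .* (k1 .+ 2 .* k2 .+ 2 .* k3 .+ k4)
end
```

On the coarse grid this step is taken with \(\overline{\Delta t}\), i.e. \(4\times\) (Burgers') or \(50\times\) (KdV) larger than the DNS time step.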
These settings are based on the hyperparameter tuning procedure in Appendix D. In between hidden layers we employ the ReLU activation function, whereas we apply a linear activation function to the final layer, for both SP and the vanilla CNN. For SP we choose \(D=B=1\) for the construction of the \(\mathbf{B}\) and \(\mathbf{\Phi}/\mathbf{\Psi}\) matrices for use cases (i) and (ii), matching the width of the coarse discretization \(f_{H}(\bar{\mathbf{u}})\). For (iii) we do the same and therefore take \(D=B=2\). Note that the same set of compression matrices and closure models is used for (i) and (ii), as they both correspond to the same equation. These closure models are thus trained on a dataset containing both simulation conditions. As stated earlier, the model parameters are optimized by first derivative fitting and then trajectory fitting. This is specified in Appendix D. We implement our closure models in the Julia programming language [36] using the Flux.jl package [37; 38]. The code can be found at [https://github.com/tobyvg/ECNCM_1D](https://github.com/tobyvg/ECNCM_1D).

### Closure model performance

We first examine the performance of the trained closure models based on how well the filtered DNS solution is reproduced for cases (i)-(iii) and unseen simulation conditions. During our comparison we will make extensive use of the normalized root-mean-squared error (NRMSE) metric, defined as \[\text{NRMSE }\bar{\mathbf{u}}(t)=\sqrt{\frac{1}{|\Omega|}||\bar{\mathbf{u}}(t)-\bar{\mathbf{u}}^{\text{DNS}}(t)||_{\Omega}^{2}}, \tag{72}\] to compare the approximated solution \(\bar{\mathbf{u}}\) at time \(t\), living on the coarse grid, to the ground truth \(\bar{\mathbf{u}}^{\text{DNS}}\) obtained from the DNS. We will refer to this metric as the solution error. In addition, we define the integrated-NRMSE (I-NRMSE) as \[\text{I-NRMSE }\bar{\mathbf{u}}(t)=\frac{1}{t}\sum_{i}\overline{\Delta t}\text{ NRMSE }\bar{\mathbf{u}}(i\overline{\Delta t}),\qquad 0\leq i\overline{\Delta t}\leq t, \tag{73}\] such that the sum represents integrating the solution error in time. We will refer to this metric as the integrated solution error.

#### 4.1.1 Convergence

As we refine the resolution of the coarse grid, and with this increase the number of DOF, we expect convergence of both the compression error \(\mathcal{L}_{s}\) (defined in equation (49)) and the solution error. We consider \(\text{DOF}\in\{20,30,40,50,60,70,80,90,100\}\), each with a different set of trained closure models. If the fine-grid resolution \(N\) is not divisible by the coarse-grid resolution \(I\), we first project the fine-grid solution on a grid with a resolution that is divisible by \(I\) to generate reference data. This is necessary for constructing the spatial averaging filter (see section 2.3). In total 36 closure models are trained: two (SP and CNN) for each combination of the 9 considered coarse-grid resolutions and equation (Burgers' or KdV). Closure models corresponding to Burgers' equation are applied to both use case (i) periodic and (ii) I/O conditions. The SGS compression error evaluated over the validation set is shown in Figure 7. We observe monotonic convergence of the compression error as we refine the grid. We expect the compression error to converge further to zero until the exact solution is reached at \(\text{DOF}=N\) (\(J=2\)), see Appendix E. The faster convergence for the KdV equation is likely caused by the lower fine-grid resolution of \(N=600\), as opposed to \(N=1000\) for Burgers' equation.
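Before turning to the results, a minimal NumPy reference for the metrics (72) and (73) on a uniform grid (variable names are ours; for uniform steps the \(\overline{\Delta t}\) factors in (73) cancel, leaving a running mean):

```python
import numpy as np

def nrmse(u_bar, u_dns, h):
    # Solution error (72): sqrt( (1/|Omega|) * sum_i h * (u_i - u_i^DNS)^2 ),
    # where |Omega| = h * number of grid points.
    diff = u_bar - u_dns
    return np.sqrt(h * np.sum(diff**2) / (h * diff.size))

def integrated_nrmse(u_traj, u_dns_traj, h):
    # I-NRMSE (73): time average of the solution error over the stored
    # snapshots (uniform step size assumed, so dt_bar cancels).
    return np.mean([nrmse(u, ud, h) for u, ud in zip(u_traj, u_dns_traj)])
```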
Next, we look at the integrated solution error averaged over 20 simulations with unseen simulation conditions, generated as described in Appendix D, for each of the considered numbers of DOF, see Figure 8. For test cases (i) and (ii) we observe, for both SP and NC, almost monotonic convergence of the solution error as we increase the number of DOF in the simulation, with SP improving upon NC by roughly one order of magnitude. On the other hand, the solution error for the CNN behaves quite erratically: sometimes more accurate than SP, sometimes unstable (e.g. in case (ii) and DOF = 80, all 20 simulations were unstable), and sometimes less accurate than NC (case (i), \(\text{DOF}=90\)). For test case (iii) we find that for most numbers of DOF the CNN outperforms SP, while not resulting in stable closure models for \(\text{DOF}\in\{90,100\}\). Overall, the conclusion is that our proposed SP closure model leads to much more robust simulations while being on par in terms of accuracy with a CNN closure model. Furthermore, for the lower numbers of DOF we observe similar performance for SP and the CNN. From this we conclude that the compression error (see Figure 7) is likely not the limiting factor of the closure model performance.

#### 4.1.2 Consistency of the training procedure

It is important to note that the closure models trained in the previous section possess a degree of randomness, caused by the (random) initialization of the network weights and the random selection of the mini-batches. This can possibly lead to the irregular convergence behavior shown in the previous section. In order to evaluate this effect, we train 10 separate replica models for \(\text{DOF}=60\), which only differ in the random seed. The trained models are evaluated in terms of stability (number of unstable simulations) and integrated solution error. A simulation is considered unstable when it produces NaN values for \(\bar{\mathbf{u}}(t)\) (\(t\leq T\)). In total 20 simulations per closure model are carried out using the same simulation conditions as in the convergence study. The results are depicted in Figure 9. With regards to stability we observe that all trained SP closure models produced exclusively stable simulations. This is in accordance with the earlier derived stability conditions (42) and (43) for the periodic cases. In addition, for the non-periodic test case (ii) we also observe a clear stability advantage, as all of the trained SP closure models still produced only stable simulations with a consistently low integrated solution error. Regarding this integrated solution error, we observe that the SP closure models all perform very consistently (errors are almost overlapping). The CNNs sometimes outperform SP for test cases (i) and (iii), but also show very large outliers. This confirms our conclusion of the previous section that our SP closure models are much more robust than the CNNs, which can be 'hit or miss' depending on the randomness in the training procedure.

Figure 7: Convergence of the SGS compression error when refining the coarse grid, evaluated on the validation set for Burgers' equation (\(N=1000\)) and KdV equation (\(N=600\)).

Figure 8: Integrated solution error evaluated at \(T=10\) averaged over 20 simulations for the different use cases (i)-(iii) and an increasing number of DOF. Only stable simulations are considered for the depicted averages. Absence of a scatter point indicates no stable simulations.
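The stability bookkeeping used here reduces to a NaN check per rollout; a minimal sketch, with an assumed `rollout` helper that returns a trajectory and its integrated solution error:

```python
import numpy as np

def evaluate_ensemble(models, conditions, rollout):
    # For each trained replica: run all test simulations, count the
    # unstable ones (NaNs in the state) and average the integrated
    # solution error over the stable runs only, as in Figure 9.
    results = []
    for model in models:
        errors, unstable = [], 0
        for cond in conditions:
            traj, i_nrmse = rollout(model, cond)
            if np.any(np.isnan(traj)):
                unstable += 1
            else:
                errors.append(i_nrmse)
        results.append({"pct_unstable": 100.0 * unstable / len(conditions),
                        "mean_error": np.mean(errors) if errors else np.nan})
    return results
```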
#### 4.1.3 Error behavior in time

To further exemplify how structure preservation aids in robustness and accuracy we consider a single simulation of Burgers' equation with periodic BCs. We choose DOF = 90 (the value for which the CNN closure model performed poorly during the convergence study) and randomly select one of the simulations from the convergence study for the analysis. The resulting solution error trajectory and energy trajectories for this simulation are displayed in Figure 10. We find that the resolved energy for the CNN starts showing erratic behavior around the time the solution error surpasses that of NC. Around \(t=4\) the resolved energy even increases drastically. The other three methods show no increase in energy. This is in accordance with the derived evolution of the energy: equation (11) for NC and the DNS, and equation (42) for SP. From this we conclude that there is a clear stability and accuracy benefit to adhering to physical structure, as compared to using a standard CNN.

Figure 9: Integrated solution error evaluated at \(T=10\) averaged over 20 simulations and % of unstable simulations for each closure model in the trained ensemble of closure models (DOF = 60). Use cases (i)-(iii) are considered. For (ii) two CNN closure models produced 100% unstable simulations and are therefore omitted from the graph.

Figure 10: Solution error (left) and resolved energy (right) trajectories for a simulation of Burgers' equation with periodic BCs starting from an unseen initial condition. The presented results correspond to DOF = 90. For SP and the DNS the (approximated) total energy is displayed, as the SGS energy is small. These trajectories overlap for the entirety of the simulation.

### Structure preservation

To analyze how well the SP closure models adhere to physical structure we consider a single simulation of Burgers' and KdV with periodic BCs, i.e. use cases (i) and (iii), and unseen simulation conditions. For the purpose of this analysis we stick to closure models corresponding to \(\text{DOF}=40\).

#### 4.2.1 Burgers' equation

For Burgers' equation the results are depicted in Figure 11. With regards to momentum conservation we find that each of the considered closures preserves momentum within machine precision. NC and the DNS achieve this through a structure-preserving discretization, the CNN achieves this through the multiplication by the forward difference operator \(\bar{\mathbf{Q}}\), and the SP model through the construction of \(\mathcal{K}\) and \(\mathcal{Q}\). With regards to the energy, both the resolved energy \(\bar{E}_{h}\) as well as the (approximated) total energy \(E_{s}/E_{h}\) are considered. The first observation is that the energy of NC is strictly decreasing but remains at too high a level compared to the DNS, which is consistent with our analysis in Appendix B. For SP the approximated total energy is also always decreasing, as derived in (42), thus successfully mimicking the property that the total energy should be decreasing for viscous flows and periodic BCs, in the absence of forcing. Furthermore, when looking only at the resolved energy we find that SP nicely captures the back-and-forth energy transfer between the resolved and SGS energy, similar to the DNS result. This means that it successfully allows for backscatter, without sacrificing stability. The CNN is omitted from this analysis, as earlier we observed that it is not strictly dissipative, see Figure 10.
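The conservation checks behind Figures 11 and 12 amount to tracking discrete sums over the solution; a minimal sketch for a uniform grid with spacing `h` (helper names are ours):

```python
import numpy as np

def momentum(u, h):
    # Discrete momentum P_h = sum_i h * u_i.
    return h * np.sum(u)

def energy(u, h):
    # Discrete (resolved) energy E_h = (1/2) * sum_i h * u_i^2.
    return 0.5 * h * np.sum(u**2)

def drift(trajectory, h, quantity):
    # Change with respect to the initial state, e.g. P_h(t) - P_h(0),
    # which should stay at machine precision for a conserved quantity.
    vals = np.array([quantity(u, h) for u in trajectory])
    return vals - vals[0]
```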
#### 4.2.2 Korteweg-de Vries equation

Next, we study the KdV equation. With regards to momentum we observe that it is again conserved up to machine precision for each of the closures, see Figure 12. However, in contrast to Burgers' equation with viscosity, the total energy should now be exactly conserved. We mimic this by not including the dissipative \(\mathcal{Q}\) term in the SP closure model. We find that the approximated total energy is indeed conserved up to a time integration error, due to the use of an explicit RK4 integration scheme [30] instead of a structure-preserving time integration method such as implicit midpoint. This is done as implicit time integration schemes are incompatible with trajectory fitting. The energy error decreases with \(\mathcal{O}(\Delta t^{4})\) when the time step is decreased and is at machine precision for \(\overline{\Delta t}=10^{-4}\). Based on the results for the Burgers' and KdV equations, we conclude that our proposed SP closure model successfully achieves stability by mimicking the energy conservation law of the full system, while still allowing backscatter to be modelled correctly.

Figure 11: Change in momentum \(\Delta_{t}P_{h}=P_{h}(t)-P_{h}(0)\) (left) and evolution of resolved and total energy (right) for a simulation of Burgers' equation with periodic BCs starting from an unseen initial condition. The presented results correspond to \(\text{DOF}=40\).

### Extrapolation in space and time

As a final experiment we evaluate how well the closure models are capable of extrapolating in space and time. We consider the KdV equation on an extended spatial domain \(\Omega=[0,96]\), which is three times the size of the domain in the training data, and run the simulation until \(T=50\) (five times longer than present in the training data). As closure models, we use the ones trained during the convergence study that correspond to the grid-spacing of the employed grid. The resulting DNS (\(N=3\times 600\)) and the absolute error (AE) for the NC, CNN, and SP simulations (\(\mathrm{DOF}=3\times 40\)) are shown in Figure 13. We observe that SP and the CNN both improve upon NC in the earlier stages of the simulation (\(t\leq 20\)), but less so for longer time spans. However, since the absolute error is sensitive to small translations in the solution (as observed in the later stages of the simulation), we require a more thorough analysis to further compare the two machine learning-based closure models.

For this purpose we first look at the trajectory of the resolved energy. This is presented in Figure 14. We find that for SP the resolved energy (in black) stays in close proximity to its corresponding filtered DNS simulation (in green). This is in contrast to the CNN (in red), which starts to diverge from the DNS (in brown) around \(t=5\). The resolved energy for the CNN also exceeds the maximum allowed total energy \(E_{h}\) (in orange) at different points in the simulation, which is unphysical. We thus conclude that adding the SGS variables and conserving the total energy helps with capturing the delicate energy balance between resolved and SGS energy that characterizes the DNS. It is also interesting to note that NC conserves the resolved energy, as the coarse discretization conserves the discrete energy. However, this is not desired, as the resolved energy is not a conserved quantity, see Figure 3. To make a more quantitative analysis of this phenomenon we investigate the trajectory of the solution error and the Gaussian kernel density estimate (KDE) [39] of the resolved energy distribution, for both the CNN and SP.
The latter analysis is carried out to assess whether the closure models capture the correct energy balance between the resolved and SGS energy. The results for \(\mathrm{DOF}\in\{40,60,80\}\) are depicted in Figure 15. Looking at the solution error trajectories we find that at the earlier stages of the simulation the CNN outperforms SP (for \(\mathrm{DOF}=60\) and \(\mathrm{DOF}=80\)). However, SP slowly overtakes the CNN past the training region (\(t\leq 10\)). For \(\mathrm{DOF}=40\), SP outperforms the CNN roughly throughout the entire simulation. With regards to the resolved energy distribution we find that for each of the considered numbers of DOF, SP is capable of reproducing the DNS distribution. On the other hand, the CNN closure models struggle to capture this distribution. For \(\mathrm{DOF}=40\) a significant part of the distribution even exceeds the total energy present in the DNS, i.e. there occurs a nonphysical influx of energy. From this we conclude that both the SP and CNN closure models are capable of extrapolating beyond the training data. However, the fact that SP is capable of correctly capturing the energy balance between the resolved and unresolved scales allows it to more accurately capture the statistics of the DNS results. This in turn leads to more robust long-term solution error behavior.

Figure 12: Change in momentum \(\Delta_{t}P_{h}=P_{h}(t)-P_{h}(0)\) (left) and change in (approximated) total energy \(\Delta_{t}E_{s/h}=E_{s/h}(t)-E_{s/h}(0)\) (right) for a simulation of the KdV equation with periodic BCs starting from an unseen initial condition. The presented results correspond to \(\mathrm{DOF}=40\).

## 5 Conclusion

In this paper we proposed a novel way of constructing machine learning-based closure models in a structure-preserving fashion by taking the 'discretize first and filter next' approach. We started off by applying a spatial averaging filter to a fine-grid discretization and writing the resulting filtered system in closure model form, where the closure term requires modeling. Next, we showed that by applying the filter we effectively remove part of the energy. We then introduced a linear compression of the subgrid-scale (SGS) content into a set of SGS variables living on the coarse grid. These SGS variables serve as a means of reintroducing the removed energy back into the system, allowing us to use the concept of kinetic energy conservation. In turn we introduced an extended system of equations that models the evolution of the filtered solution as well as the evolution of the compressed SGS variables. For this extended system we propose a structure-preserving closure modeling framework that allows for energy exchange between the filtered solution and the SGS variables, in addition to dissipation. This framework serves to constrain the underlying convolutional neural network (CNN) such that no additional energy enters the system for periodic boundary conditions (BCs). In this way we achieve stability by abiding by the underlying energy conservation law, while still allowing for backscatter through the energy present in the SGS variables. The framework is constructed such that momentum conservation is also satisfied.

Figure 13: Absolute errors for the simulations produced by the NC, CNN, and SP closures, as well as the DNS solution, for solving the KdV equation on an extended spatial domain \(\Omega=[0,96]\) and temporal domain \(t\in[0,50]\). The grid resolutions correspond to \(\text{DOF}=3\times 40\) for the closure models and \(N=3\times 600\) for the DNS. The area enclosed within the dashed lines indicates the size of the domain used for training.
A convergence study showed that the learned SGS variables are able to accurately match the original SGS energy content, with accuracy consistently improving when refining the coarse-grid resolution. Given the SGS compression operator, our proposed structure-preserving framework (SP) was compared to a vanilla CNN (adapted to be momentum-conserving). Overall, the SP method performed on par with the CNN in terms of accuracy, _provided that the CNN produced stable results_. However, the results for the CNN were typically inconsistent, not showing clear convergence of the integrated solution error upon increasing the degrees of freedom, in addition to suffering from stability issues. On the other hand, our SP method produced stable results in all cases, while also consistently improving upon the 'no closure model' results by roughly an order of magnitude in terms of the integrated solution error. This conclusion was further strengthened by training an ensemble of closure models, where we investigated the consistency of the closure model performance with respect to the randomness inherent in the neural network training procedure. We observed that the trained vanilla CNNs differed significantly in performance and stability, whereas the different SP models performed very similarly to each other and displayed no stability issues. Our SP model is therefore more robust and successfully resolves the stability issues that plague conventional CNNs.

Our numerical experiments confirmed the structure-preserving properties of our method: exact momentum conservation, energy conservation (in the absence of dissipation) up to a time discretization error, and strict energy decrease in the presence of dissipation. We also showed that our method succeeds in accurately modeling backscatter. Furthermore, when extrapolating in space and time, the advantage of including the SGS variables and embedding structure-preserving properties became even more apparent: our method is much better at capturing the delicate energy balance between the resolved and SGS energy. This in turn yielded better long-term error behavior. Based on these results we conclude that including the SGS variables, as well as adherence to the underlying energy conservation law, has the important advantages of stability and long-term accuracy, in addition to consistent performance. This work therefore serves as an important starting point for building physical constraints into machine learning-based turbulence closure models. In the future we aim to apply our SP framework to the Navier-Stokes equations in 2D and 3D, locally modeling the turbulent kinetic energy by a set of SGS variables. More generally, our framework is potentially applicable to a wide range of systems that possess multiscale behavior while also possessing a secondary conservation law, for example incompressible pipe flow [40] and the streamfunction-vorticity formulation of Navier-Stokes in 2D [41].

Figure 14: Trajectory of the resolved energy \(\bar{E}_{h}\) for the simulation presented in Figure 13 for each of the different models corresponding to DOF = 40. The DNS resolved energy is depicted for both \(I=\) DOF (to compare with the CNN) and \(I=\) DOF/2 (to compare with SP).

**CRediT authorship contribution**
**T. van Gastelen:** Conceptualization, Methodology, Software, Writing - original draft. **W. Edeling:** Writing - review & editing. **B. Sanderse:** Conceptualization, Methodology, Writing - review & editing, Funding acquisition.

**Data availability**

The code used to generate the training data and the implementation of the neural networks can be found at [https://github.com/tobyvg/ECNCM_1D](https://github.com/tobyvg/ECNCM_1D).

**Acknowledgements**

This publication is part of the project "Unraveling Neural Networks with Structure-Preserving Computing" (with project number OCENW.GROOT.2019.044 of the research programme NWO XL, which is financed by the Dutch Research Council (NWO)). In addition, part of this publication is funded by Eindhoven University of Technology.

Figure 15: Solution error trajectory (top) and KDEs estimating the distribution of \(\bar{E}_{h}\) (bottom) for the trained closure models corresponding to different numbers of DOF. These quantities are computed for a simulation of the KdV equation with the same initial condition on the extended spatial and temporal domain. In the top row the vertical black line indicates the maximum time present in the training data, while in the bottom row it indicates the total energy of the DNS (which should not be exceeded). The DNS resolved energy is again depicted for both \(I=\) DOF (to compare with the CNN) and \(I=\) DOF/2 (to compare with SP).
2309.06577
Efficient Finite Initialization for Tensorized Neural Networks
We present a novel method for initializing layers of tensorized neural networks in a way that avoids the explosion of the parameters of the matrix it emulates. The method is intended for layers with a high number of nodes in which there is a connection to the input or output of all or most of the nodes, in which we cannot or do not want to store/calculate all the elements of the represented layer, and whose elements follow a smooth distribution. This method is equally applicable to normalizing general tensor networks in which we want to avoid overflows. The core of this method is the use of the Frobenius norm and the partial linear entrywise norm of reduced forms of the layer in an iterative partial form, so that it has to be finite and within a certain range. These norms are efficient to compute, fully or partially, for most cases of interest. In addition, the method benefits from the reuse of intermediate calculations. We apply the method to different layers and check its performance. We create a Python function to run it on an arbitrary layer, available in a Jupyter Notebook in the i3BQuantum repository: https://github.com/i3BQuantumTeam/Q4Real/blob/e07c827651ef16bcf74590ab965ea3985143f891/Quantum-Inspired%20Variational%20Methods/TN_Normalizer.ipynb
Alejandro Mata Ali, Iñigo Perez Delgado, Marina Ristol Roura, Aitor Moreno Fdez. de Leceta
2023-09-11T08:05:09Z
http://arxiv.org/abs/2309.06577v3
# Efficient Finite Initialization for Tensorized Neural Networks

###### Abstract

We present a novel method for initializing layers of tensorized neural networks in a way that avoids the explosion of the parameters of the matrix it emulates. The method is intended for layers with a high number of nodes in which there is a connection to the input or output of all or most of the nodes. The core of this method is the use of the Frobenius norm of this layer in an iterative partial form, so that it has to be finite and within a certain range. This norm is efficient to compute, fully or partially, for most cases of interest. We apply the method to different layers and check its performance. We create a Python function to run it on an arbitrary layer, available in a Jupyter Notebook in the i3BQuantum repository: github.com/i3BQuantumTeam/Q4Real

## 1 Introduction

Deep neural networks in the world of machine learning are widely used to obtain good results for use cases in industry, research and various fields. However, for highly complex cases we have the problem of needing a large number of parameters, with very large layers of neurons, which can be seen as having to apply very large matrices. In the literature it has been extensively studied how to reduce the number of parameters in various ways, such as decomposing matrices into tensor networks [1] or directly training the tensor network itself [2][3][4] (Fig. 1). Our focus of analysis will be on methods where a tensor network is generated to model the layer tensor and trained directly, rather than using a full matrix. For example, when we try to use tensorized physics-informed neural networks to solve differential equations for big industrial cases, such as the heat equation of an engine or fluids in a turbine. In this case the initialization problem is often encountered, which we will see in the next section. If we initialize the elements of each tensor with a certain distribution, when we contract the tensor network to obtain the tensor it represents, some of its elements are too large (infinite) or too small (null) for the computer. We want to eliminate precisely these problems.

A first proposal could be to contract the tensor network and eliminate these elements. However, in certain very large layers we cannot store all the tensor elements in memory, so we need another way. One way is to re-initialize the tensor network, changing the distribution to one with better hyperparameters, i.e. changing the mean and standard deviation. Nevertheless, many of these methodologies are not easy to apply or are not efficient at all. Our method consists of iteratively calculating the Frobenius norm for different sections of the tensor network until a condition is met, at which point we divide all the parameters of the tensor network by the calculated factor in a particular way. This allows us to gradually make the Frobenius norm of the layer tend to the number we want, without having to repeatedly re-initialize. This method is remarkably interesting for hierarchical tree-form layers, especially in Tensor Train (TT), Tensor Train Matrix (TT-M) and Projected Entangled Pair States (PEPS) form. It can also be used in other methods with tensor networks, such as combinatorial optimization, to determine hyperparameters, and it can be combined with other initialization methods.

Figure 1: Arbitrary tensor network layer.
## 2 Description of the problem

When we have a tensor network of \(N\) nodes, the elements of the tensor representing the tensor network are given by the sum of a set of values, each given by the product of \(N\) elements of the different nodes. If we look at the case of a TT layer, as in Fig. 2.a, the elements of the layer are given as \[T_{ijklm}=\sum_{nopq}T_{in}^{0}T_{njo}^{1}T_{okp}^{2}T_{plq}^{3}T_{qm}^{4}. \tag{1}\] We see that for 5 indices in the tensor we have to multiply 5 tensor elements, but in the general case with \(N\) indices we have \[T_{i_{0}i_{1}\ldots i_{N-1}}=\sum_{j_{0}j_{1}\cdots j_{N-2}}T_{i_{0}j_{0}}^{0}T_{j_{0}i_{1}j_{1}}^{1}\ldots T_{j_{N-2}i_{N-1}}^{N-1}, \tag{2}\] multiplying \(N\) elements of the tensors \(T^{i}\) to obtain the element of the tensor \(T\).

To exemplify the problem we want to solve, consider the following. For a general case with bond dimension \(b\) (the dimension of the index that is contracted between each pair of nodes), \(N\) nodes and constant node elements \(a\), we would have \[T_{i_{0}i_{1}\ldots i_{N-1}}=a^{N}b^{N-1}. \tag{3}\] We can see that with 20 nodes, an element value of 1.5 and a link dimension of 10, the final tensor elements would be \(3.3\times 10^{22}\). This is a very large element for a good initialization. However, if we were to divide the values of these tensors by that number, we could arrive at a case where \(a^{N}\) was a number too small for our computer to store and we would get a 0 in all the elements. This problem is exacerbated by the number of nodes in the layer, since each element is a product of \(N\) factors. Moreover, we cannot simply calculate these tensor elements for cases with many physical indices, the output indices. This is because the number of values to be held in memory increases exponentially with the number of indices.

## 3 Tensor network initialization protocol

Our protocol is based on the use of partial Frobenius norms in order to normalize the total Frobenius norm of the resulting tensor. The Frobenius norm of a matrix is given by the equation \[||A||_{F}=\sqrt{\sum_{ij}|a_{ij}|^{2}}=\sqrt{\mathrm{Tr}(A^{\dagger}A)}. \tag{4}\] In a tensor network, this corresponds to contracting the layer with a copy of itself, so that each physical index is connected to the equivalent index of its copy. We can see some examples in Fig. 3. The contraction of this tensor network is equivalent to the Frobenius norm of the matrix it represents. In addition, it can be computed without the need to calculate the elements of the represented matrix, using only the elements of the nodes.

The Frobenius norm is an indicator that serves to regularize layers of a model [5] and gives an estimate of the order of magnitude of the matrix elements. This can be seen in that, with a more or less homogeneous distribution of elements, the norm in Eq. (4) will be of the order of \(\sqrt{nm}\ a_{00}\) for an \(n\times m\) matrix. To avoid the elements of the layer being too big or too small, and therefore having too big or too small outputs at initialization, we will normalize these elements so that the norm of the tensor is a number that we choose, for example 1. This prevents our highest element from being higher than this value, while taking advantage of the localized distribution of values to ensure that the smallest value is not too small. Still, for an \(n\times m\) matrix we will be summing \(nm\) values, so we should adjust the norm to be proportional to \(nm\), the size of the problem, and thus not decrease the magnitude of the values with it. For this purpose we define what we will call the partial square norm of the tensor network.

Figure 2: a) Tensor Train layer with 5 indices. b) Tensor Train Matrix layer with 10 indices. c) PEPS layer with 9 indices.

Figure 3: Square of the Frobenius norm calculated for a) a Tensor Train layer, b) a Tensor Train Matrix layer, c) a PEPS layer.
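Before turning to the partial norms, here is a minimal NumPy sketch of the full-norm contraction of Fig. 3a for a TT layer; the uniform core layout, with dummy boundary bond dimensions of 1, is our own convention and is not taken from the paper's notebook.

```python
import numpy as np

def tt_frobenius_norm(cores):
    # Each core has shape (b_left, p, b_right); boundary bond dims are 1.
    # We contract the TT with a copy of itself over the physical indices,
    # never forming the full tensor of p**N elements.
    E = np.eye(cores[0].shape[0])                  # 1x1 environment at the left edge
    for G in cores:
        E = np.einsum('ab,apc,bpd->cd', E, G, G)   # absorb one pair of cores
    return np.sqrt(np.trace(E))

# Example: 20 nodes, physical dimension 2, bond dimension 10 -- far too
# large to contract naively, but the norm is cheap to evaluate.
rng = np.random.default_rng(0)
dims = [1] + [10] * 19 + [1]
cores = [rng.normal(1.0, 0.5, (dims[k], 2, dims[k + 1])) for k in range(20)]
print(tt_frobenius_norm(cores))
```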
### Partial square norm of the tensor network

Throughout this section we will assume that we can consistently sort the nodes of a tensor network so that they form a single growing chain. \({}^{p}||\mathcal{A}||_{n,N}\), the partial square norm at \(n\) nodes of a tensor network \(\mathcal{A}\) with \(N\) nodes, will be defined as the square norm of the tensor network \(\mathcal{A}_{n}\) defined by the first \(n\) nodes of \(\mathcal{A}\). To get an idea of what this partial square norm is, we will exemplify it with a simple case, a tensor train layer. We will consider the tensor network in Fig. 4, whose nodes are sorted. As we can see, in this case we only have to follow the same process as when calculating the total norm of the tensor network, but stopping at step \(n\) and contracting the bond index of the two final tensors of the chain. Figs. 5 and 6 show what the partial square norm looks like for a TT-Matrix layer and for a PEPS layer. This calculation can be extended to general tensor networks easily, as long as we have a consistent ordering of the nodes.

Figure 4: a) Tensor Train layer with 5 nodes. b) Partial square norm at 1 node. c) Partial square norm at 2 nodes. d) Partial square norm at 3 nodes.

Figure 5: a) Tensor Train Matrix layer with 5 nodes. b) Partial square norm at 1 node. c) Partial square norm at 2 nodes. d) Partial square norm at 3 nodes.

### Initialization protocol

If we have a tensor network \(\mathcal{A}\), representing an \(n_{A}\times m_{A}\) matrix, whose Frobenius norm \(||\mathcal{A}||_{F}\) is infinite, zero or outside a certain range of values, we will want to normalize the elements of our tensor network so that the norm \(||\mathcal{B}||_{F}\) of the new tensor network \(\mathcal{B}\) is equal to a certain number. From here on we will assume it is \(F=n_{A}m_{A}\), but we will see that another number can easily be chosen. To normalize the norm of the \(\mathcal{A}\) tensor with \(N\) nodes, we only have to divide the elements of each of its nodes by \(||\mathcal{A}||_{F}^{1/N}\). Since we cannot divide the elements by 0 or infinity, we use the following logic. If the total norm is infinite (zero), there will exist a partial square norm of \(n\) nodes whose value is finite and non-zero such that the partial square norm of \(n+1\) nodes is infinite (zero). This is because at each step we add a new node to the partial square norm, we multiply by a new value, so infinity (zero) will appear after a certain number of nodes, and the partial square norm with one node less is a valid number to divide by. The idea is to iteratively normalize the norm little by little so that we eventually achieve full normalization.

We want a tensor network \(\mathcal{B}\) with Frobenius norm \(F\), with \(N\) nodes, and we set as tolerance range \((a,b)\). The protocol to follow would be:

1. We initialize the node tensors with some initialization method.
   We recommend random initialization with a Gaussian distribution with a constant standard deviation (not greater than 0.5) and a positive constant mean that is neither too high nor too low.

2. We calculate the norm \(||\mathcal{A}||_{F}\). If it is finite and non-zero, we divide each element of each node by \(\left(\frac{||\mathcal{A}||_{F}}{F}\right)^{1/N}\) and we have the \(\mathcal{B}\) we want. Otherwise, we continue.

3. We calculate \({}^{p}||\mathcal{A}||_{1,N}\), the partial square norm for 1 node of \(\mathcal{A}\).
   1. If it is infinite, we divide each element of the nodes of \(\mathcal{A}\) by \((10(1+\xi))^{1/2N}\), where \(\xi\) is a random number between 0 and 1, and we return to step 2.
   2. If it is zero, we divide each element of the nodes of \(\mathcal{A}\) by \((0.1/(1+\xi))^{1/2N}\) and we return to step 2.
   3. Otherwise, we save this value as \({}^{p}||\mathcal{A}||_{1,N}\) and continue.

4. For \(n\in[2,N-1]\) we calculate \({}^{p}||\mathcal{A}||_{n,N}\), the partial square norm for \(n\) nodes of \(\mathcal{A}\).
   1. If it is infinite or zero, we divide each element of the nodes of \(\mathcal{A}\) by \(({}^{p}||\mathcal{A}||_{n-1,N})^{\frac{1}{2N}}\) and we return to step 2.
   2. If it is finite, but bigger than \(b\) or smaller than \(a\), we divide each element of the nodes of \(\mathcal{A}\) by \(({}^{p}||\mathcal{A}||_{n,N})^{\frac{1}{2N}}\) and we return to step 2.
   3. Otherwise, we continue.

5. If no partial square norm is out of range, infinite or zero, we divide each element of the nodes of \(\mathcal{A}\) by \(({}^{p}||\mathcal{A}||_{N-1,N})^{\frac{1}{2N}}\) and we return to step 2.

We repeat the cycle until we reach a stop condition, which is a maximum number of iterations. If we reach that point, the protocol has failed and we have two options. The first is to change the order of the nodes, so that other structures are checked. The second is to reinitialize the nodes with other hyperparameters. The purpose of using a random factor in case of divergence of the partial norm with 1 node is that, not knowing the real value by which we should divide, we rescale by an order of magnitude. However, to avoid possible infinite rescaling loops, we add a variability factor so that we cannot get stuck. A code sketch of the full protocol is given below, after the results.

## 4 Results

We ran the initialization with TT and TT-Matrix layers of different sizes and physical dimensions \(p\), and checked how many steps were needed for normalization. We use a value of 1 for the mean and a value of 0.5 for the standard deviation. For the TT layer we choose \(F=p^{2N}\) and for the TT-Matrix layer we choose \(F=p^{N}\). For the TT layer this is the number of elements we have, and for the TT-Matrix layer it is its root, a number we take for convergence purposes. Our tolerance range is \((F\ 10^{-3},F\ 10^{3})\). We can see in Figs. 7, 8 and 9 the results for the TT layer and for the TT-Matrix layer. We can see in Fig. 7 that the scaling with \(N\) is linear for different \(p\). Fig. 8 shows that the scaling is logarithmic with \(p\), similar to the scaling with \(b\) in Fig. 9. In all cases, the TT-Matrix layer normalization requires more steps than the TT layer normalization.

Figure 6: a) PEPS layer with 9 nodes. b) Partial square norm at 1 node. c) Partial square norm at 2 nodes. d) Partial square norm at 5 nodes.
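Condensing the protocol into code, the following is a minimal NumPy sketch for the TT layout of the earlier norm sketch; the handling of the tolerance range relative to \(F\) follows the results section, and all helper names are ours, not the repository's.

```python
import numpy as np

def partial_square_norm(cores, n):
    # {}^p||A||_{n,N}: square norm of the sub-network formed by the first n
    # cores, with the remaining open bond index traced out (Fig. 4).
    E = np.eye(cores[0].shape[0])
    for G in cores[:n]:
        E = np.einsum('ab,apc,bpd->cd', E, G, G)
    return np.trace(E)

def normalize(cores, F, lo, hi, max_iters=500, rng=None):
    # Target norm F with tolerance range (lo, hi) for the partial square
    # norms, e.g. lo, hi = F * 1e-3, F * 1e3 as in the results section.
    rng = rng or np.random.default_rng()
    N = len(cores)
    scale = lambda cs, f: [G / f for G in cs]          # divide every node by f
    for _ in range(max_iters):
        norm = np.sqrt(partial_square_norm(cores, N))  # full Frobenius norm
        if np.isfinite(norm) and norm > 0.0:           # step 2: done
            return scale(cores, (norm / F) ** (1.0 / N))
        for n in range(1, N):
            p = partial_square_norm(cores, n)
            if not np.isfinite(p) or p == 0.0:         # steps 3 and 4.1
                if n == 1:
                    f = 10.0 * (1 + rng.random()) if p > 0 else 0.1 / (1 + rng.random())
                else:
                    f = partial_square_norm(cores, n - 1)
                cores = scale(cores, f ** (1.0 / (2 * N)))
                break
            if not lo <= p <= hi:                      # step 4.2: out of range
                cores = scale(cores, p ** (1.0 / (2 * N)))
                break
        else:                                          # step 5: all norms fine
            cores = scale(cores, partial_square_norm(cores, N - 1) ** (1.0 / (2 * N)))
    raise RuntimeError("protocol failed: reorder the nodes or re-initialize")
```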
## 5 Other applications

So far we have seen the application to tensorized neural networks, but this method can be useful in other settings. Whenever we have to contract a tensor network in which the non-zero elements of the constituent tensors are of the same order of magnitude, we can apply this method. This can be helpful in cases where we do not need the absolute scale of the final tensor elements, but only the relative scale between them. An example would be the simulation of imaginary time evolution processes, where we want to see which state has minimum energy rather than the energy itself. However, this energy could be recovered if, when performing the method, we save the scale factor by which we are multiplying the elements of the tensor network and multiply the values of the resulting tensor network by this factor. This can be interesting because the different factors can be multiplied while keeping track of their orders of magnitude separately, so that we do not have overflows.

## 6 Conclusions

We have developed a method to successfully initialize a layer of tensorized neural networks by using partial computations of their Frobenius norms. We have also applied it to different layers and studied its scaling. A possible future line of research could be to investigate how to reduce the number of steps to be performed. Another could be to study the scaling of complexity with increasing size for each of the different types of existing layers. We could also apply it to the methods mentioned in Sec. 5, for example in combinatorial optimization [6], to determine the appropriate decay factor.

## Acknowledgement

The research leading to this paper has received funding from the Q4Real project (Quantum Computing for Real Industries), HAZITEK 2022, no. ZE-2022/00033.
2301.00056
A Bayesian Neural Network Approach to identify Stars and AGNs observed by XMM Newton
In today's era, a tremendous amount of data is generated by different observatories, and manual classification of this data is practically impossible. Hence, multiple machine learning and deep learning techniques are used to classify and categorize the objects. However, the resulting predictions are overconfident and cannot identify whether the data actually belongs to a trained class. To solve this major problem of overconfidence, in this study we propose a novel Bayesian Neural Network which randomly samples weights from a distribution, as opposed to the fixed weight vector considered in the frequentist approach. The study involves the classification of Stars and AGNs observed by XMM Newton. However, for testing purposes, we also consider CV, Pulsars, ULX, and LMX along with Stars and AGNs, which the algorithm largely refuses to predict, as opposed to the frequentist approaches wherein these objects are predicted as either Stars or AGNs. The proposed algorithm is one of the first instances wherein Bayesian Neural Networks are used in observational astronomy. Additionally, we also apply our algorithm to identify Stars and AGNs in the whole XMM-Newton DR11 catalogue. The algorithm identifies 62807 data points as AGNs and 88107 data points as Stars with enough confidence. In all other cases, the algorithm refuses to make predictions due to high uncertainty and hence reduces the error rate.
Sarvesh Gharat, Bhaskar Bose
2022-12-30T21:29:50Z
http://arxiv.org/abs/2301.00056v1
# A Bayesian Neural Network Approach to identify Stars and AGNs observed by XMM Newton

###### Abstract

In today's era, a tremendous amount of data is generated by different observatories, and manual classification of this data is practically impossible. Hence, multiple machine learning and deep learning techniques are used to classify and categorize the objects. However, the resulting predictions are overconfident and cannot identify whether the data actually belongs to a trained class. To solve this major problem of overconfidence, in this study we propose a novel Bayesian Neural Network which randomly samples weights from a distribution, as opposed to the fixed weight vector considered in the frequentist approach. The study involves the classification of Stars and AGNs observed by XMM Newton. However, for testing purposes, we also consider CV, Pulsars, ULX, and LMX along with Stars and AGNs, which the algorithm largely refuses to predict, as opposed to the frequentist approaches wherein these objects are predicted as either Stars or AGNs. The proposed algorithm is one of the first instances wherein Bayesian Neural Networks are used in observational astronomy. Additionally, we also apply our algorithm to identify Stars and AGNs in the whole XMM-Newton DR11 catalogue. The algorithm identifies 62807 data points as AGNs and 88107 data points as Stars with enough confidence. In all other cases, the algorithm refuses to make predictions due to high uncertainty and hence reduces the error rate.

keywords: methods: data analysis - methods: observational - methods: miscellaneous

## 1 Introduction

Over the last few decades, a large amount of data has been regularly generated by different observatories and surveys. The classification of this enormous amount of data by professional astronomers is time-consuming as well as practically impossible. To make the process simpler, various citizen science projects (Desjardins et al., 2021) (Cobb, 2021) (Allf et al., 2022) (Faherty et al., 2021) have been introduced, which have been reducing the required time to some extent. However, there are many instances wherein classifying the objects won't be simple and may require domain expertise.

In this modern era, wherein Machine Learning and Neural Networks are widely used in multiple fields, there has been significant development in the use of these algorithms in Astronomy. Though these algorithms are accurate with their predictions, there is certainly some overconfidence (Kristiadi et al., 2020) (Kristiadi et al., 2021) associated with them. Besides that, these algorithms tend to classify every input as one of the trained classes (Beaumont and Haziza, 2022) irrespective of whether it actually belongs to those trained classes; e.g., an algorithm trained to classify stars will also predict AGNs as stars. To solve this major issue, in this study we propose a Bayesian Neural Network (Jospin et al., 2022) (Charnock et al., 2022) which refuses to make a prediction whenever it isn't confident about its predictions. The proposed algorithm is implemented on the data collected by XMM-Newton (Jansen et al., 2001). We do a binary classification to classify Stars and AGNs (Malek et al., 2013) (Golob et al., 2021). Additionally, to test our algorithm with inputs which don't belong to the trained classes, we consider data observed from CV, Pulsars, ULX, and LMX.
Although the algorithm doesn't refuse to predict all of these objects, the number of objects it predicts for these 4 classes is much smaller than for the trained classes. For the trained classes, the algorithm gives its predictions for almost 64% of the data points and avoids predicting the output whenever it is not confident about its predictions. The achieved accuracy in this binary classification task, whenever the algorithm gives its prediction, is 98.41%. On the other hand, only 14.6% of the incorrect data points are predicted as one of the classes by the algorithm. The percentage decrease from 100% to 14.6% in the case of unfamiliar inputs is what gives our model an advantage over other frequentist algorithms.

## 2 Methodology

In this section, we discuss the methodology used to perform this study. This section is divided into the following subsections.

* Data Collection and Feature Extraction
* Model Architecture
* Training and Testing

### Data Collection and Feature Extraction

In this study, we make use of data provided in "XMM-DR11 SEDs" Webb et al. (2020). We further cross-match the collected data with different VizieR (Ochsenbein et al., 2000) catalogs. Please refer to Table 1 to view all the catalogs used in this study. As the proposed algorithm is a "supervised Bayesian algorithm", this happens to be one of the important steps for our algorithm to work. The provided data has 336 different features that can increase computational complexity to a large extent, and it also has a lot of missing data points. Therefore, in this study we consider a set of 18 features corresponding to the observed source. The considered features for all the sources are available on our Github repository; more information is available on the official webpage 1 of the observatory. After cross-matching and reducing the number of features, we were left with a total of 19136 data points. The data distribution can be seen in Table 2. We further plot the sources (refer to Figure 1) based on their "Ra" and "Dec" to confirm that the sky coverage of the considered sources matches the actual coverage of the telescope.

Footnote 1: [http://xmmssc.irap.omp.eu/Catalogue/4XMM-DR11/col_unsrc.html](http://xmmssc.irap.omp.eu/Catalogue/4XMM-DR11/col_unsrc.html)

The collected data is further classified into train and test sets according to an \(80:20\) split. The exact number of data points is mentioned in Table 2.

### Model Architecture

The proposed model has one input, one hidden and one output layer (refer to Figure 2) with 18, 512, and 2 neurons respectively. The reason for having 18 neurons in the input layer is the number of input features considered in this study. Further, to increase the non-linearity of the output, we make use of "ReLU" (Fukushima, 1975)(Agarap, 2018) as the activation function for the first 2 layers. On the other hand, the output layer makes use of "Softmax" to make the predictions. This is done so that the output of the model will be the probability of the input belonging to a particular class (Nwankpa et al., 2018)(Feng and Lu, 2019). The "optimizer" and "loss" used in this study are "Adam" (Kingma et al., 2020) and "Trace ELBO" (Wingate and Weber, 2013)(Ranganath et al., 2014) respectively.
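A minimal Pyro sketch of the described setup (an 18-512-2 network trained with Adam and Trace ELBO): the unit-scale Normal priors and the AutoNormal variational guide are our assumptions, since the paper only states that normal priors with random initial mean and variance are used.

```python
import torch
import pyro
import pyro.distributions as dist
from pyro.nn import PyroModule, PyroSample
from pyro.infer import SVI, Trace_ELBO
from pyro.infer.autoguide import AutoNormal

class BNN(PyroModule):
    def __init__(self):
        super().__init__()
        # 18 input features -> 512 hidden units -> 2 classes, with Normal
        # priors over all weights and biases.
        self.fc1 = PyroModule[torch.nn.Linear](18, 512)
        self.fc1.weight = PyroSample(dist.Normal(0., 1.).expand([512, 18]).to_event(2))
        self.fc1.bias = PyroSample(dist.Normal(0., 1.).expand([512]).to_event(1))
        self.fc2 = PyroModule[torch.nn.Linear](512, 2)
        self.fc2.weight = PyroSample(dist.Normal(0., 1.).expand([2, 512]).to_event(2))
        self.fc2.bias = PyroSample(dist.Normal(0., 1.).expand([2]).to_event(1))

    def forward(self, x, y=None):
        logits = self.fc2(torch.relu(self.fc1(x)))
        with pyro.plate("data", x.shape[0]):
            # Categorical(logits=...) applies the softmax internally.
            pyro.sample("obs", dist.Categorical(logits=logits), obs=y)
        return logits

model = BNN()
guide = AutoNormal(model)
svi = SVI(model, guide, pyro.optim.Adam({"lr": 0.01}), loss=Trace_ELBO())
# training loop: for epoch in range(2500): svi.step(x_train, y_train)
```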
The overall idea of a BNN (Izmailov et al., 2021)(Jospin et al., 2022)(Goan and Fookes, 2020) is to have a posterior distribution corresponding to all weights and biases such that the output distribution produced by these posterior distributions is similar to that of the categorical distribution defined in the training dataset. Hence, convergence in this case can be achieved by minimizing the KL divergence between the output and the categorical distribution, or equivalently by maximizing the ELBO (Wingate and Weber, 2013)(Ranganath et al., 2014). We make use of normal distributions, initialized with random mean and variance, as priors (Fortuin et al., 2021), along with the likelihood derived from the data, to construct the posterior distribution.

### Training and Testing

The proposed model is constructed using Pytorch (Paszke et al., 2019) and Pyro (Bingham et al., 2019). The training of the model is conducted on Google Colaboratory, making use of an NVIDIA K80 GPU (Carneiro et al., 2018). The model is trained over 2500 epochs with a learning rate of 0.01. Both of these parameters, i.e. the number of epochs and the learning rate, have to be tuned; this is done by iterating the algorithm multiple times with varying parameter values. The algorithm is further asked to make 100 predictions corresponding to every sample in the test set. Every time it makes a prediction, the corresponding prediction probability varies. This is due to the random sampling of weights and biases from the trained distributions. Further, the algorithm considers the "mean" and "standard deviation" corresponding to those probabilities to decide whether or not to proceed with classification.

\begin{table} \begin{tabular}{l c} \hline Class & Catalogue \\ \hline \hline AGN & VERONCAT (Véron-Cetty and Véron, 2010) \\ \hline LMX & NG531JSCKO (Lin et al., 2015) \\ & RITTERLMXB (Ritter and Kolb, 2003) \\ & LMXBCAT (Liu et al., 2007) \\ & INTREFCAT (Ebisawa et al., 2003) \\ & M31XMMXRAY (Stiele et al., 2008) \\ & M31CFCXO (Hofmann et al., 2013) \\ & RASSMASS (Haakonsen and Rutledge, 2009) \\ \hline Pulsars & ATNF (Manchester et al., 2005) \\ & FERMIL2PSR (Abdo et al., 2013) \\ \hline CV & CVC (Drake et al., 2014) \\ \hline ULX & XSEG (Drake et al., 2014) \\ \hline Stars & CSSC (Skiff, 2014) \\ \hline \end{tabular} \end{table} Table 1: Catalogues used to create labeled data

\begin{table} \begin{tabular}{l c c} \hline Class & Training Data & Test Data \\ \hline \hline AGN & 8295 & 2040 \\ \hline LMX & 0 & 49 \\ \hline Pulsars & 0 & 174 \\ \hline CV & 0 & 36 \\ \hline ULX & 0 & 261 \\ \hline Stars & 6649 & 1628 \\ \hline Total & 14944 & 4188 \\ \hline \end{tabular} \end{table} Table 2: Data distribution after cross-matching all the data points with the catalogs mentioned in Table 1

Figure 1: Sky map coverage of the considered data points

## 3 Results and Discussion

The proposed algorithm is one of the initial attempts to implement "Bayesian Neural Networks" in observational astronomy, and it has shown significant results. The algorithm gives predictions with an accuracy of more than 98% whenever it agrees to make predictions for the trained classes. Table 3 shows the confusion matrix of the classified data. To calculate accuracy, we make use of the following formula.
\[\text{Accuracy}=\frac{a_{11}+a_{22}}{a_{11}+a_{12}+a_{21}+a_{22}}\times 100\]

In our case, the calculated accuracy is

\[\text{Accuracy}=\frac{1312+986}{1312+6+31+986}\times 100=98.4\%\]

As accuracy is not the only measure to evaluate a classification model, we further calculate the precision, recall and F1 score corresponding to both classes, as shown in Table 4. Although the results obtained from this simpler BNN could also be obtained via complex frequentist models, the uniqueness of the algorithm is that it agrees to classify only about 14% of the unknown classes as one of the trained classes, as opposed to frequentist approaches wherein all those samples are classified as one of these classes. Table 5 shows the percentage of data from untrained classes which are predicted as a Star or an AGN.

As the algorithm gives significant results on labelled data, we make use of it to identify possible Stars and AGNs in the raw data 2. The algorithm identifies almost 7.1% of the data as AGNs and 10.04% as Stars. Numerically, the numbers happen to be 62807 and 88107 respectively. Although there is a high probability that more Stars and AGNs exist than these numbers suggest, the algorithm simply refuses to give a prediction when it is not confident enough.

Footnote 2: [http://xmmssc.irap.omp.eu/Catalogue/4XMM-DR11/col_unsrc.html](http://xmmssc.irap.omp.eu/Catalogue/4XMM-DR11/col_unsrc.html)

## 4 Conclusions

In this study, we propose a Bayesian approach to identify Stars and AGNs observed by XMM Newton. The proposed algorithm avoids making predictions whenever it is unsure about them. Implementing such algorithms will help in reducing the number of wrong predictions, which is one of the major drawbacks of algorithms making use of the frequentist approach. This is an important thing to consider, as there always exists a situation wherein the algorithm receives an input on which it was never trained. The proposed algorithm also identifies 62807 AGNs and 88107 Stars in data release 11 by XMM-Newton.

## 5 Conflict of Interest

The authors declare that they have no conflict of interest.

## Data Availability

The raw data used in this study is publicly made available by the XMM Newton data archive. All the codes corresponding to the algorithm, and the predicted objects along with the predictions, will be publicly made available on "Github" and "paperswithcode" by June 2023.
2309.15244
Homotopy Relaxation Training Algorithms for Infinite-Width Two-Layer ReLU Neural Networks
In this paper, we present a novel training approach called the Homotopy Relaxation Training Algorithm (HRTA), aimed at accelerating the training process in contrast to traditional methods. Our algorithm incorporates two key mechanisms: one involves building a homotopy activation function that seamlessly connects the linear activation function with the ReLU activation function; the other technique entails relaxing the homotopy parameter to enhance the training refinement process. We have conducted an in-depth analysis of this novel method within the context of the neural tangent kernel (NTK), revealing significantly improved convergence rates. Our experimental results, especially when considering networks with larger widths, validate the theoretical conclusions. This proposed HRTA exhibits the potential for other activation functions and deep neural networks.
Yahong Yang, Qipin Chen, Wenrui Hao
2023-09-26T20:18:09Z
http://arxiv.org/abs/2309.15244v3
# Homotopy Relaxation Training Algorithms for Infinite-Width Two-Layer ReLU Neural Networks

###### Abstract

In this paper, we present a novel training approach called the Homotopy Relaxation Training Algorithm (HRTA), aimed at accelerating the training process in contrast to traditional methods. Our algorithm incorporates two key mechanisms: one involves building a homotopy activation function that seamlessly connects the linear activation function with the ReLU activation function; the other technique entails relaxing the homotopy parameter to enhance the training refinement process. We have conducted an in-depth analysis of this novel method within the context of the neural tangent kernel (NTK), revealing significantly improved convergence rates. Our experimental results, especially when considering networks with larger widths, validate the theoretical conclusions. This proposed HRTA exhibits the potential for other activation functions and deep neural networks.

## 1 Introduction

Neural networks (NNs) have become increasingly popular and widely used in scientific and engineering applications, such as image classification [16, 10] and regularization [11, 26]. Finding an efficient way to train and obtain the parameters in NNs is an important task, enabling the application of NNs in various domains. Numerous studies have delved into training methods for NNs, as evidenced by the works of Erhan et al. (2010), Keskar et al. (2016), and You et al. (2019) [7, 15, 28]. However, the optimization of loss functions can become increasingly challenging over time, primarily due to the nonconvex energy landscape. Traditional algorithms such as the gradient descent method and the Adam method often lead to parameter entrapment in local minima or saddle points for prolonged periods. The homotopy training algorithm (HTA) was introduced as a remedy, making slight modifications to the NN structure. HTA draws its roots from the concept of homotopy continuation methods [20, 22, 8, 9], with its initial introduction found in [4]. However, constructing a homotopy function requires it to be aligned with the structure of neural networks and entails time-consuming training.

In this paper, we introduce an innovative training approach called the homotopy relaxation training algorithm (HRTA). This approach leverages the homotopy concept, specifically focusing on the activation function, to address the challenges posed by the HTA. We develop a homotopy activation function that establishes a connection between the linear activation function and the target activation function. By gradually adjusting the homotopy parameter, we enable a seamless transition toward the target activation function. Mathematically, the homotopy activation function is defined as \(\sigma_{s}\), where \(s\) is the homotopy parameter, and it takes the form \(\sigma_{s}(x)=(1-s)\text{Id}(x)+s\sigma(x)\). Here, \(\text{Id}(x)\) represents the identity function (i.e., the linear activation function), and \(\sigma(x)\) is the target activation function. The term "homotopy" in the algorithm signifies its evolution from an entirely linear neural network, where the initial activation function is the identity function (\(s=0\)). The homotopy activation function undergoes gradual adjustments until it transforms into a target neural network (\(s=1\)). This transition, from the identity function to the target function, mirrors the principles of homotopy methods.
Furthermore, our analysis reveals that by extrapolating (or over-relaxing) the homotopy parameter (\(1<s<2\)), training performance can be further enhanced. In this context, we extend the concept of homotopy training, introducing what we refer to as "homotopy relaxation training." In this paper, we relax the homotopy parameter and allow \(s\) to take on any positive value in \([0,2]\), rather than being restricted to values in \([0,1]\). Moreover, we provide theoretical support for this algorithm, particularly in hyperparameter scenarios. Our analysis is closely related to neural tangent kernel methods [14, 2, 30, 3, 6, 13]. We establish that modifying the homotopy parameter at each step increases the smallest eigenvalue of the gradient descent kernel for infinite-width neural networks (see Theorem 1). Consequently, we present Theorem 2 to demonstrate the capacity of HRTA to enhance training speed. The paper is organized as follows: We introduce the HRTA in Section 2. Next, in Section 3, we provide theoretical support for the algorithm. Finally, in Section 4, we conduct experiments, including supervised learning and solving partial differential equations, based on our algorithm. ## 2 Homotopy Relaxation Training Algorithm In this paper, we consider supervised learning for NNs. Within a conventional supervised learning framework, the primary objective is to learn a high-dimensional target function \(f(\mathbf{x})\), defined on \((0,1)^{d}\) with \(\|f\|_{L^{\infty}((0,1)^{d})}\leq 1\), from a finite collection of data samples \(\{(\mathbf{x}_{i},f(\mathbf{x}_{i}))\}_{i=1}^{n}\). When training a NN, our aim is to find a NN representation \(\phi(\mathbf{x};\mathbf{\theta})\) that approximates \(f(\mathbf{x})\) from the randomly gathered data samples \(\{(\mathbf{x}_{i},f(\mathbf{x}_{i}))\}_{i=1}^{n}\), with \(\mathbf{\theta}\) representing the parameters of the NN architecture. It is assumed, in this paper, that the sequence \(\{\mathbf{x}_{i}\}_{i=1}^{n}\) constitutes an independent and identically distributed (i.i.d.) sequence of random variables, uniformly distributed across \((0,1)^{d}\). Denote \[\mathbf{\theta}_{S}:=\arg\min_{\mathbf{\theta}}\mathcal{R}_{S}(\mathbf{\theta}):=\arg\min_{\mathbf{\theta}}\frac{1}{2n}\sum_{i=1}^{n}|f(\mathbf{x}_{i})-\phi(\mathbf{x}_{i};\mathbf{\theta})|^{2}. \tag{1}\] Next, we introduce the HRTA by defining \(\sigma_{s_{p}}(x)=(1-s_{p})\text{Id}(x)+s_{p}\sigma(x)\) with a discretized set of homotopy parameter values \(\{s_{p}\}_{p=1}^{M}\subset(0,2)\). We then proceed to obtain: \[\mathbf{\theta}_{S}^{s_{p}}:=\arg\min_{\mathbf{\theta}}\mathcal{R}_{S,s_{p}}(\mathbf{\theta}),\text{ with an initial guess }\mathbf{\theta}_{S}^{s_{p-1}},p=1,\cdots,M, \tag{2}\] where we initialize \(\mathbf{\theta}_{S}^{s_{0}}\) randomly, and \(\mathbf{\theta}_{S}^{s_{M}}\) represents the optimal parameter value that we ultimately achieve. In this paper, we consider a two-layer NN defined as follows \[\phi(\mathbf{x};\mathbf{\theta}):=\frac{1}{\sqrt{m}}\sum_{k=1}^{m}a_{k}\sigma(\mathbf{\omega}_{k}^{\intercal}\mathbf{x}), \tag{3}\] with the activation function \(\sigma(z)=\text{ReLU}(z)=\max\{z,0\}\). The evolution of traditional training can be written as the following differential equation: \[\frac{\mathrm{d}\mathbf{\theta}(t)}{\mathrm{d}t}=-\nabla_{\mathbf{\theta}}\mathcal{R}_{S}(\mathbf{\theta}(t)). 
\tag{4}\] In the HRTA setup, we train a sequence of Leaky ReLU activation functions [27, 19]. Subsequently, for each of these Leaky ReLUs with given \(s_{p}\), we train the neural network on a time interval of \([t_{p-1},t_{p}]\): \[\frac{\mathrm{d}\mathbf{\theta}(t)}{\mathrm{d}t}=-\nabla_{\mathbf{\theta}}\mathcal{R}_{S,s_{p}}(\mathbf{\theta}(t)). \tag{5}\] Moreover, we have \(t_{0}=0\) and initialize the parameter vector \(\mathbf{\theta}(0)\), drawn from a normal distribution \(\mathcal{N}(\mathbf{0},\mathbf{I})\). Therefore, the HRTA algorithm's progression is outlined in **Algorithm 1**. **Remark 1**.: _If \(s_{M}=1\), then upon completion, we will have obtained \(\mathbf{\theta}(t_{M})\) and the NN \(\phi_{s_{M}}(\mathbf{x};\mathbf{\theta}(t_{M}))\), characterized by pure ReLU activations. The crux of this algorithm is its ability to transition from Leaky ReLUs to a final configuration of a NN with pure ReLU activations. This transformation is orchestrated via a series of training iterations utilizing the homotopy approach._ _However, our paper demonstrates that there is no strict necessity to achieve \(s_{M}=1\). What we aim for is to obtain a NN with parameters \(\mathbf{\theta}\) that minimizes \(\mathcal{R}_{S,s_{M}}(\mathbf{\theta})\). This is because, for any value of \(s\), we can readily represent \(\phi_{s}(\mathbf{x};\mathbf{\theta})\) as a pure ReLU NN, as shown in the following equation:_ \[\sigma_{s}(x)=(1-s)\text{Id}(x)+s\sigma(x)=(1-s)\sigma(x)-(1-s)\sigma(-x)+s\sigma(x)=\sigma(x)-(1-s)\sigma(-x).\] _To put it simply, if we can effectively train a NN with Leaky-ReLU to learn the target functions, it implies that we can achieve the same level of performance with a NN using standard ReLU activation. Consequently, the theoretical analysis in the paper does not require that the final value of \(s_{M}\) must be set to 1. Moreover, our method is applicable even when \(s_{M}>1\), which we refer to as the relaxation part of HRTA. It is important to highlight that for \(s>1\) the decay speed may surpass that of a pure ReLU neural network. This is consistent with the training using the hat activation function (specifically, when \(s=2\) in the homotopy activation function) discussed in [12], although it is worth noting that their work primarily focuses on the linear case (involving only the constant factor change), whereas our work extends this consideration to neural networks._ ``` input : Sample points of function \(\{(\mathbf{x}_{i},f(\mathbf{x}_{i}))\}_{i=1}^{n}\); Initialized homotopy parameter \(s_{1}>0\); Number of iterations \(M\); \(\zeta>0\); Training time of each iteration \(T\); \(t_{p}=pT\); learning rate \(\tau\); \(\mathbf{\theta}_{0}\sim\mathcal{N}(0,\mathbf{I})\). 1for\(p=1,2,\dots,M\)do 2for\(t\in[t_{p-1},t_{p}]\)do 3\(\mathbf{\theta}_{t+1}=\mathbf{\theta}_{t}-\tau\nabla_{\mathbf{\theta}}(\mathcal{R}_{S,s_{p}}(\mathbf{\theta}_{t}))\); 4 end for 5\(s_{p+1}:=s_{p}+\zeta\); 6 end for output :\(\phi_{s_{M}}(\mathbf{x},\mathbf{\theta}_{t_{M}})\) as the two-layer NN approximation of \(f(\mathbf{x})\). ``` **Algorithm 1** The Homotopy Relaxation Training Algorithm for Two-Layer Neural Networks ## 3 Convergence Analysis In this section, we will delve into the convergence analysis of the HRTA based on the neural tangent kernel methods [14, 2, 30, 3, 6, 13]. For simplicity, we will initially focus on the case where \(M=2\). It is important to note that for cases with \(M>2\), all the analyses presented here can be readily extended. 
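Before proceeding with the analysis, the following PyTorch sketch gives our minimal reading of Algorithm 1 for the two-layer network in Eq. (3); the class and variable names are ours, and practical details (batching, learning-rate schedules) are omitted.

```python
import torch

def sigma_s(x, s):
    # Homotopy activation: sigma_s(x) = (1 - s) * x + s * ReLU(x).
    return (1.0 - s) * x + s * torch.relu(x)

class TwoLayerNet(torch.nn.Module):
    # phi(x; theta) = (1/sqrt(m)) * sum_k a_k * sigma_s(omega_k^T x), as in Eq. (3).
    def __init__(self, d, m):
        super().__init__()
        self.w = torch.nn.Parameter(torch.randn(m, d))  # omega_k ~ N(0, I)
        self.a = torch.nn.Parameter(torch.randn(m))     # a_k ~ N(0, 1)
        self.m = m

    def forward(self, x, s):
        return sigma_s(x @ self.w.T, s) @ self.a / self.m ** 0.5

def hrta_train(net, x, y, s1=0.5, zeta=0.5, M=2, steps_per_stage=2000, lr=1e-3):
    # Outer loop over homotopy stages p = 1..M; the inner loop is plain gradient
    # descent on the empirical risk R_{S, s_p} of Eq. (1), mirroring Algorithm 1.
    opt = torch.optim.SGD(net.parameters(), lr=lr)
    s = s1
    for p in range(M):
        for _ in range(steps_per_stage):
            opt.zero_grad()
            loss = 0.5 * ((net(x, s) - y) ** 2).mean()  # (1/2n) sum |f - phi|^2
            loss.backward()
            opt.step()
        s += zeta  # relax the homotopy parameter for the next stage
    return net
```

With \(s_{1}=0.5\) and \(\zeta=0.5\), this schedule corresponds to the homotopy variant used in the experiments of Section 4 (a Leaky-ReLU-like stage followed by a pure-ReLU stage); starting from \(s_{1}=1\) instead yields the relaxation variant with \(s=1.5\).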
To start, we set the initial value of \(s_{1}>0\). The structure of the proof of Theorem 2 is shown in Figure 1. ### Gradient descent kernel The kernels characterizing the training dynamics for the \(p\)-th iteration take the following form: \[k_{p}^{[a]}(\mathbf{x},\mathbf{x}^{\prime}):=\mathbf{E}_{\mathbf{\omega}}\sigma_{s_{p}}(\mathbf{\omega}^{\intercal}\mathbf{x})\sigma_{s_{p}}(\mathbf{\omega}^{\intercal}\mathbf{x}^{\prime}),\qquad k_{p}^{[\mathbf{\omega}]}(\mathbf{x},\mathbf{x}^{\prime}):=\mathbf{E}_{(a,\mathbf{\omega})}a^{2}\sigma_{s_{p}}^{\prime}(\mathbf{\omega}^{\intercal}\mathbf{x})\sigma_{s_{p}}^{\prime}(\mathbf{\omega}^{\intercal}\mathbf{x}^{\prime})\mathbf{x}\cdot\mathbf{x}^{\prime}. \tag{6}\] The Gram matrices, denoted as \(\mathbf{K}_{p}^{[a]}\) and \(\mathbf{K}_{p}^{[\mathbf{\omega}]}\), corresponding to an infinite-width two-layer network with the activation function \(\sigma_{s_{p}}\), can be expressed as follows: \[K_{ij,p}^{[a]}=k_{p}^{[a]}(\mathbf{x}_{i},\mathbf{x}_{j}),\;\mathbf{K}_{p}^{[a]}=(K_{ij,p}^{[a]})_{n\times n},\qquad K_{ij,p}^{[\mathbf{\omega}]}=k_{p}^{[\mathbf{\omega}]}(\mathbf{x}_{i},\mathbf{x}_{j}),\;\mathbf{K}_{p}^{[\mathbf{\omega}]}=(K_{ij,p}^{[\mathbf{\omega}]})_{n\times n}. \tag{7}\] Moreover, the Gram matrices, referred to as \(\mathbf{G}_{p}^{[a]}\) and \(\mathbf{G}_{p}^{[\mathbf{\omega}]}\), corresponding to a finite-width two-layer network with the activation function \(\sigma_{s_{p}}\), can be defined as \[G_{ij,p}^{[a]}=\frac{1}{m}\sum_{k=1}^{m}\sigma_{s_{p}}(\mathbf{\omega}_{k}^{\intercal}\mathbf{x}_{i})\sigma_{s_{p}}(\mathbf{\omega}_{k}^{\intercal}\mathbf{x}_{j}),\;\mathbf{G}_{p}^{[a]}=(G_{ij,p}^{[a]})_{n\times n},\qquad G_{ij,p}^{[\mathbf{\omega}]}=\frac{1}{m}\sum_{k=1}^{m}a_{k}^{2}\sigma_{s_{p}}^{\prime}(\mathbf{\omega}_{k}^{\intercal}\mathbf{x}_{i})\sigma_{s_{p}}^{\prime}(\mathbf{\omega}_{k}^{\intercal}\mathbf{x}_{j})\mathbf{x}_{i}\cdot\mathbf{x}_{j},\;\mathbf{G}_{p}^{[\mathbf{\omega}]}=(G_{ij,p}^{[\mathbf{\omega}]})_{n\times n}. \tag{8}\] **Assumption 1**.: _Let \(\mathbf{K}^{[a]}\) and \(\mathbf{K}^{[\mathbf{\omega}]}\) denote the Gram matrices for the ReLU neural network, and_ \[H^{[a]}_{ij}=k^{[a]}(-\mathbf{x}_{i},\mathbf{x}_{j}),\ \mathbf{H}^{[a]}=(H^{[a]}_{ij})_{n\times n},\ H^{[\mathbf{\omega}]}_{ij}=k^{[\mathbf{\omega}]}(-\mathbf{x}_{i},\mathbf{x}_{j}),\ \mathbf{H}^{[\mathbf{\omega}]}=(H^{[\mathbf{\omega}]}_{ij})_{n\times n},\] \[M^{[a]}_{ij}=k^{[a]}(\mathbf{x}_{i},-\mathbf{x}_{j}),\ \mathbf{M}^{[a]}=(M^{[a]}_{ij})_{n\times n},\ M^{[\mathbf{\omega}]}_{ij}=k^{[\mathbf{\omega}]}(\mathbf{x}_{i},-\mathbf{x}_{j}),\ \mathbf{M}^{[\mathbf{\omega}]}=(M^{[\mathbf{\omega}]}_{ij})_{n\times n},\] \[T^{[a]}_{ij}=k^{[a]}(-\mathbf{x}_{i},-\mathbf{x}_{j}),\ \mathbf{T}^{[a]}=(T^{[a]}_{ij})_{n\times n},\ T^{[\mathbf{\omega}]}_{ij}=k^{[\mathbf{\omega}]}(-\mathbf{x}_{i},-\mathbf{x}_{j}),\ \mathbf{T}^{[\mathbf{\omega}]}=(T^{[\mathbf{\omega}]}_{ij})_{n\times n}. \tag{9}\] _Suppose that all matrices defined above are strictly positive definite._ **Remark 2**.: _We would like to point out that if, for all \(i\) and \(j\) satisfying \(i\neq j\), we have \(\pm\mathbf{x}_{i}\) not parallel to \(\pm\mathbf{x}_{j}\), then Assumption 1 is satisfied. 
The validity of this assertion can be established by referring to [6, Theorem 3.1] for the proof._ **Theorem 1**.: _Suppose Assumption 1 holds, and denote_ \[\lambda_{a,p}:=\lambda_{\text{min}}\left(\mathbf{K}^{[a]}_{p}\right),\ \ \lambda_{\mathbf{\omega},p}:=\lambda_{\text{min}}\left(\mathbf{K}^{[\mathbf{\omega}]}_{p}\right).\] _Then we have \(\lambda_{\mathbf{\omega},p+1}\geq\lambda_{\mathbf{\omega},p}>0,\ \lambda_{a,p+1}\geq\lambda_{a,p}>0\) for all \(0\leq s_{p}\leq s_{p+1}\)._ ### Convergence of \(t_{1}\) iteration All the proofs for this subsection can be found in Appendix A.3. **Lemma 1** (bounds of initial parameters).: _Given \(\delta\in(0,1)\), with probability at least \(1-\delta\) over the choice of \(\mathbf{\theta}(0)\) we have_ \[\max_{k\in[m]}\{|a_{k}(0)|,\|\mathbf{\omega}_{k}(0)\|_{\infty}\}\leq\sqrt{2\log\frac{2m(d+1)}{\delta}}. \tag{10}\] Next, we plan to bound \(\mathcal{R}_{S,s_{1}}(\mathbf{\theta}(0))\) based on lemmas of Rademacher complexity. (Figure 1: Structure of the proof of Theorem 2.) **Lemma 2** (bound of initial empirical risk).: _Given \(\delta\in(0,1)\) and the sample set \(S=\{(\mathbf{x}_{i},y_{i})\}_{i=1}^{n}\subset\Omega\) with \(\mathbf{x}_{i}\)'s drawn i.i.d. from the uniform distribution. Suppose that Assumption 1 holds. We have with probability at least \(1-\delta\) over the choice of \(\mathbf{\theta}(0)\)_ \[\mathcal{R}_{S,s_{1}}\left(\mathbf{\theta}(0)\right)\leq\frac{1}{2}\left[1+2d\log\frac{4m(d+1)}{\delta}\left(2+6\sqrt{2\log(8/\delta)}\right)\right]^{2}. \tag{11}\] Moving forward, we will now delineate the core of the homotopy relaxation training process within each iteration. Due to the distinct nature of \(\mathbf{\omega}_{k}\) and \(a_{k}\), the training dynamics for the first iteration (\(p=1\)) can be expressed as follows: \[\left\{\begin{array}{l}\frac{\mathrm{d}a_{k}(t)}{\mathrm{d}t}=-\nabla_{a_{k}}\mathcal{R}_{S,s_{1}}(\mathbf{\theta})=-\frac{1}{n\sqrt{m}}\sum_{i=1}^{n}e_{i,1}\sigma_{s_{1}}\left(\mathbf{w}_{k}^{\top}\mathbf{x}_{i}\right)\\ \frac{\mathrm{d}\mathbf{w}_{k}(t)}{\mathrm{d}t}=-\nabla_{\mathbf{w}_{k}}\mathcal{R}_{S,s_{1}}(\mathbf{\theta})=-\frac{1}{n\sqrt{m}}\sum_{i=1}^{n}e_{i,1}a_{k}\sigma_{s_{1}}^{\prime}\left(\mathbf{w}_{k}^{\top}\mathbf{x}_{i}\right)\mathbf{x}_{i}\end{array}\right.\] where \(e_{i,1}=|f(\mathbf{x}_{i})-\phi_{s_{1}}(\mathbf{x}_{i};\mathbf{\theta})|\). Now, we are going to perform the convergence analysis. Before that, we need Proposition 1 to demonstrate that the kernel in the gradient descent dynamics during the initial phase is strictly positive definite for finite-width NNs when \(m\) is large. **Proposition 1**.: _Given \(\delta\in(0,1)\) and the sample set \(S=\{(\mathbf{x}_{i},y_{i})\}_{i=1}^{n}\subset\Omega\) with \(\mathbf{x}_{i}\)'s drawn i.i.d. from the uniform distribution. Suppose that Assumption 1 holds. 
If \(m\geq\frac{16n^{2}d^{2}C_{\psi,d}}{C_{0}\lambda^{2}}\log\frac{4n^{2}}{\delta}\) then with probability at least \(1-\delta\) over the choice of \(\mathbf{\theta}(0)\), we have_ \[\lambda_{\min}\left(\mathbf{G}_{1}\left(\mathbf{\theta}(0)\right)\right)\geq\frac{3}{4}(\lambda_{a,1}+\lambda_{\mathbf{\omega},1}).\] Set \[t_{1}^{*}=\inf\{t\mid\mathbf{\theta}(t)\not\in\mathcal{N}_{1}(\mathbf{\theta}(0))\} \tag{12}\] where \[\mathcal{N}_{1}(\mathbf{\theta}(0)):=\left\{\mathbf{\theta}\mid\|\mathbf{G}_{1}(\mathbf{\theta})-\mathbf{G}_{1}(\mathbf{\theta}(0))\|_{F}\leq\frac{1}{4}(\lambda_{a,1}+\lambda_{\mathbf{\omega},1})\right\}.\] **Proposition 2**.: _Given \(\delta\in(0,1)\) and the sample set \(S=\{(\mathbf{x}_{i},y_{i})\}_{i=1}^{n}\subset\Omega\) with \(\mathbf{x}_{i}\)'s drawn i.i.d. from the uniform distribution. Suppose that Assumption 1 holds. If \(m\geq\frac{16n^{2}d^{2}C_{\psi,d}}{C_{0}\lambda^{2}}\log\frac{4n^{2}}{\delta}\) then with probability at least \(1-\delta\) over the choice of \(\mathbf{\theta}(0)\), we have for any \(t\in[0,t_{1}^{*}]\)_ \[\mathcal{R}_{S,s_{1}}(\mathbf{\theta}(t))\leq\mathcal{R}_{S,s_{1}}(\mathbf{\theta}(0))\exp\left(-\frac{t}{n}(\lambda_{a,1}+\lambda_{\mathbf{\omega},1})\right). \tag{13}\] ### Convergence of \(t_{2}\) iteration In this paper, without sacrificing generality, we focus our attention on the case where \(M=2\). However, it is important to note that our analysis and methodology can readily be extended to the broader scenario of \(M\geq 2\). All the proofs for this subsection can be found in Appendix A.4. **Proposition 3**.: _Given \(\delta\in(0,1)\) and the sample set \(S=\{(\mathbf{x}_{i},y_{i})\}_{i=1}^{n}\subset\Omega\) with \(\mathbf{x}_{i}\)'s drawn i.i.d. from the uniform distribution. Suppose that Assumption 1 holds. If_ \[m\geq\max\left\{\frac{16n^{2}d^{2}C_{\psi,d}}{C_{0}\lambda^{2}}\log\frac{4n^{2}}{\delta},\frac{8n^{2}d^{2}\mathcal{R}_{S,s_{1}}\left(\mathbf{\theta}(0)\right)}{(\lambda_{a,1}+\lambda_{\mathbf{\omega},1})^{2}}\right\}\] _then with probability at least \(1-\delta\) over the choice of \(\mathbf{\theta}(0)\), we have for any \(t\in[0,t_{1}^{*}]\)_ \[\max_{k\in[m]}\{|a_{k}(t)-a_{k}(0)|,\|\mathbf{\omega}_{k}(t)-\mathbf{\omega}_{k}(0)\|_{\infty}\}\leq\frac{8\sqrt{2}nd\sqrt{\mathcal{R}_{S,s_{1}}\left(\mathbf{\theta}(0)\right)}}{\sqrt{m}(\lambda_{a,1}+\lambda_{\mathbf{\omega},1})}\sqrt{2\log\frac{4m(d+1)}{\delta}}=:\psi(m). \tag{14}\] For simplicity, we write this \(\mathcal{O}\left(\frac{\log m}{\sqrt{m}}\right)\) term \(\psi\) as \[\psi(m):=\frac{8\sqrt{2}nd\sqrt{\mathcal{R}_{S,s_{1}}\left(\mathbf{\theta}(0)\right)}}{\sqrt{m}(\lambda_{a,1}+\lambda_{\mathbf{\omega},1})}\sqrt{2\log\frac{4m(d+1)}{\delta}}.\] Moving forward, we will employ \(\mathbf{\theta}(t_{1}^{*})\) as the initial value for training over \(t_{2}\) iterations. However, before we proceed, it is crucial to carefully select the value of \(s_{2}\). This choice of \(s_{2}\) depends on both \(\mathcal{R}_{S,s_{1}}(\mathbf{\theta}(t_{1}^{*}))\) and a positive constant \(\zeta\). Therefore, we define \(s_{2}:=s_{1}+\zeta\), where \(\zeta>0\) is a constant. It is important to emphasize that, since the training dynamics contain no random elements, \(\mathbf{\theta}(t_{1}^{*})\) is determined once \(\mathbf{\theta}(0)\) is known. 
In other words, we can consider \(\mathbf{\theta}(t_{1}^{*})\) as two distinct functions, \(\mathbf{\theta}=(\bar{a},\bar{\mathbf{\omega}})\), with \(\mathbf{\theta}(0)\) as their input. This implies that \(\bar{a}(\mathbf{\theta}(0))=a(t_{1}^{*})\) and \(\bar{\mathbf{\omega}}(\mathbf{\theta}(0))=\mathbf{\omega}(t_{1}^{*})\). **Proposition 4**.: _Given \(\delta\in(0,1)\) and the sample set \(S=\left\{(\mathbf{x}_{i},y_{i})\right\}_{i=1}^{n}\subset\Omega\) with \(\mathbf{x}_{i}\)'s drawn i.i.d. from the uniform distribution. Suppose that Assumption 1 holds. If_ \[m\geq\max\left\{\frac{16n^{2}d^{2}C_{\psi,d}}{C_{0}\lambda^{2}}\log\frac{4n^{2}}{\delta},n^{4}\left(\frac{128\sqrt{2}d\sqrt{\mathcal{R}_{S,s_{1}}\left(\mathbf{\theta}(0)\right)}}{(\lambda_{a,1}+\lambda_{\mathbf{\omega},1})\min\{\lambda_{a,2},\lambda_{\mathbf{\omega},2}\}}2\log\frac{4m(d+1)}{\delta}\right)\right\}\] _then with probability at least \(1-\delta\) over the choice of \(\mathbf{\theta}(0)\), we have_ \[\lambda_{\min}\left(\mathbf{G}_{2}\left(\mathbf{\theta}(t_{1}^{*})\right)\right)\geq\frac{3}{4}(\lambda_{a,2}+\lambda_{\mathbf{\omega},2}).\] **Remark 3**.: _In accordance with Proposition 4, we can establish that \(m\) follows a trend of \(\mathcal{O}\left(\frac{\log(1/\delta)}{\min\{\lambda_{a,2},\lambda_{\mathbf{\omega},2}\}}\right)\). This observation sheds light on our strategy of increasing the parameter \(s\) with each iteration. As we have proven in Theorem 1, the smallest eigenvalues of Gram matrices tend to increase as \(s\) increases._ _Now, consider the second iteration. For a fixed value of \(m\) that we have at this stage, a larger smallest eigenvalue implies that we can select a smaller value for \(\delta\). Consequently, this leads to a higher probability of \(\lambda_{\min}\left(\mathbf{G}_{2}\left(\mathbf{\theta}(t_{1}^{*})\right)\right)\) being positive._ Set \[t_{2}^{*}=\inf\{t\mid\mathbf{\theta}(t)\not\in\mathcal{N}_{2}(\mathbf{\theta}(t_{1}^{*}))\} \tag{15}\] where \[\mathcal{N}_{2}(\mathbf{\theta}(t_{1}^{*})):=\left\{\mathbf{\theta}\mid\left\|\mathbf{G}_{2}(\mathbf{\theta})-\mathbf{G}_{2}(\mathbf{\theta}(t_{1}^{*}))\right\|_{F}\leq\frac{1}{4}(\lambda_{a,2}+\lambda_{\mathbf{\omega},2})\right\}.\] **Proposition 5**.: _Given \(\delta\in(0,1)\) and the sample set \(S=\left\{(\mathbf{x}_{i},y_{i})\right\}_{i=1}^{n}\subset\Omega\) with \(\mathbf{x}_{i}\)'s drawn i.i.d. from the uniform distribution. Suppose that Assumption 1 holds. If_ \[m\geq\max\left\{\frac{16n^{2}d^{2}C_{\psi,d}}{C_{0}\lambda^{2}}\log\frac{4n^{2}}{\delta},n^{4}\left(\frac{128\sqrt{2}d\sqrt{\mathcal{R}_{S,s_{1}}\left(\mathbf{\theta}(0)\right)}}{(\lambda_{a,1}+\lambda_{\mathbf{\omega},1})\min\{\lambda_{a,2},\lambda_{\mathbf{\omega},2}\}}2\log\frac{4m(d+1)}{\delta}\right)\right\}\] _then with probability at least \(1-\delta\) over the choice of \(\mathbf{\theta}(0)\), we have for any \(t\in[t_{1}^{*},t_{2}^{*}]\)_ \[\mathcal{R}_{S,s_{2}}(\mathbf{\theta}(t))\leq\mathcal{R}_{S,s_{2}}(\mathbf{\theta}(t_{1}^{*}))\exp\left(-\frac{t-t_{1}^{*}}{n}(\lambda_{a,2}+\lambda_{\mathbf{\omega},2})\right). \tag{16}\] ### Convergence of HRTA By combining Propositions 2 and 5, we can establish the convergence of the HRTA. **Theorem 2**.: _Given \(\delta\in(0,1)\), \(s_{1}\in(0,+\infty)\), \(\zeta>1\) and the sample set \(S=\left\{(\mathbf{x}_{i},y_{i})\right\}_{i=1}^{n}\subset\Omega\) with \(\mathbf{x}_{i}\)'s drawn i.i.d. from the uniform distribution. 
Suppose that Assumption 1 holds. If_ \[m\geq\max\left\{\frac{16n^{2}d^{2}C_{\psi,d}}{C_{0}\lambda^{2}}\log\frac{4n^{2}}{\delta},n^{4}\left(\frac{128\sqrt{2}d\sqrt{\mathcal{R}_{S,s_{1}}\left(\mathbf{\theta}(0)\right)}}{(\lambda_{a,1}+\lambda_{\mathbf{\omega},1})\min\{\lambda_{a,2},\lambda_{\mathbf{\omega},2}\}}2\log\frac{4m(d+1)}{\delta}\right)\right\}\] _then with probability at least \(1-\delta\) over the choice of \(\mathbf{\theta}(0)\), we have_ \[\left\{\begin{array}{l}\mathcal{R}_{S,s_{1}}(\mathbf{\theta}(t))\leq\mathcal{R}_{S,s_{1}}(\mathbf{\theta}(0))\exp\left(-\frac{t}{n}(\lambda_{a,1}+\lambda_{\mathbf{\omega},1})\right),\ t\in[0,t_{1}^{*}]\\ \mathcal{R}_{S,s_{2}}(\mathbf{\theta}(t))\leq\mathcal{R}_{S,s_{2}}(\mathbf{\theta}(t_{1}^{*}))\exp\left(-\frac{t-t_{1}^{*}}{n}(\lambda_{a,2}+\lambda_{\mathbf{\omega},2})\right),\;t\in[t_{1}^{*},t_{2}^{*}],\end{array}\right.\] _where \(t_{i}^{*}\) are defined in Eqs. (12) and (15). Furthermore, the decay speed in \([t_{1}^{*},t_{2}^{*}]\) can be faster than that in \([0,t_{1}^{*}]\), i.e._ \[\mathcal{R}_{S,s_{2}}(\mathbf{\theta}(t))\leq\mathcal{R}_{S,s_{2}}(\mathbf{\theta}(t_{1}^{*}))\exp\left(-\frac{t-t_{1}^{*}}{n}(\lambda_{a,2}+\lambda_{\mathbf{\omega},2})\right)\leq\mathcal{R}_{S,s_{2}}(\mathbf{\theta}(t_{1}^{*}))\exp\left(-\frac{t-t_{1}^{*}}{n}(\lambda_{a,1}+\lambda_{\mathbf{\omega},1})\right). \tag{17}\] Proof.: By amalgamating Propositions 2 and 5, we can readily derive the proof for Theorem 2. **Corollary 1**.: _Given \(\delta\in(0,1)\), \(s_{1}\in(0,+\infty)\), \(\zeta>1\) and the sample set \(S=\left\{(\mathbf{x}_{i},y_{i})\right\}_{i=1}^{n}\subset\Omega\) with \(\mathbf{x}_{i}\)'s drawn i.i.d. from the uniform distribution. Suppose that Assumption 1 holds,_ \[m\geq\max\left\{\frac{16n^{2}d^{2}C_{\psi,d}}{C_{0}\lambda^{2}}\log\frac{4n^{2}}{\delta},n^{4}\left(\frac{128\sqrt{2}d\sqrt{\mathcal{R}_{S,s_{1}}\left(\mathbf{\theta}(0)\right)}}{(\lambda_{a,1}+\lambda_{\mathbf{\omega},1})\min\{\lambda_{a,2},\lambda_{\mathbf{\omega},2}\}}2\log\frac{4m(d+1)}{\delta}\right)\right\}\] _and \(s_{2}:=\inf\left\{s\in(s_{1},2)\mid\mathcal{R}_{S,s}\left(\mathbf{\theta}(t_{1}^{*})\right)>\zeta\mathcal{R}_{S,s_{1}}\left(\mathbf{\theta}(t_{1}^{*})\right)\right\},\) then with probability at least \(1-\delta\) over the choice of \(\mathbf{\theta}(0)\), we have for any \(t\in[t_{1}^{*},t_{2}^{*}]\)_ \[\mathcal{R}_{S,s_{2}}(\mathbf{\theta}(t))\leq\zeta\mathcal{R}_{S,s_{1}}(\mathbf{\theta}(0))\exp\left(-\frac{t_{1}^{*}}{n}(\lambda_{a,1}+\lambda_{\mathbf{\omega},1})\right)\exp\left(-\frac{t-t_{1}^{*}}{n}(\lambda_{a,2}+\lambda_{\mathbf{\omega},2})\right). \tag{18}\] **Remark 4**.: _In the case where \(M\geq 2\), note that we may consider the scenario where \(m\) becomes larger. However, it is important to emphasize that the order of \(m\) remains at \(\mathcal{O}(n^{4})\). This order does not increase substantially due to the fact that all the derivations presented in Subsection 3.3 can be smoothly generalized._ Building upon the proof of Theorem 2, we can recognize the advantages of the HRTA. In the initial iteration, the training process does not differ significantly from traditional training of pure ReLU networks. 
The traditional method can effectively reduce the loss function within the set \(\mathcal{N}_{1}(\mathbf{\theta}(0))\), defined as: \[\mathcal{N}_{1}(\mathbf{\theta}(0)):=\left\{\mathbf{\theta}\mid\|\mathbf{G}_{1}(\mathbf{\theta})-\mathbf{G}_{1}(\mathbf{\theta}(0))\|_{F}\leq\frac{1}{4}(\lambda_{a,1}+\lambda_{\mathbf{\omega},1})\right\}.\] In other words, the traditional method can effectively minimize the loss function within the time interval \([0,t_{1}^{*}]\), where \(t_{1}^{*}=\inf\{t\mid\mathbf{\theta}(t)\not\in\mathcal{N}_{1}(\mathbf{\theta}(0))\}\). Outside of this range, the training speed may slow down significantly and take a long time to converge. However, the HRTA transitions the training dynamics to a new kernel by introducing a new activation function. This allows training to converge efficiently within a new range of \(\mathbf{\theta}\), defined as: \[\mathcal{N}_{2}(\mathbf{\theta}(t_{1}^{*})):=\left\{\mathbf{\theta}\mid\|\mathbf{G}_{2}(\mathbf{\theta})-\mathbf{G}_{2}(\mathbf{\theta}(t_{1}^{*}))\|_{F}\leq\frac{1}{4}(\lambda_{a,2}+\lambda_{\mathbf{\omega},2})\right\},\] if \(\mathbf{G}_{2}(\mathbf{\theta}(t_{1}^{*}))\) is strictly positive definite. Furthermore, we demonstrate that the minimum eigenvalue of \(\mathbf{G}_{2}(\mathbf{\theta}(t_{1}^{*}))\) surpasses that of \(\mathbf{G}_{1}(\mathbf{\theta}(0))\) under these conditions, as indicated by Theorem 1. This implies that, rather than decaying, the training speed may actually increase. This is one of the important reasons why we believe that relaxation surpasses traditional training methods in neural network training. Furthermore, in this paper, we provide evidence that \(\mathbf{G}_{2}(\mathbf{\theta}(t_{1}^{*}))\) indeed becomes strictly positive definite when the width of neural networks is sufficiently large. Building upon Proposition 4, we can see that the increasing smallest eigenvalue of Gram matrices in each iteration contributes to a higher likelihood of \(\mathbf{G}_{2}(\mathbf{\theta}(t_{1}^{*}))\) becoming strictly positive definite. In summary, HRTA offers three key advantages in training: \(\bullet\) It dynamically builds the activation function, allowing loss functions to resume their decay when the training progress slows down, all without compromising the accuracy of the approximation. \(\bullet\) It accelerates the decay rate by increasing the smallest eigenvalue of the Gram matrices with each homotopy iteration. \(\bullet\) It enhances the probability of the Gram matrices becoming positive definite in each iteration, further improving the training process. ## 4 Experimental Results for the Homotopy Relaxation Training Algorithm In this section, we will use several numerical examples to validate our theoretical analysis. ### Function approximation by HRTA In the first part, our objective is to employ NNs to approximate functions of the form \(\sin\left(2\pi\sum_{i=1}^{d}x_{i}\right)\) for both \(d=1\) and \(d=3\). We will compare the performance of the HRTA method with the Adam optimizer. We used 100 uniform grid points for \(d=1\) and 125 uniform grid points for \(d=3\). Additional experiment details are provided in Appendix A.5. Figures 2 and 3 showcase the results achieved using a two-layer neural network with 1000 nodes to approximate \(\sin\left(2\pi\sum_{i=1}^{d}x_{i}\right)\) for both \(d=1\) and \(d=3\). We observe oscillations in the figures, which result from plotting the loss against iterations using a logarithmic scale. 
To mitigate these fluctuations, we decrease the learning rate, allowing the oscillations to gradually diminish during the later stages of training. It is worth noting that these oscillations occur in both the Adam and HRTA optimization algorithms and do not significantly impact the overall efficiency of HRTA. In our approach, the HRTA method with \(s=0.5\) signifies that we initially employ \(\sigma_{\frac{1}{2}}(x):=\frac{1}{2}\text{Id}(x)+\frac{1}{2}\text{ReLU}(x)\) as the activation function. We transition to using ReLU as the activation function when the loss function does not decay rapidly. This transition characterizes the homotopy part of our method. Conversely, the HRTA method with \(s=1.5\) signifies that we begin with ReLU as the activation function and switch to \(\sigma_{\frac{3}{2}}(x):=-\frac{1}{2}\text{Id}(x)+\frac{3}{2}\text{ReLU}(x)\) as the activation function when the loss function does not decay quickly. This transition represents the relaxation part of our method. Based on these experiments, it becomes evident that both cases, \(s=0.5\) and \(s=1.5\), outperform the traditional method in terms of achieving lower error rates. The primary driver behind this improvement is the provision of two opportunities for the loss function to decrease. While the rate of decay in each step may not be faster than that of the traditional method, as observed in the case of \(s=0.5\) for approximating \(\sin(2\pi x)\) and \(\sin(2\pi(x_{1}+x_{2}+x_{3}))\), it is worth noting that the smallest eigenvalue of the training dynamics is smaller than that of the traditional method when \(s=0.5\), as demonstrated in Theorem 1. This is the reason why, in the first step, it decays more slowly than the traditional method. The homotopy case with \(s=0.5\) can be effectively utilized when we aim to train the ReLU neural network as the final configuration while obtaining a favorable initial value and achieving smaller errors than in the previous stages. Conversely, the relaxation case with \(s=1.5\) is valuable when we initially train the ReLU neural network, but the loss function does not decrease as expected. In this situation, changing the activation function allows the error to start decreasing again without affecting the approximation quality. The advantage of both of these cases lies in their provision of two opportunities and an extended duration for the loss to decrease, which aligns with the results demonstrated in Theorem 2. This approach ensures robust training and improved convergence behavior in various scenarios. Furthermore, our method demonstrates its versatility as it is not limited to very overparameterized cases or two-layer neural networks. We have shown its effectiveness even in the context of three-layer neural networks and other numbers of nodes (i.e., widths of 200, 400, and 1000). The error rates are summarized in Table 1, and our method consistently outperforms traditional methods. ### Solving partial differential equation by HRTA In the second part, our goal is to solve the Poisson equation as follows: \[\begin{cases}-\Delta u(x_{1},x_{2})=f(x_{1},x_{2})&\text{in}\;\Omega,\\ \frac{\partial u}{\partial\nu}=0&\text{on}\;\partial\Omega,\end{cases} \tag{19}\] using HRTA. Here \(\Omega\) is the domain \((0,1)^{2}\) and \(f(x_{1},x_{2})=\pi^{2}\left[\cos(\pi x_{1})+\cos(\pi x_{2})\right]\). The exact solution to this equation is \(u^{*}(x_{1},x_{2})=\cos(\pi x_{1})+\cos(\pi x_{2})\). 
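As a quick sanity check (our own verification, not part of the paper), one can confirm symbolically that \(u^{*}\) satisfies both the Poisson equation and the homogeneous Neumann boundary condition in Eq. (19):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
u = sp.cos(sp.pi * x1) + sp.cos(sp.pi * x2)
f = sp.pi**2 * (sp.cos(sp.pi * x1) + sp.cos(sp.pi * x2))

# -Delta u should equal f on the domain.
residual = sp.simplify(-sp.diff(u, x1, 2) - sp.diff(u, x2, 2) - f)
print(residual)  # prints 0

# du/dx1 = -pi*sin(pi*x1) vanishes on the faces x1 = 0 and x1 = 1
# (and by symmetry du/dx2 vanishes on x2 = 0, 1), so du/dnu = 0 on the boundary.
print(sp.diff(u, x1).subs(x1, 0), sp.diff(u, x1).subs(x1, 1))  # prints 0 0
```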
We still performed two iterations with \(400\) sample points and employed \(1000\) nodes. However, we used the activation function \(\bar{\sigma}_{\frac{1}{2}}(x)=\frac{1}{2}\text{Id}(x)+\frac{1}{2}\bar{\sigma}(x)\), where \(\bar{\sigma}(x)=\frac{1}{2}\text{ReLU}^{2}(x)\), which is smoother. Here we consider solving partial differential equations by the deep Ritz method [29]. \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline & \multicolumn{2}{c|}{Single Layer} & \multicolumn{2}{c|}{Multi Layer} \\ \hline & Adam & HRTA & Adam & HRTA \\ \hline 200 & \(2.02\times 10^{-5}\) & \(8.72\times 10^{-7}\) (s=1.5) & \(2.52\times 10^{-7}\) & \(9.47\times 10^{-8}\) (s=0.5) \\ \hline 400 & \(6.54\times 10^{-6}\) & \(8.55\times 10^{-7}\) (s=0.5) & \(4.83\times 10^{-8}\) & \(1.8\times 10^{-8}\) (s=1.5) \\ \hline 1000 & \(1.55\times 10^{-6}\) & \(1.88\times 10^{-7}\) (s=0.5) & \(2.20\times 10^{-7}\) & \(3.52\times 10^{-9}\) (s=1.5) \\ \hline \end{tabular} \end{table} Table 1: Comparisons between HRTA and Adam methods on NNs of different widths. In the deep Ritz method, the loss function for Eq. (19) can be written as \[\mathcal{E}_{D}(\mathbf{\theta}):=\frac{1}{2}\int_{\Omega}|\nabla\phi(\mathbf{x};\mathbf{\theta})|^{2}\mathrm{d}\mathbf{x}+\frac{1}{2}\left(\int_{\Omega}\phi(\mathbf{x};\mathbf{\theta})\mathrm{d}\mathbf{x}\right)^{2}-\int_{\Omega}f\phi(\mathbf{x};\mathbf{\theta})\mathrm{d}\mathbf{x},\] where \(\mathbf{\theta}\) represents all the parameters in the neural network. Here, \(\Omega\) denotes the domain \((0,1)^{2}\). Proposition 1 in [17] establishes the equivalence between the loss function \(\mathcal{E}_{D}(\mathbf{\theta})\) and \(\|\phi(\mathbf{x};\mathbf{\theta})-u^{*}(\mathbf{x})\|_{H^{1}((0,1)^{2})}\), where \(u^{*}(\mathbf{x})\) denotes the exact solution of the PDE, which is \(u^{*}(x_{1},x_{2})=\cos(\pi x_{1})+\cos(\pi x_{2})\), and \(\|f\|_{H^{1}((0,1)^{2})}:=\left(\sum_{0\leq|\alpha|\leq 1}\|D^{\mathbf{\alpha}}f\|_{L^{2}((0,1)^{2})}^{2}\right)^{1/2}\). Therefore, we can use supervised learning via Sobolev training [5; 23; 25] to solve the Poisson equation efficiently and accurately. Our experiments reveal that HRTA remains effective even when \(s=1.5\), as demonstrated in Figures 4 and 5. ## 5 Conclusion In summary, this paper introduces the Homotopy Relaxation Training Algorithm (HRTA), a method designed to expedite gradient descent when it encounters slowdowns. HRTA achieves this by relaxing homotopy activation functions to reshape the energy landscape of loss functions during slow convergence. Specifically, we adapt activation functions to boost the minimum eigenvalue of the gradient descent kernel, thereby accelerating convergence and increasing the likelihood of a positive minimum eigenvalue at each iteration. This paper establishes the theoretical basis for our algorithm, focusing on hyperparameter settings, while leaving the analysis in a more generalized context for future research. ## Acknowledgments YY and WH are supported by the National Institute of General Medical Sciences through grant 1R35GM146894.
2308.16470
Domain-adaptive Message Passing Graph Neural Network
Cross-network node classification (CNNC), which aims to classify nodes in a label-deficient target network by transferring the knowledge from a source network with abundant labels, has drawn increasing attention recently. To address CNNC, we propose a domain-adaptive message passing graph neural network (DM-GNN), which integrates graph neural network (GNN) with conditional adversarial domain adaptation. DM-GNN is capable of learning informative representations for node classification that are also transferable across networks. Firstly, a GNN encoder is constructed by dual feature extractors to separate ego-embedding learning from neighbor-embedding learning so as to jointly capture commonality and discrimination between connected nodes. Secondly, a label propagation node classifier is proposed to refine each node's label prediction by combining its own prediction and its neighbors' predictions. In addition, a label-aware propagation scheme is devised for the labeled source network to promote intra-class propagation while avoiding inter-class propagation, thus yielding label-discriminative source embeddings. Thirdly, conditional adversarial domain adaptation is performed to take the neighborhood-refined class-label information into account during adversarial domain adaptation, so that the class-conditional distributions across networks can be better matched. Comparisons with eleven state-of-the-art methods demonstrate the effectiveness of the proposed DM-GNN.
Xiao Shen, Shirui Pan, Kup-Sze Choi, Xi Zhou
2023-08-31T05:26:08Z
http://arxiv.org/abs/2308.16470v2
# Domain-adaptive Message Passing Graph Neural Network ###### Abstract Cross-network node classification (CNNC), which aims to classify nodes in a label-deficient target network by transferring the knowledge from a source network with abundant labels, has drawn increasing attention recently. To address CNNC, we propose a domain-adaptive message passing graph neural network (DM-GNN), which integrates graph neural network (GNN) with conditional adversarial domain adaptation. DM-GNN is capable of learning informative representations for node classification that are also transferable across networks. Firstly, a GNN encoder is constructed by dual feature extractors to separate ego-embedding learning from neighbor-embedding learning so as to jointly capture commonality and discrimination between connected nodes. Secondly, a label propagation node classifier is proposed to refine each node's label prediction by combining its own prediction and its neighbors' predictions. In addition, a label-aware propagation scheme is devised for the labeled source network to promote intra-class propagation while avoiding inter-class propagation, thus yielding label-discriminative source embeddings. Thirdly, conditional adversarial domain adaptation is performed to take the neighborhood-refined class-label information into account during adversarial domain adaptation, so that the class-conditional distributions across networks can be better matched. Comparisons with eleven state-of-the-art methods demonstrate the effectiveness of the proposed DM-GNN. ## 1 Introduction Node classification is a machine learning task which aims to accurately classify unlabeled nodes in a network given a subset of labeled nodes (Bhagat et al., 2011). The existing node classification tasks mostly focus on a single-network scenario where the training nodes and testing nodes are all sampled from one single network (Kipf and Welling, 2017; Liang et al., 2018; Pan et al., 2016; Velickovic et al., 2018; Yang et al., 2016; Zhang et al., 2018). However, in practice, it is often resource- and time-intensive to manually gather node labels for each newly formed target network, while abundant node labels might have already been accessible in some auxiliary networks. Cross-network node classification (CNNC) describes the problem of node classification across different networks with different distributions by transferring the knowledge from a relevant labeled source network to accurately classify unlabeled nodes in a target network (Dai et al., 2023; Shen et al., 2020; Shen et al., 2021; M. Wu et al., 2020). CNNC is a valuable technique for a range of real-world applications. For example, in cross-network influence maximization, in order to maximize the influence in a new target network, CNNC can transfer the knowledge learned from a smaller source network, where all nodes have annotated labels reflecting their influence, to assist in the selection of the most influential nodes for the target network (Shen, Mao, et al., 2020). In cross-domain protein function prediction, given a source protein-protein interaction (PPI) network with abundant labels indicating protein functionalities, CNNC can help predict the protein functions in a new target PPI network (Hu et al., 2020). In cross-system recommendations, it is beneficial to transfer the knowledge learned from an online social network (e.g. Netflix) of users with plenty of social tags indicating their movie interests to predict the movie interests of users in another target online social network (e.g. 
Douban) (Zhu et al., 2018). To succeed in CNNC, one needs to address the challenges from two aspects, namely, the node classification problem and the cross-network problem. For the former, the challenge is how to integrate topological structures, node content information and observed node labels to learn informative representations for subsequent classification. For the latter, given the inherently different distributions of topologies and node content between the source and target networks, the key challenge is how to mitigate the domain discrepancy and yield network-transferable representations. Graph neural networks (GNNs) (Z. Wu et al., 2020), which apply deep neural networks on graph-structured data, have become the state-of-the-art network representation learning method. GNNs typically adopt a message passing paradigm to aggregate the node's own features and the features of its neighbors to learn informative representations, which have demonstrated outstanding performance in semi-supervised node classification (Hamilton et al., 2017; Kipf and Welling, 2017; Liang et al., 2018; Pan et al., 2016; Velickovic et al., 2018; Yang et al., 2016; Zhang et al., 2018). However, since the training nodes and testing nodes in CNNC are sampled from different networks with different distributions, the domain discrepancy across networks impedes direct use of a GNN trained on a source network for a new target network (Pan and Yang, 2010). Thus, the existing GNNs fail to learn network-transferable representations given that they focus only on single-network representation learning without considering domain discrepancy across different networks (Shen, Dai, et al., 2020; Shen et al., 2021). Domain adaptation is an effective approach to mitigate the shift in data distributions across domains (Pan & Yang, 2010). Despite the significant achievements of domain adaptation in computer vision (CV) (Zhang, Li, et al., 2023; Zhang, Li, Tao, et al., 2021; Zhang, Li, Zhang, et al., 2021; Zhang et al., 2022; Zhang, Zhang, et al., 2023) and natural language processing (NLP) (Ramponi & Plank, 2020; Saunders, 2022), applying domain adaptation on graph-structured data is still non-trivial. This is because the conventional domain adaptation algorithms assume that every data sample in each domain is independent and identically distributed (i.i.d.). Graph-structured data clearly violate the i.i.d. assumption, i.e., nodes are not independent but relate to others via complex network connections. Therefore, the traditional domain adaptation algorithms based on the i.i.d. assumption would fail to model complex network structures and consequently perform poorly in CNNC (Shen, Dai, et al., 2020; Shen et al., 2021). However, it has been widely acknowledged that taking full advantage of network relationships between nodes should be indispensable to node classification (Hamilton et al., 2017; Kipf & Welling, 2017; Liang et al., 2018; Pan et al., 2016; Velickovic et al., 2018; Wang et al., 2020; Yang et al., 2016; Zhang et al., 2018). Given that GNNs are limited in handling domain discrepancy across networks and domain adaptation algorithms are limited in modeling complex network structures, utilizing either GNNs or domain adaptation alone cannot effectively tackle the challenges in the CNNC problem. To go beyond such limits, the integration of GNN with domain adaptation has become a promising paradigm to address CNNC (Dai et al., 2023; Shen, Dai, et al., 2020; Shen et al., 2021; M. 
Wu et al., 2020; Zhang et al., 2019). However, the existing CNNC algorithms have two main weaknesses that need to be addressed but remain neglected. Firstly, the algorithms (Dai et al., 2023; M. Wu et al., 2020; Zhang et al., 2019) typically adopt graph convolutional network (GCN) (Kipf & Welling, 2017) or its variants (Li et al., 2019; Zhuang & Ma, 2018) for node embedding learning, while the entanglement of neighborhood aggregation and representation transformation in the GCN-like models can easily lead to over-smoothing (Dong et al., 2021; Klicpera et al., 2019; Li et al., 2018; Liu et al., 2020). In addition, in the GCN-like models, the ego-embedding and neighbor-embedding of each node are mixed at each graph convolution layer, so the discrimination between connected nodes is not preserved, which yields poor classification performance when the connected nodes are dissimilar (Bo et al., 2021; Luan et al., 2020; Zhu et al., 2020). Secondly, the existing CNNC algorithms (Dai et al., 2023; Shen, Dai, et al., 2020; M. Wu et al., 2020; Zhang et al., 2019) mainly focus on matching the marginal distributions across networks but neglect the class-conditional shift across networks, which can significantly hamper the finding of a joint ideal classifier for the source and target data (Long et al., 2018; Luo et al., 2020). To remedy the aforementioned weaknesses, we propose a novel domain-adaptive message passing graph neural network (DM-GNN) to address CNNC. Firstly, to tackle the limitations of the GCN-like models which are widely adopted in existing CNNC algorithms (Dai et al., 2023; M. Wu et al., 2020; Zhang et al., 2019), a GNN encoder is constructed by dual feature extractors with different learnable parameters to learn the ego-embedding and neighbor-embedding of each node respectively. As a result, both the commonality and discrimination between connected nodes can be effectively captured. In addition, unlike the GCN-like models, the neighborhood aggregation and representation transformation have been decoupled by a second feature extractor to alleviate the over-smoothing problem. Secondly, a feature propagation loss is proposed to smooth the embeddings w.r.t. graph topology by encouraging each node's final embedding to become similar to a weighted average of its neighbors' final embeddings. Thirdly, most existing CNNC algorithms (Shen et al., 2020; Shen et al., 2021; M. Wu et al., 2020; Zhang et al., 2019) only focus on neighborhood aggregation at the feature level, and simply adopt a logistic regression or a multi-layer perceptron (MLP) as the node classifier. In contrast, DM-GNN unifies feature propagation with label propagation by proposing a label propagation mechanism in the node classifier to smooth the label prediction among the neighborhood. As a result, the label prediction of each node can be refined by combining its own prediction and the prediction from its neighbors within \(K\) steps. Moreover, for the fully labeled source network, we devise a label-aware propagation scheme to only allow messages (i.e., both features and label predictions) to pass between neighbors with the same label, which promotes intra-class propagation while avoiding inter-class propagation. Thus, label-discriminative source embeddings can be learned by DM-GNN. Lastly, instead of only matching the marginal distributions across networks as in most existing CNNC algorithms (Dai et al., 2023; Shen et al., 2020; M. 
Wu et al., 2020; Zhang et al., 2019), a conditional adversarial domain adaptation approach (Long et al., 2018) is employed by DM-GNN to align the class-conditional distributions between the source and target networks. Specifically, a conditional domain discriminator is adopted to consider both embeddings and label predictions during adversarial domain adaptation. Instead of directly utilizing each sample's own label prediction independently as in (Long et al., 2018), empowered by the label propagation node classifier, the proposed DM-GNN can utilize the neighborhood-refined label prediction during conditional adversarial domain adaptation, which makes more accurate predicted labels available to guide the alignment of the corresponding class-conditional distributions between the two networks. With the label-discriminative source embeddings and class-conditional adversarial domain adaptation, the target nodes would be aligned to the source nodes of the same class, yielding label-discriminative target embeddings. The main contributions are summarized as follows: 1) To tackle the problems with the GCN-like models in most existing CNNC algorithms, a simple and effective GNN encoder with dual feature extractors is employed to separate ego-embedding from neighbor-embedding, and thereby jointly capture the commonality and discrimination between connected nodes. In addition, through decoupling neighborhood aggregation from representation transformation with a second feature extractor, the over-smoothing issue with the GCN-like models can be alleviated. 2) The proposed DM-GNN performs message passing among the neighborhood at both the feature level and the label level by designing a feature propagation loss and a label propagation node classifier respectively. A label-aware propagation mechanism is devised for the fully labeled source network to promote intra-class propagation while avoiding inter-class propagation to guarantee more label-discriminative source embeddings. In addition, a conditional adversarial domain adaptation approach is employed by DM-GNN to consider the neighborhood-refined label prediction during adversarial domain adaptation to align the target embeddings to the source embeddings associated with the same class. 3) Extensive experiments conducted on the benchmark datasets demonstrate that the proposed DM-GNN outperforms eleven state-of-the-art methods on CNNC, and the ablation study verifies the effectiveness of the model designs. ## 2 Related Works ### Network Representation Learning Network embedding is an effective network representation learning method to learn low-dimensional representations for graph-structured data. Pioneering network embedding methods, however, only consider plain network structures (Dai et al., 2018; Dai et al., 2019; Grover and Leskovec, 2016; Liu et al., 2021; Perozzi et al., 2014; Shen and Chung, 2017, 2020; Wang et al., 2020). Other than plain network structures, attributed network embedding algorithms further take node content information and available labels into account to learn more meaningful representations. GNNs are among the top-performing attributed network embedding algorithms, which refine the representation of each node by combining the information from its neighbors. Graph Convolutional Network (GCN) (Kipf and Welling, 2017) is the most representative GNN, which employs a layer-wise propagation rule to iteratively aggregate node features through the edges of the graph. Many follow-up GNNs inspired by GCN have been proposed. 
For example, Velickovic et al. (Velickovic et al., 2018) developed GAT, which leverages an attention mechanism to automatically learn appropriate edge weights during neighborhood aggregation. Hamilton et al. (Hamilton et al., 2017) proposed GraphSAGE, which supports several neighborhood aggregation methods beyond averaging. Chen et al. (Chen et al., 2020) proposed a label-aware GCN to filter negative neighbors with different labels and add new positive neighbors with the same labels during neighborhood aggregation. Yang et al. (Yang et al., 2020) developed a PGCN model which integrates network topology and node content information from a probabilistic perspective. In the GCN-like models, two important operations, i.e., feature transformation and neighborhood aggregation, are typically entangled at each convolutional layer. Recent studies showed that such entanglement is unnecessary and would easily lead to over-smoothing (Dong et al., 2021; Klicpera et al., 2019; Li et al., 2018; Liu et al., 2020). In addition, in the typical GCN design, ego-embedding and neighbor-embedding are mixed through the average or weighted average aggregation method at each convolutional layer (Zhu et al., 2020). The mixing design performs well when the connected nodes have the same label but results in poor performance when the connected nodes have dissimilar features and different labels (Bo et al., 2021; Luan et al., 2020; Zhu et al., 2020). Although GNNs have achieved impressive performance in semi-supervised node classification, they are generally developed under a single-network setting, without considering domain discrepancy across different networks. Therefore, they fail to tackle prediction tasks across different networks effectively (Heimann et al., 2018; Shen et al., 2021). ### Domain Adaptation Domain adaptation is an effective approach to mitigate data distribution shift across domains. Self-training domain adaptation approaches (Chen et al., 2011; Shen et al., 2017; Shen, Mao, et al., 2020) are proposed to iteratively add the target samples with high prediction confidence into the training set. Feature-based deep domain adaptation algorithms have drawn attention recently, and they can be categorized into two families, i.e., those using statistical approaches and those using adversarial learning approaches. The family using statistical approaches reduces domain shift by minimizing statistical metrics which measure the distribution discrepancy between different domains, such as Maximum Mean Discrepancy (MMD) (Gretton et al., 2007) and conditional MMD (Long et al., 2013). Inspired by Generative Adversarial Networks (GAN) (Goodfellow et al., 2014), a family of adversarial domain adaptation algorithms is developed to minimize domain shift by training a generator and a domain discriminator in an adversarial manner (Ganin et al., 2016; Shen et al., 2018; Tzeng et al., 2017). Most existing adversarial domain adaptation algorithms just focus on matching the marginal distributions across domains. However, matching the marginal distributions does not guarantee that the corresponding class-conditional distributions can be well matched. It has been revealed that conditional GANs (Mirza & Osindero, 2014; Odena et al., 2017) which condition the generator and discriminator on the associated label information can better align different distributions. 
Motivated by this finding, several conditional adversarial domain adaptation algorithms taking class-conditional information into account during adversarial domain adaptation have been proposed. For example, Long _et al._ (Long et al., 2018) proposed a CDAN model which conditions adversarial domain adaptation on the discriminative label information predicted by the task classifier. Pei _et al._ (Pei et al., 2018) proposed a MADA approach which employs multiple class-wise domain discriminators to capture the multimodal structures of data distributions across domains. Tang and Jia (Tang & Jia, 2020) proposed a DADA model to align the joint distributions of feature and category across domains, by adopting an integrated classifier which jointly predicts the category and domain labels. The domain adaptation algorithms developed in CV and NLP are typically based on the i.i.d. assumption. Nevertheless, due to the non-i.i.d. nature of graph-structured data, the traditional domain adaptation algorithms are limited in their capabilities to tackle prediction tasks across networks (Dai et al., 2023; Shen, Dai, et al., 2020; Shen et al., 2021; M. Wu et al., 2020). ### Cross-network Node Classification CNNC has drawn increasing attention recently. It aims to transfer the knowledge from a source network with abundant labeled data to accurately classify the nodes in a target network where labels are lacking. Shen _et al._ (Shen et al., 2021) proposed a CDNE model to integrate two stacked auto-encoders (SAEs) with the MMD and conditional MMD metrics. Zhang _et al._ (Zhang et al., 2019) proposed a DANE model which applies GCN and adversarial domain adaptation to learn transferable embeddings in an unsupervised manner. Wu _et al._ (M. Wu et al., 2020) proposed a UDAGCN model which employs adversarial domain adaptation and a dual GCN framework (Zhuang & Ma, 2018) to capture the local consistency and global consistency of the graphs. Shen _et al._ (Shen, Dai, et al., 2020) proposed an ACDNE model to integrate adversarial domain adaptation with dual feature extractors which learn the representation of each node separately from that of its neighbors. Dai _et al._ (Dai et al., 2023) proposed an AdaGCN model to integrate GCN with adversarial domain adaptation. Besides, an AdaIGCN model is further proposed, which employs an improved GCN layer (Li et al., 2019) to flexibly adjust the neighborhood size in feature smoothing. Most existing CNNC algorithms (Shen, Dai, et al., 2020; Shen et al., 2021; M. Wu et al., 2020; Zhang et al., 2019) focus on feature propagation. The DM-GNN proposed in this paper unifies feature propagation and label propagation in cross-network embedding, which incorporates a label propagation mechanism in the node classifier to refine each node's label prediction by combining the label predictions from its neighbors. Moreover, for the labeled source network, a label-aware propagation mechanism is devised to promote intra-class propagation while avoiding inter-class propagation to produce more label-discriminative source embeddings. In addition, the adversarial domain adaptation approaches employed in most existing CNNC algorithms (Dai et al., 2023; Shen, Dai, et al., 2020; M. Wu et al., 2020; Zhang et al., 2019) only focus on matching the marginal distributions across networks, which cannot guarantee that the corresponding class-conditional distributions across networks are well aligned. 
In contrast, the proposed DM-GNN employs the conditional adversarial domain adaptation approach to condition the GNN and the domain discriminator on the neighborhood-refined class-label information. As a result, nodes from different networks but having the same class-label would have similar representations. ## 3 Proposed Model In this section, we describe in detail the three components of the proposed DM-GNN, namely, dual feature extractors, a label propagation node classifier and a conditional domain discriminator. The model architecture of DM-GNN is shown in Fig. 1. ### Notations Let \(\mathcal{G}=(V,E,\mathbf{A},\mathbf{X},\mathbf{Y})\) denote a network with a set of nodes \(V\), a set of edges \(E\), a topological proximity matrix \(\mathbf{A}\in\mathbb{R}^{\mathcal{N}\times\mathcal{N}}\), a node attribute matrix \(\mathbf{X}\in\mathbb{R}^{\mathcal{N}\times\mathcal{W}}\), and a node label matrix \(\mathbf{Y}\in\mathbb{R}^{\mathcal{N}\times\mathcal{C}}\), where \(\mathcal{N},\mathcal{W}\) and \(\mathcal{C}\) are the number of nodes, node attributes, and node labels in \(\mathcal{G}\) respectively. Each node \(\nu_{i}\in V\) is associated with a topological proximity vector \(\mathbf{a}_{i}\), an attribute vector \(\mathbf{x}_{i}\), and a label vector \(\mathbf{y}_{i}\). Specifically, \(\mathbf{y}_{i}\) is a one-hot vector if \(\nu_{i}\) is a labeled node; otherwise, \(\mathbf{y}_{i}\) is a zero vector. The superscripts \(s\) and \(t\) are employed to denote the source network and the target network. In this work, we focus on the unsupervised CNNC problem, i.e., all nodes in the source network \(\mathcal{G}^{s}=(V^{s},E^{s},\mathbf{A}^{s},\mathbf{X}^{s},\mathbf{Y}^{s})\) have observed labels and all nodes in the target network \(\mathcal{G}^{t}=(V^{t},E^{t},\mathbf{A}^{t},\mathbf{X}^{t})\) are unlabeled. Note that \(\mathcal{G}^{s}\) is structurally very different from \(\mathcal{G}^{t}\), and the attribute distributions of \(\mathcal{G}^{s}\) and \(\mathcal{G}^{t}\) are also very different. The goal of CNNC is to learn appropriate cross-network embeddings based upon which a node classifier trained on the source network can be applied to accurately classify nodes for the target network. For clarity, Table 1 summarizes the frequently used notations in this article. Fig. 1: Model architecture of DM-GNN. A GNN encoder with dual feature extractors is employed to separate ego-embedding learning from neighbor-embedding learning. A label propagation node classifier is employed to refine label prediction. A conditional domain discriminator taking embeddings and predicted labels as the input is employed to compete against the GNN encoder by inserting a gradient reversal layer during back-propagation. ### Graph Neural Network Encoder with Dual Feature Extractors GCN-like models have been widely adopted by the state-of-the-art CNNC algorithms (Dai et al., 2023; M. Wu et al., 2020; Zhang et al., 2019) for network representation learning. In these models, ego-embedding and neighbor-embedding are typically mixed at each convolutional layer. Then, the same learnable parameters are employed on such mixed embeddings for representation transformation. It has been revealed in (Bo et al., 2021; Luan et al., 2020; Zhu et al., 2020) that the GCN-like models fail to preserve the discrimination between connected nodes since the layer-wise mixing strategy always forces connected nodes to have similar representations, even though they possess dissimilar attributes. 
In addition, recent literature (Dong et al., 2021; Klicpera et al., 2019; Liu et al., 2020) has also revealed that the entanglement of neighborhood aggregation and representation transformation can easily lead to over-smoothing, eventually making nodes from different classes have indistinguishable representations. To go beyond the limits of the typical GCN design, we construct a novel GNN encoder with dual feature extractors, which is distinct from the GCN-like models. Instead of mixing each node's embedding with its neighbors' embeddings, dual feature extractors with different learnable parameters are adopted to separate ego-embedding learning from neighbor-embedding learning. The two feature extractors are both constructed by an MLP with the same layer setting; however, their inputs are different. The first feature extractor (FE1) is employed to learn ego-embedding, hinging on each node's own attributes, as: \[\mathbf{h}_{i}^{\mathcal{F}_{1}(l)}=\text{ReLU}\big(\mathbf{h}_{i}^{\mathcal{F}_{1}(l-1)}\mathbf{W}^{\mathcal{F}_{1}(l)}+\mathbf{b}^{\mathcal{F}_{1}(l)}\big),1\leq l\leq l_{\mathcal{F}} \tag{1}\] where \(\mathbf{h}_{i}^{\mathcal{F}_{1}(0)}=\mathbf{x}_{i}\), and \(\mathbf{h}_{i}^{\mathcal{F}_{1}(l)}\) represents the \(l\)-th layer ego-embedding of \(\nu_{i}\). \(\mathbf{W}^{\mathcal{F}_{1}(l)}\) and \(\mathbf{b}^{\mathcal{F}_{1}(l)}\) are the trainable parameters associated with the \(l\)-th layer of FE1, and \(l_{\mathcal{F}}\) is the number of hidden layers in FE1. In addition, we perform feature propagation to aggregate the neighbor attributes of each node as follows: \[\mathbf{n}_{i}=\sum_{j=1,j\neq i}^{\mathcal{N}}\frac{a_{ij}}{\sum_{g=1,g\neq i}^{\mathcal{N}}a_{ig}}\mathbf{x}_{j} \tag{2}\] where \(\mathbf{n}_{i}\in\mathbb{R}^{1\times\mathcal{W}}\) is the aggregated neighbor attribute vector of \(\nu_{i}\), and \(a_{ij}\) is the topological proximity between \(\nu_{i}\) and \(\nu_{j}\), which is measured by the positive pointwise mutual information (PPMI) metric (Levy & Goldberg, 2014). \begin{table} \begin{tabular}{c c} \hline \hline Notations & Descriptions \\ \hline \(\mathcal{G}\) & A network \\ \(\mathcal{N}\) & Number of nodes in \(\mathcal{G}\) \\ \(\mathcal{W}\) & Number of attributes in \(\mathcal{G}\) \\ \(\mathcal{C}\) & Number of labels in \(\mathcal{G}\) \\ \(\nu_{i}\) & \(i\)-th node in \(\mathcal{G}\) \\ \(\mathbf{a}_{i}\) & Topological proximity vector of \(\nu_{i}\) \\ \(\mathbf{x}_{i}\) & Attribute vector of \(\nu_{i}\) \\ \(\mathbf{n}_{i}\) & Neighbor attribute vector of \(\nu_{i}\) \\ \(\mathbf{y}_{i}\) & Label vector of \(\nu_{i}\) \\ \(\mathbf{h}_{i}^{\mathcal{F}_{1}(l_{\mathcal{F}})}\) & Ego-embedding of \(\nu_{i}\) learned by FE1 \\ \(\mathbf{h}_{i}^{\mathcal{F}_{2}(l_{\mathcal{F}})}\) & Neighbor-embedding of \(\nu_{i}\) learned by FE2 \\ \(\mathbf{e}_{i}\) & Final embedding of \(\nu_{i}\) \\ \(\mathbbm{d}\) & Embedding dimensionality \\ \hline \hline \end{tabular} \end{table} Table 1: Frequently used notations. The PPMI metric was originally developed to measure the similarity between words in NLP. It was extended to measure high-order proximities between nodes in random-walk-based network embedding algorithms (Cao et al., 2016; Perozzi et al., 2014; Shen & Chung, 2017). It has been widely acknowledged that the PPMI metric, which captures high-order proximities, is effective for node classification (Cao et al., 2016; Perozzi et al., 2014; Shen & Chung, 2017); thus, it has been widely adopted by the state-of-the-art CNNC algorithms (Dai et al., 2023; Shen, Dai, et al., 2020; Shen et al., 2021; M. Wu et al., 2020) to measure topological proximities. Firstly, we follow (Shen & Chung, 2017) to compute an aggregated transition probability matrix \(\mathbf{\mathcal{T}}\) within \(K\) steps, by assigning lower weights to more distant neighbors, as \(\mathbf{\mathcal{T}}=\sum_{k=1}^{K}\mathbf{\mathcal{T}}^{(k)}/k\), where \(\mathbf{\mathcal{T}}^{(k)}\) is the \(k\)-step transition probability matrix. Then, the topological proximity between \(\nu_{i}\) and \(\nu_{j}\) is measured by PPMI (Levy & Goldberg, 2014) as: \[a_{ij}=\begin{cases}\max\left(\log\left(\dfrac{\mathcal{T}_{ij}/\sum_{g=1}^{\mathcal{N}}\mathcal{T}_{ig}}{\sum_{g=1}^{\mathcal{N}}\big(\mathcal{T}_{gj}/\sum_{t=1}^{\mathcal{N}}\mathcal{T}_{gt}\big)/\mathcal{N}}\right),0\right),&\text{if }\mathcal{T}_{ij}>0\\ 0,&\text{if }\mathcal{T}_{ij}=0\end{cases} \tag{3}\] where \(\mathcal{T}_{ij}\) is the aggregated transition probability from \(\nu_{i}\) to \(\nu_{j}\) within \(K\) steps. Note that \(a_{ij}>0\) if \(\nu_{j}\) can reach \(\nu_{i}\) within \(K\) steps in \(\mathcal{G}\); otherwise, \(a_{ij}=0\).
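As a concrete illustration of this proximity measure, the short NumPy sketch below computes the \(K\)-step aggregated transition matrix and the PPMI proximities of Eq. (3). It is our own simplified rendering under the definitions above, not the authors' released code; the function name `ppmi_proximity` and the toy adjacency matrix are illustrative assumptions.

```python
# Minimal sketch of the PPMI topological proximities of Eq. (3),
# computed from the K-step aggregated transition matrix.
import numpy as np

def ppmi_proximity(adj: np.ndarray, K: int = 3) -> np.ndarray:
    # Row-normalized one-step transition probabilities.
    t1 = adj / np.maximum(adj.sum(axis=1, keepdims=True), 1e-12)
    # Aggregate k-step transitions with decaying weights: T = sum_k T^(k)/k.
    tk, T = np.eye(len(adj)), np.zeros_like(t1)
    for k in range(1, K + 1):
        tk = tk @ t1
        T += tk / k
    # PPMI: row-normalized probability over the average column probability,
    # log-transformed and clipped at zero.
    row = T / np.maximum(T.sum(axis=1, keepdims=True), 1e-12)
    col_marginal = row.sum(axis=0) / len(adj)
    a = np.log(np.maximum(row / np.maximum(col_marginal, 1e-12), 1e-12))
    a = np.maximum(a, 0.0)
    a[T == 0] = 0.0  # a_ij = 0 when v_j cannot reach v_i within K steps
    return a

# Toy usage on a random 6-node adjacency matrix.
adj = (np.random.rand(6, 6) > 0.5).astype(float)
np.fill_diagonal(adj, 0.0)
A = ppmi_proximity(adj, K=3)
```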
The second feature extractor (FE2) is employed to learn neighbor-embedding, hinging on the aggregated neighbor attributes, as: \[\mathbf{h}_{i}^{\mathcal{F}_{2}(l)}=\text{ReLU}\big(\mathbf{h}_{i}^{\mathcal{F}_{2}(l-1)}\mathbf{W}^{\mathcal{F}_{2}(l)}+\mathbf{b}^{\mathcal{F}_{2}(l)}\big),1\leq l\leq l_{\mathcal{F}} \tag{4}\] where \(\mathbf{h}_{i}^{\mathcal{F}_{2}(0)}=\mathbf{n}_{i}\), and \(\mathbf{h}_{i}^{\mathcal{F}_{2}(l)}\) represents the \(l\)-th layer neighbor-embedding of \(\nu_{i}\). \(\mathbf{W}^{\mathcal{F}_{2}(l)}\) and \(\mathbf{b}^{\mathcal{F}_{2}(l)}\) are the trainable parameters at the \(l\)-th layer of FE2. Note that in FE2, the neighborhood aggregation in Eq. (2) has been decoupled from the representation transformation in Eq. (4), which enables the number of propagation steps (\(K\)) to be independent of the number of hidden layers (\(l_{\mathcal{F}}\)) for representation transformation. It has been shown in (Dong et al., 2021; Klicpera et al., 2019; Liu et al., 2020) that such a decoupling design can remedy over-smoothing in the GCN-like models. Next, we concatenate the deepest ego-embedding and the deepest neighbor-embedding, and feed them to a single-layer perceptron for non-linear transformation, as follows: \[\mathbf{e}_{i}=\text{ReLU}\big(\big[\mathbf{h}_{i}^{\mathcal{F}_{1}(l_{\mathcal{F}})}\big\|\mathbf{h}_{i}^{\mathcal{F}_{2}(l_{\mathcal{F}})}\big]\mathbf{W}_{c}+\mathbf{b}_{c}\big) \tag{5}\] where \(\mathbf{e}_{i}\in\mathbb{R}^{1\times\mathbbm{d}}\) is the final embedding of \(\nu_{i}\), \(\mathbbm{d}\) is the embedding dimensionality, and \(\mathbf{W}_{c}\) and \(\mathbf{b}_{c}\) are the trainable parameters. Note that in contrast to the GCN-like models, we employ different trainable parameters to learn ego-embedding and neighbor-embedding separately. With the help of this separation design, the final embedding learned by DM-GNN is capable of capturing both commonality and discrimination between connected nodes: (i) When two connected nodes \(\nu_{i}\) and \(\nu_{j}\) have similar attributes, FE1 generates similar ego-embeddings and FE2 generates similar neighbor-embeddings.
By combining similar ego-embeddings and similar neighbor-embeddings, \(\nu_{i}\) and \(\nu_{j}\) would have similar final embeddings, which effectively capture the commonality between connected nodes. (ii) When two connected nodes \(\nu_{i}\) and \(\nu_{j}\) have dissimilar attributes, FE2 still generates similar neighbor-embeddings due to the network connection, whereas FE1 generates dissimilar ego-embeddings due to the dissimilar attributes; thus, the discrimination between connected nodes can be captured. Then, by concatenating the similar neighbor-embeddings and dissimilar ego-embeddings, the final embeddings of \(\nu_{i}\) and \(\nu_{j}\) would not become prohibitively similar. We essentially conduct feature propagation in Eq. (2), i.e., propagating the input attributes of each node to its neighbors. Apart from propagating features during the input process, we further introduce a feature propagation loss to ensure that the output embeddings also satisfy the feature propagation objective, as: \[\frac{1}{\mathcal{N}}\sum_{i=1}^{\mathcal{N}}\left\|\mathbf{e}_{i}-\sum_{j=1,j\neq i}^{\mathcal{N}}\frac{a_{ij}}{\sum_{g=1,g\neq i}^{\mathcal{N}}a_{ig}}\mathbf{e}_{j}\right\|^{2} \tag{6}\] Minimizing Eq. (6) makes each node's final embedding similar to a weighted average of its neighbors' final embeddings. Nevertheless, the aggregation in Eq. (6) does not consider class-label information. When a node has connections to differently labeled nodes, aggregating the information from such differently labeled neighbors would introduce noise and degrade the node classification performance. To avoid such noise, for the source network where all nodes have observed labels, we devise a label-aware feature propagation loss to allow features to propagate through the same-labeled neighbors only, as: \[\frac{1}{\mathcal{N}^{s}}\sum_{i=1}^{\mathcal{N}^{s}}\left\|\mathbf{e}_{i}^{s}-\sum_{j=1,j\neq i}^{\mathcal{N}^{s}}\frac{a_{ij}^{s}\sigma_{ij}^{s}}{\sum_{g=1,g\neq i}^{\mathcal{N}^{s}}a_{ig}^{s}\sigma_{ig}^{s}}\mathbf{e}_{j}^{s}\right\|^{2} \tag{7}\] where \(\sigma_{ij}^{s}\) is a label indicator showing whether \(\nu_{i}^{s}\) and \(\nu_{j}^{s}\) share common labels or not, i.e., \(\sigma_{ij}^{s}=1\) if \(\nu_{i}^{s}\) and \(\nu_{j}^{s}\) share at least one common label; otherwise, \(\sigma_{ij}^{s}=0\). Note that \(a_{ij}^{s}\sigma_{ij}^{s}>0\) if and only if \(\nu_{j}^{s}\) is a same-labeled neighbor of \(\nu_{i}^{s}\) within \(K\) steps; otherwise, \(a_{ij}^{s}\sigma_{ij}^{s}=0\). Minimizing Eq. (7) constrains each source node to have a similar embedding to its same-labeled neighbors, rather than to all of its neighbors. On the other hand, for the target network which has no label information, we use the feature propagation loss in Eq. (6). The total feature propagation loss of DM-GNN is defined as: \[\mathcal{L}_{\mathcal{F}}=\frac{1}{\mathcal{N}^{s}}\sum_{i=1}^{\mathcal{N}^{s}}\left\|\mathbf{e}_{i}^{s}-\sum_{j=1,j\neq i}^{\mathcal{N}^{s}}\frac{a_{ij}^{s}\sigma_{ij}^{s}}{\sum_{g=1,g\neq i}^{\mathcal{N}^{s}}a_{ig}^{s}\sigma_{ig}^{s}}\mathbf{e}_{j}^{s}\right\|^{2}+\frac{1}{\mathcal{N}^{t}}\sum_{i=1}^{\mathcal{N}^{t}}\left\|\mathbf{e}_{i}^{t}-\sum_{j=1,j\neq i}^{\mathcal{N}^{t}}\frac{a_{ij}^{t}}{\sum_{g=1,g\neq i}^{\mathcal{N}^{t}}a_{ig}^{t}}\mathbf{e}_{j}^{t}\right\|^{2} \tag{8}\] where \(\mathcal{N}^{s}\) and \(\mathcal{N}^{t}\) are the number of nodes in \(\mathcal{G}^{s}\) and \(\mathcal{G}^{t}\).
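To make the encoder design concrete, the following sketch is a minimal PyTorch illustration of the dual feature extractors in Eqs. (1)-(5) and the feature propagation losses in Eqs. (6)-(7). It is our own simplified rendering rather than the released implementation: the names `DualFeatureExtractor`, `feature_propagation_loss` and `label_aware_prop_matrix`, as well as the toy data, are illustrative assumptions, and the PPMI proximity matrix is assumed to be precomputed as a dense matrix with a zero diagonal.

```python
# Minimal sketch of the dual-extractor GNN encoder and the feature
# propagation losses of DM-GNN; names and toy data are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualFeatureExtractor(nn.Module):
    def __init__(self, num_attrs, hidden_dims=(512, 128), emb_dim=128):
        super().__init__()
        def mlp():
            layers, in_dim = [], num_attrs
            for h in hidden_dims:
                layers += [nn.Linear(in_dim, h), nn.ReLU()]
                in_dim = h
            return nn.Sequential(*layers)
        self.fe1 = mlp()  # FE1: ego-embedding from a node's own attributes, Eq. (1)
        self.fe2 = mlp()  # FE2: neighbor-embedding from aggregated attributes, Eq. (4)
        self.fuse = nn.Linear(2 * hidden_dims[-1], emb_dim)  # Eq. (5)

    def forward(self, x, adj):
        # adj: precomputed PPMI proximity matrix (N, N) with zero diagonal.
        p = adj / adj.sum(dim=1, keepdim=True).clamp(min=1e-12)  # row-normalize
        n = p @ x                      # aggregated neighbor attributes, Eq. (2)
        ego, nbr = self.fe1(x), self.fe2(n)
        e = F.relu(self.fuse(torch.cat([ego, nbr], dim=1)))  # final embedding
        return e, p

def feature_propagation_loss(e, p):
    # Eq. (6): each embedding should stay close to the weighted
    # average of its neighbors' embeddings.
    return ((e - p @ e) ** 2).sum(dim=1).mean()

def label_aware_prop_matrix(adj, y_onehot):
    # Eq. (7)/(10) weighting for the labeled source network: keep only
    # edges between same-labeled nodes before row normalization.
    same = (y_onehot @ y_onehot.t() > 0).float()
    masked = adj * same
    return masked / masked.sum(dim=1, keepdim=True).clamp(min=1e-12)

# Toy usage: 10 nodes with 20 attributes and 3 classes.
x = torch.rand(10, 20)
adj = torch.rand(10, 10)
adj.fill_diagonal_(0.0)
y = F.one_hot(torch.randint(0, 3, (10,)), num_classes=3).float()
enc = DualFeatureExtractor(num_attrs=20)
e, p = enc(x, adj)
loss_target = feature_propagation_loss(e, p)                                # Eq. (6)
loss_source = feature_propagation_loss(e, label_aware_prop_matrix(adj, y))  # Eq. (7)
```

Keeping `fe1` and `fe2` as separate parameter sets is precisely what lets the encoder retain dissimilar ego-embeddings for connected nodes with dissimilar attributes, which a shared-parameter GCN layer would smooth away.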
### Label Propagation Node Classifier Most existing CNNC algorithms (Shen, Dai, et al., 2020; Shen et al., 2021; M. Wu et al., 2020; Zhang et al., 2019) only focus on feature propagation, i.e., smoothing the features of neighboring nodes. The prior label propagation algorithms (Bengio et al., 2006; Zhou et al., 2004) aim to propagate label probability distributions through the edges of the graph. Both GNNs and label propagation algorithms can be viewed as message passing algorithms on the graph, with the goal of feature smoothing and label smoothing over the neighborhood, respectively. In DM-GNN, we propose to unify feature propagation and label propagation in cross-network embedding. Specifically, we incorporate label propagation in the node classifier by combining each node's own prediction with the predictions from its neighbors as follows: \[\hat{\mathbf{y}}_{i}=\phi\left(\mathbf{e}_{i}\mathbf{W}_{y}+\mathbf{b}_{y}+\sum_{j=1,j\neq i}^{\mathcal{N}}\frac{a_{ij}}{\sum_{g=1,g\neq i}^{\mathcal{N}}a_{ig}}\big(\mathbf{e}_{j}\mathbf{W}_{y}+\mathbf{b}_{y}\big)\right) \tag{9}\] where \(\hat{\mathbf{y}}_{i}\in\mathbb{R}^{1\times\mathcal{C}}\) is the predicted label probability vector of \(\nu_{i}\), \(\mathbf{W}_{y}\) and \(\mathbf{b}_{y}\) are trainable parameters, and \(\mathbf{e}_{i}\mathbf{W}_{y}+\mathbf{b}_{y}\) is a vector of label logits (raw unnormalized label predictions) of \(\nu_{i}\). \(\phi(\cdot)\) can be a Softmax or Sigmoid function. In Eq. (9), the label logits of the neighbors which can reach \(\nu_{i}\) within \(K\) steps in \(\mathcal{G}\) are aggregated. In addition, during label aggregation, higher weights are assigned to more closely connected neighbors (i.e., those associated with higher topological proximities). Besides, similar to the label-aware feature propagation in Eq. (7), we modify Eq. (9) by incorporating the label indicator for the fully labeled source network as follows: \[\hat{\mathbf{y}}_{i}^{s}=\phi\left(\mathbf{e}_{i}^{s}\mathbf{W}_{y}+\mathbf{b}_{y}+\sum_{j=1,j\neq i}^{\mathcal{N}^{s}}\frac{a_{ij}^{s}\sigma_{ij}^{s}}{\sum_{g=1,g\neq i}^{\mathcal{N}^{s}}a_{ig}^{s}\sigma_{ig}^{s}}\big(\mathbf{e}_{j}^{s}\mathbf{W}_{y}+\mathbf{b}_{y}\big)\right) \tag{10}\] which only allows label propagation from the same-labeled neighbors of \(\nu_{i}^{s}\) and avoids label propagation from the neighbors having different labels from \(\nu_{i}^{s}\). The cross-entropy node classification loss is defined as: \[\mathcal{L}_{\mathcal{Y}}=\text{CrossEntropy}(\hat{\mathbf{y}}_{i}^{s},\mathbf{y}_{i}^{s}) \tag{11}\] where \(\mathbf{y}_{i}^{s}\in\mathbb{R}^{1\times\mathcal{C}}\) is the ground-truth label vector of \(\nu_{i}^{s}\). With the label-aware propagation mechanism in Eq. (10) and by minimizing Eq. (11), more label-discriminative source embeddings can be learned.
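A minimal PyTorch sketch of this classifier is given below; the module name and the toy inputs are our assumptions, and the source-network variant of Eq. (10) is obtained simply by feeding the same-label-masked propagation matrix (as in the encoder sketch above).

```python
# Minimal sketch of the label propagation node classifier of Eq. (9).
import torch
import torch.nn as nn

class LabelPropClassifier(nn.Module):
    def __init__(self, emb_dim, num_classes):
        super().__init__()
        self.lin = nn.Linear(emb_dim, num_classes)  # shared W_y, b_y

    def forward(self, e, p):
        # p: row-normalized (optionally label-masked, Eq. (10)) proximity
        # matrix with zero diagonal; e: node embeddings of shape (N, d).
        logits = self.lin(e)                  # each node's own label logits
        refined = logits + p @ logits         # add neighbors' aggregated logits
        return torch.softmax(refined, dim=1)  # phi = Softmax for multi-class

# Toy usage: 12 nodes, 128-d embeddings, 5 classes.
e = torch.randn(12, 128)
p = torch.rand(12, 12)
p.fill_diagonal_(0.0)
p = p / p.sum(dim=1, keepdim=True)
y_hat = LabelPropClassifier(128, 5)(e, p)   # (12, 5) label probabilities
```

Because the rows of `p` sum to one, `p @ logits` reproduces the weighted neighbor term of Eq. (9) exactly, including the bias that passes through the aggregation.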
### Conditional Adversarial Domain Adaptation Inspired by GAN, a family of adversarial domain adaptation algorithms has been proposed, which demonstrate impressive performance in learning domain-invariant representations. The adversarial domain adaptation approaches (Ganin et al., 2016; Mao et al., 2017; Shen et al., 2018) adopted by the existing CNNC algorithms (Dai et al., 2023; Shen, Dai, et al., 2020; M. Wu et al., 2020; Zhang et al., 2019) focus on matching the marginal distributions of feature representations across networks, but adapting only feature representations cannot guarantee that the corresponding class-conditional distributions will be well matched (Long et al., 2018). To address this issue, in DM-GNN, we propose to condition adversarial domain adaptation on the class-label information. A simple solution is to directly concatenate the feature representation and the label prediction as the input of the conditional domain discriminator (Mirza & Osindero, 2014; Odena et al., 2017). However, the concatenation strategy makes the feature representation and label prediction independent of each other during adversarial domain adaptation (Long et al., 2018). To capture the multiplicative interactions between features and labels, we employ the tensor product between the embedding and the label prediction as the input of the conditional domain discriminator, as in the recent conditional adversarial domain adaptation algorithms (Long et al., 2018; Pei et al., 2018; Wang et al., 2019). Specifically, an MLP is employed to construct the conditional domain discriminator, i.e., \[\mathbf{h}_{i}^{\mathcal{D}(l)}=\text{ReLU}\big(\mathbf{h}_{i}^{\mathcal{D}(l-1)}\mathbf{W}^{\mathcal{D}(l)}+\mathbf{b}^{\mathcal{D}(l)}\big),1\leq l\leq l_{\mathcal{D}} \tag{12}\] where \(\mathbf{W}^{\mathcal{D}(l)}\) and \(\mathbf{b}^{\mathcal{D}(l)}\) are trainable parameters, and \(l_{\mathcal{D}}\) is the number of hidden layers in the conditional domain discriminator. \(\mathbf{h}_{i}^{\mathcal{D}(0)}=\mathbf{e}_{i}\otimes\hat{\mathbf{y}}_{i}\) represents the input of the conditional domain discriminator, where \(\otimes\) denotes the tensor product operation. \(\hat{\mathbf{y}}_{i}\) is the neighborhood-refined label probability vector of \(\nu_{i}\), which is predicted via Eq. (10) if \(\nu_{i}\) is from the source network, or via Eq. (9) if \(\nu_{i}\) is from the target network. Note that unlike the studies in (Long et al., 2018; Pei et al., 2018; Wang et al., 2019) that utilize each sample's own label prediction independently, the neighborhood-refined label prediction proposed by DM-GNN is expected to yield more accurate label predictions during conditional adversarial domain adaptation. By utilizing \(\mathbf{e}_{i}\otimes\hat{\mathbf{y}}_{i}\) as the input to the conditional domain discriminator, the multimodal structures of the data distributions across networks can be well captured. In addition, the joint distribution of the embedding and the class can be captured during adversarial domain adaptation. Next, a Softmax layer is added to predict which network a node comes from: \[\hat{d}_{i}=\text{Softmax}\big(\mathbf{h}_{i}^{\mathcal{D}(l_{\mathcal{D}})}\mathbf{W}^{\mathcal{D}}+\mathbf{b}^{\mathcal{D}}\big) \tag{13}\] where \(\hat{d}_{i}\) is the probability of \(\nu_{i}\) coming from the target network, predicted by the conditional domain discriminator, and \(\mathbf{W}^{\mathcal{D}}\) and \(\mathbf{b}^{\mathcal{D}}\) are trainable parameters. By employing nodes from the two networks for training, the cross-entropy domain classification loss is defined as: \[\mathcal{L}_{\mathcal{D}}=\underset{\nu_{i}\in V^{s}\cup V^{t}}{\text{CrossEntropy}}\big(\hat{d}_{i},d_{i}\big) \tag{14}\] where \(d_{i}\) is the ground-truth domain label of \(\nu_{i}\), with \(d_{i}=1\) if \(\nu_{i}\in V^{t}\) and \(d_{i}=0\) if \(\nu_{i}\in V^{s}\).
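To ground the adversarial component, the sketch below illustrates, under our own naming assumptions, the tensor-product conditioning of Eq. (12) and the gradient reversal that lets the discriminator and encoder compete; the two-logit output with a cross-entropy loss is an equivalent formulation of the Softmax domain probability in Eq. (13). This is a sketch, not the authors' implementation.

```python
# Minimal sketch of the conditional domain discriminator (Eqs. (12)-(14))
# with a gradient reversal layer; names and toy data are illustrative.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lam * grad_out, None  # reverse and scale gradients (GRL)

class ConditionalDomainDiscriminator(nn.Module):
    def __init__(self, emb_dim, num_classes, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(emb_dim * num_classes, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2))          # two-way source/target logits

    def forward(self, e, y_hat, lam):
        # Tensor (outer) product of embedding and predicted label
        # distribution, flattened as the discriminator input, Eq. (12).
        t = torch.einsum('nd,nc->ndc', e, y_hat).flatten(1)
        t = GradReverse.apply(t, lam)      # adversarial signal to the encoder
        return self.mlp(t)

# Toy usage: 6 source + 6 target nodes, 128-d embeddings, 5 classes.
e = torch.randn(12, 128, requires_grad=True)
y_hat = torch.softmax(torch.randn(12, 5), dim=1)  # stand-in label predictions
disc = ConditionalDomainDiscriminator(128, 5)
d_logits = disc(e, y_hat, lam=0.5)
d_true = torch.cat([torch.zeros(6, dtype=torch.long),
                    torch.ones(6, dtype=torch.long)])
domain_loss = nn.CrossEntropyLoss()(d_logits, d_true)  # Eq. (14)
domain_loss.backward()  # gradients reaching e are reversed by the GRL
```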
### Optimization of DM-GNN The DM-GNN is trained in an end-to-end manner, by optimizing the following minimax objective function: \[\underset{\theta_{\mathcal{F}},\theta_{\mathcal{Y}}}{\min}\left\{\mathcal{L}_{\mathcal{Y}}+\mathbf{f}\mathcal{L}_{\mathcal{F}}+\lambda\underset{\theta_{\mathcal{D}}}{\max}\{-\mathcal{L}_{\mathcal{D}}\}\right\} \tag{15}\] where \(\mathbf{f}\) and \(\lambda\) are the weights of the feature propagation loss and the domain classification loss, and \(\theta_{\mathcal{F}},\theta_{\mathcal{Y}},\theta_{\mathcal{D}}\) denote the trainable parameters of the GNN encoder, the label propagation node classifier and the conditional domain discriminator, respectively. Following (Ganin et al., 2016), a Gradient Reversal Layer (GRL) is inserted between the conditional domain discriminator and the GNN encoder to make them compete against each other during back-propagation. On one hand, \(\underset{\theta_{\mathcal{D}}}{\max}\{-\mathcal{L}_{\mathcal{D}}\}\) enables the conditional domain discriminator to accurately distinguish the embeddings from different networks, conditioning on the class-label information predicted by the label propagation node classifier. On the other hand, \(\underset{\theta_{\mathcal{F}}}{\min}\{-\mathcal{L}_{\mathcal{D}}\}\) encourages the GNN encoder to fool the conditional domain discriminator by generating network-indistinguishable embeddings, conditioning on the predicted class-label information. After the training converges, the network-invariant embeddings, which also match well with the class-conditional distributions across networks, can be learned by DM-GNN. Algorithm 1 shows the training process of DM-GNN. It adopts a mini-batch training strategy, where half of the nodes are sampled from \(\mathcal{G}^{s}\) and the other half from \(\mathcal{G}^{t}\). Firstly, in each mini-batch, the cross-network embeddings are learned by the dual feature extractors in Lines 3-5. The feature propagation loss is computed in Line 6. The label probabilities of the source-network nodes are refined by the label-aware propagation node classifier and the node classification loss is computed, as in Line 7. Next, by using the tensor product between the embedding vector and the predicted label probability vector as the input to the conditional domain discriminator, the domain classification loss is computed in Line 8. The DM-GNN is optimized by stochastic gradient descent (SGD) in Line 9. Finally, the optimized parameters are employed to generate cross-network embeddings in Line 12. The labels of the nodes from \(\mathcal{G}^{t}\) are predicted by the label propagation node classifier in Line 13.

```
Input: Source network \(\mathcal{G}^{s}=(V^{s},E^{s},\mathbf{A}^{s},\mathbf{X}^{s},\mathbf{Y}^{s})\), target network \(\mathcal{G}^{t}=(V^{t},E^{t},\mathbf{A}^{t},\mathbf{X}^{t})\), batch size \(b\), feature propagation weight \(\mathbf{f}\), domain adaptation weight \(\lambda\).
1  while not max iteration do:
2    for each mini-batch \(B\) do:
3      For each \(\nu_{i}^{s}\in V^{s}\) and each \(\nu_{j}^{t}\in V^{t}\) in \(B\):
4        Learn ego-embedding in (1), neighbor-embedding in (4), and final embedding in (5);
5      end for
6      Compute \(\mathcal{L}_{\mathcal{F}}\) in (8) based on \(\{(\mathbf{e}_{i}^{s},\mathbf{a}_{i}^{s},\mathbf{y}_{i}^{s})\}_{i=1}^{b/2}\) and \(\{(\mathbf{e}_{j}^{t},\mathbf{a}_{j}^{t})\}_{j=1}^{b/2}\);
7      Refine label probabilities \(\{\hat{\mathbf{y}}_{i}^{s}\}_{i=1}^{b/2}\) in (10), and compute \(\mathcal{L}_{\mathcal{Y}}\) in (11) based on \(\{(\hat{\mathbf{y}}_{i}^{s},\mathbf{y}_{i}^{s})\}_{i=1}^{b/2}\);
8      Compute \(\mathcal{L}_{\mathcal{D}}\) in (14) based on \(\{(\mathbf{e}_{i}^{s},\hat{\mathbf{y}}_{i}^{s},d_{i})\}_{i=1}^{b/2}\) and \(\{(\mathbf{e}_{j}^{t},\hat{\mathbf{y}}_{j}^{t},d_{j})\}_{j=1}^{b/2}\);
9      Update parameters \(\theta_{\mathcal{F}},\theta_{\mathcal{Y}},\theta_{\mathcal{D}}\) to optimize (15) via SGD;
10   end for
11 end while
12 Use optimized \(\theta_{\mathcal{F}}^{*}\) to learn \(\{\mathbf{e}_{i}^{s}\}_{i=1}^{\mathcal{N}^{s}}\) and \(\{\mathbf{e}_{j}^{t}\}_{j=1}^{\mathcal{N}^{t}}\) via (1), (4) and (5);
13 Apply optimized \(\theta_{\mathcal{Y}}^{*}\) on \(\{\mathbf{e}_{j}^{t}\}_{j=1}^{\mathcal{N}^{t}}\) to predict \(\{\hat{\mathbf{y}}_{j}^{t}\}_{j=1}^{\mathcal{N}^{t}}\) via (9).
Output: Cross-network embeddings: \(\{\mathbf{e}_{i}^{s}\}_{i=1}^{\mathcal{N}^{s}}\) and \(\{\mathbf{e}_{j}^{t}\}_{j=1}^{\mathcal{N}^{t}}\). Predicted node labels for \(\mathcal{G}^{t}\): \(\{\hat{\mathbf{y}}_{j}^{t}\}_{j=1}^{\mathcal{N}^{t}}\).
```

**Algorithm 1**: DM-GNN
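As a complement to the pseudocode, the following sketch shows how one training step of Algorithm 1 could be composed from the modules sketched in the previous sections (`DualFeatureExtractor`, `LabelPropClassifier`, `ConditionalDomainDiscriminator` and the propagation-matrix helpers). It reuses those earlier illustrative definitions and simplifies mini-batching to a full-batch update; it is an assumed composition, not the authors' training code.

```python
# One simplified DM-GNN training step following Algorithm 1; relies on the
# modules and helpers defined in the earlier sketches.
import torch

def train_step(enc, clf, disc, opt, xs, adjs, ys, xt, adjt, f, lam):
    # Source / target embeddings via the dual feature extractors (Lines 3-5).
    es, ps = enc(xs, adjs)
    et, pt = enc(xt, adjt)
    # Feature propagation loss, label-aware on the source side (Line 6).
    ps_lab = label_aware_prop_matrix(adjs, ys)
    loss_f = feature_propagation_loss(es, ps_lab) + feature_propagation_loss(et, pt)
    # Refined label prediction and node classification loss (Line 7).
    ys_hat = clf(es, ps_lab)
    loss_y = -(ys * torch.log(ys_hat.clamp(min=1e-12))).sum(dim=1).mean()
    # Conditional adversarial domain loss on both networks (Line 8).
    e = torch.cat([es, et])
    y_hat = torch.cat([ys_hat, clf(et, pt)])
    d = torch.cat([torch.zeros(len(es), dtype=torch.long),
                   torch.ones(len(et), dtype=torch.long)])
    loss_d = torch.nn.CrossEntropyLoss()(disc(e, y_hat, lam), d)
    # Joint SGD update; the GRL inside disc flips gradients w.r.t. the
    # encoder and classifier, realizing the minimax game of Eq. (15) (Line 9).
    loss = loss_y + f * loss_f + loss_d
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

Note that placing \(\lambda\) inside the gradient reversal layer, rather than multiplying the domain loss directly, is the common GRL-style realization of the minimax objective.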
### Analysis of CNNC Problem w.r.t. Domain Adaptation The CNNC problem can be seen as applying domain adaptation to node classification. Here, we discuss the relation between the model design of DM-GNN and domain adaptation theory. Firstly, we follow (Ganin et al., 2016) to consider the source and target domains over the fixed representation space \(\mathbf{e}\), and a family of source classifiers \(\hat{A}\) in the hypothesis space \(\mathcal{H}\). The source risk \(R_{s}(\hat{A})\) and target risk \(R_{t}(\hat{A})\) of a node classifier \(\hat{A}\in\mathcal{H}\) w.r.t. the source distribution \(\mathbb{P}_{s}\) and target distribution \(\mathbb{Q}_{t}\) are given by: \[R_{s}(\hat{A})=\mathbb{E}_{(\mathbf{e},y)\sim\mathbb{P}_{s}}[\hat{A}(\mathbf{e})\neq y],\qquad R_{t}(\hat{A})=\mathbb{E}_{(\mathbf{e},y)\sim\mathbb{Q}_{t}}[\hat{A}(\mathbf{e})\neq y] \tag{16}\] Let \(\hat{A}^{*}=\arg\min_{\hat{A}\in\mathcal{H}}R_{s}(\hat{A})+R_{t}(\hat{A})\) be the joint ideal hypothesis for both \(\mathbb{P}_{s}\) and \(\mathbb{Q}_{t}\); the distribution discrepancy between the source and target domains is defined as (Long et al., 2018): \[disc(\mathbb{P}_{s},\mathbb{Q}_{t})=|R_{s}(\hat{A},\hat{A}^{*})-R_{t}(\hat{A},\hat{A}^{*})| \tag{17}\] where \(R_{s}(\hat{A},\hat{A}^{*})=\mathbb{E}_{(\mathbf{e},y)\sim\mathbb{P}_{s}}[\hat{A}(\mathbf{e})\neq\hat{A}^{*}(\mathbf{e})]\) and \(R_{t}(\hat{A},\hat{A}^{*})=\mathbb{E}_{(\mathbf{e},y)\sim\mathbb{Q}_{t}}[\hat{A}(\mathbf{e})\neq\hat{A}^{*}(\mathbf{e})]\) are the disagreements between the two hypotheses \(\hat{A},\hat{A}^{*}\in\mathcal{H}\) w.r.t. \(\mathbb{P}_{s}\) and \(\mathbb{Q}_{t}\), respectively. According to the domain adaptation theory (Ben-David et al., 2010), the target risk \(R_{t}(\hat{A})\) is bounded by the source risk \(R_{s}(\hat{A})\) and the domain discrepancy \(disc(\mathbb{P}_{s},\mathbb{Q}_{t})\), i.e., \[R_{t}(\hat{A})\leq R_{s}(\hat{A})+disc(\mathbb{P}_{s},\mathbb{Q}_{t})+[R_{s}(\hat{A}^{*})+R_{t}(\hat{A}^{*})] \tag{18}\] On one hand, to reduce the source risk \(R_{s}(\hat{A})\), a typical solution is to minimize the classification loss on the labeled source domain (Ganin et al., 2016; Long et al., 2018), which is also adopted in DM-GNN by minimizing \(\mathcal{L}_{\mathcal{Y}}\) in Eq. (11). In contrast to previous literature, DM-GNN proposes a label-aware propagation mechanism for the labeled source network, which includes the label-aware feature propagation loss in Eq. (7) as a part of \(\mathcal{L}_{\mathcal{F}}\) and the label-aware label propagation mechanism in the node classifier in Eq. (10). This label-aware propagation mechanism can promote intra-class propagation while avoiding inter-class propagation based on the observed labels of the source network, thus yielding more label-discriminative source embeddings to effectively reduce the source risk \(R_{s}(\hat{A})\). On the other hand, to reduce the domain discrepancy \(disc(\mathbb{P}_{s},\mathbb{Q}_{t})\), DM-GNN employs the conditional adversarial domain adaptation algorithm (Long et al., 2018) by minimaxing the domain classification loss \(\mathcal{L}_{\mathcal{D}}\) in Eq. (14). It has been theoretically proven in (Long et al., 2018) that training the optimal domain discriminator produces an upper bound of the domain discrepancy \(disc(\mathbb{P}_{s},\mathbb{Q}_{t})\). In (Long et al., 2018), the domain discriminator takes as input the tensor product between the embedding vector and each sample's own label prediction vector. Instead, the proposed DM-GNN utilizes the neighborhood-refined label prediction to replace the own label prediction, which conditions adversarial domain adaptation on more accurate label predictions by taking the neighborhood information into account. In summary, by optimizing the overall objective function of DM-GNN in Eq. (15), the target risk bounded by the source risk and domain discrepancy can be effectively reduced to achieve good node classification performance in the target network. ### Complexity Analysis of DM-GNN The time complexity of computing the aggregated neighbor attributes as the input of FE2 is \(O\left(\left(nnz(\mathbf{A}^{s})+nnz(\mathbf{A}^{t})\right)\mathcal{W}\right)\), where \(\mathcal{W}\) is the number of attributes, and \(nnz(\mathbf{A}^{s})\) and \(nnz(\mathbf{A}^{t})\) denote the number of non-zero elements in \(\mathbf{A}^{s}\) and \(\mathbf{A}^{t}\), which is linear in the number of edges in the two networks. The time complexity of aggregating the neighbors' label probabilities is \(O\left(\left(nnz(\mathbf{A}^{s})+nnz(\mathbf{A}^{t})\right)\mathbbm{d}I\right)\), where \(\mathbbm{d}\) is the embedding dimensionality and \(I\) is the number of training iterations, which is linear in the number of edges in the two networks. In addition, FE1, FE2, and the conditional domain discriminator are each constructed as an MLP, whose time complexity is linear in the number of nodes in the two networks. Thus, the overall time complexity of DM-GNN is linear in the size of the two networks. ## 4 Experiments In this section, we empirically evaluate the performance of the proposed DM-GNN.
We aim to answer the following research questions: **RQ1**: Are network representation learning and domain adaptation both indispensable for CNNC? **RQ2**: How does the proposed DM-GNN perform as compared with the state-of-the-art methods? **RQ3**: Does the proposed model produce meaningful embedding visualizations? **RQ4**: What are the contributions of the different components in DM-GNN? **RQ5**: How do the hyper-parameters affect the performance of DM-GNN? ### Experimental Setup #### 4.1.1 Datasets Five real-world cross-network benchmark homophilic datasets constructed in (Shen et al., 2021) were used in our experiments. Blog1 and Blog2 are two online social networks, where each network captures the friendship between bloggers. Each node is associated with an attribute vector, consisting of the keywords extracted from the blogger's self-description. Each node is associated with one label for multi-class node classification, which indicates the group the blogger is interested in. In addition, Citationv1, DBLPv7 and ACMv9 are three citation networks, where each network captures the citation relationships between papers. Each node is associated with an attribute vector, i.e., the sparse bag-of-words features extracted from the paper title. A node can be associated with multiple labels for multi-label node classification, which indicate the relevant research areas of the paper. Two CNNC tasks were performed between the two Blog networks, and six CNNC tasks were conducted among the three citation networks. It is worth noting that the existing CNNC literature and our proposed DM-GNN mainly focus on the CNNC problem on homophilic graphs. However, it is interesting to explore the CNNC performance on graphs with different levels of homophily. To this end, we followed (Shen et al., 2021) to randomly extract two real-world networks from the benchmark heterophilic Squirrel dataset (Rozemberczki et al., 2021). In the Squirrel dataset, each node is an article from the English Wikipedia, each edge represents the mutual links between two articles, node attributes indicate the presence of particular nouns in the articles, and nodes are categorized into 5 classes based on the amount of their average traffic. Note that Squirrel1 and Squirrel2 are two disjoint subnetworks extracted from the original Squirrel dataset: Squirrel1 and Squirrel2 do not share any common nodes, and there are no edges connecting nodes from Squirrel1 and Squirrel2. Two CNNC tasks can be performed between Squirrel1 and Squirrel2, by selecting one as the source network and the other as the target network. The statistics of all datasets used in our experiments are shown in Table 2. Following (Zhu et al., 2020), we measured the homophily ratio of each graph as the fraction of intra-class edges connecting nodes with the same class label. As shown in Table 2, the three citation networks (Citationv1, DBLPv7 and ACMv9) are of high homophily, the two Blog networks (Blog1 and Blog2) are of medium homophily, while the two Wikipedia networks (Squirrel1 and Squirrel2) are of low homophily. #### 4.1.2 Comparing Algorithms Three types of algorithms, 11 algorithms in total, were adopted for comparison with the proposed DM-GNN. Traditional Domain Adaptation: **MMD** (Gretton et al., 2007) is a simple statistical metric widely utilized to match the means of the distributions between two domains. **DANN** (Ganin et al., 2016) inserts a GRL between the feature extractor and the domain discriminator to reduce domain discrepancy in an adversarial training manner.
Graph Neural Networks: **ANRL** (Zhang et al., 2018) leverages an SAE to reconstruct the neighbor attributes of each node and a skip-gram model to capture network structures. **SEANO** (Liang et al., 2018) designs a GNN with dual inputs, hinging on the attributes of the nodes and their neighbors respectively, and dual outputs, predicting node labels and node contexts respectively. **GCN** (Kipf & Welling, 2017) employs a layer-wise propagation mechanism to update each node's representation by repeatedly averaging its own representation and those of its neighbors. Cross-network Node Classification: **NetTr** (Fang et al., 2013) projects the label propagation matrices of two networks into a common latent space. **CDNE** (Shen et al., 2021) utilizes two SAEs to respectively reconstruct the topological structures of two networks. In addition, the MMD metrics were incorporated to match the distributions between the two networks. **ACDNE** (Shen, Dai, et al., 2020) employs two feature extractors to learn latent representations based on each node's own attributes and the aggregated attributes of its neighbors, and employs an adversarial domain adaptation approach to learn network-invariant representations. **UDA-GCN** (M. Wu et al., 2020) employs a dual-GCN model (Zhuang & Ma, 2018) for network representation learning and an adversarial domain adaptation approach to reduce domain discrepancy. **AdaGCN** (Dai et al., 2023) incorporates adversarial domain adaptation into GCN to learn network-invariant representations. **AdaIGCN** (Dai et al., 2023) further improves on AdaGCN by adopting an improved graph convolutional filter (Li et al., 2019). #### 4.1.3 Implementation Details In the proposed DM-GNN\({}^{1}\), we set the neighborhood size as \(K=3\) when measuring the topological proximities via the PPMI metric. An MLP with two hidden layers was employed to construct FE1 and FE2, where the dimensionalities were set as 512 and 128 for the first and second hidden layers, respectively. The embedding dimensionality was set as \(\mathbbm{d}=128\). The conditional domain discriminator was constructed by an MLP with two hidden layers, with the dimensionality of each layer set as 128. We set the weight of the feature propagation loss as \(\boldsymbol{\psi}=0.1\) on the citation networks, \(\boldsymbol{\psi}=10^{-3}\) on the Blog networks, and \(\boldsymbol{\psi}=10^{-4}\) on the heterophilic Wikipedia networks. DM-GNN was trained by SGD with a momentum rate of 0.9 over shuffled mini-batches, where the batch size was set to 100. We set the initial learning rate as \(\mu_{0}=0.01\) for the Blog networks, \(\mu_{0}=0.02\) for the citation networks, and \(\mu_{0}=0.001\) for the heterophilic Wikipedia networks. Then, we followed (Ganin et al., 2016) to decay the learning rate as \(\mu=\frac{\mu_{0}}{(1+10i)^{0.75}}\) and progressively increase the domain adaptation weight as \(\lambda=\frac{2}{1+\exp(-10i)}-1\), where \(i\) is the training progress linearly changing from 0 to 1. Footnote 1: Our code is released at [https://github.com/shenxiaocam/DM_GNN](https://github.com/shenxiaocam/DM_GNN).
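For clarity, the two annealing schedules above can be written as a few lines of helper code (the function names are ours; the formulas follow the preceding paragraph, with \(i\in[0,1]\) the training progress):

```python
# Learning-rate decay and domain-adaptation-weight ramp-up used during
# training; i is the training progress in [0, 1].
import math

def lr_schedule(mu0: float, i: float) -> float:
    return mu0 / (1.0 + 10.0 * i) ** 0.75

def domain_weight(i: float) -> float:
    return 2.0 / (1.0 + math.exp(-10.0 * i)) - 1.0

for i in (0.0, 0.5, 1.0):
    print(f"i={i:.1f}  mu={lr_schedule(0.01, i):.5f}  lambda={domain_weight(i):.4f}")
```

The ramp-up keeps the adversarial term weak early in training, when the label predictions that condition the domain discriminator are still unreliable.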
For the two traditional domain adaptation baselines, which only consider node attributes, an MLP (with similar settings to FE1 and FE2 in the proposed DM-GNN) was employed for feature representation learning. To tailor the GNNs, which were originally developed for a single-network scenario, to CNNC, we constructed a unified network with the first \(\mathcal{N}^{s}\) nodes from the source network and the last \(\mathcal{N}^{t}\) nodes from the target network. Then, the unified network was employed as the single input network to learn cross-network embeddings. Besides, for the CNNC baselines, we used the codes provided by the authors of the algorithms and carefully tuned their hyper-parameters to report their optimal performance. Following [18, 19], the Micro-F1 and Macro-F1 metrics were employed to evaluate the CNNC performance. The mean and standard deviations of the F1 scores on the homophilic graphs and the heterophilic graphs are reported in Table 3 and Table 4, respectively. ### Cross-network Node Classification Results #### 4.2.1 Performance of Traditional Domain Adaptation Methods and Graph Neural Networks (RQ1) On one hand, as shown in Table 3, DANN outperformed MMD in all tasks. This reflects that the adversarial domain adaptation approach yields more domain-invariant representations, as compared to the statistical approach. However, both MMD and DANN performed rather poorly in all CNNC tasks. This is because the traditional domain adaptation algorithms consider each data sample as i.i.d. during feature representation learning. The i.i.d. assumption works well for text and image data. However, it is rather unsuitable for graph-structured data, since the nodes in a network are not independent but related to each other via complex network connections. On the other hand, the GNNs (ANRL, SEANO and GCN), even though originally developed for a single-network scenario, can significantly outperform MMD and DANN in almost all the CNNC tasks. This again demonstrates that taking network topologies into account is essential for node classification [11, 12, 13]. However, ANRL, SEANO and GCN still performed significantly worse than the CNNC algorithms (CDNE, ACDNE, AdaGCN, AdaIGCN, UDA-GCN and DM-GNN) which integrate GNN with domain adaptation. This reveals that incorporating domain adaptation to reduce the domain discrepancy across networks is indeed necessary for CNNC. Therefore, to succeed in CNNC, on one hand, employing GNNs to jointly model network structures and node attributes is indispensable; on the other hand, employing domain adaptation to mitigate the distribution shift across networks is also indispensable. #### 4.2.2 Performance Comparison with State-of-the-Arts (RQ2) As shown in Table 3, NetTr performed the worst among all the CNNC baselines. This is because NetTr only considers network structures when learning common latent features across networks. However, the target network is structurally very different from the source network. Although CDNE also utilizes the network-specific topological structures to learn embeddings for each network, in contrast to NetTr, CDNE further leverages the less network-specific node attributes to align the embeddings between the two networks.
By taking both network structures and node attributes into account in cross-network embedding, CDNE therefore \begin{table} \begin{tabular}{c c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{\(\delta^{s}\to\delta^{t}\)} & \multirow{2}{*}{\begin{tabular}{c} F1 \\ (\%) \\ \end{tabular} } & \multirow{2}{*}{\begin{tabular}{c} MMD \\ (0.4) \\ \end{tabular} } & \multirow{2}{*}{DANN} & \multirow{2}{*}{ANRL} & \multirow{2}{*}{SEANO} & \multirow{2}{*}{GCN} & \multirow{2}{*}{NetTr} & \multirow{2}{*}{CDNE} & \multirow{2}{*}{ACDense} & \multirow{2}{*}{AdaGCN} & \multirow{2}{*}{AdaGCN} & \multirow{2}{*}{AdaGCN} \\ & & & & & & & & & & & & \\ \hline \multirow{4}{*}{\begin{tabular}{c} Blog1\(\rightarrow\)Blog2 \\ \end{tabular} } & Micro & 43.85 & 44.95 & 47.76 & 49.87 & 51.14 & 50.14 & 66.6 & 66.25 & 62.99 & 57.16 & 64.74 & **66.8** \\ & & (0.4) & (0.49) & (1.55) & (1.81) & (1.37) & (0) & (0.64) & (0.56) & (1.34) & (2.78) & (1.13) & **(0.27)** \\ & & Macro & 43.7 & 44.84 & 45.91 & 49.59 & 47.88 & 49.18 & 66.43 & 66 & 62.26 & 55.66 & 64.29 & **66.53** \\ & & (0.45) & (0.42) & (1.52) & (1.92) & (2.05) & (0) & (0.52) & (0.53) & (1.38) & (2.65) & (1.26) & **(0.3)** \\ \hline \multirow{4}{*}{\begin{tabular}{c} Blog2\(\rightarrow\)Blog1 \\ \end{tabular} } & Micro & 45.95 & 46.56 & 44.17 & 50.23 & 49.83 & 52.43 & 63.84 & 63.54 & 60.03 & 51.11 & 57.99 & **65.55** \\ & & (0.63) & (0.69) & (1.21) & (1.11) & (1.8) & (0) & (0.82) & (0.43) & (2.6) & (5.53) & (0.61) & **(0.25)** \\ & & 45.8 & 46.42 & 42.26 & 49.85 & 46.34 & 51.51 & 63.66 & 63.51 & 59.41 & 49.53 & 57.62 & **65.42** \\ & & (0.62) & (0.65) & (1.68) & (0.83) & (2.29) & (0) & (0.85) & (0.38) & (2.53) & (5.73) & (0.67) & **(0.3)** \\ \hline \multirow{4}{*}{\begin{tabular}{c} Citationv1\(\rightarrow\)DBLPv7 \\ \end{tabular} } & Micro & 57.01 & 57.85 & 66.03 & 69.31 & 71.24 & 59.88 & 74.15 & 77.35 & 76.6 & 77.21 & 78.25 & **79.5** \\ & & (0.24) & (0.21) & (1.28) & (1.09) & (0.87) & (0) & (0.4) & (0.77) & (0.53) & (0.53) & (0.36) & **(0.13)** \\ & & 53.58 & 55.15 & 62.78 & 66.94 & 68.12 & 55.18 & 71.71 & 76.09 & 74.83 & 74.37 & 77.00 & **78.35** \\ & & (0.25) & (0.17) & (0.85) & (0.99) & (1.36) & (0) & (0.67) & (0.6) & (0.53) & (0.87) & (0.77) & **(0.19)** \\ \hline \multirow{4}{*}{\begin{tabular}{c} DBLPv7\(\rightarrow\)Citationv1 \\ \end{tabular} } & Micro & 53.4 & 56.27 & 66.64 & 71.5 & 71.63 & 59.11 & 79.61 & 82.09 & 80.54 & 82.46 & 77.47 & **82.91** \\ & & (0.17) & (0.33) & (0.83) & (0.62) & (0.67) & (0) & (0.34) & (0.15) & (1.24) & (1.04) & (1.44) & **(0.17)** \\ & & 49.62 & 54.13 & 63.44 & 69.54 & 67.19 & 55.53 & 78.05 & 80.25 & 78.76 & 80.67 & 74.88 & **81.17** \\ & & (0.28) & (0.35) & (0.88) & (1.01) & (0.73) & (0) & (0.29) & (0.2) & (1.24) & (1.49) & (1.47) & **(0.28)** \\ \hline \multirow{4}{*}{\begin{tabular}{c} Citationv1\(\rightarrow\)ACMv9 \\ \end{tabular} } & Micro & 54.16 & 55.53 & 64.46 & 67.81 & 71.32 & 57.75 & 77.52 & 79.56 & 76.01 & 77.56 & **80.46** \\ & & (0.15) & (0.2) & (0.78) & (0.7) & (0.52) & (0) & (0.43) & (0.28) & (1.73) & (0.97) & (0.94) & **(0.14)** \\ & & Macro & 51.15 & 53.45 & 62.02 & 66.25 & 69.19 & 53.44 & 76.79 & 78.88 & 75.64 & 77.46 & 75.88 & **80.19** \\ & & (0.15) & (0.26) & (0.86) & (1.01) & (0.6) & (0) & (0.35) & (0.34) & (1.73) & (1.04) & (1.02) & **(0.22)** \\ \hline \multirow{4}{*}{\begin{tabular}{c} ACMv9\(\rightarrow\)Citationv1 \\ \end{tabular} } & Micro & 54.48 & 56.73 & 68.41 & 72.03 & 73.56 & 58.81 & 78.91 & 83.27 & 80.16 & 83.32 & 81.44 & **83.92** \\ & & (0.16) & (0.17) & (0.67) & (0.53) & (1.01) & 
(0) & (0.29) & (0.07) & (2.43) & (0.94) & (0.45) & **(0.21)** \\ & & 52.01 & 54.92 & 65.77 & 70.29 & 70.03 & 55.46 & 77 & 81.66 & 78.05 & 81.56 & 79.79 & **82.25** \\ & & (0.26) & (0.23) & (0.56) & (0.59) & (1.51) & (0) & (0.27) & (0.19) & (2.43) & (0.71) & (0.31) & **(0.22)** \\ \hline \multirow{4}{*}{DBLPv7\(\rightarrow\)ACMv9} & Micro & 51.43 & 53.11 & 63.08 & 66.64 & 66.83 & 56.23 & 76.59 & 76.3 \\ \hline \hline \end{tabular} \end{table} Table 3: Micro-F1 and macro-F1 of CNNC on the homophilic graphs. The highest F1 value among the algorithms is shown in boldface. (The numbers in parentheses are the standard deviations over 5 random initializations). By taking both network structures and node attributes into account in cross-network embedding, CDNE therefore significantly outperformed NetTr. However, one disadvantage of CDNE is that it models network topologies and node attributes separately, i.e., employing topologies to capture the proximities between nodes within a network, while employing attributes to capture the proximities between nodes from different networks. In contrast, ACDNE unifies network topologies and node attributes in a principled way so as to jointly capture the topological proximities and attributed affinity between nodes within a network and across networks. Moreover, unlike CDNE which utilizes the MMD-based statistical approach for domain adaptation, ACDNE employs a more effective adversarial learning approach. Therefore, ACDNE can achieve better overall performance than CDNE. Next, we discuss the performance of AdaGCN, AdaIGCN and UDA-GCN. Note that AdaGCN directly employs the original GCN (Kipf & Welling, 2017) for network representation learning, while both AdaIGCN and UDA-GCN employ GCN variants to further improve the performance of GCN, thus yielding better overall performance than AdaGCN. However, AdaIGCN and UDA-GCN still performed worse than ACDNE and DM-GNN in most tasks. Since the GCN variants (Li et al., 2019; Zhuang & Ma, 2018) adopt the typical GCN design, which mixes ego-embedding and neighbor-embedding at each convolution layer, they easily suffer from over-smoothing and fail to capture the discrimination between connected nodes. A recent study showed that separating ego-embedding from neighbor-embedding contributes to more effective node classification, especially when connected nodes do not perfectly possess homophily (Zhu et al., 2020). Both ACDNE and DM-GNN employ dual feature extractors with different learnable parameters to learn ego-embedding separately from neighbor-embedding. FE2 focuses on capturing the attributed affinity between connected nodes within \(K\) steps in a network, while FE1 captures the attributed affinity between two nodes even if they do not have any network connections. In addition, when a node has attributes which are very distinct from its neighbors, FE1 is more effective in capturing the discrimination between the node and its neighbors. In contrast, FE2 is more advantageous when two connected nodes perfectly satisfy the homophily assumption. Thus, with the integration of FE1 and FE2, both commonality and discrimination can be well captured by ACDNE and DM-GNN to yield more informative representations, as compared to the CNNC baselines which adopt GCN-like models. Moreover, one can see that DM-GNN outperformed ACDNE in all the CNNC tasks. For example, DM-GNN yielded 3% higher Micro-F1 and Macro-F1 scores than ACDNE in the task from Blog2 to Blog1. Also, DM-GNN achieved a 2% higher Micro-F1 score and a 3% higher Macro-F1 score than ACDNE in the CNNC task from ACMv9 to DBLPv7. It is worth noting that DM-GNN is distinct from ACDNE in terms of three aspects.
Firstly, to enhance training stability, a feature propagation loss is incorporated into DM-GNN to make the final embedding of each node similar to a weighted average of the embeddings of its neighbors. Secondly, to refine label prediction, a label propagation mechanism is incorporated into the node classifier by DM-GNN to combine the label prediction of each node with those of its neighbors. In addition, a label-aware propagation mechanism is devised in DM-GNN to promote intra-class propagation while avoiding inter-class propagation for the labeled source network, which guarantees more label-discriminative source embeddings. Thirdly, to better mitigate the distribution discrepancy across networks, a conditional adversarial domain adaptation approach is employed in DM-GNN to condition adversarial domain adaptation on the neighborhood-refined class-label probabilities predicted by the label propagation node classifier. As a result, the corresponding class-conditional distributions across networks can be better matched. The outperformance of DM-GNN over ACDNE demonstrates that the incorporated feature propagation loss, label propagation node classifier and conditional domain discriminator can further boost the CNNC performance. Finally, we report the performance of the state-of-the-art CNNC baselines and the proposed DM-GNN on the heterophilic graphs in Table 4. We can observe that CDNE performed the worst among all comparing methods on the heterophilic graphs. This is because CDNE separately models network topologies and node attributes, and learns node embeddings by reconstructing the network proximity matrix. However, the connected nodes on heterophilic graphs tend to have different class labels, so directly reconstructing the network proximity matrix of such heterophilic graphs would make the connected nodes belonging to different classes have similar node embeddings, which unavoidably degrades the node classification performance. In addition, one can see that GCN, AdaGCN, and UDA-GCN all achieved significantly lower scores than ACDNE and DM-GNN on the heterophilic graphs. We believe the reason is that ACDNE and DM-GNN employ dual feature extractors to learn the ego-embedding and neighbor-embedding of each node separately, which is capable of capturing both commonality and discrimination between connected nodes, while GCN, AdaGCN, and UDA-GCN, which employ GCN-like models for node embedding learning, fail to capture discrimination, consequently yielding unsatisfactory CNNC performance, especially on the graphs with medium homophily (i.e., the Blog networks in Table 3) and heterophily (i.e., the Wikipedia networks in Table 4). ### Visualization of Cross-network Embedding (RQ3) The t-SNE toolkit (Maaten & Hinton, 2008) was employed to visualize the cross-network embeddings learned by different algorithms. Figs. 2 and 3 show the visualization results on the Blog networks and the citation networks, respectively. Firstly, DM-GNN maps the nodes from different classes into separable areas, i.e., the embeddings generated by DM-GNN are indeed label-discriminative.
In addition, nodes belonging to the same class but from different networks have been mapped into the same cluster by DM-GNN, i.e., the embeddings generated by DM-GNN are network-invariant and the corresponding class-conditional distributions across networks have been well matched. \begin{table} \begin{tabular}{c c c c c c c c} \hline \(\mathcal{G}^{s}\to\mathcal{G}^{t}\) & F1(\%) & GCN & CDNE & ACDNE & AdaGCN & UDA-GCN & DM-GNN \\ \hline \multirow{2}{*}{Squirrel1\(\rightarrow\)Squirrel2} & Micro & 20.92 (0.64) & 20.32 (0.48) & 30.84 (0.20) & 23.77 (0.28) & 23.67 (0.51) & **34.22 (0.95)** \\ & Macro & 18.25 (1.74) & 19.15 (0.95) & 30.75 (0.18) & 21.76 (0.49) & 22.06 (0.64) & **34.00 (0.81)** \\ \hline \multirow{2}{*}{Squirrel2\(\rightarrow\)Squirrel1} & Micro & 21.07 (0.77) & 19.34 (0.79) & 31.50 (0.40) & 23.95 (0.27) & 22.30 (0.97) & **33.85 (1.00)** \\ & Macro & 18.01 (1.17) & 17.28 (0.90) & 31.42 (0.45) & 20.53 (1.38) & 21.54 (1.05) & **33.89 (0.63)** \\ \hline \end{tabular} \end{table} Table 4: Micro-F1 and macro-F1 of CNNC on the heterophilic graphs. The highest F1 value among the algorithms is shown in boldface. (The numbers in parentheses are the standard deviations over 5 random initializations). Fig. 2: Visualization of cross-network embeddings learned by different algorithms for the task from Blog1 to Blog2. Different colors are used to represent different labels. The triangle and plus symbols are utilized to represent nodes from the source and the target networks. Fig. 3: Visualization of cross-network embeddings learned by different algorithms for the task from Citationv1 to DBLPv7. Different colors are used to represent different labels. The triangle and plus symbols are utilized to represent nodes from the source and the target networks. However, the class boundaries of the cross-network embeddings learned by GCN are not clear. This might be caused by the over-smoothing issue, i.e., the embeddings are over-smoothed and the nodes from different classes finally have indistinguishable embeddings after iterative layer-wise propagation (Li et al., 2018; Liu et al., 2020). In addition, although UDA-GCN and AdaIGCN utilize the GCN variants instead of the original GCN for network representation learning, they still generate much less clear class boundaries than DM-GNN. This is because the GCN variants (Li et al., 2019; Zhuang & Ma, 2018) still adopt the typical design which mixes ego-embedding and neighbor-embedding, and also entangles neighborhood aggregation and representation transformation at each layer. Unlike GCN, the GNN encoder in DM-GNN separates ego-embedding from neighbor-embedding via dual feature extractors, and also decouples neighborhood aggregation from representation transformation in FE2. Thus, DM-GNN can better preserve the individuality of each node, and avoid the nodes of different classes becoming indistinguishable. Besides, one can see that the network representation learning ability of DM-GNN is much better than that of CDNE. This reflects that the joint modeling of network structures and node attributes is able to obtain more meaningful representations than modeling the two kinds of information separately. It can be seen that both DM-GNN and ACDNE yield more label-discriminative and network-invariant embeddings than the other baselines.
This is because, instead of using GCN-like models as in the baselines, both DM-GNN and ACDNE employ dual feature extractors to construct the GNN encoder, so as to better preserve the discrimination between connected nodes and to alleviate the over-smoothing issue. This again confirms that an effective GNN encoder is a key success factor for CNNC. As shown in Fig. 2(a) and Fig. 2(b), for the learning task transferring from Blog1 to Blog2, DM-GNN and ACDNE have similar visualization performance, which is consistent with the results in Table 3, i.e., DM-GNN achieves comparable F1 scores with ACDNE on the Blog1-to-Blog2 task. For the learning task transferring from Citationv1 to DBLPv7, DM-GNN significantly outperforms ACDNE by an absolute 2.15% and 2.26% in terms of Micro-F1 and Macro-F1, and a consistent trend is also evident from the visualization. As shown in Fig. 3(b), for ACDNE, a small number of magenta nodes, of both the triangle (i.e. source) and plus (i.e. target) symbols, are found in the cluster of green nodes. This is clearly avoided by DM-GNN, as shown in Fig. 3(a), where the magenta triangles (i.e. source) are kept apart from the green triangles (i.e. source). This reflects that DM-GNN produces more label-discriminative source embeddings, which benefits from the label-aware propagation mechanism devised for the fully labeled source network to promote intra-class propagation while avoiding inter-class propagation. On the other hand, DM-GNN further prevents the magenta plus nodes (i.e. target) from getting close to the green triangles (i.e. source). This reflects that DM-GNN can better align the corresponding class-conditional distributions across networks, benefiting from conditioning adversarial domain adaptation on the neighborhood-refined label prediction in DM-GNN. ### Ablation Study (RQ4) We conducted an extensive ablation study to investigate the contributions of the different components in DM-GNN. Five variants of DM-GNN were built. The results are shown in Table 5. Firstly, the variant without FE1 yields lower F1 scores than DM-GNN, since FE1 can capture the similarity of attributes between nodes even without any network connections. In addition, FE1 can also preserve discriminative ego-embeddings for connected nodes with dissimilar attributes. Thus, FE1 is a key component in the model design of DM-GNN. Secondly, the variant without FE2 leads to much lower F1 scores than DM-GNN. FE2 utilizes the neighbors' aggregated attributes as its input, which essentially performs feature propagation; the necessity of feature propagation for node classification has also been verified in other GNNs (Hamilton et al., 2017; Kipf & Welling, 2017; Liang et al., 2018; Velickovic et al., 2018). In fact, DM-GNN without either FE1 or FE2 has poor performance, reflecting that FE1 and FE2 capture complementary information. Thirdly, the performance of the variant without the feature propagation loss is lower than DM-GNN, which demonstrates that besides performing feature propagation in the input process, it is also helpful to ensure that the output embeddings meet the feature propagation goal. Fourthly, the variant without the label propagation mechanism also has lower F1 scores than DM-GNN, indicating that incorporating the label propagation mechanism into the node classifier to refine each node's label prediction can further boost the CNNC performance.
Lastly, the variant without the conditional domain discriminator yields significantly lower F1 scores than DM-GNN, verifying that reducing the domain discrepancy across networks is necessary for CNNC. ### Parameter Sensitivity (RQ5) The sensitivities of the hyper-parameters \(\boldsymbol{\psi},\mathbbm{d},K\) on the performance of DM-GNN were studied. For the weight of the feature propagation loss \(\boldsymbol{\psi}\), as shown in Fig. 4(a), setting a smaller value of \(\boldsymbol{\psi}\) (i.e. \(10^{-3}\)) for the dense Blog networks yields good performance. In contrast, setting a relatively larger value of \(\boldsymbol{\psi}\) (i.e. \(10^{-1}\)) for the sparse citation networks achieves good performance. For the embedding dimensionality \(\mathbbm{d}\), as shown in Fig. 4(b), setting \(\mathbbm{d}\in\{128,256,512\}\) for the Blog networks gives good performance, whereas smaller values of \(\mathbbm{d}\) lead to inferior performance. For the citation networks, DM-GNN is insensitive to \(\mathbbm{d}\), i.e., good performance is achieved with \(\mathbbm{d}\in\{32,64,128,256,512\}\). \begin{table} \begin{tabular}{c c c c c c c c c c} \hline \hline Model Variants & F1 (\%) & Citationv1\(\rightarrow\)DBLPv7 & DBLPv7\(\rightarrow\)Citationv1 & Citationv1\(\rightarrow\)ACMv9 & ACMv9\(\rightarrow\)Citationv1 & DBLPv7\(\rightarrow\)ACMv9 & ACMv9\(\rightarrow\)DBLPv7 & Blog1\(\rightarrow\)Blog2 & Blog2\(\rightarrow\)Blog1 \\ \hline \multirow{2}{*}{DM-GNN} & Micro & 79.50 & 82.91 & 80.46 & 83.92 & 76.64 & 78.15 & 66.80 & 65.55 \\ & Macro & 78.35 & 81.17 & 80.19 & 82.25 & 76.42 & 76.43 & 66.53 & 65.42 \\ \hline \multirow{2}{*}{w/o FE1} & Micro & 78.75 & 81.32 & 80.26 & 83.27 & 76.53 & 77.76 & 52.97 & 49.88 \\ & Macro & 77.79 & 79.59 & 80.31 & 81.69 & 75.90 & 75.77 & 52.27 & 48.49 \\ \hline \multirow{2}{*}{w/o FE2} & Micro & 70.93 & 70.97 & 68.97 & 74.44 & 63.16 & 68.70 & 42.97 & 52.29 \\ & Macro & 67.94 & 67.32 & 67.52 & 71.73 & 60.19 & 65.22 & 40.41 & 52.01 \\ \hline \multirow{2}{*}{w/o Feature Propagation Loss} & Micro & 79.36 & 82.29 & 79.31 & 84.16 & 74.31 & 77.70 & 65.59 & 60.36 \\ & Macro & 78.16 & 80.47 & 79.00 & 82.53 & 74.00 & 76.44 & 65.34 & 59.24 \\ \hline \multirow{2}{*}{w/o Label Propagation} & Micro & 78.85 & 82.16 & 79.15 & 83.54 & 75.55 & 76.98 & 62.87 & 64.73 \\ & Macro & 77.64 & 80.42 & 78.81 & 81.79 & 75.40 & 75.28 & 62.76 & 64.71 \\ \hline \multirow{2}{*}{w/o Conditional Domain Discriminator} & Micro & 75.64 & 79.54 & 78.07 & 76.57 & 68.78 & 69.93 & 59.05 & 58.21 \\ & Macro & 74.03 & 77.45 & 77.65 & 74.95 & 67.63 & 67.85 & 58.54 & 57.60 \\ \hline \hline \end{tabular} \end{table} Table 5: Micro-F1 and macro-F1 of DM-GNN variants. For the neighborhood size \(K\) in message propagation, as shown in Fig. 4(c), \(K>1\) in the citation networks always yields significantly higher F1 scores than \(K=1\). This reflects that taking advantage of high-order proximities is helpful for node classification, which agrees with previous works (Cao et al., 2015; Shen & Chung, 2017; Shen et al., 2021).
4(c), for the learning task transferring from Blog1 to Blog2, the performance drops significantly when \(K=2\) and then rebounds to a level greater than that at \(K=1\) when \(K\geq 3\). This might be due to the properties of the Blog datasets. In fact, a similar phenomenon (i.e., a performance drop when \(K=2\)) was also observed in CDNE (Shen et al., 2021). To investigate the phenomenon, we counted, for different \(K\) values, the number of connected node pairs and evaluated the proportion of connected node pairs belonging to the same class, as a higher proportion of such node pairs is beneficial for node classification.

Table 6: Statistics of the Blog1 and Blog2 datasets at different \(K\) values.

| Dataset | Statistics | K=1 | K=2 | K=3 | K=4 | K=5 | K=6 | K=8 | K=10 |
|---|---|---|---|---|---|---|---|---|---|
| Blog1 | Number of connected node pairs | 33471 | 269055 | 292936 | 313124 | 322789 | 328695 | 334581 | 337213 |
| | Number of connected node pairs belonging to the same class | 13359 | 76683 | 85911 | 92214 | 95395 | 97240 | 99051 | 99811 |
| | Fraction of connected node pairs belonging to the same class | 0.3991 | **0.2850** | 0.2933 | 0.2945 | 0.2955 | 0.2958 | 0.2960 | 0.2960 |
| Blog2 | Number of connected node pairs | 53836 | 428391 | 469457 | 502201 | 517304 | 526770 | 536422 | 540878 |
| | Number of connected node pairs belonging to the same class | 21544 | 122481 | 138233 | 148580 | 153697 | 156752 | 159711 | 160986 |
| | Fraction of connected node pairs belonging to the same class | 0.4002 | **0.2859** | 0.2945 | 0.2959 | 0.2971 | 0.2976 | 0.2977 | 0.2976 |

Figure 4: Sensitivities of the hyper-parameters \(\psi\), \(d\), and \(K\) on the performance of DM-GNN.

As shown in Table 6, when \(K=2\), the fraction of connected node pairs belonging to the same class is the lowest (i.e., 0.2850 in Blog1 and 0.2859 in Blog2) among all the \(K\) values, thus leading to poor node classification performance. However, although the proportion at \(K=1\) is the highest, the Micro-F1 at \(K=1\) is significantly lower than that when \(K\geq 3\). This is because, despite the highest proportion, the number of connected node pairs belonging to the same class at \(K=1\) (i.e., 13359 in Blog1 and 21544 in Blog2) is much smaller than that obtained when \(K\geq 3\) (i.e., more than 80000 in Blog1 and more than 130000 in Blog2). That is, the number of connected node pairs with common labels is rather insufficient when \(K=1\). Thus, it is necessary to utilize high-order proximities by considering more distant neighbors.

## 5 Conclusions

We have proposed a novel domain-adaptive message passing GNN, named DM-GNN, which integrates GNN with conditional adversarial domain adaptation to effectively address the challenging CNNC problem. Firstly, dual feature extractors with different learnable parameters are employed to learn the ego-embedding separately from the neighbor-embedding of each node so as to jointly capture both the commonality and the discrimination between connected nodes. Secondly, DM-GNN unifies feature propagation and label propagation in cross-network embedding, propagating the input attributes, the output embeddings, and the label prediction of each node over its neighborhood.
In particular, a label-aware propagation scheme is devised for the fully labeled source network to promote intra-class propagation while avoiding inter-class propagation. As a result, more label-discriminative source embeddings can be learned by DM-GNN. Thirdly, a conditional adversarial domain adaptation approach is employed by DM-GNN to condition the GNN encoder and the domain discriminator on the neighborhood-refined class-label probabilities during adversarial domain adaptation. As a result, the class-conditional distributions across networks can be better matched to produce label-discriminative target embeddings. Extensive experiments on real-world benchmark datasets demonstrate the superior CNNC performance of DM-GNN over the state-of-the-art methods. In DM-GNN, fixed topological proximities are utilized during both feature and label propagation among the neighbors within \(K\) steps. To improve the effectiveness of message passing over the neighborhood, it is promising to adopt an attention-based GNN to learn adaptive edge weights during neighborhood aggregation. Compared to previous state-of-the-art CNNC approaches using GCN-like models for node embedding learning, the proposed DM-GNN, empowered by the dual feature extractor design, can better capture the discrimination between connected nodes on both homophilic and heterophilic graphs. However, the feature and label propagation mechanism designed in DM-GNN still rests on the homophily assumption, which cannot explicitly avoid noise from inter-class neighbors on target graphs with high heterophily. Thus, in order to achieve more competitive CNNC performance on heterophilic graphs, more research is needed to design GNN encoders specifically for heterophilic graphs, e.g., employing positive and negative adaptive edge weights to distinguish intra-class and inter-class neighbors during both feature and label propagation.

## Acknowledgments

This work was supported in part by the Hainan Provincial Natural Science Foundation of China (No. 322RC570), the National Natural Science Foundation of China (No. 62102124), the Project of Strategic Importance of the Hong Kong Polytechnic University (No. 1-ZE1V), and the Research Start-up Fund of Hainan University (No. KYQD(ZR)-22016).
2309.12259
Soft Merging: A Flexible and Robust Soft Model Merging Approach for Enhanced Neural Network Performance
Stochastic Gradient Descent (SGD), a widely used optimization algorithm in deep learning, is often limited to converging to local optima due to the non-convex nature of the problem. Leveraging these local optima to improve model performance remains a challenging task. Given the inherent complexity of neural networks, the simple arithmetic averaging of the obtained local optima models results in undesirable outcomes. This paper proposes a {\em soft merging} method that facilitates rapid merging of multiple models, simplifies the merging of specific parts of neural networks, and enhances robustness against malicious models with extreme values. This is achieved by learning gate parameters through a surrogate of the $l_0$ norm using the hard concrete distribution without modifying the model weights of the given local optima models. This merging process not only enhances the model performance by converging to a better local optimum, but also minimizes computational costs, offering an efficient and explicit learning process integrated with stochastic gradient descent. Thorough experiments underscore the effectiveness and superior performance of the merged neural networks.
Hao Chen, Yusen Wu, Phuong Nguyen, Chao Liu, Yelena Yesha
2023-09-21T17:07:31Z
http://arxiv.org/abs/2309.12259v1
Soft Merging: A Flexible and Robust Soft Model Merging Approach for Enhanced Neural Network Performance ###### Abstract Stochastic Gradient Descent (SGD), a widely used optimization algorithm in deep learning, is often limited to converging to local optima due to the non-convex nature of the problem. Leveraging these local optima to improve model performance remains a challenging task. Given the inherent complexity of neural networks, the simple arithmetic averaging of the obtained local optima models results in undesirable outcomes. This paper proposes a _soft merging_ method that facilitates rapid merging of multiple models, simplifies the merging of specific parts of neural networks, and enhances robustness against malicious models with extreme values. This is achieved by learning gate parameters through a surrogate of the \(l_{0}\) norm using the hard concrete distribution without modifying the model weights of the given local optima models. This merging process not only enhances the model performance by converging to a better local optimum, but also minimizes computational costs, offering an efficient and explicit learning process integrated with stochastic gradient descent. Thorough experiments underscore the effectiveness and superior performance of the merged neural networks. Hao Chen\({}^{\dagger}\), Yusen Wu\({}^{*}\), Phuong Nguyen\({}^{*}\), Chao Liu\({}^{\dagger}\), Yelena Yesha\({}^{*}\)\({}^{*}\)Dept. of Computer Science, University of Miami, Miami, FL, USA \({}^{\dagger}\)Dept. of Computer Science, University of Maryland, Baltimore County, MD, USA Model Merging, Model Optimization

## 1 Introduction

In the recent decade, deep learning has been flourishing in various domains. However, the inherent complexity of neural networks, with their intricate non-linearity and non-convexity, poses formidable challenges. The stochastic gradient descent (SGD) algorithm, despite using identical training data and network architectures, converges to distinct local optima due to different initializations. This leads to a fundamental question: _Can the diverse local optima be leveraged to merge models, enhancing performance and moving closer to a more favorable global optimum?_ Convolutional neural networks exhibit various architectural paradigms like ShuffleNet [1], ResNet [2], UNet [3] and DenseNet [4], each with unique features. Our primary challenge lies in devising an algorithm that accommodates these disparate designs. The secondary challenge involves ensuring the robustness of the merging algorithm across models with vastly varying parameter values. Additionally, we face the third challenge of selectively merging specific components rather than all parameters, aiming for efficiency. Model merging, as a novel and challenging research direction, has seen limited exploration in existing literature. Unlike model combination and aggregation [5, 6, 7], which fuse different architectures, model merging improves model performance by integrating trained models (local optima) that share a congruent architecture. Simple techniques like arithmetic averaging fall short due to the intricate nature of neural networks, as addressed in [8]; that work proposes merging models by solving a permutation problem: assuming that the local optima exhibit similar performance, it matches the neurons of the two models. The authors of [9] proposed a general framework for merging models, built on "teacher" and "student" concepts.
Primarily, the existing methods focus on neuron-level merging, which means they target the weights of the neural network. However, relying solely on this approach has limitations in applicability, flexibility, and robustness, particularly with irregular models. To address these issues, we introduce a novel paradigm called _soft merging_, characterized by efficiency, adaptability, and robustness. Our method draws from model merging and channel pruning research [10, 9, 11, 12]. It involves concurrent training of gate parameters for multiple models, using a differentiable surrogate of \(l_{0}\) regularization to identify crucial parts. Instead of updating the weights, it only picks the best ones from the provided set of weights. This enables selective merging across various layers and architectures, with enhanced adaptability. In summary, our contributions include: * Our proposal outlines a general procedure for selectively soft merging multiple models simultaneously with diverse neural network architectures. * We present an algorithm that achieves linear complexity for efficient soft merging. * Extending neural network model merging to accommodate a wide range of deep learning designs ensures robustness, even in the presence of anomalies.

## 2 Proposed Methods

### Problem statement

Suppose there are \(J\) given models denoted as \(\{\mathcal{M}_{j}\}_{j=1}^{J}\) with the same neural network architecture. Given training data \(\mathbf{X}\) and labels \(\mathbf{Y}\), our goal is to find the optimal model \(\mathcal{M}^{*}\) from the objective function \[\min_{\{g_{j}\}_{j}}\sum_{j}\mathcal{L}(g_{j}\mathcal{M}_{j}(\mathbf{X}),\mathbf{Y};\mathbf{\theta}_{j}),\quad s.t.\ \sum_{j}g_{j}=1, \tag{1}\] where \(g_{j}\in\{0,1\}\) is a gate parameter, \(\mathcal{L}\) is the loss function, and \(\mathbf{\theta}_{j}\) represents the neural network parameters of the \(j\)-th model. Labels \(\mathbf{Y}\) may not be necessary in some learning tasks, and they can be ignored in the corresponding learning objective functions. The loss function \(\mathcal{L}\) is a general function, which can be utilized in various machine learning methods, including supervised, unsupervised, and semi-supervised approaches. Eq. (1) is called model-level merging if \(\mathbf{\theta}_{j}\) is fixed, which is equivalent to picking the best model among the \(J\) models. Jointly learning \(\mathbf{\theta}_{j}\) and \(g_{j}\) belongs to wide-sense _hard_ merging, because \(\mathbf{\theta}_{j}\) may change during the merging process. In contrast, learning only the gate parameters \(g_{j}\), with \(\mathbf{\theta}_{j}\) held fixed, is referred to as _soft merging_. In particular, Eq. (1) performs soft merging at the model level, namely picking the best granule from all the granules with each model as a granule, which is the highest-level form of merging. In the following section, we will introduce soft merging at different levels.

### Model Merging at Various Levels of Granularity

Besides merging full models, we can also apply the merging process at a lower level by merging individual modules or layers. Suppose the model \(\mathcal{M}_{j}\) consists of \(L\) layers; we can disassemble \(\mathcal{M}_{j}\) into individual layers, \(\mathcal{M}_{j}:=\mathcal{F}(\{\Lambda_{l,j}\}_{l=1}^{L})\), where \(\mathcal{F}(\cdot)\) is a structural function connecting the layers which bears the same design for all \(J\) models, and \(\Lambda_{l,j}\) is the \(l\)-th layer in the model \(\mathcal{M}_{j}\).
Some of the layers in the \(j\)-th model can be aggregated into a module, defined as \(\Phi_{m,j}:=\mathcal{F}_{m}(\{\Lambda_{l,j}\}_{l=m}^{m^{\prime}})\) for the \(m\)-th module. Here, whether we are referring to the model \(\mathcal{M}\), the module \(\Phi\), or the layer \(\Lambda\), they all share the fundamental characteristic of being composed of linear or non-linear functions. So the model \(\mathcal{M}_{j}\) can be written as \[\mathcal{M}_{j}=\mathcal{F}_{M}(\{\Phi_{m,j}\}_{m=1}^{M})=\mathcal{F}(\{\Lambda_{l,j}\}_{l=1}^{L}), \tag{2}\] where \(\mathcal{F}_{M}(\cdot)\) is the structural function connecting all the modules in the model \(\mathcal{M}_{j}\). The objective of module-level merging is to address the following problem \[\min_{\{g_{m,j}\}_{m,j}}\mathcal{L}(\mathcal{M}(\mathbf{X}),\mathbf{Y};\mathbf{\theta})\ \ s.t.\sum_{j}g_{m,j}=1,\quad\Phi_{m}=\sum_{j}g_{m,j}\Phi_{m,j},\ \mathcal{M}=\mathcal{F}_{M}(\{\Phi_{m}\}_{m=1}^{M}) \tag{3}\] where \(g_{m,j}\) are the module-level gates. The gates are applied after the data pass through the modules \(\Phi_{m,j}\), and the whole flow is managed by \(\mathcal{F}_{M}\). Similarly, the layer-level problem can be formulated as \[\min_{\{g_{l,j}\}_{l,j}}\ \mathcal{L}(\mathcal{M}(\mathbf{X}),\mathbf{Y};\mathbf{\theta}),\ \ s.t.\sum_{j}g_{l,j}=1,\quad\text{with}\ \Lambda_{l}=\sum_{j}g_{l,j}\Lambda_{l,j},\ \mathcal{M}=\mathcal{F}(\{\Lambda_{l}\}_{l=1}^{L}) \tag{4}\]

### Merging Optimization Algorithms

We consider \(g_{j}\) as a random variable following a Bernoulli distribution; however, sampling from it is not differentiable. To address this, the constraint can be reformulated in terms of the set of variables \(\{g_{j}\}_{j}=\mathbf{g}\in\{0,1\}^{J}\), with \(\|\mathbf{g}\|_{0}=1\) indicating that only one element of \(\mathbf{g}\) is allowed to be non-zero. Similarly, in Eqs. (3) and (4), the constraints can be restated in terms of the \(L_{0}\) norm. However, the \(L_{0}\) norm is neither differentiable nor convex. A typical surrogate for the \(L_{0}\) norm is the \(L_{1}\) norm, which is convex, but imposing sparsity with it would introduce another constraint. In the paper by Louizos et al. [10], a surrogate approach was introduced. This approach utilizes a random variable governed by a hard concrete distribution to address the \(L_{0}\) norm constraint. Notably, this surrogate method retains differentiability through the implementation of the reparameterization trick. **Hard Concrete Distribution**. The probability density function (PDF) of the concrete distribution is written as \[p(s;\beta,\alpha)=\frac{\alpha\beta s^{\beta-1}(1-s)^{\beta-1}}{(s^{\beta}+\alpha(1-s)^{\beta})^{2}},\ \ 0<s<1, \tag{5}\] with the cumulative distribution function (CDF) \[F(s;\beta,\alpha)=\frac{1}{1+e^{\log\alpha+\beta(\log(1-s)-\log s)}}, \tag{6}\] where \(\alpha>0\) and \(0<\beta<1\). The parameter \(\alpha\) controls the location of the distribution, while \(\beta\) acts as a temperature. This binary-like concrete distribution is a smooth approximation of the Bernoulli distribution [13], because it can be reparameterized with a uniform variable \(u\sim\mathcal{U}(0,1)\) as \(s=\text{Sigmoid}((\log(u)-\log(1-u)+\log\alpha)/\beta)\), where \(\text{Sigmoid}(x)=\frac{1}{1+e^{-x}}\). However, the concrete distribution does not include \(0\) and \(1\). To tackle this problem, the authors of [10] proposed stretching \(s\) to \((\gamma,\zeta)\) by \(\bar{s}=s\zeta+(1-s)\gamma\), with \(\gamma<0\) and \(\zeta>1\).
Then, by folding \(\bar{s}\) into \([0,1]\) via \(g=\min(1,\max(\bar{s},0))\), the hard concrete distribution has the CDF simply as \[Q(s;\beta,\alpha)=F\left(\frac{s-\gamma}{\zeta-\gamma}\right),\ \ 0\leq s\leq 1 \tag{7}\] and the PDF as \[q(s;\beta,\alpha)=F\left(\frac{\gamma}{\gamma-\zeta}\right)\delta(s)+\left(1-F\left(\frac{1-\gamma}{\zeta-\gamma}\right)\right)\delta(s-1)+\left(F\left(\frac{1-\gamma}{\zeta-\gamma}\right)-F\left(\frac{\gamma}{\gamma-\zeta}\right)\right)p\left(\frac{s-\gamma}{\zeta-\gamma}\right),\ \ 0\leq s\leq 1. \tag{8}\] The comparisons among examples of the concrete, stretched concrete, and hard concrete distributions are shown in Fig. 1. **Surrogate Loss Functions**. All the gates are replaced by surrogate random variables following the hard concrete distribution. With the reparameterization trick [14] and the constraint reformulated with a Lagrangian multiplier, the loss function for model-level merging is \[\min_{\{\alpha_{j},\beta_{j}\}}\sum_{j}\mathcal{L}(\hat{s}_{j}\mathcal{M}_{j}(\mathbf{X}),\mathbf{Y};\boldsymbol{\theta}_{j})+\lambda(\hat{s}_{j}-\frac{1}{J})\quad\text{with }\hat{s}_{j}\sim q(s_{j}>0;\alpha_{j},\beta_{j}) \tag{9}\] Similarly, we can get the module- and layer-level merging loss functions respectively as \[\min_{\{\alpha_{m,j},\beta_{m,j}\}_{m,j}}\mathcal{L}(\mathcal{M}(\mathbf{X}),\mathbf{Y};\boldsymbol{\theta})+\lambda\sum_{m,j}(\hat{s}_{m,j}-\frac{M}{J})\quad\text{with }\hat{s}_{m,j}\sim q(s_{m,j}>0;\alpha_{m,j},\beta_{m,j}) \tag{10}\] \[\min_{\{\alpha_{l,j},\beta_{l,j}\}_{l,j}}\mathcal{L}(\mathcal{M}(\mathbf{X}),\mathbf{Y};\boldsymbol{\theta})+\lambda\sum_{l,j}(\hat{s}_{l,j}-\frac{L}{J})\quad\text{with }\hat{s}_{l,j}\sim q(s_{l,j}>0;\alpha_{l,j},\beta_{l,j}) \tag{11}\]

### General training algorithm

We have formulated full-model soft merging at different levels, as shown in Eqs. (9)-(11). If one selects only a few layers or modules for soft merging, the loss function should be changed accordingly. For example, suppose we merge the 1st and the 5th layers of the models, i.e., a layer-level merging; this requires treating \(\alpha_{1,j}\), \(\alpha_{5,j}\), \(\beta_{1,j}\), and \(\beta_{5,j}\) as trainable random-variable parameters while the remaining gates are fixed to \(1\), using formulation (11). Here we propose the general problem formulation for full-model and selective soft merging at different levels as \[\min_{\boldsymbol{\alpha},\boldsymbol{\beta}}\mathcal{L}_{1}(\mathbf{X},\mathbf{Y})+\lambda\mathcal{L}_{2}(\boldsymbol{\alpha},\boldsymbol{\beta}) \tag{12}\] where \(\mathcal{L}_{1}\) is related to model performance and \(\mathcal{L}_{2}\) is the term controlling the merging, including the sampling process for the reparameterization. In formulation (12), the parameters \(\boldsymbol{\alpha}\) and \(\boldsymbol{\beta}\) represent the two sets of parameters selected for the process of selective soft merging. Notably, the hyper-parameter \(\lambda\) remains fixed as a user-defined tuning parameter and is not learned during training. The training methodology is relatively straightforward, involving the application of SGD in a mini-batch fashion, as outlined in Table 1.

## 3 Experiments

We conducted multiple experiments at various levels of merging to demonstrate the performance of multi-model soft merging, assess the robustness of the merging process, and explore selective merging across diverse neural networks. The tasks in the experiments include supervised classification and unsupervised source separation. We used ESC-50 [15] and MNIST (with mixtures) as the datasets.
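Before describing the individual experiments, the gate sampling of Section 2.3 can be made concrete with a short sketch. This is a minimal Python illustration, assuming the commonly used stretch constants \(\gamma=-0.1\) and \(\zeta=1.1\); the function name and default values are ours, not taken from the paper.

```python
import numpy as np

def sample_hard_concrete(log_alpha, beta=0.5, gamma=-0.1, zeta=1.1, size=None):
    """Sample gates g in [0, 1] from the hard concrete distribution.

    log_alpha: location parameter(s); beta: temperature in (0, 1).
    gamma < 0 and zeta > 1 stretch the concrete sample so that the final
    clipping step places finite probability mass exactly at 0 and 1.
    """
    u = np.random.uniform(1e-6, 1 - 1e-6, size=size)  # u ~ U(0, 1)
    s = 1.0 / (1.0 + np.exp(-(np.log(u) - np.log(1 - u) + log_alpha) / beta))
    s_bar = s * zeta + (1 - s) * gamma                # stretch to (gamma, zeta)
    return np.clip(s_bar, 0.0, 1.0)                   # fold into [0, 1]

# gates with large log_alpha concentrate near 1; with small log_alpha, near 0
print(sample_hard_concrete(log_alpha=2.0, size=5))
print(sample_hard_concrete(log_alpha=-2.0, size=5))
```

Because the sample is a differentiable function of \(\log\alpha\) and \(\beta\), training amounts to backpropagating through this sampler into the gate parameters while all model weights stay frozen.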
The neural networks are the Audio Spectrogram Transformer (AST) [16], ResNet18 [2], and a Variational Autoencoder (VAE) [17]. We apply soft merging at various levels to evaluate the performance of our proposed algorithm. By experimenting with different settings, we aim to demonstrate the versatility of our soft merging approach across a broad spectrum of tasks. **Model-Level: 10 Models Merging**. The proposed algorithm involves model-level soft merging of 10 vision transformer (ViT) models for audio classification [16] using the ESC-50 dataset. This approach employs parallel selection of the best model post-training, in contrast to the sequential comparison of neural network models. The dataset comprises 50 environmental sound classes, each containing 40 examples, which are divided into 1600 training and 400 validation samples. These models, initially pre-trained on ImageNet, process audio spectrograms using non-overlapping patches. Within the pool of 10 models, ranging from notably underperforming to highly competent ones, the soft-merging technique demonstrates its effectiveness even with limited training data. Furthermore, learning from validation data is accomplished within a mere 5 epochs, thereby reducing computational complexity compared to traditional sequential inference methods.

Table 1: General training algorithm.

Input: \(\mathbf{X}\), \(\mathbf{Y}\), \(\{\mathcal{M}_{j}\}\), \(\lambda\), the learning rate \(\eta\). Output: \(\mathcal{M}^{*}\).
1: Initialize \(\boldsymbol{\alpha}\) and \(\boldsymbol{\beta}\) randomly
2: For \(b=0,1,\dots\) /* \(b\)-th mini-batch */
3: \(\mathcal{L}_{l}=\mathcal{L}_{1}(\mathcal{M}(\mathbf{X}^{(b)}),\mathbf{Y}^{(b)})+\lambda\mathcal{L}_{2}(\boldsymbol{\alpha},\boldsymbol{\beta})\) /* \(\mathbf{X}^{(b)},\mathbf{Y}^{(b)}\) are the data and labels for the current mini-batch; \(\mathcal{L}_{l}\) is the loss function for full-model or selective merging */
4: \(\boldsymbol{\alpha}\leftarrow\boldsymbol{\alpha}-\eta\frac{\partial\mathcal{L}_{l}}{\partial\boldsymbol{\alpha}}\), \(\boldsymbol{\beta}\leftarrow\boldsymbol{\beta}-\eta\frac{\partial\mathcal{L}_{l}}{\partial\boldsymbol{\beta}}\)
5: Next \(b\)
6: \(\mathcal{M}^{*}\) contains the gate parameters \(\hat{s}^{*}\sim q(\mathbf{s};\boldsymbol{\alpha}^{*},\boldsymbol{\beta}^{*})\)

The performance of the merged model, as illustrated in Fig. 2, showcases its capabilities, while the learned gate parameters \(\log\alpha\) in Table 2 provide insight into model quality. Remarkably, even with unfavorable initializations, the training process successfully identifies the correct gradient directions. Model 10 emerges as the top-performing choice, demonstrating the algorithm's effectiveness in model selection without the necessity for extensive hyperparameter tuning. **Module-Level: Three Models Merging**. This experiment aims to determine whether our algorithm can effectively identify correct modules within a collection of both trained and untrained modules. Specifically, we took one ResNet18 model trained on MNIST and two untrained ResNet18 models, splitting each into two halves to create a total of six modules. Among these, two modules held functional values while the other four contained random values. Our objective was to discern the functional modules using the learned gate values (the gating mechanism being learned is sketched below).
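Conceptually, the quantity being learned in this experiment is the module-level gating of Eq. (3). The following is a minimal PyTorch sketch, assuming the modules compose sequentially and using a plain softmax as a simple stand-in for the hard concrete gates; class and variable names are ours.

```python
import torch
import torch.nn as nn

class ModuleLevelSoftMerge(nn.Module):
    """Sketch of Eq. (3): the m-th merged module is a gated sum of the
    m-th modules of the J candidate models, whose weights stay frozen."""

    def __init__(self, candidate_modules):  # list of M lists of J modules
        super().__init__()
        self.candidates = nn.ModuleList(nn.ModuleList(ms) for ms in candidate_modules)
        # one trainable gate logit per (module slot, model)
        self.gate_logits = nn.Parameter(
            torch.zeros(len(candidate_modules), len(candidate_modules[0])))
        for p in self.candidates.parameters():
            p.requires_grad_(False)          # soft merging: weights untouched

    def forward(self, x):
        for m, modules in enumerate(self.candidates):
            gates = torch.softmax(self.gate_logits[m], dim=0)
            x = sum(g * mod(x) for g, mod in zip(gates, modules))
        return x
```

During training only `gate_logits` receives gradients, matching the soft-merging principle of leaving the given model weights \(\boldsymbol{\theta}_{j}\) untouched.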
Despite the initial poor performance of the three individual models due to the untrained modules, applying soft merging yielded promising outcomes, indicating successful learning of the correct gates. For this experiment, we used \(\lambda=5\) with a learning rate of 0.001 over 150 epochs. The initial \(\log\mathbf{\alpha}\) values followed a Gaussian distribution \(\mathcal{N}(0,0.01)\). The learning curve in Fig. 3(a) depicts the merged model's progression, demonstrating that while the initial performance was subpar due to the random gate initialization (Fig. 3(b)), both training and validation accuracy improved significantly and quickly converged after around 80 epochs. Notably, the \(\log\mathbf{\alpha}\) values in Fig. 3(b) did not converge within 150 epochs, indicating that the parameter does not possess inherent bounds, owing to the formulation in Eqs. (5) and (6). **Selective Layer-Level Merging**. In our unsupervised source separation experiment, we adapted Variational Autoencoders (VAEs) for blind source separation, showcasing the capabilities of our algorithm in such settings. We applied this concept to image data, similar to audio and RF blind source separation problems [18]. By manually creating MNIST mixtures without labels, we mirrored the approach in [17]. We used two trained models with similar signal-to-interference ratio (SIR) performance and chose one layer in the encoder and one layer in the decoder to conduct the soft merging, which requires choosing a primary and a secondary model. The VAE KL penalty \(\beta_{KL}\) was increased up to 0.5 per epoch for 10 epochs, with \(\mathbf{\beta}=0\) and \(\lambda=1\). The gate values in the last batch are depicted in Fig. 4, where different prime-model selections led to different \(\log\mathbf{\alpha}\) values, while the merged model maintained an SIR of around \(29\), better than before merging.

## 4 Conclusions

Our research introduces the innovative concept of soft merging, a paradigm that addresses adaptability, efficiency, and robustness challenges in enhancing deep learning models. Our approach provides a versatile method for selectively integrating diverse neural network architectures, ultimately leading to improved model performance and convergence to a better local optimum.

Table 2: Model-level merging \(\log\alpha\) values.

| Model # | 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|---|
| Init. | 0.0089 | -0.0185 | -0.0075 | 0.0276 | 0.0059 |
| Final | -0.7604 | -0.7877 | 0.6155 | 0.6509 | 0.6722 |

| Model # | 6 | 7 | 8 | 9 | **10** |
|---|---|---|---|---|---|
| Init. | 0.0023 | 0.0120 | -0.0098 | 0.0115 | -0.0003 |
| Final | 0.7943 | 0.7943 | 0.7878 | 0.8740 | **0.8870** |

Figure 2: Accuracy comparison of the individual models and the merged model.

Figure 3: Module-level merging result, using three ResNet18 models manually split into 6 modules, with only two correct ones.

Figure 4: The gate values of the last mini-batch of training data.
2309.06628
Epistemic Modeling Uncertainty of Rapid Neural Network Ensembles for Adaptive Learning
Emulator embedded neural networks, which are a type of physics informed neural network, leverage multi-fidelity data sources for efficient design exploration of aerospace engineering systems. Multiple realizations of the neural network models are trained with different random initializations. The ensemble of model realizations is used to assess epistemic modeling uncertainty caused due to lack of training samples. This uncertainty estimation is crucial information for successful goal-oriented adaptive learning in an aerospace system design exploration. However, the costs of training the ensemble models often become prohibitive and pose a computational challenge, especially when the models are not trained in parallel during adaptive learning. In this work, a new type of emulator embedded neural network is presented using the rapid neural network paradigm. Unlike the conventional neural network training that optimizes the weights and biases of all the network layers by using gradient-based backpropagation, rapid neural network training adjusts only the last layer connection weights by applying a linear regression technique. It is found that the proposed emulator embedded neural network trains near-instantaneously, typically without loss of prediction accuracy. The proposed method is demonstrated on multiple analytical examples, as well as an aerospace flight parameter study of a generic hypersonic vehicle.
Atticus Beachy, Harok Bae, Jose Camberos, Ramana Grandhi
2023-09-12T22:34:34Z
http://arxiv.org/abs/2309.06628v1
# Epistemic Modeling Uncertainty of Rapid Neural Network Ensembles for Adaptive Learning ###### Abstract **Emulator embedded neural networks, which are a type of physics informed neural network, leverage multi-fidelity data sources for efficient design exploration of aerospace engineering systems. Multiple realizations of the neural network models are trained with different random initializations. The ensemble of model realizations is used to assess epistemic modeling uncertainty caused by a lack of training samples. This uncertainty estimation is crucial information for successful goal-oriented adaptive learning in an aerospace system design exploration. However, the costs of training the ensemble models often become prohibitive and pose a computational challenge, especially when the models are not trained in parallel during adaptive learning. In this work, a new type of emulator embedded neural network is presented using the rapid neural network paradigm. Unlike the conventional neural network training that optimizes the weights and biases of all the network layers by using gradient-based backpropagation, rapid neural network training adjusts only the last layer connection weights by applying a linear regression technique. It is found that the proposed emulator embedded neural network trains near-instantaneously, typically without loss of prediction accuracy. The proposed method is demonstrated on multiple analytical examples, as well as an aerospace flight parameter study of a generic hypersonic vehicle.** _Keywords:_ Machine Learning, Neural Network, Multifidelity, Active Learning, Aircraft Design

## 1 Introduction

Design exploration for high-performance aircraft requires computational modeling of multiple unconventional design configurations. Models must capture aerodynamics, structure, and propulsion, as well as interactions between these disciplines. A common design exploration technique is to sample the expensive physics-based models in a design of experiments and then use the sample data to train an inexpensive metamodel. Conventional metamodels include regression models such as ridge regression [1], Lasso [2], Polynomial Chaos Expansion [3], Gaussian process regression (GPR) or kriging [4, 5, 6], and neural networks [7]. However, many simulation evaluations are needed for the design of experiments because of the large number of independent parameters for each design and the complex responses resulting from interactions across multiple disciplines. Because high-fidelity simulations are expensive, the total computational costs can easily become intractable. Computational cost reduction is often achieved using Multi-Fidelity Methods (MFM) and Active Learning (AL). MFMs work by supplementing High-Fidelity (HF) simulations with less accurate but inexpensive Low-Fidelity (LF) simulations. AL involves intelligent generation of training data in an iterative process: rebuilding the metamodel after each HF sample is added, and then using the metamodel to determine the best HF sample to add next. When performing Multi-Fidelity (MF) modeling [8, 9], the usual strategy is to generate many affordable LF samples to capture the design space and correct them using a small number of expensive HF samples. MF modeling is a more cost-effective way of training an accurate surrogate model than using a single fidelity, which will suffer from sparseness with only HF samples and inaccuracy with only LF samples.
One method for generating LF data is Model Order Reduction (MOR). Nachar et al. [10] used this technique to generate LF data for a multi-fidelity kriging model. Reduced order models [11, 12, 13] are constructed by approximating the low-dimensional manifold on which the solutions lie. The manifold can be approximated linearly using Proper Orthogonal Decomposition, or nonlinearly using various methods such as an autoencoder neural network [14]. This approximation enables vastly decreasing the model degrees of freedom, which in turn reduces the computational costs of a simulation. Typically, reduced order models are used to give final predictions instead of being used as LF models in a multi-fidelity surrogate. However, this limits the aggressiveness with which the model can be reduced without introducing unacceptable errors into the final predictions. Even after very aggressive reduction, reduced order models can still provide valuable information about trends when used as LF models. LF models can also be constructed by coarsening the mesh, simplifying the physics, or utilizing historical data from a similar problem. When performing AL, location-specific epistemic uncertainty information is critical for determining where to add additional samples. Kriging is popular in large part because it returns this uncertainty. Cokriging [15, 16] is a popular type of MFM which extends kriging to multiple fidelities. It typically performs well but has difficulties if the LF function is not well correlated with the HF function. It also cannot incorporate more than a single LF function unless they fall into a strict hierarchy known beforehand. As an alternative to cokriging, a localized-Galerkin kriging approach has been developed which can combine multiple nonhierarchical LF functions and enable adaptive learning [17, 18]. However, kriging and kriging-based methods have fundamental numerical limitations. Fitting a kriging model requires optimization of the hyperparameters \(\theta\), where a different \(\theta\) parameter exists for each dimension. Additionally, each evaluation of the loss function requires inverting the covariance matrix, an operation with computational complexity on the order of \(O(n^{3})\), where \(n\) is the number of training samples. This makes kriging-based methods poorly suited for modeling functions above a couple of dozen dimensions, especially if the number of training samples is high. In this work, we use Neural Networks (NNs) to avoid the limitations of kriging. A common NN architecture, the fully connected feed-forward network, is shown in Fig. 1. A neural network is a structure composed of layers of neurons with weighted connections to the neurons of other layers. Each neuron takes as input the weighted sum of the neurons in the previous layer, plus the neuron's bias term. This input is then transformed by the neuron's activation function, and output in turn to the neurons of the next layer. Figure 1: Illustration of a fully connected multi-layer NN. A single hidden layer with sufficiently many neurons will enable a NN to model any continuous function. Training a NN involves adjusting the many weights and biases to output accurate predictions on training data. Typically, weight optimization is performed using a gradient descent algorithm such as Adam, with the gradients calculated via backpropagation. The more weights and biases a NN has, the higher its degree of flexibility. This allows NNs to function as universal approximators.
Specifically, the Universal Approximation Theorem (UAT) states that a feed-forward neural network with a single hidden layer can approximate any continuous function to any degree of accuracy over a bounded region if sufficiently many neurons with nonpolynomial activation functions are included in the hidden layer [19, 20, 21]. However, the UAT does not put upper bounds on the error of a neural network with a limited number of neurons. Therefore, in practice, the number of layers and neurons in each layer is typically selected based on experience or trial and error. Unfortunately, using conventional NNs for aerospace engineering design studies faces several practical drawbacks. In aerospace applications, physical experiments or high-fidelity simulations are typically expensive, meaning HF data will be sparse. Therefore, design studies often require extrapolating beyond the bounds of the HF data, which interpolators such as NNs are poorly suited to. One approach for mitigating these practical problems is to use Physics Informed Neural Networks (PINNs) [22, 23]. These combine physics information with a NN model. Typically, the physics information takes the form of differential equations, such as governing equations and boundary conditions. PINNs have various structures, such as NNs modifying parameters of the differential equations to improve accuracy, or NNs adding corrections and detail on top of a simplified model that does not capture the full physics. The physics information constrains the model, alleviating overfitting and increasing the accuracy of extrapolations beyond the HF data points. PINNs can also use physics models other than differential equations, such as first-principle models, data-driven models, and expert knowledge models. PINNs can also include multiple fidelities of data, for instance, by training a NN on LF training data and using the output as an input to the PINN trained on HF data [24, 25, 26]. Recently, the authors proposed the Emulator Embedded Neural Network (E2NN) [27, 28], a generic framework for combining any mix of physics-based models, experimental data, and any other information sources. Instead of using an LF model as an input to the main NN, E2NN embeds emulator models into its hidden layer neurons intrusively. A neuron with an LF model embedded into it is called an emulator. The E2NN is trained to transform and mingle the information from multiple emulators within the NN by finding the optimal connection weights and biases. This has an effect like that of a standard PINN architecture, reducing overfitting and enabling accurate extrapolations beyond sparse HF data points. E2NN performed well in the tests of the reference papers, handling non-stationary function behaviors, capturing a high dimensional Rosenbrock function with low-quality LF models, and successfully modeling stress in a Generic Hypersonic Vehicle (GHV) engineering example. To enable AL, it is necessary to capture epistemic prediction uncertainty for a data acquisition metric. Using a Bayesian NN [29, 30] is one option for obtaining epistemic learning uncertainty. In a Bayesian NN, the weights are assigned probability distributions rather than fixed values. Therefore, the prediction output is not a point estimate but a Gaussian distribution. Bayesian NNs are trained through backpropagation, but they require significantly higher computational costs than conventional NNs to optimize the probability density distribution of weights and biases.
Ensemble methods [31, 32] combine multiple models to improve accuracy and provide uncertainty information. Accuracy improves more when the errors of the models are less correlated. In the extreme case where \(M\) models have equal and entirely uncorrelated errors, averaging the models decreases error by a factor of \(1/M\)[33]. Error correlation decreases when models are less similar, which can be achieved by training on different subsets of data or using models with different underlying assumptions [34]. Uteva et al. [35] tested an active learning scheme of sampling where two different GPR models disagreed the most. Because the GPR models were trained on different sets of data, they had different \(\theta\)-hyperparameters and thus different behavior. This method outperformed sampling the location of maximum predicted variance of a single GPR model. Lin et al. [36] combined two NNs with different architectures into an ensemble, and added adaptive samples at the locations of maximum disagreement. Christiano et al. [37] combined three neural networks in an ensemble and added additional training samples where the variance of the predictions was maximized. While existing ensemble methods can output a location of maximum variance for adaptive learning, they do not output a predictive probability distribution like GPR does. Such a predictive distribution is highly useful, offering additional insight into the model and enabling the use of probabilistic acquisition functions such as Expected Improvement. In this work, we present a formal statistical treatment to extract such a probability distribution. Specifically, we estimate the epistemic learning uncertainty by combining multiple E2NN models and calculating a Bayesian predictive distribution. Training a single E2NN model typically requires several minutes to reach convergence, which means updating the entire ensemble can be time-consuming. Therefore, a method combining Neural Networks and linear regression is explored to alleviate the cost of neural network training. Early exploration of the method was performed by Schmidt et al. [38]. The method combines two basic steps: first, creating a neural network with random weights and biases, and second, setting the last layer of weights using a linear regression technique such as ordinary least squares or ridge regression. These steps are far cheaper than backpropagation-based neural network training, enabling accelerated retraining of the E2NN ensemble during active learning. The next section, Section 2, contains a brief review of E2NN. Section 3 covers the proposed approach of adaptive and rapid learning, which is followed by Section 4 showing fundamental mathematical demonstrations and a practical aerospace engineering example involving predicting the aerodynamic performance of the generic hypersonic vehicle. ## 2 Review of Emulator Embedded Neural Networks The E2NN has low fidelity models, called emulators, embedded in specific neurons of the neural network's architecture. The emulators contribute to regularization and preconditioning for increased accuracy. The emulators can take the form of any low-cost information source, such as reduced/decomposed physics-based models, legacy equations, data from a related problem, models with coarsened mesh or missing physics, etc. 
An emulator embedded in the last hidden layer can only be scaled before being added to the response, while an emulator embedded in the second-to-last hidden layer can be transformed through any functional mapping (by the Universal Approximation Theorem [19, 20, 21]) before being added to the response. The simplest solution when selecting the architecture is to embed the emulator in all hidden layers and allow the E2NN training to select which instances are useful. The flexibility of E2NN in selecting between an arbitrary number of LF models, each embedded in multiple hidden layers, enables wide applicability to problems with LF models of high, low, or unknown accuracy. The architecture of E2NN is illustrated in Fig. 2. Figure 2: Architecture of an Emulator Embedded Neural Network.

## 3 Proposed Approach: Adaptive Learning with Non-Deterministic Emulator Embedded Neural Network

The main technical contribution of this work is a novel method for combining predictions of an ensemble of models to approximate epistemic modeling uncertainty. This uncertainty information lowers training data costs by enabling Adaptive Learning (AL). For optimization problems, the Expected Improvement (EI) metric identifies adaptive sampling locations of maximum information gain. The EI assessment requires a probability density function, which is estimated by training an ensemble of E2NN models and estimating the underlying distribution of predictions. Greater disagreement among individual E2NN realizations is typically due to a lack of training samples and implies greater epistemic uncertainty. A second contribution is speeding up E2NN training by hundreds of times using Rapid Neural Network (RaNN) training, allowing fast model updating during AL iterations. Ensemble uncertainty estimation is discussed in Subsection 3.1, and AL using EI is explained in Subsection 3.2. Finally, the RaNN methodology is summarized in Subsection 3.3, and practical considerations for avoiding numerical errors are discussed in Subsection 3.4.

### Proposed Approach for Assessing Epistemic Modeling Uncertainty of E2NN

The epistemic modeling uncertainty is estimated using an ensemble of E2NN models. The E2NN model predictions agree at the training points, but the predictions between points depend on the random initializations of the weights and biases. In other words, the E2NN model predictions are samples drawn from a stochastic prediction function in the design space. A higher magnitude of disagreement implies greater epistemic modeling uncertainty. This suggests a useful assumption: model the epistemic uncertainty as the aleatoric uncertainty of the E2NN model predictions. Specifically, assume the two pdfs are approximately equal at each prediction point \(x\): \[p(y_{true}(x))\approx p(y_{E2NN}(x)) \tag{1}\] However, finding the exact aleatoric distribution of \(y_{E2NN}(x)\) requires training an infinite number of E2NN models. Instead, we use a finite number of model predictions at a point \(x\) as data \(D(x)\) to construct a posterior predictive distribution (ppd) for the E2NN predictions. This ppd is then used as an estimate of the epistemic pdf of the true function. \[p(y_{true}(x))\approx p(y_{E2NN}(x)|D(x)) \tag{2}\] Finding the ppd requires introducing a second assumption: the aleatoric E2NN predictions are normally distributed at each point \(x\): \[y_{E2NN}(x)\sim N(\mu,\sigma) \tag{3}\] The process of combining multiple E2NN model predictions to estimate the epistemic uncertainty is illustrated in Fig. 3.
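Before deriving the predictive distribution, it may help to fix ideas with a sketch of a single ensemble member. The following is a minimal numpy sketch of an E2NN forward pass, simplifying Fig. 2 by appending the emulator outputs to one hidden layer; the function names and layout are our own illustration, not the authors' implementation.

```python
import numpy as np

def swish(x):
    return x / (1.0 + np.exp(-x))  # swish(x) = x * sigmoid(x)

def e2nn_predict(x, lf_models, W1, b1, W2, b2):
    """One E2NN realization: low-fidelity emulator outputs are appended to
    the hidden-layer activations before the output layer. W1, b1, W2, b2
    are randomly initialized, so each realization predicts differently
    away from the training points."""
    h = swish(x @ W1 + b1)                                  # ordinary hidden neurons
    emulators = np.column_stack([m(x) for m in lf_models])  # emulator neurons
    h_aug = np.hstack([h, emulators])
    return h_aug @ W2 + b2
```

An ensemble is obtained by evaluating such a model for several independent random draws of the weights; the spread of the resulting predictions at a point \(x\) forms the data \(D(x)\) used below.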
For the general case of finding a posterior predictive distribution for a normal distribution from which we have \(n\) iid data points \(D\) but no additional information, we include a proof in Appendix A. First, we need a prior and a likelihood. The prior for the mean and variance is a normal-inverse-chi-squared distribution (\(NI\chi^{2}\)) \[p(\mu,\sigma^{2})=NI\chi^{2}(\mu_{0},\kappa_{0},\nu_{0},\sigma_{0}^{2})=N(\mu|\mu_{0},\sigma^{2}/\kappa_{0})\cdot\chi^{-2}(\sigma^{2}|\nu_{0},\sigma_{0}^{2}) \tag{4}\] Here \(\mu_{0}\) is the prior mean and \(\kappa_{0}\) is the strength of the prior mean, while \(\sigma_{0}^{2}\) is the prior variance and \(\nu_{0}\) is the strength of the prior variance. The likelihood of the \(n\) iid data points is the product of the likelihoods of the individual data points. By Bayes' rule, the joint posterior distribution for \(\mu\) and \(\sigma^{2}\) is proportional to the prior times the likelihood. Selecting the constants \(\kappa_{0}\), \(\sigma_{0}\) and \(\nu_{0}\) for an uninformative prior and integrating to marginalize out \(\mu\) and \(\sigma\) in the posterior yields the posterior predictive distribution. Given \(n\) iid samples, this is given by a t-distribution \[p(y|D)=t_{n-1}\left(\bar{y},\frac{1+n}{n}s^{2}\right) \tag{5}\] where \(\bar{y}\) is the sample mean and \(s^{2}\) is the sample variance. \[s^{2}=\frac{1}{n-1}\sum_{i}(y_{i}-\bar{y})^{2} \tag{6}\] The final pdf of \(y\) given the data \(D\) is a t-distribution instead of a normal distribution because it combines both Bayesian epistemic and aleatoric uncertainty. The epistemic component of the uncertainty can be reduced by increasing the number of samples. As the number of samples or ensemble models \(n\) approaches \(\infty\), the pdf of Eq. 5 will approach a normal distribution with the correct mean and standard deviation. For \(n=2\) samples (\(\nu=1\)) this yields a Cauchy, or Lorentzian, distribution, which has tails so heavy that the variance \(\sigma^{2}\) and mean \(\mu\) are undefined [39]. For \(n=3\) samples (\(\nu=2\)) the mean \(\mu\) is defined but the variance \(\sigma^{2}\) is infinite. Therefore, more than 3 ensemble models should always be used to estimate the mean when an uninformative prior is used. Substituting Eq. 5 into Eq. 1 yields the final estimate of the epistemic modeling uncertainty distribution, \[p(y_{true}(x))=t_{n-1}\left(\bar{y}_{E2NN}(x),\frac{1+n}{n}s_{E2NN}(x)^{2}\right) \tag{7}\] where \(\bar{y}_{E2NN}(x)\) is the sample mean prediction of the \(n\) E2NN models in the ensemble, \[\bar{y}_{E2NN}(x)=\frac{1}{n}\sum_{i}y_{E2NN_{i}}(x) \tag{8}\] and \(s_{E2NN}(x)\) is the sample standard deviation of the E2NN model predictions in the ensemble. \[s_{E2NN}(x)^{2}=\frac{1}{n-1}\sum_{i}(y_{E2NN_{i}}(x)-\bar{y}_{E2NN}(x))^{2} \tag{9}\] Use of an uninformative prior results in conservative and robust estimation of the epistemic modeling uncertainty. Figure 3: Illustration of using multiple E2NN model realizations to estimate the underlying aleatoric probability distribution. This estimate is used as an approximation of the epistemic modeling uncertainty.

### Adaptive Learning (AL) Using Expected Improvement

Adaptive learning allows data collection to be focused in areas of interest and areas with high uncertainty, reducing the number of samples required for design exploration and alleviating computational costs. Typically, an acquisition function is used to measure the value of information gained from adding data at a new location.
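Putting the preceding derivation into code: the following minimal sketch turns an ensemble's predictions at a candidate point into the posterior predictive t-distribution of Eqs. (7)-(9) and scores it with the t-distribution Expected Improvement of Eq. (13) given below. Function names and the toy numbers are ours.

```python
import numpy as np
from scipy.stats import t as student_t

def ensemble_ppd(preds):
    """Posterior predictive t-distribution at one point x, Eq. (7),
    given the predictions of the n ensemble members."""
    n = len(preds)
    mu = preds.mean()                      # sample mean, Eq. (8)
    s2 = preds.var(ddof=1)                 # sample variance, Eq. (9)
    scale = np.sqrt((1 + n) / n * s2)
    return mu, scale, n - 1                # mean, scale, degrees of freedom

def expected_improvement(f_min, mu, scale, nu):
    """t-distribution Expected Improvement, Eq. (13)."""
    z = (f_min - mu) / scale
    return ((f_min - mu) * student_t.cdf(z, nu)
            + nu / (nu - 1) * (1 + z**2 / nu) * scale * student_t.pdf(z, nu))

# e.g., 16 ensemble predictions at a candidate x, with current best sample 0.8
preds = np.random.normal(loc=1.0, scale=0.2, size=16)
mu, scale, nu = ensemble_ppd(preds)
print(expected_improvement(f_min=0.8, mu=mu, scale=scale, nu=nu))
```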
The overall process requires a few simple steps: 1. Generate HF responses from an initial design of experiments. 2. Use sample data to build an ensemble of E2NN models. 3. Maximize the acquisition function using an optimization technique. 4. If the maximum acquisition value is above tolerance, add a training sample at the location and go to step 2. Otherwise, stop because the optimization has converged. The criteria for where to add additional data depend on the design exploration goals. A common goal is global optimization. Global minimization seeks to find the minimum of a function in \(d\)-dimensional space. \[x_{opt}=\operatorname*{argmin}_{x\in\mathbb{R}^{d}}y(x) \tag{10}\] For this objective, Expected Improvement (EI) is applicable [4, 18]. Informally, EI at a design point is the amount by which the design will, on average, improve over the current optimum. Formally, this is calculated by integrating the product of the degree and likelihood of every level of improvement. Likelihoods are given by the predictive probability distribution. Any new sample worse than the current optimum yields an improvement of 0. The general expression for EI is given by Eq. 11 and illustrated in Fig. 4. \[EI(x)=\int_{-\infty}^{\infty}pdf(y)\cdot max(y_{current\ opt}-y,0)\cdot dy \tag{11}\] The EI value for a Gaussian predicted probability distribution is expressed as the closed-form expression, \[EI(x)=(f_{min}-\hat{y}(x))\cdot\Phi\left(\frac{f_{min}-\hat{y}(x)}{\sigma_{z}(x)}\right)+\sigma_{z}(x)\cdot\phi\left(\frac{f_{min}-\hat{y}(x)}{\sigma_{z}(x)}\right) \tag{12}\] where \(\phi(\cdot)\) and \(\Phi(\cdot)\) are the pdf and cdf of the standard normal distribution, respectively; \(\hat{y}(x)\) and \(\sigma_{z}(x)\) are the mean and standard deviation of the predictive probability distribution, respectively; and \(f_{min}\) is the current optimum. The current optimum can be defined as either the best sample point found so far, or the best mean prediction of the current surrogate. We use the former definition. The two definitions approach each other as the adaptive learning converges on the optimum. Figure 4: Illustration of Expected Improvement calculation for adaptive learning. Unlike a kriging model, an E2NN ensemble returns a Student's t-distribution instead of a Gaussian distribution. The resulting formulation of EI in this case is [40] \[E[I(x)]=(f_{min}-\mu)\Phi_{t}(z)+\frac{\nu}{\nu-1}\left(1+\frac{z^{2}}{\nu}\right)\sigma\phi_{t}(z) \tag{13}\] where the t-score of the best point found so far is \[z=\frac{f_{min}-\mu}{\sigma} \tag{14}\] and \(\phi_{t}(\cdot)\) and \(\Phi_{t}(\cdot)\) are the pdf and cdf of the standard Student's t-distribution, respectively. Also, \(\mu\), \(\sigma\), and \(\nu\) are the mean, scale factor, and degrees of freedom of the predictive t-distribution.

### Training of E2NNs As Rapid Neural Networks

Typically, NNs are trained using auto-differentiation and backpropagation. The main problem in NN training is saddle points, which look like local minima because all gradients are zero and the objective function is increasing along almost all direction vectors. However, in a tiny fraction of possible directions the objective function decreases. If a NN has millions of weights, determining which direction to follow to escape from the saddle point is difficult. Optimizers such as Adam use momentum and other techniques to reduce the chance of getting stuck at a saddle point.
However, a single E2NN model still takes significant computational time to reach convergence, requiring minutes to hours for a moderately sized neural network. Depending on the number of samples and the dimensionality of the problem, training an ensemble of E2NNs for adaptive learning will introduce significant computational costs. To increase speed, the E2NN models can be trained as Rapid Neural Networks (RaNNs). Essentially, this involves initializing a neural network with random weights and biases in all layers and then training only the last layer connections. The last layer's weights and bias are trained by formulating and solving a linear regression problem such as ridge regression, skipping the iterative training process entirely. This accelerates training multiple orders of magnitude. These models are sometimes referred to as extreme learning machines, a term coined by Dr. Guang-Bin Huang, although the idea existed previously. An early example of RaNN by Schmidt et al. in 1992 utilized a feed-forward network with one hidden layer [38]. The weights connecting the input to the first layer were set randomly, and the weights connecting the hidden layer to the output were computed by minimizing the sum of squared errors. The optimal weights are analytically computable using standard matrix operations, resulting in very fast training. Saunders et al. [41] used ridge regression instead of least squares regression when computing the weights connecting the hidden layer to the output. Huang et al. later developed equivalent methods [42, 43], and demonstrated that like conventional NNs, RaNNs are universal approximators [44], with the ability to capture any continuous function over a bounded region to arbitrary accuracy if a sufficient number of neurons exist in the hidden layer. However, despite being universal approximators, RaNNs require many more hidden layer neurons to accurately approximate a complex function than NNs trained with backpropagation and gradient descent. This is because the hidden layer neurons are essentially functions that are linearly combined to yield the NN prediction. Backpropagation intelligently selects these functions, while RaNN relies on randomly chosen functions and therefore requires more of them to construct a good fit. If fewer neurons and higher robustness are desired for the final model, an ensemble of E2NN models can be trained using backpropagation after the adaptive learning has converged. In this work, we propose applying the random initialization and linear regression techniques to E2NN models instead of standard NNs. Multiple realizations of E2NN with RaNN training are combined into an ensemble, enabling uncertainty estimation and adaptive learning. The proposed framework of adaptive learning is demonstrated in multiple example problems in Section 4. ### Practical Considerations for Avoiding Large Numerical Errors Setting the last layer weights using linear regression sometimes causes numerical issues when capturing highly nonlinear functions, even when enough neurons are included. If ridge regression or LASSO is used, the E2NN model will not interpolate the training data points. If no regularization is used, the weights become very large to force interpolation. Large positive values are added to large negative values, resulting in round-off error of the E2NN prediction. The resulting fit is not smooth, but jitters up and down randomly. Numerical stability can be improved by using a Fourier activation function such as \(\sin(x)\). 
### Practical Considerations for Avoiding Large Numerical Errors

Setting the last-layer weights using linear regression sometimes causes numerical issues when capturing highly nonlinear functions, even when enough neurons are included. If ridge regression or LASSO is used, the E2NN model will not interpolate the training data points. If no regularization is used, the weights become very large to force interpolation. Large positive values are added to large negative values, resulting in round-off error in the E2NN prediction. The resulting fit is not smooth, but jitters up and down randomly. Numerical stability can be improved by using a Fourier activation function such as \(\sin(x)\). This is reminiscent of a Fourier series, which can capture even non-continuous functions to arbitrary accuracy. In fact, a NN with Fourier activation functions and a single hidden layer with \(n\) neurons can capture the first \(n\) terms of a Fourier series. Each neuron computes a different term of the series, with the first layer of weights controlling frequency, the bias terms controlling offset, and the second layer of weights controlling amplitude. However, when using rapid training, only the last-layer weights are optimized. As points are added to a highly nonlinear function, interpolation becomes more difficult and numerical instability is introduced despite the Fourier activation. This is counteracted by increasing the frequency of the Fourier activation function, which enables more rapid changes of the fit. For smooth or benign functions, the Swish activation function tends to outperform Fourier. Therefore, we include some E2NN models using Swish and some using Fourier within the ensemble. Any model with weights of magnitude above the tolerance of 100 is considered unstable and dropped from the ensemble. Additionally, any model with NRMSE on the training data above the tolerance of 0.001 is dropped from the ensemble, where NRMSE is defined as \[NRMSE=\sqrt{\frac{\sum_{i=1}^{N}(\hat{y}_{i}-Y_{i})^{2}}{\sum_{i=1}^{N}(\bar{Y}-Y_{i})^{2}}} \tag{15}\] for predictions \(\hat{y}_{i}\) and \(N\) training samples \(Y_{i}\). If over half of the Fourier models are dropped from the ensemble, all Fourier models have their frequencies increased and are retrained. These changes eliminate noisy and unstable models from the ensemble, and modify models to remain stable as new points are added to the training data.
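The two drop criteria amount to a simple filtering pass over the ensemble. The following is a minimal sketch, assuming numpy and a hypothetical model interface with `predict(X)` and `last_layer_weights` attributes (these names are illustrative, not from the paper).

```python
# Sketch of the ensemble stability filter using the two tolerances above.
import numpy as np

def nrmse(y_pred, y_true):
    """Normalized root-mean-square error on the training data, Eq. (15)."""
    return np.sqrt(np.sum((y_pred - y_true) ** 2)
                   / np.sum((np.mean(y_true) - y_true) ** 2))

def filter_ensemble(models, x_train, y_train, w_tol=100.0, e_tol=1e-3):
    kept = []
    for m in models:
        if np.max(np.abs(m.last_layer_weights)) > w_tol:
            continue  # weights too large: numerically unstable fit
        if nrmse(m.predict(x_train), y_train) > e_tol:
            continue  # fails to interpolate the training data
        kept.append(m)
    return kept
```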
## 4 Numerical Experiments

In aerospace applications, the costs of HF sample generation, i.e., computational fluid dynamics simulation, aeroelasticity analysis, coupled aerothermal analysis, etc., are typically far higher than the costs of generating LF samples and training NN models. Therefore, in the following examples, we compare the cost of various prediction models in terms of the number of HF samples required, rather than computer wall-clock or GPU time. We assume that enough LF samples are collected to train an accurate metamodel, which is used to cheaply compute the emulator activations whenever the neural network makes predictions.

In the following examples, we use a fully connected feed-forward neural network architecture. All LF functions are embedded as emulators in all hidden layers. The input variables are scaled to \([-1,1]\). The weights are initialized using the Glorot normal distribution. Biases are initialized differently for Swish and Fourier activation functions: Fourier biases are initialized uniformly on \([0,2\pi]\) to set the functions to a random phase or offset, while Swish biases are initialized uniformly on \([-4,4]\).

In total, each ensemble contains 16 E2NN models with a variety of architectures and activation functions. Identical models make the same assumptions about underlying function behavior when deciding how to interpolate between points; if these assumptions are incorrect, the error affects the whole ensemble. Therefore, including dissimilar models results in more robust predictions. Four activation functions and two different architectures yield 8 unique NN models, and each unique model is included twice for a total of 16 E2NN models in the ensemble. The four activation functions are \(\text{swish}(x)\), \(\sin(s\,x)\), \(\sin(1.1\,s\,x)\), and \(\sin(1.2\,s\,x)\), where \(s\) is a frequency scale term. The two architectures are a small network and a large network. The small network has a single hidden layer with \(2n\) neurons, where \(n\) is the number of training samples, so the number of neurons is dynamically increased as new training samples are added. The large network has two hidden layers, where the first hidden layer has 200 neurons and the second hidden layer has 5000 neurons; placing most neurons in the second hidden layer enables more of the NN weights to be adjusted by linear regression. The large and small NNs use different scale terms for the Fourier activation functions: the large-NN scale term is increased whenever more than half the large Fourier NNs have bad fits, and the small-NN scale term is increased whenever over half the small Fourier NNs have bad fits. Because numerical instability is already corrected for, we do not use ridge regression. Instead, we perform unregularized linear regression using the Moore-Penrose inverse with a numerically stabilized \(\Sigma\) term.

### One-dimensional analytic example with a linearly deviated LF model

An optimization problem of the following form is considered. \[x_{opt}=\operatorname*{argmin}_{x\in[0,1]}y_{HF}(x) \tag{16}\] Here \(x\) is a design variable on the interval \([0,1]\). The high-fidelity function \(y_{HF}(x)\) and its low-fidelity counterpart \(y_{LF}(x)\) are given in Eqs. 17 and 18. These functions have been used previously in the literature when discussing MF modeling methods [17, 45]. \[y_{HF}(x)=(6x-2)^{2}\sin(12x-4) \tag{17}\] \[y_{LF}(x)=0.5y_{HF}(x)+10(x-0.5)-5 \tag{18}\]

As shown in Fig. 5(a), the initial fit uses three HF samples at \(x=[0,0.5,1]\) and an LF function which is linearly deviated from the HF function. The 16 E2NN models used in the ensemble are shown in Fig. 5(b). Three ensemble models are outliers, significantly overestimating the function value near the optimum. Two of these models nearly overlap, looking like a single model. All three of these inferior models are small NNs with Fourier activation functions.

Figure 5: Initial problem and fitting of the E2NN model.

The mean and 95% probability range of the predictive t-distribution are both shown in Fig. 6(a). From this t-distribution, the Expected Improvement is calculated in Fig. 6(b). A new sample is added at the location of maximum EI.

Figure 6: Initial model and expected improvement.

The true optimum is \(x_{opt}=0.7572\), \(y_{opt}=-6.0207\). As shown in Fig. 7(a), the first adaptive sample at \(x=0.7545\) lands very near the optimum, and the retrained ensemble's mean prediction is highly accurate. After the first adaptive sample, the maximum EI is still above tolerance. Therefore, an additional sample is added as shown in Fig. 7(b). The second adaptive sample at \(x=0.7571\) is only \(10^{-4}\) from the true optimum. After the second adaptive sample, the maximum EI value falls below tolerance, and the adaptive sampling converges.

Figure 7: Iterative model as adaptive samples are added.
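For reference, the two test functions and the initial design of experiments are reproducible in a few lines. A minimal sketch, assuming numpy:

```python
# The 1D multi-fidelity test problem of Eqs. (17)-(18).
import numpy as np

def y_hf(x):
    """High-fidelity function, Eq. (17)."""
    return (6.0 * x - 2.0) ** 2 * np.sin(12.0 * x - 4.0)

def y_lf(x):
    """Linearly deviated low-fidelity counterpart, Eq. (18)."""
    return 0.5 * y_hf(x) + 10.0 * (x - 0.5) - 5.0

x_train = np.array([0.0, 0.5, 1.0])  # initial HF design of experiments
y_train = y_hf(x_train)
```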
### Two-dimensional analytical example

The proposed ensemble method is compared with the popular kriging method for minimization of the following two-dimensional function. \[y_{HF}(x_{1},x_{2})=\sin(21(x_{1}-0.9)^{4})\cos(2(x_{1}-0.9))+(x_{1}-0.7)/2+2x_{2}^{2}\sin(x_{1}x_{2}) \tag{19}\] This nonstationary test function was introduced in [46, 47]. The kriging method uses only HF training samples during optimization, but E2NN makes use of the following LF function. \[y_{LF}(x_{1},x_{2})=\frac{y_{HF}(x_{1},x_{2})-2+x_{1}+x_{2}}{1+0.25x_{1}+0.5x_{2}} \tag{20}\] The independent variables \(x_{1}\) and \(x_{2}\) are constrained to the intervals \(x_{1}\in[0.05,1.05]\), \(x_{2}\in[0,1]\). The LF function exhibits nonlinear deviation from the HF function, as shown in Fig. 8.

Figure 8: Comparison of HF and LF functions for a nonstationary test function.

Eight training samples are selected to build the initial model using Latin hypercube sampling. The initial E2NN ensemble prediction is shown in Fig. 9(a). The resulting EI is shown in Fig. 9(b), and the ensemble prediction after a new sample is added is shown in Fig. 9(c). The initial fit is excellent, with only a small difference between the mean prediction and the HF function. After the first adaptive sample is added, the optimum is accurately pinpointed.

Figure 9: Adaptive sampling of E2NN ensemble.

To compare the performance of adaptive learning with single-fidelity kriging, EGO with a kriging model is run with the same initial set of 8 points. The best kriging sample is shown in Fig. 10(a). After adding a sample near the location of the optimum, the kriging model still does not capture the underlying trend of the HF model, as shown in Fig. 10(c). The E2NN ensemble maintains higher accuracy over the design space by leveraging the LF emulator. For the E2NN ensemble, the algorithm adds one more sample further up the valley in which the optimum lies, and then terminates because the expected improvement converges below tolerance. The final fit is shown in Fig. 11(a). The kriging model adds 29 samples before it converges, and still does not find the exact optimum, as shown in Fig. 11(b).

### Three-dimensional CFD example using a hypersonic vehicle wing

This example explores modeling the lift-to-drag ratio (\(CL/CD\)) of a wing of the Generic Hypersonic Vehicle (GHV) at various flight conditions. The GHV was developed at Wright-Patterson Air Force Base to allow researchers outside the Air Force Research Laboratory to perform hypersonic modeling studies. The parametric geometry of the wing used here was developed by researchers at the Air Force Institute of Technology [48] and is shown in Fig. 12.

Figure 12: Illustration of GHV wing used in CFD analysis.

For this example, we studied the maximum lift-to-drag ratio of the GHV wing design with respect to three operational condition variables: Mach number (normalized as \(x_{1}\)), angle of attack (normalized as \(x_{2}\)), and altitude (normalized as \(x_{3}\)). The Mach number ranges over \([1.2,4.0]\), the angle of attack over \([-5^{\circ},8^{\circ}]\), and the altitude over \([0,50\text{ km}]\). The speed of sound decreases with altitude, so the same Mach number denotes a lower speed at higher altitude. Atmospheric properties were calculated using the scikit-aero python library based on the U.S. 1976 Standard Atmosphere. FUN3D [49] performed the CFD calculations. We used Reynolds-Averaged Navier-Stokes (RANS) with a mesh of \(272,007\) tetrahedral cells for the HF model, and Euler with a mesh of \(29,643\) tetrahedral cells for the LF model. To enable rapid calling of the LF model during each NN evaluation, \(300\) evaluations of the LF model were performed and used to train a GPR model. This GPR model was then used as the LF function. The HF and LF meshes are compared in Fig. 13.

Figure 13: Comparison of HF and LF meshes used in CFD analysis.
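As a rough illustration of the LF-surrogate step, the following sketch fits a GPR to LF samples with scikit-learn. The arrays `x_lf` and `cl_cd_lf` are placeholders standing in for the 300 normalized flight conditions and the corresponding Euler-solver outputs, which are not reproduced here.

```python
# Sketch of training a cheap GPR emulator on LF CFD samples, assuming scikit-learn.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

x_lf = np.random.default_rng(1).uniform(-1.0, 1.0, size=(300, 3))  # (Mach, AoA, altitude), scaled
cl_cd_lf = np.zeros(300)                                           # placeholder for Euler CFD outputs

gpr_lf = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
gpr_lf.fit(x_lf, cl_cd_lf)

def lf_emulator(x):
    """Cheap LF evaluation used for the emulator activations of each E2NN."""
    return gpr_lf.predict(np.atleast_2d(x))
```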
As shown in Fig. 14, the angle of attack (\(x_{2}\)) is the most influential variable, followed by Mach number (\(x_{1}\)), with altitude (\(x_{3}\)) contributing little effect. The models show similar trends, except for Mach number, which is linear according to the LF model and quadratic according to the HF model. The captured viscous effects and finer mesh enable the HF model to capture more of the complexity resulting from the underlying physics.

Figure 14: Comparison of HF and LF CFD models.

The problem is formulated as minimizing the negative of the lift-to-drag ratio rather than maximizing the lift-to-drag ratio, following optimization convention. Both the ensemble and GPR models are initialized with 10 HF samples selected using Latin hypercube sampling. Five different optimization runs are completed for each method, using the same 5 random LHS initializations to ensure a fair comparison. The optimization convergence as samples are added is shown in Fig. 15. Both models start with the same initial optimum values because they share the same initial design of experiments. However, the E2NN ensemble model improves much more quickly than GPR, and converges to a better optimum. E2NN requires only 11 HF samples to reach a better optimum on average than GPR reached after 51 HF samples. In some cases, the GPR model converges prematurely before finding the optimum solution. E2NN much more consistently finds the optimum of \(-13\), which corresponds to a lift-to-drag ratio of 13.

Figure 15: Comparison of E2NN and GPR convergence towards optimum \(-CL/CD\). Average values across all 5 runs are shown as thick lines, while the individual run values are shown as thin lines.

## 5 Conclusions

In this research, we present a novel framework of adaptive machine learning for engineering design exploration using an ensemble of Emulator Embedded Neural Networks (E2NN) trained as rapid neural networks. This approach allows for the assessment of modeling uncertainty and expedites learning based on the design study's objectives. The proposed framework is successfully demonstrated on multiple analytical examples and an aerospace problem utilizing CFD analysis for a hypersonic vehicle. The performance of the proposed E2NN framework is highlighted when compared to a single-fidelity kriging optimization: E2NN exhibited superior robustness and reduced computational cost in converging to the true optimum.

The central contribution of this work is a novel technique for approximating epistemic modeling uncertainty using an ensemble of E2NN models. The ensemble predictions are analyzed using Bayesian statistics to produce a t-distribution estimate of the true function response. The uncertainty estimation methodology allows for active learning, which reduces the training data required through efficient goal-oriented design exploration. This is a sequential process of intelligently adding more data based on the ensemble, and then rebuilding the ensemble. The training of the ensemble is drastically accelerated by applying the Rapid Neural Network (RaNN) methodology, enabling individual E2NNs to be trained near-instantaneously. This speedup is enabled by setting some neural network weights to random values and setting others using fast techniques such as linear regression and ridge regression.
The essential components of the proposed framework are 1) the inclusion of emulators, 2) the ensemble uncertainty estimation, 3) active learning, and 4) the RaNN methodology. These components work together to make the overall methodology feasible and effective. For instance, the active learning would not be possible without the uncertainty estimation. Emulators make the ensemble more robust by stabilizing individual E2NN fits, resulting in fewer defective or outlier fits and better uncertainty estimation. The inclusion of the emulators and the use of adaptive sampling reduce the cost of training data generation more than either method could individually. Finally, the RaNN methodology accelerates training of individual E2NN models by hundreds to thousands of times, preventing the many re-trainings required by the ensemble and adaptive sampling methodologies from introducing significant computational costs. All these techniques combine synergistically to create a robust and efficient framework for engineering design exploration.

**Declaration of Competing Interests**

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

**Research Data**

No data was used for the research described in this article. The computer codes used for the numerical examples are available at [https://github.com/AtticusBeachy/multi-fidelity-nn-ensemble-examples](https://github.com/AtticusBeachy/multi-fidelity-nn-ensemble-examples).

**Acknowledgement**

This research was sponsored by the Air Force Research Laboratory (AFRL) and the Southwestern Ohio Council for Higher Education (SOCHE) under agreement FA8650-19-2-9300. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of SOCHE, AFRL, or the U.S. Government.
2309.09043
Forward Invariance in Neural Network Controlled Systems
We present a framework based on interval analysis and monotone systems theory to certify and search for forward invariant sets in nonlinear systems with neural network controllers. The framework (i) constructs localized first-order inclusion functions for the closed-loop system using Jacobian bounds and existing neural network verification tools; (ii) builds a dynamical embedding system where its evaluation along a single trajectory directly corresponds with a nested family of hyper-rectangles provably converging to an attractive set of the original system; (iii) utilizes linear transformations to build families of nested paralleletopes with the same properties. The framework is automated in Python using our interval analysis toolbox $\texttt{npinterval}$, in conjunction with the symbolic arithmetic toolbox $\texttt{sympy}$, demonstrated on an $8$-dimensional leader-follower system.
Akash Harapanahalli, Saber Jafarpour, Samuel Coogan
2023-09-16T16:49:19Z
http://arxiv.org/abs/2309.09043v2
# Forward Invariance in Neural Network Controlled Systems

###### Abstract

We present a framework based on interval analysis and monotone systems theory to certify and search for forward invariant sets in nonlinear systems with neural network controllers. The framework (i) constructs localized first-order inclusion functions for the closed-loop system using Jacobian bounds and existing neural network verification tools; (ii) builds a dynamical embedding system where its evaluation along a single trajectory directly corresponds with a nested family of hyper-rectangles provably converging to an attractive set of the original system; (iii) utilizes linear transformations to build families of nested paralleletopes with the same properties. The framework is automated in Python using our interval analysis toolbox npinterval, in conjunction with the symbolic arithmetic toolbox sympy, demonstrated on an \(8\)-dimensional leader-follower system.

## I Introduction

Learning enabled components are becoming increasingly prevalent in modern control systems. Their ease of computation and ability to outperform optimization-based approaches make them valuable for in-the-loop usage. However, neural networks are known to be vulnerable to input perturbations--small changes in their input can yield wildly varying results. In safety-critical applications, under uncertainty in the system, it is paramount to verify safe system behavior for an infinite time horizon. Such behaviors are guaranteed through _robust forward invariant sets_, _i.e._, sets which the system never leaves under any uncertainty. Forward invariant sets are useful for a variety of tasks. In monitoring, a safety specification extends to infinite time if the system is guaranteed to enter an invariant set. Additionally, for asymptotic behavior of a system, an invariant set can be used as a robustness margin to replace the traditional equilibrium viewpoint in the presence of disturbances. In designing controllers, one can induce infinite-time safe behavior by ensuring the existence of an invariant set containing all initial states of the system and excluding any unsafe regions. There are several classical techniques in the literature for certifying forward invariant sets, such as Lyapunov-based analysis [1], barrier-based methods [2], and set-based approaches [3]. However, a naive application of these methods generally fails when confronted with high-dimensional and highly nonlinear neural network controllers in-the-loop.

_Literature review:_ The problem of verifying the input-output behavior of standalone neural networks has been studied extensively [4]. There is a growing body of literature studying verification of neural networks applied in feedback loops, which presents unique challenges due to the accumulation of error in closed-loop, _i.e._, the wrapping effect. For example, there are functional approaches such as POLAR [5], JuliaReach [6], and ReachMM [7] for nonlinear systems, and linear (resp. semi-definite) programming approaches such as ReachLP [8] (resp. Reach-SDP [9]) for linear systems. While these methods verify finite-time safety, their guarantees do not readily extend to infinite time. In particular, it is not clear how to adapt these tools to search for and certify forward invariant sets of neural network controlled systems.
There are a handful of papers that directly study forward invariance for neural networks in dynamics: in [10], a set-based approach is used to study forward invariance of a specific class of control-affine systems with feedforward neural network controllers; in [11], an ellipsoidal inner-approximation of a region of attraction of a neural network controlled system is obtained using Integral Quadratic Constraints (IQCs); in [12], a Lyapunov-based approach is used to study robust invariance of control systems modeled by neural networks.

_Contributions:_ In this letter, we propose a dynamical system approach for systematically finding nested families of robust forward invariant sets for nonlinear systems controlled by neural networks. Our method uses _localized_ first-order inclusion functions to construct an embedding system, which evaluates the inclusion function separately on the edges of a hyper-rectangle. Our first result is Proposition 1, which certifies (and fully characterizes for some systems) the forward invariance of a hyper-rectangle through the embedding system's evaluation at a single point. Our main result is Theorem 1, which describes how a single trajectory of the dynamical embedding system can be used to construct a nested family of invariant and attracting hyper-rectangles. However, in many applications, hyper-rectangles are not suitable for capturing forward invariant regions, and a simple linear transformation can greatly improve results. In Proposition 2, we carefully construct an accurate localized inclusion function for any linear transformation of the original system, which we use in Theorem 2 to find a nested family of forward invariant paralleletopes. Finally, we implement the framework in Python, demonstrating its applicability to an \(8\)-dimensional leader-follower system.

In this section, we introduce a framework for bounding the behavior of the uncertain dynamical system (3) using inclusion functions. We first review the notion of inclusion functions in III-A and then use them to construct first-order local bounds for neural network controlled systems in III-B.

### Inclusion Functions

Interval analysis aims to provide interval bounds on the output of a function given an interval of possible inputs [13].
Given a function \(f:\mathbb{R}^{n}\to\mathbb{R}^{m}\), the function \(\mathsf{F}=\left(\frac{\underline{\mathsf{F}}}{\overline{\mathsf{F}}}\right):\mathcal{T}_{\geq 0}^{2n}\to\mathcal{T}_{\geq 0}^{2m}\) is called an _inclusion function_ for \(f\) if \[\underline{\mathsf{F}}(\underline{x},\overline{x})\leq f(x)\leq\overline{\mathsf{F}}(\underline{x},\overline{x}),\quad\text{for every }x\in[\underline{x},\overline{x}], \tag{4}\] for every interval \([\underline{x},\overline{x}]\subset\mathbb{R}^{n}\), and is an \(\mathcal{S}\)-_localized inclusion function_ if the bounds (4) are valid for every interval \([\underline{x},\overline{x}]\subseteq\mathcal{S}\), in which case we instead write \(\mathsf{F}_{\mathcal{S}}\). Additionally, an inclusion function is

1. _monotone_ if \(\mathsf{F}(\underline{x},\overline{x})\geq_{\mathrm{SE}}\mathsf{F}(\underline{y},\overline{y})\), for any \([\underline{x},\overline{x}]\subseteq[\underline{y},\overline{y}]\);
2. _thin_ if for any \(x\), we have \(\underline{\mathsf{F}}(x,x)=f(x)=\overline{\mathsf{F}}(x,x)\);
3. _minimal_ if \(\mathsf{F}\) returns the tightest possible interval, _i.e._, for each \(i\in\{1,\ldots,m\}\), \[\underline{\mathsf{F}}_{i}^{\min}(\underline{x},\overline{x})=\inf_{x\in[\underline{x},\overline{x}]}f_{i}(x),\quad\overline{\mathsf{F}}_{i}^{\min}(\underline{x},\overline{x})=\sup_{x\in[\underline{x},\overline{x}]}f_{i}(x).\]

Given the one-to-one correspondence between \(\mathcal{T}_{\geq 0}^{2n}\) and intervals of \(\mathbb{R}^{n}\), an inclusion function is often interpreted as a mapping from intervals to intervals--given an inclusion function \(\mathsf{F}=\left(\frac{\underline{\mathsf{F}}}{\overline{\mathsf{F}}}\right):\mathcal{T}_{\geq 0}^{2n}\to\mathcal{T}_{\geq 0}^{2m}\), we use the notation \([\mathsf{F}]=[\underline{\mathsf{F}},\overline{\mathsf{F}}]:\mathbb{R}^{n}\to\mathbb{R}^{m}\) to denote the equivalent interval-valued function with interval argument.

In this paper, we focus on two main methods to construct inclusion functions: (1) given a composite function \(f=f_{1}\circ f_{2}\circ\cdots\circ f_{N}\), and inclusion functions \(\mathsf{F}_{i}\) for \(f_{i}\) for every \(i\in\{1,\ldots,N\}\), the _natural inclusion function_ \[\mathsf{F}^{\mathrm{nat}}(\underline{x},\overline{x}):=(\mathsf{F}_{1}\circ\mathsf{F}_{2}\circ\cdots\circ\mathsf{F}_{N})(\underline{x},\overline{x}) \tag{5}\] provides a simple but possibly conservative method to build inclusion functions from the inclusion functions of simpler mappings; (2) given a differentiable function \(f\), with an inclusion function \(\mathsf{J}_{x}\) for its Jacobian matrix, the _Jacobian-based inclusion function_ \[[\mathsf{F}^{\mathrm{jac}}(\underline{x},\overline{x})]:=[\mathsf{J}_{x}(\underline{x},\overline{x})]([\underline{x},\overline{x}]-\hat{x})+f(\hat{x}), \tag{6}\] for any \(\hat{x}\in[\underline{x},\overline{x}]\), can provide better estimates by bounding the first-order Taylor expansion of \(f\). Both of these inclusion functions are monotone (resp. thin) assuming the inclusion functions used to build them are also monotone (resp. thin). In our previous work [14], we introduced the open-source package npinterval2, which extends the popular package numpy to include an interval data-type. The package automatically builds natural inclusion functions using minimal inclusion functions for standard functions and operations [14, Table 1], and can be used to construct Jacobian-based inclusion functions when coupled with a symbolic toolbox like sympy.

Footnote 2: The code for npinterval can be found at [https://github.com/qftfactslab/npinterval](https://github.com/qftfactslab/npinterval).
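As a toy illustration of a natural inclusion function (and not the npinterval API), the following sketch composes minimal inclusion functions of elementary operations for \(f(x)=xe^{x}\). Note that the bound it returns is valid but conservative, which is exactly the trade-off mentioned for (5).

```python
# A toy natural inclusion function (Eq. 5) for f(x) = x * exp(x), composed from
# minimal inclusion functions of * and exp; a self-contained sketch, not npinterval.
import math

class Interval:
    def __init__(self, lo, hi):
        assert lo <= hi
        self.lo, self.hi = lo, hi
    def __mul__(self, other):
        # minimal inclusion function for multiplication: extremes of the endpoint products
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))
    def exp(self):
        # exp is monotone, so its minimal inclusion function maps the endpoints
        return Interval(math.exp(self.lo), math.exp(self.hi))
    def __repr__(self):
        return f"[{self.lo:.4f}, {self.hi:.4f}]"

def f_nat(x):
    """Natural inclusion function for f(x) = x * exp(x), built by composition."""
    return x * x.exp()

print(f_nat(Interval(-1.0, 0.5)))  # contains the true range [-0.3679, 0.8244], conservatively
```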
### Localized Closed-Loop Inclusion Functions

One of the biggest challenges in neural network controlled system verification is correctly capturing the interactions between the system and the controller. For invariance analysis, it is paramount to capture the stabilizing nature of the controller, which can easily be lost with naive overbounding of the input-output interactions between the system and controller. We make the following assumption throughout.

**Assumption 1** (Local affine bounds of neural network).: For the neural network (2), there exists an algorithm that, for any interval \([\underline{\xi},\overline{\xi}]\), produces a local affine bound \((C_{[\underline{\xi},\overline{\xi}]},\underline{d}_{[\underline{\xi},\overline{\xi}]},\overline{d}_{[\underline{\xi},\overline{\xi}]})\) such that for every \(x\in[\underline{\xi},\overline{\xi}]\), \[C_{[\underline{\xi},\overline{\xi}]}x+\underline{d}_{[\underline{\xi},\overline{\xi}]}\leq N(x)\leq C_{[\underline{\xi},\overline{\xi}]}x+\overline{d}_{[\underline{\xi},\overline{\xi}]}.\]

Many off-the-shelf neural network verification frameworks can produce the linear estimates required in Assumption 1; in particular, we focus on CROWN [15]. For ReLU and otherwise piecewise linear networks, one can set up a mixed integer linear program similar to [16], which is tractable for small-sized networks. The bounds from Assumption 1 can be used to construct a \([\underline{\xi},\overline{\xi}]\)-localized inclusion function for \(N(x)\): \[\begin{split}\underline{\mathsf{N}}_{[\underline{\xi},\overline{\xi}]}(\underline{x},\overline{x})&=C^{+}_{[\underline{\xi},\overline{\xi}]}\underline{x}+C^{-}_{[\underline{\xi},\overline{\xi}]}\overline{x}+\underline{d}_{[\underline{\xi},\overline{\xi}]},\\ \overline{\mathsf{N}}_{[\underline{\xi},\overline{\xi}]}(\underline{x},\overline{x})&=C^{-}_{[\underline{\xi},\overline{\xi}]}\underline{x}+C^{+}_{[\underline{\xi},\overline{\xi}]}\overline{x}+\overline{d}_{[\underline{\xi},\overline{\xi}]},\end{split} \tag{7}\] where \((C^{+})_{i,j}=\max(C_{i,j},0)\), and \(C^{-}=C-C^{+}\).
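Equation (7) is a one-liner given the tuple from Assumption 1. A minimal sketch, assuming numpy:

```python
# Sketch of Eq. (7): turning CROWN-style affine bounds (C, d_lo, d_hi) into an
# inclusion function for the network on [x_lo, x_hi].
import numpy as np

def nn_inclusion(C, d_lo, d_hi, x_lo, x_hi):
    C_pos = np.maximum(C, 0.0)  # C^+
    C_neg = C - C_pos           # C^-
    n_lo = C_pos @ x_lo + C_neg @ x_hi + d_lo
    n_hi = C_neg @ x_lo + C_pos @ x_hi + d_hi
    return n_lo, n_hi
```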
Using the localized first-order bounds of the neural network, we propose a general framework for constructing closed-loop inclusion functions for neural network controlled systems that capture the first-order stabilizing effects of the controller. First, assuming \(f\) is differentiable, with inclusion functions \(\mathsf{J}_{x},\mathsf{J}_{u},\mathsf{J}_{w}\) for its Jacobians, one can construct a closed-loop Jacobian-based inclusion function \(\mathsf{F}^{\mathsf{c}}\). With the Jacobian bounds evaluated on the input \((\underline{z},\overline{z},\underline{\mathsf{N}}_{[\underline{z},\overline{z}]}(\underline{z},\overline{z}),\overline{\mathsf{N}}_{[\underline{z},\overline{z}]}(\underline{z},\overline{z}),\underline{w},\overline{w})\), define \[[\mathsf{F}^{\mathsf{c}}_{[\underline{z},\overline{z}]}(\underline{x},\overline{x},\underline{w},\overline{w})]=([\mathsf{J}_{x}]+[\mathsf{J}_{u}]C_{[\underline{x},\overline{x}]})[\underline{x},\overline{x}]+[\mathsf{J}_{u}][\underline{d}_{[\underline{x},\overline{x}]},\overline{d}_{[\underline{x},\overline{x}]}]+[\mathsf{R}_{[\underline{z},\overline{z}]}(\underline{w},\overline{w})], \tag{8}\] where \(\hat{x}\in[\underline{x},\overline{x}]\subseteq[\underline{z},\overline{z}]\), \(\hat{u}=N(\hat{x})\), \(\hat{w}\in[\underline{w},\overline{w}]\), and \([\mathsf{R}_{[\underline{z},\overline{z}]}(\underline{w},\overline{w})]:=-[\mathsf{J}_{x}]\hat{x}-[\mathsf{J}_{u}]\hat{u}+[\mathsf{J}_{w}]([\underline{w},\overline{w}]-\hat{w})+f(\hat{x},\hat{u},\hat{w})\). In Proposition 2, we provide a more general result which proves that (8) is a \([\underline{z},\overline{z}]\)-localized inclusion function for \(f^{\mathsf{c}}\) (take \(T=I\)). In the case that \(f\) is not differentiable, or finding an inclusion function for its Jacobian is difficult, as long as a (monotone) inclusion function \(\mathsf{F}\) for the open-loop system \(f\) is known, a (monotone) closed-loop inclusion function for \(f^{\mathsf{c}}\) from (3) can be constructed using the natural inclusion approach in (5) with \(\mathsf{N}\) from (7): \[\mathsf{F}^{\mathsf{c}}(\underline{x},\overline{x},\underline{w},\overline{w})=\mathsf{F}(\underline{x},\overline{x},\underline{\mathsf{N}}_{[\underline{x},\overline{x}]}(\underline{x},\overline{x}),\overline{\mathsf{N}}_{[\underline{x},\overline{x}]}(\underline{x},\overline{x}),\underline{w},\overline{w}). \tag{9}\]

## IV A Dynamical Approach to Set Invariance

Using the closed-loop inclusion functions developed in Section III, we embed the uncertain dynamical system (3) into a larger certain system that enables computationally tractable approaches to verifying and computing families of invariant sets. Consider the closed-loop system (3) with an \(\mathcal{S}\)-localized inclusion function \(\mathsf{F}^{\mathsf{c}}_{\mathcal{S}}:\mathcal{T}_{\geq 0}^{2n}\times\mathcal{T}_{\geq 0}^{2q}\rightarrow\mathcal{T}_{\geq 0}^{2n}\) for \(f^{\mathsf{c}}\) constructed via, _e.g._, (8) or (9), with the disturbance set \(\mathcal{W}\subseteq[\underline{w},\overline{w}]\). Then \(\mathsf{F}^{\mathsf{c}}_{\mathcal{S}}\) _induces_ an _embedding system_ for (3) with state \(\left(\frac{\underline{x}}{\overline{x}}\right)\in\mathcal{T}_{\geq 0}^{2n}\) and dynamics defined by \[\begin{split}\dot{\underline{x}}_{i}&=\Big{(}\underline{\mathsf{E}}_{\mathcal{S}}(\underline{x},\overline{x},\underline{w},\overline{w})\Big{)}_{i}:=\Big{(}\underline{\mathsf{F}}^{\mathsf{c}}_{\mathcal{S}}(\underline{x},\overline{x}_{i;\underline{x}},\underline{w},\overline{w})\Big{)}_{i},\\ \dot{\overline{x}}_{i}&=\Big{(}\overline{\mathsf{E}}_{\mathcal{S}}(\underline{x},\overline{x},\underline{w},\overline{w})\Big{)}_{i}:=\Big{(}\overline{\mathsf{F}}^{\mathsf{c}}_{\mathcal{S}}(\underline{x}_{i;\overline{x}},\overline{x},\underline{w},\overline{w})\Big{)}_{i},\end{split} \tag{10}\] where \(\mathsf{E}_{\mathcal{S}}:\mathcal{T}_{\geq 0}^{2n}\times\mathcal{T}_{\geq 0}^{2q}\rightarrow\mathbb{R}^{2n}\).
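The face-wise evaluation in (10) can be sketched as follows, assuming `F_c` is any closed-loop inclusion function returning a pair of lower/upper bound vectors for given state and disturbance intervals; the sign check at the end is exactly condition (11) of the proposition below.

```python
# Sketch of the embedding dynamics in Eq. (10): the inclusion function is
# evaluated separately on each face of the hyper-rectangle [x_lo, x_hi].
import numpy as np

def embedding_rhs(F_c, x_lo, x_hi, w_lo, w_hi):
    n = len(x_lo)
    dx_lo, dx_hi = np.zeros(n), np.zeros(n)
    for i in range(n):
        lower_face_hi = x_hi.copy(); lower_face_hi[i] = x_lo[i]  # face [x_lo, x_hi_{i;x_lo}]
        upper_face_lo = x_lo.copy(); upper_face_lo[i] = x_hi[i]  # face [x_lo_{i;x_hi}, x_hi]
        dx_lo[i] = F_c(x_lo, lower_face_hi, w_lo, w_hi)[0][i]
        dx_hi[i] = F_c(upper_face_lo, x_hi, w_lo, w_hi)[1][i]
    return dx_lo, dx_hi

# [x_lo, x_hi] is certified forward invariant if dx_lo >= 0 and dx_hi <= 0
# componentwise, i.e., the southeast-order condition (11).
```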
One of the key features of the embedding system, which evolves on \(\mathcal{T}_{\geq 0}^{2n}\), is that the inclusion function is evaluated separately on each face of the hyper-rectangle \([\underline{x},\overline{x}]\), represented by \([\underline{x},\overline{x}_{i;\underline{x}}]\) and \([\underline{x}_{i;\overline{x}},\overline{x}]\) for each \(i\in\{1,\ldots,n\}\). One favorable consequence of this separation is that the neural network verification steps from (8) and (9) are computed separately on each face, _i.e._, the \((C_{[\underline{x},\overline{x}]},\underline{d}_{[\underline{x},\overline{x}]},\overline{d}_{[\underline{x},\overline{x}]})\) tuple from Assumption 1 is localized separately to each face. As seen in Proposition 1, this meshes nicely with Nagumo's Theorem [3], which allows us to guarantee forward invariance by checking the boundary of the hyper-rectangle through one evaluation of the embedding system (10).

**Proposition 1** (Forward invariance in hyper-rectangles).: _Consider the neural network controlled system (3) with the disturbance set \(\mathcal{W}=[\underline{w},\overline{w}]\). Let \(\mathsf{F}^{\mathsf{c}}_{\mathcal{S}}\) be an \(\mathcal{S}\)-localized inclusion function for \(f^{\mathsf{c}}\), e.g., (8) or (9), and \(\mathsf{E}_{\mathcal{S}}\) be the embedding system (10) induced by \(\mathsf{F}^{\mathsf{c}}_{\mathcal{S}}\). Consider some \([\underline{x}^{*},\overline{x}^{*}]\subseteq\mathcal{S}\). If_ \[\mathsf{E}_{\mathcal{S}}(\underline{x}^{*},\overline{x}^{*},\underline{w},\overline{w})\geq_{\mathrm{SE}}0, \tag{11}\] _then \([\underline{x}^{*},\overline{x}^{*}]\) is a \([\underline{w},\overline{w}]\)-robustly forward invariant set. Moreover, if \(\mathsf{F}^{\mathsf{c}}_{\mathcal{S}}\) is the minimal inclusion function, the condition (11) is also necessary for \([\underline{x}^{*},\overline{x}^{*}]\) to be a \([\underline{w},\overline{w}]\)-robustly forward invariant set._

Proof.: For brevity, since \([\underline{x}^{*},\overline{x}^{*}]\subseteq\mathcal{S}\), we drop \(\mathcal{S}\) from the notation. Consider the set \([\underline{x}^{*},\overline{x}^{*}]\), and suppose that \(\mathsf{E}(\underline{x}^{*},\overline{x}^{*},\underline{w},\overline{w})\geq_{\mathrm{SE}}0\). Then, for every \(i\in\{1,\ldots,n\}\) and every \(w\in[\underline{w},\overline{w}]\), the vector field satisfies \(f^{\mathrm{c}}_{i}(x,w)\geq 0\) for every \(x\) on the \(i\)-th lower face \([\underline{x}^{\star},\overline{x}^{\star}_{i;\underline{x}^{\star}}]\), and \(f^{\mathrm{c}}_{i}(x,w)\leq 0\) for every \(x\) on the \(i\)-th upper face of the hyper-rectangle \([\underline{x}^{\star}_{i;\overline{x}^{\star}},\overline{x}^{\star}]\). Since this holds for every \(i\in\{1,\ldots,n\}\), by Nagumo's theorem [3, Theorem 3.1], the closed set \([\underline{x}^{\star},\overline{x}^{\star}]\) is forward invariant, since for every point \(x\) along its boundary \(\bigcup_{i}([\underline{x}^{\star},\overline{x}^{\star}_{i;\underline{x}^{\star}}]\cup[\underline{x}^{\star}_{i;\overline{x}^{\star}},\overline{x}^{\star}])\), the vector field \(f^{\mathrm{c}}(x,w)\) points into the set, for every \(w\in[\underline{w},\overline{w}]\). Now, suppose that \(\mathsf{E}\) is the embedding system induced by the minimal inclusion function \(\mathsf{F}^{\min}\) for \(f^{\mathrm{c}}\), and \([\underline{x}^{\star},\overline{x}^{\star}]\) is a hyper-rectangle such that \(\mathsf{E}(\underline{x}^{\star},\overline{x}^{\star},\underline{w},\overline{w})\not\geq_{\mathrm{SE}}0\).
Then there exists \(i\in\{1,\ldots,n\}\) such that either \[\underline{\mathsf{F}}^{\min}_{i}(\underline{x}^{\star},\overline{x}^{\star}_{i;\underline{x}^{\star}},\underline{w},\overline{w})=\inf_{x\in[\underline{x}^{\star},\overline{x}^{\star}_{i;\underline{x}^{\star}}],w\in[\underline{w},\overline{w}]}f^{\mathrm{c}}_{i}(x,w)<0,\text{ or}\] \[\overline{\mathsf{F}}^{\min}_{i}(\underline{x}^{\star}_{i;\overline{x}^{\star}},\overline{x}^{\star},\underline{w},\overline{w})=\sup_{x\in[\underline{x}^{\star}_{i;\overline{x}^{\star}},\overline{x}^{\star}],w\in[\underline{w},\overline{w}]}f^{\mathrm{c}}_{i}(x,w)>0.\] If the first case holds, then there exists \(x^{\prime}\in[\underline{x}^{\star},\overline{x}^{\star}_{i;\underline{x}^{\star}}]\), \(w\in[\underline{w},\overline{w}]\) such that \(f^{\mathrm{c}}_{i}(x^{\prime},w)<0\) along the \(i\)-th lower face of the hyper-rectangle. If the second case holds, then there exists \(x^{\prime}\in[\underline{x}^{\star}_{i;\overline{x}^{\star}},\overline{x}^{\star}]\), \(w\in[\underline{w},\overline{w}]\) such that \(f^{\mathrm{c}}_{i}(x^{\prime},w)>0\) along the \(i\)-th upper face of the hyper-rectangle. Thus, by Nagumo's theorem, the set \([\underline{x}^{\star},\overline{x}^{\star}]\) is not \([\underline{w},\overline{w}]\)-robustly forward invariant, as there exists a point along its boundary at which the vector field \(f^{\mathrm{c}}\) points outside the set.

**Remark 1** (Linear systems).: For the special case of the linear system \(\dot{x}=Ax+Bu+Dw\) controlled by a neural network \(u=N(x)\) with piecewise linear activations, ReLU or Leaky ReLU, one can compute the minimal inclusion function using a Mixed Integer Linear Program (MILP) similar to the formulation from [16] as \[\underline{\mathsf{F}}^{\min}_{i}=\min_{x\in[\underline{x},\overline{x}],w\in[\underline{w},\overline{w}]}(Ax+BN(x)+Dw)_{i},\] \[\overline{\mathsf{F}}^{\min}_{i}=\max_{x\in[\underline{x},\overline{x}],w\in[\underline{w},\overline{w}]}(Ax+BN(x)+Dw)_{i},\] for \(i\in\{1,\ldots,n\}\). When using the minimal inclusion function, Proposition 1 can be used to additionally guarantee that a hyper-rectangle is _not_ forward invariant.

The next Theorem shows how monotonicity of the dynamical embedding system can be used to define a family of nested robustly forward invariant sets using the condition from Proposition 1.

**Theorem 1** (A nested family of invariant sets).: _Consider the neural network controlled system (3) with the disturbance set \(\mathcal{W}=[\underline{w},\overline{w}]\). Let \(\mathsf{F}^{\mathrm{c}}_{\mathcal{S}}\) be an \(\mathcal{S}\)-localized monotone inclusion function for \(f^{\mathrm{c}}\), e.g., (8) or (9), and \(\mathsf{E}_{\mathcal{S}}\) be the embedding system induced by \(\mathsf{F}^{\mathrm{c}}_{\mathcal{S}}\). Consider \([\underline{x}_{0},\overline{x}_{0}]\subseteq\mathcal{S}\) and let \(t\mapsto\left(\frac{\underline{x}(t)}{\overline{x}(t)}\right)\) be the trajectory of (10) with \(\left(\frac{\underline{x}(0)}{\overline{x}(0)}\right)=\left(\frac{\underline{x}_{0}}{\overline{x}_{0}}\right)\). If_ \[\mathsf{E}_{\mathcal{S}}(\underline{x}_{0},\overline{x}_{0},\underline{w},\overline{w})\geq_{\mathrm{SE}}0,\] _then the following statements hold:_

1. \([\underline{x}(t),\overline{x}(t)]\) _is a_ \([\underline{w},\overline{w}]\)_-robustly forward invariant set for the system (3) for every_ \(t\geq 0\)_, and_ \[[\underline{x}(\tau),\overline{x}(\tau)]\subseteq[\underline{x}(t),\overline{x}(t)],\ \ \text{for every}\ t\leq\tau;\]
2.
\(\lim_{t\to\infty}\left(\frac{\underline{x}(t)}{\overline{x}(t)}\right)=\left(\frac{\underline{x}^{\star}}{\overline{x}^{\star}}\right)\)_, where_ \(\left(\frac{\underline{x}^{\star}}{\overline{x}^{\star}}\right)\in\mathcal{T}_{\geq 0}^{2n}\) _is an equilibrium point of the embedding system (10) and_ \([\underline{x}^{\star},\overline{x}^{\star}]\) _is a_ \([\underline{w},\overline{w}]\)_-attracting set for the system (3) with region of attraction_ \([\underline{x}(t^{\prime}),\overline{x}(t^{\prime})]\) _for every_ \(t^{\prime}\geq 0\)_._

Proof.: Consider \(\left(\frac{\underline{x}}{\overline{x}}\right)\leq_{\mathrm{SE}}\left(\frac{\underline{z}}{\overline{z}}\right)\) with \(\underline{x}_{i}=\underline{z}_{i}\). This implies that \(\left(\frac{\underline{x}}{\overline{x}_{i;\underline{x}}}\right)\leq_{\mathrm{SE}}\left(\frac{\underline{z}}{\overline{z}_{i;\underline{z}}}\right)\), and the symmetric statement holds with \(\overline{x}_{i}=\overline{z}_{i}\). Since \(\mathsf{F}^{\mathrm{c}}_{\mathcal{S}}\) is monotone, \[\left(\underline{\mathsf{F}}^{\mathrm{c}}_{\mathcal{S}}(\underline{x},\overline{x}_{i;\underline{x}},\underline{w},\overline{w})\right)_{i}\leq\left(\underline{\mathsf{F}}^{\mathrm{c}}_{\mathcal{S}}(\underline{z},\overline{z}_{i;\underline{z}},\underline{w},\overline{w})\right)_{i},\] \[\left(\overline{\mathsf{F}}^{\mathrm{c}}_{\mathcal{S}}(\underline{x}_{i;\overline{x}},\overline{x},\underline{w},\overline{w})\right)_{i}\geq\left(\overline{\mathsf{F}}^{\mathrm{c}}_{\mathcal{S}}(\underline{z}_{i;\overline{z}},\overline{z},\underline{w},\overline{w})\right)_{i},\] for every \(i\in\{1,\ldots,n\}\). This implies that the embedding system \(\mathsf{E}_{\mathcal{S}}\) is monotone w.r.t. the southeast partial order \(\leq_{\mathrm{SE}}\) [17, 18]. Now, using [19, Proposition 2.1], the set \(\mathcal{P}_{+}=\{\left(\frac{\underline{x}}{\overline{x}}\right):\mathsf{E}_{\mathcal{S}}(\underline{x},\overline{x},\underline{w},\overline{w})\geq_{\mathrm{SE}}0\}\) is a forward invariant set for \(\mathsf{E}_{\mathcal{S}}\). Since \(\left(\frac{\underline{x}_{0}}{\overline{x}_{0}}\right)\in\mathcal{P}_{+}\), forward invariance implies \(\left(\frac{\underline{x}(t)}{\overline{x}(t)}\right)\in\mathcal{P}_{+}\) for every \(t\geq 0\). Therefore, using Proposition 1, \([\underline{x}(t),\overline{x}(t)]\) is forward invariant for the closed-loop system (3) for every \(t\geq 0\). Additionally, the curve \(t\mapsto\left(\frac{\underline{x}(t)}{\overline{x}(t)}\right)\) is nondecreasing with respect to the partial order \(\leq_{\mathrm{SE}}\) [19, Proposition 2.1]. This means that for every \(t\leq\tau\), \(\left(\frac{\underline{x}(t)}{\overline{x}(t)}\right)\leq_{\mathrm{SE}}\left(\frac{\underline{x}(\tau)}{\overline{x}(\tau)}\right)\), which implies that \([\underline{x}(\tau),\overline{x}(\tau)]\subseteq[\underline{x}(t),\overline{x}(t)]\). Regarding part (ii), since \(t\mapsto\left(\frac{\underline{x}(t)}{\overline{x}(t)}\right)\) is nondecreasing w.r.t. \(\leq_{\mathrm{SE}}\), for every \(i\in\{1,\ldots,n\}\) the curves \(t\mapsto\underline{x}_{i}(t)\) (resp. \(t\mapsto\overline{x}_{i}(t)\)) are nondecreasing (resp. nonincreasing) w.r.t. \(\leq\) and bounded on \(\mathbb{R}\). This implies that there exists an equilibrium point \(\left(\frac{\underline{x}^{\star}}{\overline{x}^{\star}}\right)\in\mathcal{T}_{\geq 0}^{2n}\) of (10) such that \(\lim_{t\to\infty}\left(\frac{\underline{x}(t)}{\overline{x}(t)}\right)=\left(\frac{\underline{x}^{\star}}{\overline{x}^{\star}}\right)\), and \([\underline{x}^{\star},\overline{x}^{\star}]\) is attracting with the claimed region of attraction.

To capture invariant regions beyond hyper-rectangles, consider an invertible matrix \(T\in\mathbb{R}^{n\times n}\) and the change of coordinates \(y=\Phi(x):=Tx\), under which the closed-loop system (3) becomes \[\dot{y}=g^{\mathrm{c}}(y,w):=Tf(T^{-1}y,N^{\prime}(y),w), \tag{12}\]
There is a one-to-one correspondence between the transformed system (12) and the original system (3), in the sense that every trajectory \(t\mapsto y(t)\) of (12) uniquely corresponds with the trajectory \(t\mapsto\Phi^{-1}(y(t))\) of (3). Given an interval \([\underline{y},\overline{y}]\), the set \(\Phi^{-1}([\underline{y},\overline{y}])=\{T^{-1}y:y\in[\underline{y}, \overline{y}]\}\) defines a paralleletope in standard coordinates. Using the definitions from (8), we construct a localized closed-loop Jacobian-based inclusion function \(\mathsf{G}^{\mathsf{c}}_{\Phi([\underline{x},\overline{y}])}\) as \[[\mathsf{G}^{\mathsf{c}}_{\Phi([\underline{x},\overline{y}])}( \underline{y},\overline{y},\underline{w},\overline{w})] =T([\mathsf{J}_{x}]+[\mathsf{J}_{u}](C^{\prime}_{[\underline{y}, \overline{y}]})T)^{-1}[\underline{y},\overline{y}]\] \[\quad+T[\mathsf{J}_{u}][\underline{d}^{\prime}_{[\underline{y}, \overline{y}]},\overline{d}^{\prime}_{[\underline{y},\overline{y}]}]+T[ \mathsf{R}_{[\underline{x},\overline{y}]}(\underline{w},\overline{w})], \tag{13}\] where \([\mathsf{J}_{\mathbf{s}}]\) are inclusion functions on the Jacobians of the original system evaluated on \((\underline{z},\overline{z},\overline{\mathsf{N}}_{[\underline{z},\overline{ z}]}(\underline{z},\overline{z}),\overline{\mathsf{N}}_{[\underline{z},\overline{z}]}( \underline{z},\underline{z}),\underline{w},\overline{w})\) for \(\mathbf{s}\in\{x,u,w\}\), \((C^{\prime},\underline{d}^{\prime},\overline{d}^{\prime})\) evaluated over \([\underline{y},\overline{y}]\) on the transformed neural network \(N^{\prime}(y)=N(T^{-1}y)\), \(\hat{x}\in[\underline{z},\overline{z}]\), \(\hat{\bar{u}}=N(\hat{x})\), and \(\hat{\bar{w}}\in[\underline{w},\overline{w}]\). **Proposition 2**.: _Consider the neural network controlled system (3) with (monotone) inclusion function (8) and let \(T\in\mathbb{R}^{n\times n}\) be an invertible matrix transforming the system into (12). Then (13) is a \(\Phi([\underline{z},\overline{z}])\)-localized (monotone) inclusion function for \(g^{\mathsf{c}}\)._ Proof.: Using the mean value theorem with the inclusion functions \(\mathsf{J}_{\mathbf{s}}\) for \(\mathbf{s}\in\{x,u,w\}\) valid on the input \((\hat{\underline{z}},\overline{z},\underline{\mathsf{N}}_{[\underline{z}, \overline{z}]}(\underline{z},\overline{z}),\overline{\mathsf{N}}_{[\underline{ z},\overline{z}]}(\underline{z},\underline{z}),\underline{w},\overline{w})\), \[g^{\mathsf{c}}(y,w) \in T[\mathsf{J}_{x}](T^{-1}y-\hat{\bar{x}})+T[\mathsf{J}_{u}](N( T^{-1}y)-\hat{\bar{u}})\] \[\quad+T[\mathsf{J}_{u}](w-\hat{\bar{w}})+Tf(\hat{\underline{z}}, \hat{\bar{u}},\hat{\bar{w}}).\] Using the tuple \((C^{\prime}_{[\underline{y},\overline{y}]},\underline{d}^{\prime}_{[ \underline{y},\overline{y}]},\overline{d}^{\prime}_{[\underline{y},\overline{ y}]})\) from Assumption 1, \[g^{\mathsf{c}}(x,w) \in T([\mathsf{J}_{x}]+[\mathsf{J}_{u}](C^{\prime}_{[\underline{y}, \overline{y}]}T))T^{-1}[\underline{y},\overline{y}]\] \[\quad+T[\mathsf{J}_{u}][\underline{d}^{\prime}_{[\underline{y}, \overline{y}]},\overline{d}^{\prime}_{[\underline{y},\overline{y}]}]+T[ \mathsf{R}_{[\underline{x},\overline{z}]}(\underline{w},\overline{w})],\] implying that (13) is an inclusion function for \(g^{\mathsf{c}}\). Regarding monotonicity, if the Jacobian inclusion functions are monotone, since addition and matrix multiplication are also monotone, (8) is a composition of monotone inclusion functions and is therefore monotone. 
It is important to note that the inclusion functions for the Jacobian matrices in (13) are obtained in the original coordinates. In principle, one could instead symbolically write \(g\) as a new system and directly apply the closed-loop Jacobian-based inclusion function from (8). In practice, however, these transformed dynamics often have complicated expressions that lead to excessive conservatism when using natural inclusion functions and are not suitable for characterizing invariant sets. In the next Theorem, we link forward invariant hyper-rectangles in transformed coordinates to forward invariant paralleletopes in standard coordinates.

**Theorem 2** (Forward invariance in paralleletopes).: _Consider the neural network controlled system (3) and let \(T\in\mathbb{R}^{n\times n}\) be an invertible matrix transforming the system into (12). Let \(\mathsf{G}^{\mathsf{c}}_{\mathcal{S}}\) be an \(\mathcal{S}\)-localized inclusion function for \(g^{\mathsf{c}}\), e.g., (13), let \(\mathsf{E}_{T,\mathcal{S}}\) be the embedding system (10) induced by \(\mathsf{G}^{\mathsf{c}}_{\mathcal{S}}\), and let \([\underline{y}_{0},\overline{y}_{0}]\subseteq\mathcal{S}\). If_ \[\mathsf{E}_{T,\mathcal{S}}(\underline{y}_{0},\overline{y}_{0},\underline{w},\overline{w})\geq_{\mathrm{SE}}0,\] _then the paralleletope \(\Phi^{-1}([\underline{y}_{0},\overline{y}_{0}])\) is a \([\underline{w},\overline{w}]\)-robustly forward invariant set for the neural network controlled system (3)._

**Remark 3**.:

1. _(Evaluation on faces):_ When using (13) as \(\mathsf{G}^{\mathsf{c}}_{\mathcal{S}}\), the neural network verification step from Assumption 1 to find \((C^{\prime},\underline{d}^{\prime},\overline{d}^{\prime})\) is evaluated separately on each face of the hyper-rectangle \([\underline{y},\overline{y}]\), corresponding to each face of the paralleletope \(\Phi^{-1}([\underline{y},\overline{y}])\).
2. _(Nested invariant paralleletopes):_ Using the dynamical approach from Theorem 1, one can obtain a nested family of invariant hyper-rectangles for the transformed system (12). Using Theorem 2, this nested family corresponds to a nested family of paralleletope invariant sets for the original system (3).
3. _(Choice of transformation):_ In practice, there are principled ways of choosing good transformations \(T\). One approach is to take the transformation associated with the Jordan decomposition of the linearization of the system at an equilibrium point, as is done in the example below.

## VI Numerical Experiments

Consider two vehicles \(\mathrm{L}\) and \(\mathrm{F}\), each with dynamics \[\begin{split}\dot{p}^{j}_{x}=v^{j}_{x},&\dot{v}^{j}_{x}=\sigma(u^{j}_{x})+w^{j}_{x},\\ \dot{p}^{j}_{y}=v^{j}_{y},&\dot{v}^{j}_{y}=\sigma(u^{j}_{y})+w^{j}_{y},\end{split} \tag{14}\] for \(j\in\{\mathrm{L},\mathrm{F}\}\), where \(p^{j}=(p^{j}_{x},p^{j}_{y})\in\mathbb{R}^{2}\) is the displacement of the center of mass of \(j\) in the plane, \(v^{j}=(v^{j}_{x},v^{j}_{y})\in\mathbb{R}^{2}\) is the velocity of the center of mass of \(j\), \((u^{j}_{x},u^{j}_{y})\in\mathbb{R}^{2}\) are desired acceleration inputs limited by the nonlinear saturation operator \(\sigma(u)=u_{\text{lim}}\tanh(u/u_{\text{lim}})\) with \(u_{\text{lim}}=20\), and \(w^{j}_{x},w^{j}_{y}\sim\mathcal{U}([-0.01,0.01])\) are uniformly distributed bounded disturbances on \(j\). Denote the combined state of the system \(x=(p^{\mathrm{L}},v^{\mathrm{L}},p^{\mathrm{F}},v^{\mathrm{F}})\in\mathbb{R}^{8}\).
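For concreteness, the per-vehicle dynamics (14) can be written as the following minimal numpy sketch; the inputs \(u\) are supplied by the control structure described next.

```python
# Sketch of the planar double-integrator dynamics in Eq. (14).
import numpy as np

U_LIM = 20.0
sigma = lambda u: U_LIM * np.tanh(u / U_LIM)  # input saturation from Eq. (14)

def vehicle_dynamics(state, u, w):
    """state = (px, py, vx, vy); u = desired accelerations; w = disturbances."""
    px, py, vx, vy = state
    ax, ay = sigma(np.asarray(u)) + np.asarray(w)  # saturated inputs plus disturbance
    return np.array([vx, vy, ax, ay])
```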
We consider a leader-follower structure for the system, where the leader vehicle \(\mathrm{L}\) chooses its control \(u=(u^{\mathrm{L}}_{x},u^{\mathrm{L}}_{y})\) as the output of a neural network (\(4\times 10\times 100\times 2\), ReLU activations), and the follower vehicle \(\mathrm{F}\) follows the leader with the PD controller \[u^{\mathrm{F}}_{\mathbf{d}}=k_{p}(p^{\mathrm{L}}_{\mathbf{d}}-p^{\mathrm{F}}_{\mathbf{d}})+k_{v}(v^{\mathrm{L}}_{\mathbf{d}}-v^{\mathrm{F}}_{\mathbf{d}}), \tag{15}\] for each \(\mathbf{d}\in\{x,y\}\), with \(k_{p}=6\) and \(k_{v}=7\). The neural network was trained by gathering \(5.7\)M data points from an offline MPC control policy for the leader only, with control limits implemented as hard constraints rather than through \(\sigma\). The offline policy minimized a quadratic cost aiming to stabilize to the origin while avoiding four circular obstacles centered at \((\pm 4,\pm 4)\) with radius \(2.25\) each, implemented as hard constraints with \(33\%\) padding and a slack variable.

First, a trajectory of the undisturbed system is run until it reaches the equilibrium point \(x^{\star}\approx[0.01,0,0,0,0.01,0,0,0]^{T}\). Then, the Jacobian matrices are computed symbolically using sympy, and the neural network is verified using auto_LiRPA [21] along the interval \([\underline{z},\overline{z}]:=x^{\star}+[-0.06,0.06]^{4}\times[-0.25,0.25]^{2}\times[-0.325,0.325]^{2}\), yielding \((C_{[\underline{z},\overline{z}]},\underline{d}_{[\underline{z},\overline{z}]},\overline{d}_{[\underline{z},\overline{z}]})\). Then, the Jordan decomposition \(T^{-1}JT=Df_{x}(x^{\star},N(x^{\star}),0)+Df_{u}(x^{\star},N(x^{\star}),0)C_{[\underline{z},\overline{z}]}\) yields the transformation \(T\), and the matrix \(J\approx\mathrm{diag}(-6,-6,-4.12,-4.26,-0.93,-0.95,-1,-1)\) is filled with negative real eigenvalues, signifying that the equilibrium \(x^{\star}\) is locally stable. The \(T\)-transformed system (12) is analyzed with the embedding system induced by (13). The interval \([\underline{y}_{0},\overline{y}_{0}]=Tx^{\star}+[-0.1,0.1]^{6}\times[-0.15,0.15]\times[-0.2,0.2]\) yields \(\Phi^{-1}([\underline{y}_{0},\overline{y}_{0}])\subseteq[\underline{z},\overline{z}]\), and \(\mathsf{E}_{T}(\underline{y}_{0},\overline{y}_{0},\underline{w},\overline{w})\geq_{\mathrm{SE}}0\). Thus, using Theorem 2, the paralleletope \(\Phi^{-1}([\underline{y}_{0},\overline{y}_{0}])\) is an invariant set of (14).

The embedding system in \(y\) coordinates is simulated forwards using Euler integration with a step size of \(0.1\) for \(90\) time steps, and at each step the localization \([\underline{z},\overline{z}]=T^{-1}[\underline{y}(t),\overline{y}(t)]\) is refined. Starting from \(\left(\frac{\underline{y}_{0}}{\overline{y}_{0}}\right)\), the forward-time embedding system converges to a point \(\left(\frac{\underline{y}^{\star}}{\overline{y}^{\star}}\right)\), where \(\mathsf{E}_{T}(\underline{y}^{\star},\overline{y}^{\star},\underline{w},\overline{w})=0\). The transformed embedding system is also simulated backwards using Euler integration with a step size of \(0.05\) while the condition \(\mathsf{E}_{T}(\underline{y}(t),\overline{y}(t),\underline{w},\overline{w})\geq_{\mathrm{SE}}0\) is satisfied (call the final time \(t^{\prime}\)). Using Theorems 1 and 2, the collection \(\{\Phi^{-1}([\underline{y}(t),\overline{y}(t)])\}_{t\geq t^{\prime}}\) consists of nested invariant paralleletopes converging to the attractive set \(\Phi^{-1}([\underline{y}^{\star},\overline{y}^{\star}])\) with region of attraction \(\Phi^{-1}([\underline{y}(t^{\prime}),\overline{y}(t^{\prime})])\).
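The nested-family construction above amounts to Euler integration of the embedding system with a sign check. A minimal sketch, assuming `E_T` is the embedding right-hand side in the transformed coordinates (e.g., built from (13) as in `embedding_rhs` earlier):

```python
# Sketch of generating the nested family of Theorems 1 and 2 by Euler integration.
import numpy as np

def nested_family(E_T, y_lo, y_hi, w_lo, w_hi, dt=0.1, steps=90):
    family = [(y_lo.copy(), y_hi.copy())]
    for _ in range(steps):
        d_lo, d_hi = E_T(y_lo, y_hi, w_lo, w_hi)
        if np.any(d_lo < 0.0) or np.any(d_hi > 0.0):
            break                                   # southeast condition violated
        y_lo, y_hi = y_lo + dt * d_lo, y_hi + dt * d_hi
        family.append((y_lo.copy(), y_hi.copy()))
    # each [y_lo, y_hi] maps back to the invariant paralleletope T^{-1}[y_lo, y_hi]
    return family
```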
The initial paralleletope takes \(0.38\) seconds to verify, and the entire nested family of \(95\) paralleletopes takes \(40.28\) seconds to compute. These results are visualized in Figure 1.

## VII Conclusions

Using interval analysis and neural network verification tools, we propose a framework for certifying hyper-rectangle and paralleletope invariant sets in neural network controlled systems. The key component of our approach is the dynamical embedding system, whose trajectories can be used to construct a nested family of invariant sets. This work opens an avenue for future research in designing safe learning-enabled controllers.
2309.12463
Impact of architecture on robustness and interpretability of multispectral deep neural networks
Including information from additional spectral bands (e.g., near-infrared) can improve deep learning model performance for many vision-oriented tasks. There are many possible ways to incorporate this additional information into a deep learning model, but the optimal fusion strategy has not yet been determined and can vary between applications. At one extreme, known as "early fusion," additional bands are stacked as extra channels to obtain an input image with more than three channels. At the other extreme, known as "late fusion," RGB and non-RGB bands are passed through separate branches of a deep learning model and merged immediately before a final classification or segmentation layer. In this work, we characterize the performance of a suite of multispectral deep learning models with different fusion approaches, quantify their relative reliance on different input bands and evaluate their robustness to naturalistic image corruptions affecting one or more input channels.
Charles Godfrey, Elise Bishoff, Myles McKay, Eleanor Byler
2023-09-21T20:11:01Z
http://arxiv.org/abs/2309.12463v2
# Impact of architecture on robustness and interpretability of multispectral deep neural networks

###### Abstract

Including information from additional spectral bands (e.g., near-infrared) can improve deep learning model performance for many vision-oriented tasks. There are many possible ways to incorporate this additional information into a deep learning model, but the optimal fusion strategy has not yet been determined and can vary between applications. At one extreme, known as "early fusion," additional bands are stacked as extra channels to obtain an input image with more than three channels. At the other extreme, known as "late fusion," RGB and non-RGB bands are passed through separate branches of a deep learning model and merged immediately before a final classification or segmentation layer. In this work, we characterize the performance of a suite of multispectral deep learning models with different fusion approaches, quantify their relative reliance on different input bands and evaluate their robustness to naturalistic image corruptions affecting one or more input channels.

Deep learning, multispectral images, multimodal fusion, robustness, interpretability

Further author information: send correspondence to godfrey.cw@gmail.com and eleanor.byler@pnnl.gov.

## 1 Introduction

Many datasets of overhead imagery, in particular those collected by satellites, contain spectral information beyond red, blue and green (RGB) channels (i.e., visible light). With the development of new sensors and cheaper launch vehicles, the availability of such multispectral overhead images has grown rapidly in the last 5 years. Deep learning applied to RGB images is at this point a well-established field, but by comparison, the application of deep learning techniques to multispectral imagery is in a comparatively nascent state. From an implementation perspective, the availability of additional spectral bands expands the space of neural network architecture design choices, as there are a plethora of ways one might "fuse" information coming from different input channels. From an evaluation perspective, multispectral imagery provides a new dimension in which to study model robustness. While in recent years there has been a large amount of research on the robustness of RGB image models to naturalistic distribution shifts such as image corruptions, our understanding of the robustness of multispectral neural networks is still limited. Such an understanding will be quite valuable, as an honest assessment of the trustworthiness of multispectral deep learning models will allow for more informed decisions about deployment of these models in high-stakes applications and use of their predictions in downstream analyses. We take a first step in this direction, studying multispectral models operating on RGB and near-infrared (NIR) channels on two different datasets and tasks (one involving image classification, the other image segmentation). In addition, for each dataset/task we consider two different multispectral fusion architectures, early and late (to be described below). Our findings include:

1. Even when different fusion architectures achieve near-identical performance as measured by test accuracy, they leverage information from the various spectral bands to varying degrees: we find that for classification models trained on a dataset of RGB+NIR overhead images, late fusion models place far more importance on the NIR band in their predictions than their early fusion counterparts.
In contrast, for segmentation models we observe that both fusion styles resulted in models that place greater importance on RGB channels, and this effect is _more pronounced_ for late fusion models. 3. Perhaps unsurprisingly, these effects are mirrored in an evaluation of model robustness to naturalistic image corruptions affecting one or more input channels -- in particular, early fusion classification models are more sensitive to corruptions of RGB inputs, and segmentation models with either architecture are comparatively immune to corruptions affecting NIR inputs alone. 4. On the whole, our experiments suggest that segmentation models and classification models use multispectral information in different ways. ## 2 Related Work The perceptual score metric discussed in section 4 was introduced in [1], and it can be viewed as a member of a broader family of model evaluation metrics based on "counterfactual examples." For a (by no means comprehensive) sample of the latter, see [2, 3], and for some cautionary tales about the use of certain counterfactual inputs see [4]. For a look at the state of the art of machine learning model robustness, we refer to [5]. The work most directly related to this paper was the creation of the ImageNet-C dataset [6], obtained from the ImageNet [7] validation split by applying a suite of naturalistic image corruptions at varying levels of severity. The original ImageNet-C paper [6] showed that even state-of-the-art image classifiers that approach human accuracy on clean images suffer severe performance degradation on corrupted images (even those that remain easily recognizable to humans). More recent work [8] evaluated the corruption robustness of image _segmentation_ models, finding that their robust accuracy tends to be correlated with clean accuracy and that some architectural features have a strong impact on robustness. All of the research in this paragraph deals with RGB imagery alone. There is a limited amount of work on robustness of multispectral models, and most of the papers we are aware of investigate _adversarial robustness_, i.e. robustness to worst case perturbations of images [9, 10, 11] generated by a hypothetical attacker exploiting the deep learning model in question. It is worth noting that there is a lively ongoing discussion about the realism of the often-alleged security threat posed by adversarial examples [12]. The only research we are aware of addressing robustness of multispectral deep learning models to _naturalistic_ distribution shift is [13], which studies the robustness of land cover segmentation models evaluated on images with varying levels of occlusion by clouds. ## 3 Datasets, Tasks and Model Architectures The RarePlanes dataset [14] includes 253 Maxar WorldView-3 satellite scenes including \(\approx 15,000\) human-annotated aircraft. Crucially for our purposes, this data includes RGB, multispectral and panchromatic bands. In addition to bounding boxes identifying the locations of aircraft, annotations contain meta-data features providing information about each aircraft. Of particular interest is the **role** of an aircraft, for which the possible values are displayed in table 1. Using this information, we create an RGB+NIR image classification data set with the following processing pipeline: beginning with the 8-band, 16-bit multispectral scenes, we apply pansharpening, rescaling from 16- to 8-bit, contrast stretching, and gamma correction (for further details see appendix A).
This results in 8-band, 8-bit scenes, from which we obtain the RGB and NIR channels. To extract individual plane images or "chips" from the full satellite images, we clip the image around each plane using the bounding box annotations. We use the plane role sub-attributes (from the right column of table 1) as classification labels, and omit the "Military Trainer" and "Military Bomber" classes, which only have 15 and 6 examples, respectively. \begin{table} \begin{tabular}{l c} \hline \hline Attribute & Sub-attribute \\ \hline Civil & \{Large, Medium, Small\} Transport \\ Military & Fighter, Bomber,* Transport, Trainer* \\ \hline \hline \end{tabular} \end{table} Table 1: The RarePlanes role meta-data feature. Sub-attributes marked with \(*\) have fewer than 10 training examples and are omitted from our classification dataset. The remaining five classes have approximately 14,700 datapoints. We create a train-test split at the level of full satellite images. Note that this presumably results in a more challenging machine learning task (compared with randomly splitting after creating chips), since it requires a model to generalize to new geographic locations, azimuth and sun elevation angles, and weather. We further divide the training images into a training and validation split, and keep the test images for unseen, hold-out evaluations. The final data splits are spread 74%/13%/13% between training, validation, and test, with class examples as evenly distributed as possible. The RarePlanes data set also includes a large amount of synthetic imagery -- however, the synthetic data only includes RGB imagery. Thus, in our experiments we only use the real data. We train four different types of image classifiers on the RGB+NIR RarePlanes chips, all assembled from ResNet backbones [15]: **RGB**: A standard ResNet34 operating on the RGB channels (NIR is ignored) **NIR**: A ResNet34 operating on the NIR channel alone (RGB is ignored) **early fusion**: RGB and NIR channels are concatenated to create a four channel input image, and passed into a 4 channel ResNet34 **late fusion**: RGB and NIR channels are passed into _separate_ ResNet34 models, and the penultimate hidden feature vectors of the respective models are concatenated -- the concatenated feature vector is then passed to a final classification head. The two fusion architectures are illustrated in fig. 1. Figure 1: RGB+NIR fusion architectures. **Top**: early, **Bottom**: late. Braces denote image/feature concatenation. We train these RarePlanes image classifiers using _transfer learning_: rather than beginning with randomly initialized weights, wherever possible we start with ResNet34 weights pre-trained on the ImageNet dataset [7], and then fine-tune with continued stochastic gradient descent to minimize cross entropy loss on RarePlanes. Notably, for models with NIR input, the first layer convolution weights are initialized with the Red channel weights trained on ImageNet. Further architecture, initialization and optimization details can be found in appendix B. All model accuracies lie in the range \(91.8-92.5\%\). We also use the Urban Semantic 3D (hereafter US3D) dataset [16, 17, 18, 19] of overhead 8-band, 16-bit multispectral images and LiDAR point cloud data with segmentation labels. The US3D segmentation labels consist of seven total classes, including ground, foliage, building, water, elevated roadway, and two "unclassified" classes, corresponding to difficult or bad pixels.
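To make the two fusion styles concrete, the following is a minimal PyTorch sketch of the classifier architectures described above; this is our own illustration, not the authors' released code. The pretrained-weight initialization of the first convolution is covered in appendix B, and `num_classes=5` matches the five retained role classes.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet34

# Minimal structural sketch of the fusion classifiers (ours, not the authors'
# code). Pretrained first-layer initialization is described in appendix B.

class EarlyFusion(nn.Module):
    def __init__(self, num_classes=5):
        super().__init__()
        self.net = resnet34()
        # 4-channel input: RGB stacked with NIR as an extra channel.
        self.net.conv1 = nn.Conv2d(4, 64, 7, stride=2, padding=3, bias=False)
        self.net.fc = nn.Linear(512, num_classes)

    def forward(self, rgb, nir):  # rgb: (B,3,H,W), nir: (B,1,H,W)
        return self.net(torch.cat([rgb, nir], dim=1))

class LateFusion(nn.Module):
    def __init__(self, num_classes=5):
        super().__init__()
        self.rgb, self.nir = resnet34(), resnet34()
        self.nir.conv1 = nn.Conv2d(1, 64, 7, stride=2, padding=3, bias=False)
        self.rgb.fc, self.nir.fc = nn.Identity(), nn.Identity()  # 512-d features each
        # Classification head: a 2-hidden-layer MLP on the 1024-d concatenation.
        self.head = nn.Sequential(nn.Linear(1024, 512), nn.ReLU(),
                                  nn.Linear(512, 512), nn.ReLU(),
                                  nn.Linear(512, num_classes))

    def forward(self, rgb, nir):
        feats = torch.cat([self.rgb(rgb), self.nir(nir)], dim=1)
        return self.head(feats)
```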
From this data we create an image _segmentation_ dataset via the following procedure: First, the 8-band, 16-bit multispectral images are converted to 8-bit RGB+NIR images with a pipeline similar to the one used for RarePlanes above. The resulting images are quite large, and are subdivided into 27,021 \(1024\times 1024\) non-overlapping "tiles" with associated segmentation labels. Again, train/validation/test splits are created at the level of parent satellite images, and care is taken to ensure that the distributions of certain meta-data properties (location, view-angle and azimuth angle) are relatively similar from one split to the next. Further details can be found in appendix A. Our segmentation models use the DeepLabv3 architecture [20]. To obtain neural networks taking both RGB and NIR images as inputs, we can apply the strategy described in fig. 1 to the _backbone_ of DeepLabv3 (for a more detailed description of DeepLabv3's components see appendix B). More precisely, we consider four different backbones: **RGB**: A ResNet50 operating on the RGB channels (NIR is ignored) **NIR**: A ResNet50 operating on the NIR channel alone (RGB is ignored) **early fusion**: RGB and NIR channels are concatenated to create a four channel input image, and passed into a 4 channel ResNet50 **late fusion**: RGB and NIR channels are passed into _separate_ ResNet50 models, and the resulting feature vectors of the respective models are concatenated -- the concatenated feature vector is then passed to the DeepLabv3 segmentation head. As in the case of our RarePlanes experiments, we fine-tune pre-trained weights, beginning with segmentation models pretrained on the COCO dataset [21] and again initializing both the first-layer R and NIR convolution weights with the R channel weights trained on COCO. In comparison to training the RarePlanes models, this optimization problem is far more computationally demanding, due to the larger tiles, larger networks and more challenging learning objective. We used (data parallel) distributed training on a cluster computer to scale batch size and reduce training and evaluation time. All model validation IoU scores lie in the range \(0.53-0.55\) -- for details and hyperparameters see appendix B. ## 4 Perceptual Scores of Multispectral Models Given a neural network processing multispectral (in our case RGB+NIR) images, one can ask which bands the model is leveraging to make its predictions. More generally, we may want to know the relative importance of each spectral band for a given model prediction. This information is of potential interest for a number of reasons: * For many objects of interest, reflectance properties vary widely between spectral bands (for example, plants appear vividly in the NIR band). Depending on the machine learning task and underlying data, this phenomenon could cause a multispectral model to prioritize one of its input channels. * Some spectral bands are less affected by adverse weather or environmental conditions. For example, NIR light can penetrate haze, and NIR imagery is often used by human analysts to help discern detail in smoky or hazy scenes. * In some applications, the different bands included in a multispectral dataset could have been captured by different sensors†. For example, an autonomous vehicle may be equipped with an RGB camera and a thermal IR camera mounted side-by-side.
In such a situation, technical issues affecting one sensor could result in image corruptions that only affect a subset of channels, and the performance of the downstream model predictions would depend on the relative importance of the corrupted channels. Footnote †: This is not the case for our data sets, which were both derived from 8-band images captured with a single sensor. A simple baseline for assessing the relative importance of input channels for the predictions of a multispectral model is provided by the **perceptual score** metric [1], computed for RGB and NIR channels as follows: let \(f(x_{\mathrm{RGB}},x_{\mathrm{NIR}})\) be an RGB+NIR model, where \(x_{\mathrm{RGB}}\) and \(x_{\mathrm{NIR}}\) denote the RGB and NIR inputs respectively. Let \[\mathcal{D}=\{(x_{\mathrm{RGB},i},x_{\mathrm{NIR},i},y_{i})\,|\,i=1,\dots,N\} \tag{4.1}\] be the test data set, where the \(y_{i}\) are classification or segmentation labels, and let \(\ell(f(x_{\mathrm{RGB}},x_{\mathrm{NIR}}),y)\) be the relevant accuracy function (0-1 loss for classification, Intersection-over-Union (IoU) for segmentation). The test accuracy of \(f\) is then \[\mathrm{Acc}(f,\mathcal{D})=\frac{1}{N}\sum_{i=1}^{N}\ell(f(x_{\mathrm{RGB},i},x_{\mathrm{NIR},i}),y_{i}). \tag{4.2}\] To assess the importance of NIR information for model predictions, we use a counterfactual dataset \[\mathcal{D}_{\mathrm{NIR},\sigma}=\{(x_{\mathrm{RGB},i},x_{\mathrm{NIR}, \sigma(i)},y_{i})\,|\,i=1,\ldots,N\} \tag{4.3}\] obtained by shuffling the NIR "column" of \(\mathcal{D}\) with a random permutation \(\sigma\) of \(\{1,\ldots,N\}\). In other words, the data points \((x_{\mathrm{RGB},i},x_{\mathrm{NIR},\sigma(i)},y_{i})\) consist of a labelled RGB image \((x_{\mathrm{RGB},i},y_{i})\) together with the NIR channel \(x_{\mathrm{NIR},\sigma(i)}\) of some other randomly selected data point in \(\mathcal{D}\). The **perceptual score** of the NIR input is then \[\mathrm{PS}(f,\mathcal{D},\mathrm{NIR},\sigma):=\frac{\mathrm{Acc}(f, \mathcal{D})-\mathrm{Acc}(f,\mathcal{D}_{\mathrm{NIR},\sigma})}{\mathrm{Acc} (f,\mathcal{D})}. \tag{4.4}\] In words, this is the relative accuracy drop incurred by evaluating \(f\) on the dataset \(\mathcal{D}_{\mathrm{NIR},\sigma}\) -- intuitively, if NIR input is important to \(f\), replacing the NIR channel \(x_{\mathrm{NIR},i}\) with the NIR channel \(x_{\mathrm{NIR},\sigma(i)}\) of some other randomly chosen image will significantly damage the accuracy of \(f\), resulting in a large relative drop in eq. 4.4. The RGB perceptual score \(\mathrm{PS}(f,\mathcal{D},\mathrm{RGB},\sigma)\) is defined analogously, permuting the RGB column instead of the NIR. In practice, we average these metrics over several (e.g. 10) randomly selected permutations \(\sigma\) of \(\{1,\ldots,N\}\), and henceforth \(\sigma\) will be suppressed. In fact [1] defines two variants of perceptual score and refers to the one in eq. 4.4 as "model normalized"; their "task normalized" variant uses majority vote accuracy (i.e. accuracy of a naive baseline) in the denominator instead of \(\mathrm{Acc}(f,\mathcal{D})\). We include both score normalizations for completeness, but note that in all cases the normalization did not change our qualitative conclusions. Figure 2 displays these metrics for RarePlanes classifiers, and shows that from the perspective of perceptual score, early fusion models pay more attention to RGB channels whereas late fusion models pay more attention to NIR.
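As a concrete aside, the model-normalized score of eq. 4.4 takes only a few lines to compute. The following is our own sketch, not the authors' implementation; the `model(rgb, nir)` interface and function names are assumptions.

```python
import numpy as np
import torch

# Sketch of the model-normalized perceptual score of eq. (4.4), assuming
# `model(rgb, nir)` returns logits and the dataset fits in memory as tensors.

def accuracy(model, rgb, nir, labels):
    with torch.no_grad():
        preds = model(rgb, nir).argmax(dim=1)
    return (preds == labels).float().mean().item()

def perceptual_score(model, rgb, nir, labels, band="NIR", n_perms=10, seed=0):
    clean_acc = accuracy(model, rgb, nir, labels)
    rng = np.random.default_rng(seed)
    drops = []
    for _ in range(n_perms):
        sigma = torch.from_numpy(rng.permutation(len(labels)))
        if band == "NIR":                      # shuffle the NIR "column"
            acc = accuracy(model, rgb, nir[sigma], labels)
        else:                                  # shuffle the RGB "column"
            acc = accuracy(model, rgb[sigma], nir, labels)
        drops.append((clean_acc - acc) / clean_acc)
    return float(np.mean(drops))               # average over permutations sigma
```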
There is a simple heuristic explanation for these results: for both model architectures, RGB channels occupy 75% of the input space dimensions. In contrast, in the late fusion models the RGB and NIR inputs are both encoded as 512-dimensional feature vectors which are concatenated before being passed to the classification head (a 2 layer MLP); hence from the perspective of the classification head, RGB and NIR each account for half of the feature dimensions. Figure 2: Perceptual scores for the RarePlanes multispectral classifiers. The early fusion models have a higher perceptual score for RGB channels (i.e., more reliance on RGB inputs), whereas the late fusion models have higher perceptual score for NIR channels (i.e., more reliance on NIR input). Error bars are obtained from five evaluations of the experiment with independent random number generator seeds. Our measurements of perceptual score for segmentation models on US3D, shown in fig. 3, are quite different: they suggest the late fusion model pays even less attention (again, from the perspective of perceptual score) to NIR information than the early fusion model. This finding is in tension with both fig. 2 and the heuristic explanation thereof. One challenge encountered in interpreting fig. 3 is that due to the computational cost of training we only trained one model for each architecture. We leave a larger experiment allowing for estimation of statistical significance to future work. ## 5 Robustness to Naturalistic Corruptions The perceptual scores presented in the previous section aim to quantify our models' dependence on RGB and NIR inputs. A related question is how robust these models are to naturally occurring corruptions that affect either (or both) of the RGB or NIR inputs. With this in mind, we create corrupted variants of RarePlanes and US3D by applying a suite of image transformations simulating the effects of noise, blur, weather and digital corruptions. This is accomplished using a fork of the code that generated ImageNet-C [6], with modifications allowing for larger images and more than 3 image channels. Each of the corruptions applied comes with varying levels of severity (1 to 5). Where appropriate, we ensured that these corruptions were applied consistently between the RGB and NIR channels (e.g., snow is added to the same part of the image for all channels). Visualizations of the corruptions considered for a sample RarePlanes chip can be found in appendix A.3. We evaluate each model on the corrupted images. Figure 4 shows how accuracy of RarePlanes classifiers degrades with increasing corruption severity. We can see that when all channels in an image are corrupted, there is a similar drop in performance for all of the models considered (i.e., all four channels input to an early fusion model or all three channels input to an RGB model). This suggests that none of the architectures considered provides a significant increase in overall robustness to natural corruptions. For the early and late fusion models, we also test model performance when either RGB or NIR (but not both) inputs are corrupted. For the late fusion model, the effects of corrupting one (but not both) of {RGB, NIR} are more or less equal, while the early fusion model suffers a slightly greater drop in performance when RGB (but not NIR) channels are corrupted.
We note that the confidence intervals in the early fusion model overlap; however, these results point in the same general direction as our perceptual score conclusions (fig. 2): for early fusion models, RGB channels are weighted more heavily in predictions, and hence performance suffers more when RGB inputs are corrupted. Figure 3: Perceptual scores for the US3D multispectral segmentation models. Both early and late fusion models have higher perceptual scores for RGB data, demonstrating that model performance relies more strongly on the RGB inputs. For late fusion models this effect is even more dramatic, suggesting that the NIR input is less important, in contrast to the classification model scores shown in fig. 2. Figure 5 shows how performance of US3D segmentation models (as measured by IoU) degrades with increasing corruption severity. When all input channels are corrupted (green line), the models show similar overall drops in performance, with the exception of the NIR-only model. The NIR-only model shows a larger drop in performance at all corruption severities, potentially suggesting that single-channel models are less robust to these kinds of natural corruptions. For both early and late fusion architectures, a greater performance drop is incurred when RGB (but not NIR) channels are corrupted. In fact, corrupting only RGB channels is almost as damaging as corrupting all inputs (both RGB+NIR). Notably in the case of late fusion, there is a large gap in robustness to corruptions of NIR inputs alone or RGB inputs alone. As was the case for the RarePlanes experiments, these corruption robustness results point in the same general direction as the perceptual score calculations in fig. 3. For example, the late fusion model had lower NIR perceptual scores than its early fusion counterpart (i.e., less reliant on NIR inputs), and it is more robust to NIR corruptions than the early fusion model. We reiterate that due to computational costs we only train one US3D segmentation model of each architecture, but with this caveat it does appear that overall robustness in the case where all input channels are corrupted is decreasing from left to right in fig. 5. That is, the ranking of models according to corruption robustness is: early, late, rgb, nir. One potential explanation is that having more input channels (and hence more parameters) provides more robustness, although this would not explain why early fusion models seem to be more robust. Another possible explanation is that our models exhibit (positive) correlation between accuracy on clean test data and accuracy on corrupted data, as has been previously observed in the literature on robust RGB image classifiers [23]. Indeed, in fig. 10 we see that the ranking of our US3D segmentation models by test IoU is: early, late (tied), rgb, nir. Figure 4: Corruption robustness of RarePlanes classifiers. Each subplot corresponds to a model architecture, and each line corresponds to a choice of input (RGB, NIR or both) to corrupt. Accuracy is averaged over all 15 types of corruptions, and confidence intervals are obtained from five evaluations of the experiment with independent random number generator seeds. Figure 5: Corruption robustness of US3D segmentation models. Each subplot corresponds to a model architecture, and each line corresponds to a choice of input (RGB, NIR or both) to corrupt. IoU is averaged over all 15 types of corruptions.
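For reference, the channel-selective evaluation protocol of this section can be sketched as follows. This is our own illustration: `corrupt` stands in for an ImageNet-C-style transform, while the real study averages over 15 corruption types and 5 severity levels.

```python
import torch

# Sketch of the channel-selective corruption protocol (ours, not the authors'
# evaluation code). `corrupt(x, severity)` is a placeholder transform.

def corrupted_accuracy(model, loader, corrupt, severity, target="both"):
    correct, total = 0, 0
    for rgb, nir, labels in loader:
        if target in ("rgb", "both"):
            rgb = corrupt(rgb, severity)
        if target in ("nir", "both"):
            nir = corrupt(nir, severity)
        with torch.no_grad():
            preds = model(rgb, nir).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.numel()
    return correct / total

# Usage: compare robustness when only one input stream is corrupted.
# for target in ["rgb", "nir", "both"]:
#     print(target, corrupted_accuracy(model, test_loader, corrupt, 3, target))
```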
## 6 Limitations and Open Questions One limitation of our experiments is that the tasks considered are already tractable by a deep learning model using only RGB images; incorporating additional spectral bands offers at most incremental improvement. It would be interesting to carry out the evaluations of sections 4 and 5 for datasets and models that more obviously benefit from additional multispectral information beyond RGB. These might include ML models designed for tasks involving materials that cannot be distinguished by RGB colors alone or environmental conditions in which NIR information is inherently valuable (such as pedestrian detection at night with RGB+NIR). In this work we considered two basic forms of multispectral fusion (early and late), and although these arguably represent two interesting extremes there are many more sophisticated architectural designs for fusing multiple model inputs [24, 25]. Evaluating robustness of image segmentation models to naturalistic corruptions is more complicated than in the case of classification tasks -- in particular, there are some corruptions for which one might consider modifying segmentation labels in parallel with the underlying images (one example is the "elastic transform" corruption used in our experiments). In this work we did not apply any modifications to segmentation labels. Finally, while we apply the same corruption algorithm to both the RGB and NIR channels, in some cases this is not physically realistic, for example snow is in fact _dark_ in infrared channels. For a study of test-time robustness to more physically realistic corruptions of multispectral images, as well as robustness (or lack thereof) of multispectral deep learning models to adversarial corruptions of training data (i.e. data poisoning), we refer to [26]. ## 7 Conclusion This work evaluates the extent to which multispectral fusion neural networks with different underlying architectures (i) pay differing amounts of attention to different input spectral bands (RGB and NIR) as measured by the perceptual score metric and (ii) exhibit varying levels of robustness to naturalistic corruptions affecting one or more input spectral bands. We find that the answers to (i) and (ii) correlate as one might expect: paying more attention to RGB channels results in greater sensitivity to RGB corruptions. Interestingly, our experimental results for segmentation models on the US3D dataset contrast with those for classification models on the RarePlanes dataset: in the classification experiments, early fusion models had higher perceptual scores for RGB inputs, and late fusion models had slightly higher perceptual scores for NIR inputs, whereas both types of fusion segmentation models had higher perceptual scores for RGB inputs and the effect was more extreme for late fusion (results for corruption robustness follow this trend). This suggests that classification and segmentation models may make use of multispectral information in quite different ways. ## Appendix A Image Processing ### RarePlanes The RarePlanes dataset includes both 8-bit RGB satellite imagery and 16-bit 8-band multispectral imagery, plus a panchromatic band. One of the goals of this work is to assess the utility of including additional channels (e.g., near-infrared channels) as input to deep learning models. In order to include channels beyond Red, Green, or Blue, we must work from the 16-bit 8-band images.
We briefly describe our process for creating 8-bit, 8-band imagery, which consists of pansharpening, rescaling, contrast stretching, and gamma correcting the pixels in each channel independently. Specifically, the multispectral image is pansharpened to the panchromatic band resolution using a weighted Brovey algorithm [27]. The original 16-bit pixel values are rescaled to 8-bit, and a gamma correction is applied using \(\gamma=2.2\) [28]. The bottom 1% of the pixel cumulative distribution function is clipped, and the pixels are rescaled such that the minimum and maximum pixel values are 0 and 255. We note that when applied to the R, G, and B channels of the multispectral image products to generate 8-bit RGB images, this process produces images that are visually similar but _not_ identical to the RGB images provided in RarePlanes. As such, the RGB model presented in this work cannot be perfectly compared to models published elsewhere trained on the RGB imagery included in RarePlanes.§ However, we felt that this approach provided the most fair comparison of model performance for different input channels, since the same processing was applied identically to each channel. Footnote §: We trained identical models on the RarePlanes RGB images and the RGB images produced in this work, and found that the RarePlanes models performed negligibly better, at most a 1-2% improvement in average accuracy. This is likely due to more complex and robust techniques used for contrast stretching and edge enhancement in RarePlanes; unfortunately these processing pipelines are often proprietary and we could not find any published details of the process. ### Urban Semantic 3D US3D builds upon the SpaceNet Challenge 4 dataset (hereafter SN4) [29]. SN4 was originally designed for building footprint estimation in off-nadir imagery, and includes satellite imagery from Atlanta, GA for view angles between 7 and 50 degrees. US3D uses the subset of Atlanta, GA imagery from SN4 for which there exist matched LiDAR observations, and adds additional matched satellite imagery and LiDAR data in Jacksonville, FL and Omaha, NE. The Atlanta imagery is from Worldview-2, with ground sample distances (GSD) between 0.5m and 0.7m, and view angles between 7 and 40 degrees. The Jacksonville and Omaha imagery is from Worldview-3, with GSD between 0.3m and 0.4m, and view angles between 5 and 30 degrees. As described below, we train and evaluate models using imagery from all three locations. We note, however, that models trained solely on imagery from a single location will show variation in overall performance due to the variations in the scenery between locations (e.g., building density, seasonal changes in foliage and ground cover). US3D includes both 8-bit RGB satellite imagery and 16-bit pansharpened 8-band multispectral imagery. Our process for creating 8-bit, 8-band imagery is similar to the process we used for RarePlanes, the main exception being that we omit pansharpening since it has already been applied to the multispectral images in US3D. The original 16-bit pixel values are rescaled to 8-bit, and a gamma correction is applied using \(\gamma=2.2\). The bottom 1% of the pixel cumulative distribution function is clipped, and the pixels are rescaled such that the minimum and maximum pixel values in each channel are 0 and 255.
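A minimal sketch of the per-channel 16-bit to 8-bit conversion just described; this is our own reconstruction, and the exact ordering of rescaling, clipping, and gamma correction in the original pipeline may differ slightly.

```python
import numpy as np

# Sketch (ours) of the per-channel 16-bit -> 8-bit conversion: clip the
# bottom 1% of the pixel CDF, rescale to [0, 1], gamma-correct with
# gamma = 2.2, and quantize to [0, 255].

def to_8bit(channel_16bit, gamma=2.2, clip_percentile=1.0):
    x = channel_16bit.astype(np.float64)
    lo = np.percentile(x, clip_percentile)       # bottom 1% of the CDF
    x = np.clip(x, lo, x.max())
    x = (x - x.min()) / (x.max() - x.min())      # rescale to [0, 1]
    x = x ** (1.0 / gamma)                       # gamma correction
    return (255.0 * x).astype(np.uint8)

# Applied independently to each of the 8 bands, e.g.:
# img8 = np.stack([to_8bit(img16[..., b]) for b in range(img16.shape[-1])], axis=-1)
```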
The US3D images are quite large (hundreds of thousands of pixels on a side) and must be broken up into smaller images in order to be processed by a segmentation deep learning model, a process sometimes called "tiling." Each of the large satellite images (and matched labels) was divided into 1024 pixel x 1024 pixel "tiles" without any overlap, producing 27,021 total images. All tiles from the same parent satellite image are kept together during the generation of training and validation splits to avoid cross contamination that could artificially inflate accuracies§. An iterative approach was used to divide the satellite images into training, validation, and unseen hold-out (i.e., test) splits to ensure that the distributions of certain meta-data properties (location (Atlanta, Jacksonville, Omaha), view-angle, and azimuth angle) are relatively similar from one split to the next; in particular, this avoids the possibility that all images from a single location land in a single split. The final data splits included 21,776 tiles in training (70%), 2,102 tiles in validation (8%), and 3,142 tiles in the unseen, hold-out test split (12%). Models with near-infrared (NIR) input use the WorldView NIR2 channel, which covers 860-1040nm. The NIR2 band is sensitive to vegetation but is less affected by atmospheric absorption when compared with the NIR1 band. Segmentation labels are stored as 8-bit unsigned integers between 0 and 255 in TIF files; during training and evaluation we re-index these labels to integers between 0 and 6. We retain the "unclassified" labels during model training and evaluation, but do not include these classes in any metrics that average across all classes. Footnote §: We note this is different from the data split divisions within US3D, which mixes tiles from the same parent image between training, validation, and testing. As in the case of RarePlanes, when this process is applied to the R, G, and B channels of the multispectral image products to generate 8-bit RGB images, it produces images that are visually similar but _not_ identical to the RGB images provided in US3D. As such, the RGB model presented in this work cannot be perfectly compared to models published elsewhere trained on the RGB imagery included in US3D. Again, we felt that this approach provided the most fair comparison of model performance between different input channels, since the same processing technique is applied identically to each channel. Figure 6: RGB corruptions of a RarePlanes chip from our test set (severity level 3). ### Applying Corruptions As mentioned above, we modify the code available at github.com/hendrycks/robustness (hereafter referred to as "the robustness library" or simply "robustness"), which was originally designed for \(224\times 224\) RGB images, to achieve the following goals: 1. Arbitrary image resolution and aspect ratio (in fact this feature was already implemented in [30], though we did not discover that repository until after this work was completed). This was essential as the RarePlanes "chips" have varying resolution and aspect ratio, and while all US3D tiles are of shape \(1024\times 1024\), that differs from the \(224\times 224\) shape considered in robustness. 2. Input channels beyond RGB. Figure 7: NIR corruptions of a RarePlanes chip from our test set (severity level 3). Note that the motion blur (2nd row, 3rd column) is applied in the same direction as in fig. 6.
When applying corruptions to RGB+NIR images, we separate the 4-channel image into two 3-channel images, one containing the RGB channels and the other consisting of three stacked copies of the NIR channel. We then apply corruptions from robustness separately to each of these 3-channel images, with the following consideration: wherever physically sensible, we use the same randomness for both the RGB and NIR input. For example, in the case of motion blur, we use the same velocity vector for both -- not doing so would correspond to a physically unrealistic situation where the RGB and NIR sensors are moving in independent directions. On the other hand, for corruptions such as shot noise that model random processes affecting each pixel independently, we use independent randomness for RGB and NIR. Corruptions of a RarePlanes test image can be seen in figs. 6 and 7. We do not modify labels in any way. In the case of US3D one could argue that for some corruptions the segmentation labels should be modified. For example, "elastic transform" is implemented by applying localized permutations of some pixels and then blurring. Here it would make sense to apply the exact same localized permutations to the per-pixel segmentation labels and then possibly blur them using soft labels (where each pixel is assigned to a probability distribution over segmentation classes). One could also imagine using soft label blurring for e.g. zoom or motion blur. Our reason for leaving segmentation labels fixed is pragmatic: in most cases where an argument could be made for modifying labels, doing so seemed to require working with num. \(\text{classes}\times H\times W\) soft label tensors (possibly at an intermediate stage), and this would require modifications to multiple components of our pipeline, which (in keeping with standard practice) stores and utilizes labels as 8-bit RGB images with segmentation classes encoded as certain colors. Moreover, we emphasize that for most corruptions at most severity levels the unmodified labels fairly reflect ground truth. With all of that said, in the case of elastic transform mentioned above we observed the bizarre experimental result of accuracy increasing as corruption severity increased -- see fig. 8. Determining whether the results in that figure persist even after more care is taken with segmentation labels is a high-priority item for future work. ## Appendix B Model Architecture and Training For the RarePlanes experiments, all models are based on the ResNet34 architecture [15], pretrained on ImageNet (we use the weights from [31]). For the early fusion model, we stack the RGB and NIR input channels to obtain a \(4\times H\times W\) input tensor, and replace the ResNet34 first layer convolution weight, originally of shape \(3\times C\times H\times W\), with a weight of shape \(4\times C\times H\times W\) by appending a copy of the red channel (i.e., we initialize the convolution weights applied to the NIR channel with those that were applied to the red channel in the ImageNet pretrained ResNet). For the pure NIR model, we adopt a similar initialization strategy, but discard the RGB weights since we only use one-channel NIR inputs (so in this case the first convolution weight has shape \(1\times C\times H\times W\)). For the pure RGB model of course no modifications are required. Figure 8: Results from the experiments summarized in fig. 5, restricting attention to the elastic transform corruption of [6].
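The first-layer weight surgery just described can be written as a small helper; the following is our own sketch of the initialization scheme, not the authors' code.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet34

# Sketch (ours) of the first-layer weight surgery: replace a pretrained
# 3-channel conv1 with a 4-channel (early fusion) or 1-channel (pure NIR)
# conv whose extra/only input channel reuses the pretrained red filters.

def adapt_conv1(model, in_channels):
    old = model.conv1.weight.data                     # shape (64, 3, 7, 7)
    model.conv1 = nn.Conv2d(in_channels, 64, kernel_size=7,
                            stride=2, padding=3, bias=False)
    if in_channels == 4:      # early fusion: [R, G, B, NIR <- R]
        model.conv1.weight.data = torch.cat([old, old[:, :1]], dim=1)
    elif in_channels == 1:    # pure NIR: keep only the red-channel filters
        model.conv1.weight.data = old[:, :1].clone()
    return model

early_backbone = adapt_conv1(resnet34(weights="IMAGENET1K_V1"), in_channels=4)
nir_backbone = adapt_conv1(resnet34(weights="IMAGENET1K_V1"), in_channels=1)
```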
In the late fusion model, we take pure RGB and pure NIR architectures as described above, and remove their final fully connected layers: call these \(f_{\rm rgb}\) and \(f_{\rm nir}\). Given an input of the form \((x_{\rm rgb},x_{\rm nir})\), we use \(f_{\rm rgb}\) and \(f_{\rm nir}\) to compute two 512-dimensional feature vectors \(f_{\rm rgb}(x_{\rm rgb})\) and \(f_{\rm nir}(x_{\rm nir})\). These are then concatenated to obtain a 1024-dimensional feature vector \(\mathrm{cat}(f_{\rm rgb}(x_{\rm rgb}),f_{\rm nir}(x_{\rm nir}))\). Finally, this is passed through a multi-layer perceptron with two 512-dimensional hidden layers. For all training and evaluation, we "pixel normalize" input images, subtracting a mean RGB+NIR four-dimensional vector, and dividing by a corresponding standard deviation. In the RarePlanes experiments we use the mean and standard deviation of the pretraining dataset (ImageNet) for RGB channels, and a mean and standard deviation computed from our RarePlanes images for the NIR channel. We fine-tune on RarePlanes for 100 epochs using stochastic gradient descent with initial learning rate \(10^{-3}\), weight decay \(10^{-4}\) and momentum 0.9. We use distributed data parallel training with effective batch size 256 (128 \(\times\) 2 GPUs). We use a "reduce-on-plateau" learning rate schedule that multiplies the learning rate by 0.1 if training proceeds for 10 epochs without a 1% increase in validation accuracy. We train five models of each fusion architecture with independent random seeds (randomness in play includes new model layers (all models include at least a new final classification layer) and SGD batching). Figure 9 displays test accuracies and class confusion matrices of our trained RarePlanes classifiers. The accuracies of \(\approx 92\%\) are respectable but by no means state-of-the-art. All of our US3D segmentation models are based on DeepLabv3 with a ResNet50 backbone, pretrained on COCO [21] (again obtained from [31]). Our methods for defining RGB+NIR fusion architectures are similar to those described above for image classifiers, the difference being that we modify the ResNet50 backbone. In the case of early fusion and pure NIR models, we only need to modify the first layer convolution weights as described for the RarePlanes classifiers (and of course for the pure RGB model no modifications are necessary). Figure 9: Test accuracies and confusion matrices for RGB+NIR RarePlanes models. For the late fusion segmentation model, we start with pure RGB and pure NIR ResNet50 backbones: call these \(f_{\text{rgb}}\) and \(f_{\text{nir}}\). Given an input of the form \((x_{\text{rgb}},x_{\text{nir}})\), we use \(f_{\text{rgb}}\) and \(f_{\text{nir}}\) to compute two 2048-dimensional feature vectors \(f_{\text{rgb}}(x_{\text{rgb}})\) and \(f_{\text{nir}}(x_{\text{nir}})\). These are then concatenated to obtain a 4096-dimensional feature vector \(\text{cat}(f_{\text{rgb}}(x_{\text{rgb}}),f_{\text{nir}}(x_{\text{nir}}))\) which is then passed through the DeepLabv3 atrous convolution segmentation "head." In the US3D experiments we use the mean and standard deviation of ImageNet for RGB channels, and a mean and standard deviation of the ImageNet R channel for the NIR channel.** We fine-tune on US3D for 400 epochs using the Dice loss function optimized with Adam [32] with initial learning rate \(5\times 10^{-4}\) and weight decay \(10^{-5}\) (and PyTorch [33] defaults for all other Adam hyperparameters).
We use distributed data parallel training with effective batch size 32 (4 \(\times\) 8 GPUs). We use a "reduce-on-plateau" learning rate schedule that multiplies the learning rate by 0.5 if training proceeds for 25 epochs without a relative 1% increase in validation accuracy. Footnote **: Note that while Torchvision’s DeepLabv3 was pretrained on COCO, not ImageNet, inspection of their preprocessing (torchvision.models.segmentation.DeepLabV3_ResNet50_Weights.DEFAULT.transforms) shows that the _ImageNet_ mean and standard deviation were used for normalization! Figure 10 displays test Intersection-over-Union (IoU) of our trained US3D segmentation models. ###### Acknowledgements. The research described in this paper was conducted under the Laboratory Directed Research and Development Program at Pacific Northwest National Laboratory, a multiprogram national laboratory operated by Battelle for the U.S. Department of Energy. Figure 10: Intersection-over-Union (IoU) of US3D segmentation models.
2310.20203
Importance Estimation with Random Gradient for Neural Network Pruning
Global Neuron Importance Estimation is used to prune neural networks for efficiency reasons. To determine the global importance of each neuron or convolutional kernel, most of the existing methods either use activation or gradient information or both, which demands abundant labelled examples. In this work, we use heuristics to derive importance estimation similar to Taylor First Order (TaylorFO) approximation based methods. We name our methods TaylorFO-abs and TaylorFO-sq. We propose two additional methods to improve these importance estimation methods. Firstly, we propagate random gradients from the last layer of a network, thus avoiding the need for labelled examples. Secondly, we normalize the gradient magnitude of the last layer output before propagating, which allows all examples to contribute similarly to the importance score. Our methods with additional techniques perform better than previous methods when tested on ResNet and VGG architectures on CIFAR-100 and STL-10 datasets. Furthermore, our method also complements the existing methods and improves their performances when combined with them.
Suman Sapkota, Binod Bhattarai
2023-10-31T06:00:17Z
http://arxiv.org/abs/2310.20203v1
# Importance Estimation with Random Gradient for Neural Network Pruning ###### Abstract Global Neuron Importance Estimation is used to prune neural networks for efficiency reasons. To determine the global importance of each neuron or convolutional kernel, most of the existing methods either use activation or gradient information or both, which demands abundant labelled examples. In this work, we use heuristics to derive importance estimation similar to Taylor First Order (TaylorFO) approximation based methods. We name our methods _TaylorFO-abs_ and _TaylorFO-sq_. We propose two additional methods to improve these importance estimation methods. Firstly, we propagate _random gradients_ from the last layer of a network, thus avoiding the need for labelled examples. Secondly, we _normalize_ the gradient magnitude of the last layer output before propagating, which allows all examples to contribute similarly to the importance score. Our methods with additional techniques perform better than previous methods when tested on ResNet and VGG architectures on CIFAR-100 and STL-10 datasets. Furthermore, our method also complements the existing methods and improves their performances when combined with them. ## 1 Introduction and Background Neural Network Pruning LeCun et al. (1989); Han et al. (2015); Gale et al. (2019); Blalock et al. (2020) is one of the methods to reduce the parameter, compute and memory requirements of a network. This method differs significantly from knowledge distillation Hinton et al. (2015); Gou et al. (2021), where a small model is trained to produce the output of a larger model. Neural Network Pruning is performed at multiple levels: (i) weight pruning Mozer & Smolensky (1989); Han et al. (2015, 2016) removes individual parameters, while (ii) neuron/channel pruning Wen et al. (2016); Lebedev & Lempitsky (2016) removes whole neurons or channels, and (iii) block/group pruning Gordon et al. (2018); Leclerc et al. (2018) removes whole blocks of a network such as a residual block or sub-network. Weight pruning generally achieves a very high pruning ratio, attaining similar performance with only a few percent of the parameters. This allows a high compression of the network and also accelerates the network on specialized hardware and CPUs. However, weight pruning in a defined format such as N:M block-sparse helps in improving the performance on GPUs Liu & Wang (2023). Pruning a network at the level of neurons or channels helps in reducing the parameters while retaining similar performance; however, the pruning ratio is not as high. Furthermore, pruning at the level of blocks helps to reduce the complexity of the network and creates a smaller network for faster inference. All these methods can be applied to the same model as well. Furthermore, pruning and addition of neurons can bring the dynamic behaviour of decreasing and increasing network capacity, which has found application in Continual or Incremental Learning settings as well as Neural Architecture Search Gordon et al. (2018); Zhang et al. (2020); Dai et al. (2020); Sapkota & Bhattarai (2022). In this work, we are particularly interested in neuron level pruning. Apart from the benefit of reduced parameters, memory and computation, neuron/channel level pruning is more similar to a biological formulation where the neurons are the basic unit of computation. Furthermore, the number of neurons in a neural network is small compared to the number of connections and can easily be pruned by measuring the global importance LeCun et al. (1989); Hassibi et al.
(1993); Molchanov et al. (2016); Lee et al. (2018); Yu et al. (2018). We focus on the global importance as it removes the need to inject bias about the number of neurons to prune in each layer. This simplifies our problem to removing the less significant neurons globally, which allows us to extend it to unorganized networks beyond the layered formulation. However, in this work, we focus only on layer-wise or block-wise architectures such as ResNet and VGG. Previous works show that global importance estimation can be computed using one or all of forward/activation Hu et al. (2016), parameter/weight Han et al. (2015b) or backward/gradient Wang et al. (2020); Lubana and Dick (2020); Evci et al. (2022) signals. Some of the previous techniques use Feature Importance propagation Yu et al. (2018) or Gradient propagation Lee et al. (2018) to find the neuron importance. Taylor First Order approximation based methods use both activation and gradient information for pruning Molchanov et al. (2016, 2019). There are also works that improve pruning with Taylor Second Order approximations Singh and Alistarh (2020); Yu et al. (2022); Chen et al. (2022). Although there are methods using forward and backward signals for pruning at initialization Wang et al. (2020), we limit our experiment to the pruning of trained models for a given number of neurons. In this work, we derive a similar importance metric to Taylor First Order (Taylor-FO) approximations Molchanov et al. (2016, 2019) but from heuristics combining both forward and backward signals. The forward signal is the pre-activation of the neuron, and the backward signal is the gradient, or a random gradient, of the output. We also compare the pruning accuracy with different numbers of data samples and find that our method performs better than the previous works in most settings. Furthermore, our method of using the random gradient signal on the last layer and gradient normalization is also applicable to previous methods, showing that our approach improves performance on previous methods as well. ## 2 Motivation and Our Method Previous works on global importance based post-training pruning of neurons focus on using forward and backward signals. Since most of these methods are based on Taylor approximation of the change in loss after the removal of a neuron or group of parameters, these methods require input and target values for computing the importance. We tackle the problem of pruning from the perspective of overall function output without considering the loss. **Forward Signal:** The forward signal is generally given by the pre-activation (\(x_{i}\)). If a pre-activation is zero, then it has no impact on the output of the function, i.e. the output deviation with respect to the removal of the neuron is zero. If the incoming connections of a neuron are all zero-weight, then the neuron can simply be removed, i.e. it has no significance. If the incoming connections are non-zero, then the neuron has significance. The forward signal takes into consideration how data affects a particular neuron. **Backward Signal:** The backward signal is generally given by back-propagating the loss. If the outgoing connections of the neuron are zero, then the neuron has no significance to the function even if it has positive activation. The gradient (\(\delta x_{i}\)) provides us with information on how the function or loss will change if the neuron is removed.
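As a concrete illustration (ours, not the authors' code), these forward and backward signals can be collected with PyTorch hooks; the importance metric defined in the next paragraph combines the two.

```python
import torch
import torch.nn as nn

# Illustrative sketch (ours): collect the forward signal x_i and backward
# signal dx_i for a layer's pre-activations using PyTorch hooks.

signals = {}

def attach_hooks(module, name):
    def fwd_hook(mod, inp, out):
        signals[name] = {"x": out.detach()}
    def bwd_hook(mod, grad_in, grad_out):
        signals[name]["dx"] = grad_out[0].detach()
    module.register_forward_hook(fwd_hook)
    module.register_full_backward_hook(bwd_hook)

model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU(), nn.Conv2d(16, 32, 3))
attach_hooks(model[0], "conv1")      # hook the layer producing pre-activations

x = torch.randn(8, 3, 32, 32)
model(x).sum().backward()            # any scalar output; a random gradient also works
x_i, dx_i = signals["conv1"]["x"], signals["conv1"]["dx"]
```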
**Importance Metric:** Combining the forward and backward signal we can get the influence of the neuron on the loss or the function for given data. Hence, the importance metric (\(I_{i}\)) of each neuron (\(n_{i}\)) for a dataset of size \(M\) is given by \(I_{i}=\frac{1}{M}\sum_{n=1}^{M}x_{i}.\delta x_{i}\), where \(x_{i}\) is the pre-activation and \(\delta x_{i}\) is its gradient. It fulfils the criterion that importance should be low if incoming or outgoing connections are zero and higher otherwise. **Problem 1:** This importance metric is similar to Taylor-FO Molchanov et al. (2016). However, analysing this, we find that this method produces low importance if the gradient is negative, which is a problem for our application, as the function output will still change significantly even if the change lowers the loss. Hence, we make the importance positive either by using the absolute value or the squared value. This gives rise to two of our methods, namely: **Taylor-FO-abs:** \(I_{i}=\frac{1}{M}\sum_{n=1}^{M}|x_{i}.\delta x_{i}|\) **Taylor-FO-sq:** \(I_{i}=\frac{1}{M}\sum_{n=1}^{M}\left(x_{i}.\delta x_{i}\right)^{2}\) **Problem 2:** The goal of neuron removal is to make the least change in the loss value, which requires the label to calculate the loss. However, according to our intuition, the goal should be to make the least change in the output of the function. Having the least change in the function implicitly fulfils the objective of previous methods to have the least change in the loss. This removes the need to have labels for pruning. Furthermore, viewing from the backward signal, the slope/gradient of the output towards the correct label should not be the requirement, as data points getting the same output as the target will not produce any gradient and are hence deemed insignificant by previous methods. Figure 1: The number of neurons pruned vs Accuracy plot for (left) ResNet-56 and (right) VGG19 for various data sizes (DS). Methods: taylorfo Molchanov et al. (2016), Molchanov_BN and Molchanov_group Molchanov et al. (2019) are the baselines while taylorfo_sq and taylorfo_abs are _our methods_. rand represents use of random gradient and norm represents use of gradient magnitude normalization. Alternatively, we test the hypothesis that any _random gradient_ should work fine, and the gradient should be _normalized_ to the same magnitude (of \(1\)). Doing so makes the contribution of each data point equal for computing the importance. We find that the use of random gradient also produces similar pruning performance, as shown in Figures 1 and 2 of the Experiments section, supporting our hypothesis. Our method of using random gradient can be applied to all the previous methods using the gradient signal from the output to compute the importance. The implication can be useful in the setting of unsupervised pruning, where a network can be pruned for a target dataset without labels. The application of this method can be significant for settings such as Reinforcement Learning, where the input data is abundant but labels are scarce. **Similarity to Molchanov-BN pruning:** Our method is similar to Molchanov - Batch Norm pruning Molchanov et al. (2019) as both methods perform neuron/channel level pruning using forward and backward signals. Molchanov-BN has importance given by \(I_{i}=\sum(\gamma_{i}.\delta\gamma_{i}+\beta_{i}.\delta\beta_{i})^{2}\), where \(\gamma_{i},\beta_{i}\) represent the scale and bias terms of BN, and \(\delta\gamma_{i}\) and \(\delta\beta_{i}\) represent their gradients, respectively.
If we consider \(\beta=0\) then \(I_{i}=\sum(\gamma_{i}.\delta\gamma_{i})^{2}\). If we consider the input of batch-norm to be \(x_{i}\) and the output after scaling to be \(y_{i}=x_{i}.\gamma_{i}\), then the gradient \(\delta\gamma_{i}=\delta y_{i}.x_{i}\) and the overall importance is \(I_{i}=\sum(\gamma_{i}.\delta y_{i}.x_{i})^{2}\). In our case, we take the importance \(I_{i}=\sum(x_{i}.\delta x_{i})^{2}\). Writing this in terms of \(\delta y_{i}\) (since \(\delta x_{i}=\gamma_{i}.\delta y_{i}\)), our importance becomes \(I_{i}=\sum(x_{i}.\gamma_{i}.\delta y_{i})^{2}\), which differs from Molchanov-BN only by the constant shift term \(\beta\). It turns out that our method produces better accuracy for ResNet-style architectures. However, in the VGG architecture, our pruning method performs better than Molchanov-BN and is competitive with or better than Molchanov-WeightGroup when tested in various settings. ## 3 Experiments **Procedure:** First, we compute the importance score for each neuron/channel on a given dataset. Second, we prune the \(P\) least important of \(N\) total neurons according to the importance metric (\(I\)) given by each method. We measure the resulting accuracy and plot it against the number of neurons pruned. We test the Taylor methods Molchanov et al. (2016, 2019) and our two methods. The pruning is performed on the training dataset and accuracy is measured on the test dataset. Furthermore, to test the performance of different methods on different dataset sizes, we test with \(D\) data points out of \(M\) total on all the datasets and architectures. The total dataset size for CIFAR-100 is 50K and for STL-10 is 5K. **CIFAR-100:** We test the pruning performance of different methods on the CIFAR-100 Krizhevsky et al. (2009) dataset for dataset sizes \(D\in[2,10,103,50K]\), as shown in Figures 1 and 2. We test the ResNet-20, ResNet-56 and VGG-19 models. **STL-10:** The ResNet-18 model used on the STL-10 Coates et al. (2011) dataset is finetuned from an ImageNet-pretrained model. We test the pruning performance of different methods on the STL-10 dataset for dataset sizes \(D\in[2,71,595,5K]\), as shown in Figure 2. ## 4 Conclusion In this paper, we improve on previously proposed Taylor First Order based pruning methods Molchanov et al. (2019) by using random gradients and by normalizing the gradients. We show that our techniques improve on the previous methods as well as our own variations. The knowledge that these methods allow for pruning with random gradient backpropagation is the main contribution of our work. Figure 2: The number of neurons pruned vs Accuracy plot for (left) ResNet-20 on CIFAR-100 and (right) ResNet-18 on STL-10 dataset for various data sizes (DS). Methods: taylorfo Molchanov et al. (2016), Molchanov_BN and Molchanov_group Molchanov et al. (2019) are the baselines while taylorfo_sq and taylorfo_abs are _our methods_. rand represents use of random gradient and norm represents use of gradient magnitude normalization.
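Putting the pieces together, the following is our own end-to-end sketch of the procedure in section 3 combined with the random normalized gradient; the spatial aggregation over feature maps and the mask-based "removal" are our simplifications, and `signals` refers to the hook dictionary from the earlier sketch.

```python
import torch

# Our own end-to-end sketch (not the authors' code). Assumes `model` ends in
# a logits layer of shape (B, num_classes) and that `signals[layer_name]`
# holds the hooked conv pre-activation x and its gradient dx.

def channel_importance(model, inputs, layer_name, variant="sq"):
    out = model(inputs)                        # forward pass, logits (B, C)
    g = torch.randn_like(out)                  # random "gradient" at the output
    g = g / g.norm(dim=1, keepdim=True)        # normalize to magnitude 1 per example
    model.zero_grad()
    out.backward(g)                            # backpropagate the random gradient
    x, dx = signals[layer_name]["x"], signals[layer_name]["dx"]
    prod = (x * dx).flatten(2).sum(dim=-1)     # (B, channels): sum over spatial dims
    score = prod.abs() if variant == "abs" else prod ** 2
    return score.mean(dim=0)                   # average importance over examples

# Mask-based stand-in for pruning the P least important channels:
# imp = channel_importance(model, batch, "conv1")
# drop = imp.topk(P, largest=False).indices
# conv = dict(model.named_modules())["conv1"]
# conv.weight.data[drop] = 0.0                 # zero the dropped output channels
```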
2309.08388
AONN-2: An adjoint-oriented neural network method for PDE-constrained shape optimization
Shape optimization has been playing an important role in a large variety of engineering applications. Existing shape optimization methods are generally mesh-dependent and therefore encounter challenges due to mesh deformation. To overcome this limitation, we present a new adjoint-oriented neural network method, AONN-2, for PDE-constrained shape optimization problems. This method extends the capabilities of the original AONN method [1], which is developed for efficiently solving parametric optimal control problems. AONN-2 inherits the direct-adjoint looping (DAL) framework for computing the extremum of an objective functional and the neural network methods for solving complicated PDEs from AONN. Furthermore, AONN-2 expands the application scope to shape optimization by taking advantage of the shape derivatives to optimize the shape represented by discrete boundary points. AONN-2 is a fully mesh-free shape optimization approach, naturally sidestepping issues related to mesh deformation, with no need for maintaining mesh quality and additional mesh corrections. A series of experimental results are presented, highlighting the flexibility, robustness, and accuracy of AONN-2.
Xili Wang, Pengfei Yin, Bo Zhang, Chao Yang
2023-09-15T13:29:53Z
http://arxiv.org/abs/2309.08388v1
# AONN-2: An adjoint-oriented neural network method for PDE-constrained shape optimization ###### Abstract Shape optimization has been playing an important role in a large variety of engineering applications. Existing shape optimization methods are generally mesh-dependent and therefore encounter challenges due to mesh deformation. To overcome this limitation, we present a new adjoint-oriented neural network method, AONN-2, for PDE-constrained shape optimization problems. This method extends the capabilities of the original AONN method [1], which is developed for efficiently solving parametric optimal control problems. AONN-2 inherits the direct-adjoint looping (DAL) framework for computing the extremum of an objective functional and the neural network methods for solving complicated PDEs from AONN. Furthermore, AONN-2 expands the application scope to shape optimization by taking advantage of the shape derivatives to optimize the shape represented by discrete boundary points. AONN-2 is a fully mesh-free shape optimization approach, naturally sidestepping issues related to mesh deformation, with no need for maintaining mesh quality and additional mesh corrections. A series of experimental results are presented, highlighting the flexibility, robustness, and accuracy of AONN-2. keywords: shape optimization, PDE-constrained optimization, direct-adjoint looping, deep neural network, mesh-free ## 1 Introduction Shape optimization has been playing an important role in a large variety of engineering applications, particularly in the design of, e.g., aircraft wings [2], high-speed train [3], high-rise building [4], bridge structure [5], acoustic devices [6], heat sink [7] and biomedical devices [8]. Many such problems can be formulated as the minimization of an objective functional defined over a class of admissible domains, which are usually subject to constraints imposed by partial differential equations (PDEs). These PDE-constrained shape optimization problems are notoriously difficult to solve because the dependence of the functional on the domain is usually nonconvex. Additionally, even numerically solving the governing PDEs is challenging because the computational domain itself is the unknown variable in the shape optimization problem. A rather straightforward approach for shape optimization is the heuristic method, which regards a shape optimization problem as a general finite-dimensional optimization problem by parameterizing the shape information into discrete optimization variables. In this way, empirical search algorithms such as the genetic algorithm [9], the swarm algorithm [10], and the simulated annealing algorithm [11] can be applied to search for the parameterized optimal shape. Heuristic methods are generally easy to implement and good at finding the global optima [12]. However, they usually require a large number of functional evaluations of different shapes, and are therefore computationally expensive, especially for large scale systems [13]. The adjoint methods [14; 15; 16; 17] take advantage of the gradient information of the objective functional to efficiently update the computational domain for finding the optimal shape. This can be done either on the continuous level or after discretization. The adjoint methods are often combined with the moving mesh approach [17] or the level-set model [18], and have been integrated into several shape optimization toolboxes such as cashocs [19] and Fireshape [20].
However, the dependency between the mesh and the shape can severely restrict the flexibility of the shape deformation and the update of the level-set function, consequently affecting the accuracy of approximating the optimal shape. On top of heuristic and adjoint methods, surrogate models can be introduced to alleviate the burden of on-line numerical simulations. A surrogate model is usually constructed off-line so as to directly map design variables to optimization indicators by using, e.g., kriging [3] or polynomial response surfaces [21]. Still, the construction of a surrogate model typically requires a substantial amount of high-fidelity data obtained through expensive numerical simulations of the governing PDEs. In recent years, deep learning technologies have gradually been introduced to solve shape optimization problems. For example, fully connected neural networks [22; 23; 24; 25], convolutional neural networks [12; 26], or deep neural operators [27] can be applied to improve the approximation accuracy of surrogate models. Generative adversarial networks [28] can be used to generate more desirable shape samples [29; 30; 31; 32], which could help improve the convergence rate or even the optimization results. However, establishing the generative model requires a domain-specific data set, which could severely limit the scope of its application. Moreover, the shape optimization problem can be solved through reinforcement learning such as Q-learning [33], proximal policy optimization [34], or deep deterministic policy gradient [4; 35; 36] approaches, in which the reward function relies on the shape and guides the policy to provide an action (e.g., a design variation) to update the shape. Nevertheless, the reinforcement learning methods often suffer from meshing failure during the interaction between the agent and the environment [34], and they usually can only use several parameters to define the shape, leading to a very small shape search space. In summary, existing methods for shape optimization are usually mesh-dependent and often suffer from mesh deformation during the optimization process. The mesh deformation can have a significant impact on the mesh quality and require an additional mesh correction step [37]. In this paper, to sidestep issues related to mesh deformation, we propose a new adjoint-oriented neural network method, namely AONN-2, for PDE-constrained shape optimization. The AONN method [1] is designed for solving parametric optimal control problems and is primarily applicable to problems featuring explicit control functions within a fixed domain. AONN-2 inherits all the advantages of AONN, including the classical direct-adjoint looping (DAL) framework for computing the extremum of an objective functional [38] and the emerging neural network methods for solving complicated PDEs [39; 40; 41; 42; 43]. On top of that, AONN-2 extends the capabilities of AONN by leveraging the shape derivative to optimize the domain's shape, which is represented by a set of discrete boundary points known as shape representing points. In this way, AONN-2 can be applied as a completely mesh-free shape optimization method that naturally avoids issues caused by mesh dependency. The remaining content of this paper is organized as follows. In Section 2, the basic theory of PDE-constrained shape optimization is introduced. The proposed AONN-2 algorithm is presented in Section 3.
Then in Section 4, a series of numerical experiments are provided to demonstrate the flexibility, robustness, and accuracy of AONN-2. The paper is concluded in Section 5. ## 2 Basic theory of PDE-constrained shape optimization PDE-constrained shape optimization can be seen as a special class of optimal control problems, in which the corresponding control space is a set of shapes. Let \(\Omega\subset\mathbb{R}^{d}\) be a bounded, connected spatial domain with Lipschitz continuous boundary \(\partial\Omega\), and \(\mathbf{x}\in\Omega\) be the spatial coordinates. A PDE-constrained shape optimization problem can be formulated as: \[\begin{cases}\min_{(y,\Omega)\in\mathcal{Y}\times\mathcal{U}}J(y,\Omega),\\ \text{subject to}\quad\mathbf{F}(y,\Omega)=\mathbf{0},\end{cases} \tag{1}\] where \(J:\mathcal{Y}\times\mathcal{U}\longmapsto\mathbb{R}\) is an objective functional, \(\mathcal{Y}\) and \(\mathcal{U}\) are the state space and shape space, respectively, and \(y\) is the solution of the state equation \(\mathbf{F}(y,\Omega)=\mathbf{0}\) defined on \(\Omega\). The state equation is specifically expressed as: \[\begin{cases}\mathbf{F}_{I}(y,\Omega)(\mathbf{x})=\mathbf{0}&\forall\mathbf{x} \in\Omega,\\ \mathbf{F}_{B}(y,\Omega)(\mathbf{x})=\mathbf{0}&\forall\mathbf{x}\in\partial \Omega.\end{cases} \tag{2}\] In shape optimization, modeling of the shape space is necessary. There are several approaches, such as the level set method and the phase field method. In this work, the shape space \(\mathcal{U}\) is modeled as a set of images of diffeomorphic geometric transformations \(\mathbf{T}_{i}:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}\) applied to an initial domain \(\Omega_{0}\subset\mathbb{R}^{d}\)[20], that is \[\mathcal{U}:=\{\Omega=\mathbf{T}_{i}(\Omega_{0}):\mathbf{T}_{i}\in\mathcal{T} _{ad}\}. \tag{3}\] The modeling of the shape space \(\mathcal{U}\) is illustrated in Fig. 1. In this way, the domain's boundary can be explicitly described through \(\partial(\mathbf{T}_{i}(\Omega_{0}))=\mathbf{T}_{i}(\partial\Omega_{0})\). For solving problem (1), the framework of direct-adjoint looping (DAL) is widely adopted [44; 45; 46]. In the process of DAL, an initial domain \(\Omega_{0}\) is given first, and then the state equation and adjoint equation are solved on the current domain to get the state variable and adjoint variable. After that, the shape derivative of the functional \(J\) is computed based on the state and adjoint variables to construct a descent direction of the functional. Finally, the shape of the domain is updated according to the descent direction. By repeating the above process, the optimized shape can be obtained. In the DAL framework, the computation of the shape derivative and the construction of the descent direction are the two key components. The shape derivative of the functional \(J\) in the direction of a vector field \(\mathbf{V}\) is defined by: \[\mathrm{d}_{\Omega}J(y,\Omega;\mathbf{V}):=\lim_{t\searrow 0}\frac{J(y_{t}, \Omega_{t})-J(y,\Omega)}{t}, \tag{4}\] where \(\Omega_{t}=\{\mathbf{x}+t\mathbf{V}(\mathbf{x}):\mathbf{x}\in\Omega\}\) and \(y_{t}\) is the solution to \(\mathbf{F}(y_{t},\Omega_{t})=\mathbf{0}\).
For efficiently computing the shape derivative, the adjoint equation is usually introduced through constructing the Lagrange functional as: \[\mathcal{L}(y,\Omega,p):=J(y,\Omega)+\left(p,\mathbf{F}(y,\Omega)\right)_{L^{2 }(\Omega)}, \tag{5}\] and \(\mathcal{L}_{y}(y,\Omega,p)=0\) is the adjoint equation, that is \[\mathbf{F}_{y}(y,\Omega)^{*}p=-J_{y}(y,\Omega), \tag{6}\] where \(p\) is the adjoint variable, which is also known as the Lagrange multiplier. With the state variable \(y\) and adjoint variable \(p\), which are acquired by solving the state equation and adjoint equation, respectively, the shape derivative of the functional \(J\) in direction \(\mathbf{V}\) can be computed by: \[\mathrm{d}_{\Omega}J(y,\Omega;\mathbf{V})=\mathcal{L}_{\Omega}(y,\Omega,p; \mathbf{V}). \tag{7}\] After obtaining the shape derivative, the shape gradient \(\nabla\hat{J}(\Omega):\partial\Omega\rightarrow\mathbb{R}\), which satisfies \(\mathrm{d}_{\Omega}J(y,\Omega;\mathbf{V})=(\nabla\hat{J}(\Omega),\mathbf{V} \cdot\mathbf{n})_{L^{2}(\partial\Omega)}\), can be naturally employed to construct the descent direction as \(-\nabla\hat{J}(\Omega)\mathbf{n}\) on \(\partial\Omega\), where \(\mathbf{n}\) is the unit outward normal vector to \(\Omega\), and \(\hat{J}(\Omega)=J(y,\Omega)\) is the reduced form of the objective functional.

Figure 1: A set of diffeomorphic geometric transformations \(\mathbf{T}_{i}\) (black arrows) applied to an initial domain \(\Omega_{0}\).

Unfortunately, as demonstrated in Section 3, this descent direction often exhibits low regularity. To enhance the regularity of the descent direction, the classical Hilbertian method [47; 48] is frequently adopted. In this method, the regularization equation (8) needs to be solved to get the regularized descent direction \(\mathbf{\Phi}\). \[(\mathbf{\Phi},\mathbf{V})_{[H^{1}(\Omega)]^{d}}=-\mathrm{d}_{\Omega}J(y, \Omega;\mathbf{V})\quad\forall\;\mathbf{V}\in W^{1,\infty}(\mathbb{R}^{d}, \mathbb{R}^{d}). \tag{8}\] Different forms of inner products can be employed in equation (8), and the \(H^{1}\) inner product is used in this work. The strong form of equation (8) is denoted by \(\mathbf{G}(y,p,\mathbf{\Phi},\Omega)=\mathbf{0}\) with a Neumann boundary condition, which is specifically expressed as: \[\begin{cases}\mathbf{G}_{I}(\mathbf{\Phi},\Omega):=-\Delta\mathbf{\Phi}+ \mathbf{\Phi}&=\mathbf{0}\qquad\text{in }\Omega,\\ \mathbf{G}_{B}(y,p,\mathbf{\Phi},\Omega):=\partial_{\mathbf{n}}\mathbf{\Phi}+ \nabla\hat{J}(\Omega)\mathbf{n}&=\mathbf{0}\quad\text{on }\partial\Omega.\end{cases} \tag{9}\] Finally, the domain \(\Omega\) can be updated by using \(\mathbf{\Phi}\) as the descent direction of the objective functional. In the following section, AONN-2, which is developed based on the aforementioned theory, will be introduced. ## 3 Methodology The adjoint-oriented neural network method (AONN) [1] was proposed to solve parametric optimal control problems without directly solving the complex Karush-Kuhn-Tucker system with various penalty terms. It combines the DAL framework with neural network approaches for solving PDEs. In AONN, the control variable to be optimized is restricted to the form of a function; without considering parameters, it is a function \(u(\mathbf{x})\) defined on a fixed computational domain \(\Omega\) throughout the optimization process.
Specifically, in AONN the solution of the optimal system is approximated by iteratively updating the neural networks related to the state variable \(y\), adjoint variable \(p\) and control variable \(u\). In the context of shape optimization, it becomes necessary to consider how to parameterize and update the shape of the domain, as well as how to solve the PDEs in the changing computational domain. Addressing these challenges is the crucial target of this work. We extend AONN to AONN-2, which can solve PDE-constrained shape optimization problems. Firstly, a shape is represented by discrete points \(\mathbf{x}\) on the shape's boundary, which are referred to as shape representing points in this paper. Based on this representation, the shape can be updated according to the descent direction \(\mathbf{\Phi}\) as \(\mathbf{x}^{*}=\mathbf{x}+\alpha\mathbf{\Phi}(\mathbf{x})\), where \(\alpha\) is the step length. A schematic diagram of representing and updating the shape is shown in Fig. 2. To obtain the descent direction \(\mathbf{\Phi}\), one needs to solve three PDEs: the state equation (2), the adjoint equation (6) and the regularization equation (9), as exhibited in Section 2. In AONN-2, these PDEs are solved with physics-informed neural networks (PINNs) [40], which are intrinsically mesh-free. After updating the shape, one only needs to resample the collocation points in the updated domain for solving the three PDEs in the next step. In traditional mesh-dependent shape optimization methods, however, it is necessary to generate internal meshes and deform them, and the meshes usually need to be corrected to guarantee their quality [16; 49]. The whole process of mesh processing is complex and time-consuming. The proposed AONN-2 avoids mesh processing and focuses on the movement of the shape representing points. The cost of resampling collocation points is much lower than that of mesh processing, and the movement of shape representing points is also more flexible than mesh deformation. For solving the state equation, adjoint equation and regularization equation with PINNs, the corresponding neural networks are established as \(\tilde{y}(\mathbf{x};\boldsymbol{\theta}_{y})\), \(\tilde{p}(\mathbf{x};\boldsymbol{\theta}_{p})\) and \(\tilde{\boldsymbol{\Phi}}(\mathbf{x};\boldsymbol{\theta}_{\boldsymbol{\Phi}})\) with network parameters \(\boldsymbol{\theta}_{y}\), \(\boldsymbol{\theta}_{p}\) and \(\boldsymbol{\theta}_{\boldsymbol{\Phi}}\), respectively.
The related three loss functions are defined as: \[L_{s}(\boldsymbol{\theta}_{y},\Omega)=\left(\frac{1}{N}\sum_{i=1}^{N}|r_{s_{I }}(\tilde{y}(\mathbf{x}_{I}^{i};\boldsymbol{\theta}_{y}),\Omega)|^{2}+\frac{ \lambda_{s}}{M}\sum_{i=1}^{M}|r_{s_{B}}(\tilde{y}(\mathbf{x}_{B}^{i}; \boldsymbol{\theta}_{y}),\Omega)|^{2}\right)^{\frac{1}{2}}, \tag{10}\] \[\begin{split} L_{a}(\boldsymbol{\theta}_{y},\boldsymbol{\theta} _{p},\Omega)=&\left(\frac{1}{N}\sum_{i=1}^{N}|r_{a_{I}}(\tilde{y} (\mathbf{x}_{I}^{i};\boldsymbol{\theta}_{y}),\tilde{p}(\mathbf{x}_{I}^{i}; \boldsymbol{\theta}_{p}),\Omega)|^{2}+\right.\\ &\left.\frac{\lambda_{a}}{M}\sum_{i=1}^{M}|r_{a_{B}}(\tilde{y}( \mathbf{x}_{B}^{i};\boldsymbol{\theta}_{y}),\tilde{p}(\mathbf{x}_{B}^{i}; \boldsymbol{\theta}_{p}),\Omega)|^{2}\right)^{\frac{1}{2}},\end{split} \tag{11}\] \[\begin{split} L_{r}(\boldsymbol{\theta}_{y},\boldsymbol{\theta} _{p},\boldsymbol{\theta}_{\boldsymbol{\Phi}},\Omega)=&\left( \frac{1}{N}\sum_{i=1}^{N}|r_{r_{I}}(\tilde{y}(\mathbf{x}_{I}^{i};\boldsymbol{ \theta}_{y}),\tilde{p}(\mathbf{x}_{I}^{i};\boldsymbol{\theta}_{p}),\tilde{ \boldsymbol{\Phi}}(\mathbf{x}_{I}^{i};\boldsymbol{\theta}_{\boldsymbol{\Phi}} ),\Omega)|^{2}+\right.\\ &\left.\frac{\lambda_{r}}{M}\sum_{i=1}^{M}|r_{r_{B}}(\tilde{y}( \mathbf{x}_{B}^{i};\boldsymbol{\theta}_{y}),\tilde{p}(\mathbf{x}_{B}^{i}; \boldsymbol{\theta}_{p}),\tilde{\boldsymbol{\Phi}}(\mathbf{x}_{B}^{i}; \boldsymbol{\theta}_{\boldsymbol{\Phi}}),\Omega)|^{2}\right)^{\frac{1}{2}}, \end{split} \tag{12}\] where \(\{\mathbf{x}_{I}^{i}\}_{i=1}^{N}\) denote \(N\) collocation points in the domain \(\Omega\), \(\{\mathbf{x}_{B}^{i}\}_{i=1}^{M}\) denote \(M\) collocation points on the domain's boundary \(\partial\Omega\) (for which the shape representing points can be employed), and \(r_{s}\), \(r_{a}\) and \(r_{r}\) represent the residuals of the state equation, adjoint equation and regularization equation as: \[r_{s}(y,\Omega) =\mathbf{F}(y,\Omega), \tag{13a}\] \[r_{a}(y,p,\Omega) =\mathbf{F}_{y}(y,\Omega)^{*}p+J_{y}(y,\Omega), \tag{13b}\] \[r_{r}(y,p,\mathbf{\Phi},\Omega) =\mathbf{G}(y,p,\mathbf{\Phi},\Omega). \tag{13c}\] According to equation (2), the state residual \(r_{s}\) (13a) is divided into the interior residual \(r_{s_{I}}\) and the boundary residual \(r_{s_{B}}\). Similarly, the adjoint residual \(r_{a}\) (13b) is divided into \(r_{a_{I}}\) and \(r_{a_{B}}\), and the regularization residual \(r_{r}\) (13c) is divided into \(r_{r_{I}}\) and \(r_{r_{B}}\). The corresponding weights for the boundary residuals are denoted as \(\lambda_{s}\), \(\lambda_{a}\), and \(\lambda_{r}\), respectively. The updating of the neural network parameters \(\boldsymbol{\theta}_{y},\boldsymbol{\theta}_{p},\boldsymbol{\theta}_{\Phi}\) can be realized by automatic differentiation and back propagation with deep learning libraries, such as PyTorch [50] and TensorFlow [51].

Figure 2: Representing and updating shapes. (a) An initial shape is represented by discrete boundary points (i.e., shape representing points). (b) The descent direction \(\boldsymbol{\Phi}\) (blue arrows) is used to guide the movement of the shape representing points. (c) The updated shape is obtained while retaining the connectivity of the shape representing points.
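To make the training concrete, the following is a minimal PyTorch-style sketch of the state-step loss (10), instantiated for a Poisson-type state equation \(-\Delta y=f\) with a homogeneous Dirichlet boundary condition (the setting of Section 4.1). The network `net_y`, the right-hand side `f`, the collocation tensors, and the boundary weight are illustrative assumptions, not the authors' released code; the adjoint and regularization losses (11)-(12) follow the same pattern with their respective residuals.

```python
import torch

def laplacian(u, x):
    # Sum of the unmixed second derivatives of the scalar field u w.r.t. x.
    grad = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    lap = 0.0
    for d in range(x.shape[1]):
        lap = lap + torch.autograd.grad(grad[:, d].sum(), x,
                                        create_graph=True)[0][:, d]
    return lap

def state_loss(net_y, f, x_in, x_bd, lam_s=100.0):
    # L_s of Eq. (10) for -Δy = f in Ω, y = 0 on ∂Ω (lam_s is illustrative).
    x_in = x_in.requires_grad_(True)
    y_in = net_y(x_in).squeeze(-1)
    r_interior = -laplacian(y_in, x_in) - f(x_in)   # interior residual r_{s_I}
    r_boundary = net_y(x_bd).squeeze(-1)            # boundary residual r_{s_B}
    return (r_interior.pow(2).mean()
            + lam_s * r_boundary.pow(2).mean()).sqrt()
```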
In order to verify the necessity of the regularization of the descent direction in AONN-2, a simple two-dimensional unconstrained shape optimization problem is considered: \[\min_{\Omega}J(f,\Omega):=\int_{\Omega}f\,\mathrm{d}\mathbf{x}, \tag{14}\] where \(f\left(x_{1},x_{2}\right)=x_{1}^{2}+\left(3x_{2}/2\right)^{2}-1\) and the optimal \(\Omega\) is searched for to minimize the integral value. Obviously, the integral value is minimized when the shape's boundary is set to the zero level set of \(f(x_{1},x_{2})\). For regularizing the descent direction, the corresponding regularization equation (15) is solved: \[\begin{cases}-\Delta\mathbf{\Phi}+\mathbf{\Phi}=\mathbf{0}&\text{ in }\Omega,\\ \partial_{\mathbf{n}}\mathbf{\Phi}+f\mathbf{n}=\mathbf{0}&\text{ on }\partial \Omega.\end{cases} \tag{15}\] In this example, the initial shape is set to a unit circle. The comparison of shape updating with and without regularization is shown in Fig. 3. From this figure, we can see that the shape updating without regularization has poor smoothness and may lead to divergence during the iteration process. Thus it is necessary to improve the regularity of the descent direction.

Figure 3: Comparison of shape updating with regularization (w/ reg.) and without regularization (w/o reg.). (a) The optimal shape (i.e., the zero level set of \(f\)), shown as the blue dotted line. (b) Deformation of each shape representing point of the initial shape (w/ reg.). (c) Shapes after one update. (d) Shapes after five updates.

In summary, the computation process of AONN-2 consists of a loop of five steps: training \(\tilde{y}(\mathbf{x};\boldsymbol{\theta}_{y})\) (state step), training \(\tilde{p}(\mathbf{x};\boldsymbol{\theta}_{p})\) (adjoint step), training \(\tilde{\mathbf{\Phi}}(\mathbf{x};\boldsymbol{\theta}_{\Phi})\) (regularization step), updating the shape representing points (update step), and resampling the collocation points (resample step), which is illustrated in Fig. 4. More specifically, starting with initial neural networks \(\tilde{y}(\mathbf{x};\boldsymbol{\theta}_{y}^{0})\), \(\tilde{p}(\mathbf{x};\boldsymbol{\theta}_{p}^{0})\), \(\tilde{\mathbf{\Phi}}(\mathbf{x};\boldsymbol{\theta}_{\Phi}^{0})\) and an initial domain \(\Omega_{0}\), the state equation is solved by minimizing the loss function \(L_{s}(\boldsymbol{\theta}_{y},\Omega_{0})\) to get the solution \(\tilde{y}(\mathbf{x};\boldsymbol{\theta}_{y}^{1})\). Following this, with \(\tilde{y}(\mathbf{x};\boldsymbol{\theta}_{y}^{1})\), the adjoint equation is solved by minimizing the loss function \(L_{a}(\boldsymbol{\theta}_{y}^{1},\boldsymbol{\theta}_{p},\Omega_{0})\) to get the solution \(\tilde{p}(\mathbf{x};\boldsymbol{\theta}_{p}^{1})\). After that, with the solutions of the state and adjoint equations, the descent direction \(\tilde{\mathbf{\Phi}}(\mathbf{x};\boldsymbol{\theta}_{\Phi}^{1})\) of the objective functional can be obtained by solving the regularization equation through minimizing the loss function \(L_{r}(\mathbf{\theta}_{y}^{1},\mathbf{\theta}_{p}^{1},\mathbf{\theta}_{\mathbf{\Phi}},\Omega_{0})\). Then, \(\tilde{\mathbf{\Phi}}(\mathbf{x};\mathbf{\theta}_{\mathbf{\Phi}}^{1})\) is employed to guide the updating of the shape representing points of \(\Omega_{0}\) to form a new domain \(\Omega_{1}\). Lastly, the collocation points in \(\Omega_{1}\) are resampled. The next iteration begins with the network parameters \(\mathbf{\theta}_{y}^{1},\mathbf{\theta}_{p}^{1},\mathbf{\theta}_{\mathbf{\Phi}}^{1}\) and \(\Omega_{1}\).

Figure 4: A schematic diagram of the computation process of AONN-2, which mainly contains the state step, adjoint step, regularization step, update step and resample step.

The updates of the network parameters and the shape are expressed as follows: \[\mathbf{\theta}_{y}^{k+1} =\operatorname*{arg\,min}_{\mathbf{\theta}_{y}}L_{s}(\mathbf{\theta}_{y}, \Omega_{k}),\] \[\mathbf{\theta}_{p}^{k+1} =\operatorname*{arg\,min}_{\mathbf{\theta}_{p}}L_{a}(\mathbf{\theta}_{y}^{ k+1},\mathbf{\theta}_{p},\Omega_{k}),\] \[\mathbf{\theta}_{\mathbf{\Phi}}^{k+1} =\operatorname*{arg\,min}_{\mathbf{\theta}_{\mathbf{\Phi}}}L_{r}(\mathbf{ \theta}_{y}^{k+1},\mathbf{\theta}_{p}^{k+1},\mathbf{\theta}_{\mathbf{\Phi}},\Omega_{k}),\] and \[\partial\Omega_{k+1}=\partial\Omega_{k}+\alpha\tilde{\mathbf{\Phi}}(\partial \Omega_{k};\mathbf{\theta}_{\mathbf{\Phi}}^{k+1}).\] The whole computation process is presented in Algorithm 1. It is worth mentioning that the step size \(\alpha\) is crucial for the convergence of AONN-2. A large step size may lead to divergence, while a small step size may cause slow convergence. As a compromise, a moderate starting step size \(\alpha_{0}\) can be chosen and a decay strategy with decay factor \(\gamma\) can be adopted. ``` Input: Shape representing points of initial domain \(\Omega_{0}\), initial network parameters \(\mathbf{\theta}_{y}^{0},\mathbf{\theta}_{p}^{0},\mathbf{\theta}_{\mathbf{\Phi}}^{0}\), initial step size \(\alpha_{0}\), decay factor \(\gamma\in(0,1)\), number of iterations \(K\), \(N\) collocation points \(\{\mathbf{x}_{I}^{i}\}_{i=1}^{N}\) in \(\Omega_{0}\) and \(M\) collocation points \(\{\mathbf{x}_{B}^{i}\}_{i=1}^{M}\) on \(\partial\Omega_{0}\), which can use the shape representing points. Output: Optimized domain \(\Omega_{K}\), optimized state \(\tilde{y}(\mathbf{x};\mathbf{\theta}_{y}^{K})\). 1 while \(k<K\) do 2 \(\mathbf{\theta}_{y}^{k+1}\leftarrow\operatorname*{arg\,min}_{\mathbf{\theta}_{y}}L_{s} (\mathbf{\theta}_{y},\Omega_{k})\): Train the neural network \(\tilde{y}(\mathbf{x};\mathbf{\theta}_{y})\) from the initialization \(\mathbf{\theta}_{y}^{k}\). 3 \(\mathbf{\theta}_{p}^{k+1}\leftarrow\operatorname*{arg\,min}_{\mathbf{\theta}_{p}}L_{a }(\mathbf{\theta}_{y}^{k+1},\mathbf{\theta}_{p},\Omega_{k})\): Train the neural network \(\tilde{p}(\mathbf{x};\mathbf{\theta}_{p})\) from the initialization \(\mathbf{\theta}_{p}^{k}\). 4 \(\mathbf{\theta}_{\mathbf{\Phi}}^{k+1}\leftarrow\operatorname*{arg\,min}_{\mathbf{\theta}_{ \mathbf{\Phi}}}L_{r}(\mathbf{\theta}_{y}^{k+1},\mathbf{\theta}_{p}^{k+1},\mathbf{\theta}_{\bm {\Phi}},\Omega_{k})\): Train the neural network \(\tilde{\mathbf{\Phi}}(\mathbf{x};\mathbf{\theta}_{\mathbf{\Phi}})\) from the initialization \(\mathbf{\theta}_{\mathbf{\Phi}}^{k}\). 5 \(\partial\Omega_{k+1}\leftarrow\partial\Omega_{k}+\alpha_{k}\tilde{\mathbf{\Phi}}( \partial\Omega_{k};\mathbf{\theta}_{\mathbf{\Phi}}^{k+1})\): Update the shape representing points. 6 Resample \(N\) collocation points \(\{\mathbf{x}_{I}^{i}\}_{i=1}^{N}\) in \(\Omega_{k+1}\). 7 \(\alpha_{k+1}\leftarrow\gamma\alpha_{k}\). 8 \(k\gets k+1\). 9 end while ``` **Algorithm 1** Computation process of the AONN-2 algorithm. ## 4 Numerical experiments In this section, a series of numerical experiments are carried out to demonstrate the effectiveness of AONN-2. In these experiments, different types of PDE constraints and objective functionals are considered, and AONN-2 is compared with the shape optimization toolbox Fireshape [20], which is based on the adjoint method with finite element discretization.
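All the experiments below follow the same driver. As a bridge from Section 3, a minimal Python transcription of the outer loop of Algorithm 1 is given here; the three training routines and the interior-point resampler are passed in as abstract, hypothetical callables (this is a structural sketch, not the released code).

```python
def aonn2(boundary_pts, phi_net, train_state, train_adjoint, train_reg,
          resample_interior, alpha0=0.5, gamma=0.9, K=50):
    """Outer loop of Algorithm 1; alpha0/gamma/K values are illustrative."""
    alpha = alpha0
    interior_pts = resample_interior(boundary_pts)
    for k in range(K):
        train_state(interior_pts, boundary_pts)        # minimize L_s, Eq. (10)
        train_adjoint(interior_pts, boundary_pts)      # minimize L_a, Eq. (11)
        train_reg(interior_pts, boundary_pts)          # minimize L_r, Eq. (12)
        # Update step: move every shape representing point along the
        # regularized descent direction, ∂Ω_{k+1} = ∂Ω_k + α_k Φ(∂Ω_k).
        boundary_pts = boundary_pts + alpha * phi_net(boundary_pts)
        interior_pts = resample_interior(boundary_pts) # resample step
        alpha *= gamma                                 # step-size decay
    return boundary_pts
```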
In AONN-2, ResNet [52] with the sinusoid activation function is adopted as the neural network model, and each residual block is comprised of two fully connected layers and a residual connection. Unless otherwise specified, the quasi Monte-Carlo method and the uniform sampling method are respectively used to generate the collocation points in the domain and on the boundary by calling the SciPy module [53]. For training the neural networks, the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm with a strong Wolfe line search strategy is employed based on PyTorch [50]. All the training is performed on a server equipped with a Geforce RTX 2080 GPU, with 64-bit floating-point precision. In Fireshape, the Limited-Memory BFGS (L-BFGS) method within the Rapid Optimization Library (ROL) [54] is used to update the shape. The codes accompanying this manuscript will be published on GitHub ([https://github.com/SillyWWang/nn4shape](https://github.com/SillyWWang/nn4shape)). ### Model problem constrained by Poisson equation We start with a model problem constrained by the Poisson equation with a homogeneous Dirichlet boundary condition [16; 19]. This problem can be expressed by: \[\left\{\begin{aligned} &\min_{y,\Omega}J(y,\Omega):=\int_{ \Omega}y\,\mathrm{d}\mathbf{x},\\ &\text{subject to}\begin{cases}-\Delta y=f&\text{in}\ \Omega,\\ y=0&\text{on}\ \partial\Omega,\end{cases}\end{aligned}\right. \tag{16}\] where \(f(x_{1},x_{2})=2.5(x_{1}+0.4-x_{2}^{2})^{2}+x_{1}^{2}+x_{2}^{2}-1\) according to [19]. The shape derivative of the functional \(J\) is: \[\mathrm{d}_{\Omega}J(y,\Omega;\mathbf{V})=\int_{\partial\Omega}(\partial_{ \mathbf{n}}y\partial_{\mathbf{n}}p)\mathbf{n}\cdot\mathbf{V}\mathrm{d}s, \tag{17}\] where \(\mathbf{n}\) is the unit outward normal vector to \(\Omega\) and \(p\) is the solution of the corresponding adjoint equation: \[\left\{\begin{aligned} -\Delta p=1&\text{in}\ \Omega,\\ p=0&\text{on}\ \partial\Omega.\end{aligned}\right. \tag{18}\] And to get the regularized descent direction \(\mathbf{\Phi}\), the following regularization equation needs to be solved: \[\left\{\begin{aligned} -\Delta\mathbf{\Phi}+\mathbf{\Phi}& =\mathbf{0}&\text{in}\ \Omega,\\ \partial_{\mathbf{n}}\mathbf{\Phi}+(\partial_{\mathbf{n}}y\partial_{ \mathbf{n}}p)\mathbf{n}&=\mathbf{0}&\text{on}\ \partial\Omega.\end{aligned}\right. \tag{19}\] In AONN-2, \(y,p\) and \(\mathbf{\Phi}\) are respectively expressed by three neural networks \(\tilde{y}(\mathbf{x};\theta_{y})\), \(\tilde{p}(\mathbf{x};\theta_{p})\) and \(\tilde{\mathbf{\Phi}}(\mathbf{x};\theta_{\mathbf{\Phi}})\), and each of them has 2 residual blocks with 10 neurons per hidden layer. These networks take spatial coordinates \((x_{1},x_{2})\) as input and output \(y,p,\mathbf{\Phi}\), respectively. For solving the three PDEs based on PINNs, 1000 collocation points inside the domain and 500 collocation points on the boundary are sampled, and the boundary collocation points are also used as the shape representing points. In this problem, the initial domain \(\Omega_{0}\) is respectively set to a circle, an ellipse and a rectangle, as shown in Fig. 5. As a comparison, Fireshape is employed for solving this problem. In Fireshape, the P1 Lagrange finite element method is used for spatial discretization and the Krylov subspace method in PETSc is adopted to solve the discretized system. In particular, a total of 3120, 1776, and 3872 triangles are respectively used to discretize the computational domains for the cases with initial shapes of circle, ellipse, and rectangle.
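For reference, the network architecture described above can be sketched in PyTorch: a small ResNet whose residual blocks each contain two fully connected layers with the sinusoid activation and a skip connection. The sizes shown match the Poisson model problem (2 blocks, 10 neurons per hidden layer); details such as initialization are assumptions.

```python
import torch
import torch.nn as nn

class Sine(nn.Module):
    def forward(self, x):
        return torch.sin(x)

class ResBlock(nn.Module):
    # Two fully connected layers plus a residual (skip) connection.
    def __init__(self, width):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(width, width), Sine(),
                                  nn.Linear(width, width), Sine())
    def forward(self, x):
        return x + self.body(x)

class ResNetPINN(nn.Module):
    def __init__(self, in_dim=2, out_dim=1, width=10, n_blocks=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, width), Sine(),
            *[ResBlock(width) for _ in range(n_blocks)],
            nn.Linear(width, out_dim))
    def forward(self, x):
        return self.net(x)
```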
Figure 5: Shape optimization results by Fireshape and AONN-2, with three different initial shapes: (a) circle, (b) ellipse, (c) rectangle. The color represents the magnitude of \(y\).

The maximal iteration number \(K\) in Fireshape and AONN-2 is respectively set to 20 and 50. In order to evaluate the performance of both methods, the optimization result from [19] is adopted as a reference. Starting from different initializations, the results during the whole optimization processes of Fireshape and AONN-2 are displayed in Fig. 5. We observe that when the initial shape is a circle, as in Fig. 5(a), both Fireshape and AONN-2 can converge to the reference optimized shape. In the case with the ellipse initialization, as in Fig. 5(b), the meshes in the middle of the right side of the computational domain are severely compressed in Fireshape, and a small protrusion occurs eventually. In contrast, AONN-2 stably converges to the reference shape. In the last case, a rectangle that is significantly different from the reference shape is set as the initial shape, which makes the optimization process even more difficult. Due to the restrictions of the moving mesh, Fireshape fails in this situation. Unlike Fireshape, AONN-2 still roughly converges to the reference shape, as shown in Fig. 5(c), which indicates that AONN-2 is more flexible with respect to large deformations and more robust to various initial shapes. ### Pipe optimization constrained by Stokes equations The second problem is a two-dimensional pipe optimization [13; 20], which aims to optimize the shape of a pipe to minimize the energy dissipation in the fluid. The objective functional and the constraint due to the Stokes equations are as follows: \[\left\{\begin{aligned} \min_{\mathbf{u},\Gamma_{f}}J(\mathbf{u}, \Omega)&:=\nu\int_{\Omega}\left\|\nabla\mathbf{u}\right\|_{F}^{2} \,\mathrm{d}\mathbf{x},\\ \text{subject to}&\begin{cases}-\nabla p+\nu\Delta \mathbf{u}=\mathbf{0}&\text{in}\ \Omega,\\ \mathrm{div}\,\mathbf{u}=0&\text{in}\ \Omega,\\ \mathbf{u}=(u_{in},0)^{\top}&\text{on}\ \Gamma_{i},\\ \mathbf{u}=\mathbf{0}&\text{on}\ \Gamma_{w}\cup\Gamma_{f},\\ p\mathbf{n}-\nu\partial_{\mathbf{n}}\mathbf{u}=\mathbf{0}&\text{on}\ \Gamma_{o},\\ \mathrm{Vol}(\Omega)=V_{0},\end{cases}\end{aligned}\right. \tag{20}\] where \(\nu=1/400\) is the reciprocal of the Reynolds number, and the horizontal velocity profile at the inlet is set to \(u_{in}=4(1-x_{2})x_{2}\). The initial domain is illustrated in Fig. 6, and the volume of the initial domain is set to \(V_{0}\). In this example, only the free boundary \(\Gamma_{f}\) can be optimized and the no-slip boundary \(\Gamma_{w}\) is kept fixed. Besides, the volume of the domain needs to remain unchanged during optimization. Due to the self-adjoint property of problem (20), the adjoint step can be omitted, and we can easily derive the shape derivative: \[\mathrm{d}_{\Omega}J(\mathbf{u},\Omega;\mathbf{V})=\int_{\Gamma_{f}}\nu( \partial_{\mathbf{n}}\mathbf{u})^{2}\mathbf{n}\cdot\mathbf{V}\mathrm{d}s, \tag{21}\] and the regularization equation: \[\left\{\begin{aligned} -\Delta\mathbf{\Phi}+\mathbf{\Phi}& =\mathbf{0}&\text{ in }\Omega,\\ \partial_{\mathbf{n}}\mathbf{\Phi}+\nu(\partial_{\mathbf{n}} \mathbf{u})^{2}\mathbf{n}&=\mathbf{0}&\text{ on }\Gamma_{f},\\ \mathbf{\Phi}&=\mathbf{0}&\text{ on }\Gamma_{i}\cup\Gamma_{w}\cup\Gamma_{o}.\end{aligned}\right. \tag{22}\]
In order to apply AONN-2 to this problem, two neural networks are used to represent the state variables, namely the velocity field \(\mathbf{u}\) and the pressure field \(p\), and one neural network is used to represent the descent direction \(\mathbf{\Phi}\). The network for the velocity field contains 3 residual blocks with 20 neurons per hidden layer, the network for the pressure field contains 3 residual blocks with 15 neurons per layer, and the network for the descent direction contains 3 residual blocks with 20 neurons per hidden layer. These networks take spatial coordinates \((x_{1},x_{2})\) as the input and output \(\mathbf{u},p,\mathbf{\Phi}\), respectively. For solving the state equation and regularization equation based on PINNs, 6000 collocation points inside the domain and 1450 collocation points on the boundary are sampled. Among these boundary collocation points, 450 points on \(\Gamma_{f}\) are also used as shape representing points. As a comparison, in Fireshape, the P2-P1 Taylor-Hood finite element method is used for spatial discretization and the Scalable Nonlinear Equations Solvers (SNES) in PETSc are used to solve the discretized system. A total of 1566, 6264 and 25056 triangles are respectively employed to discretize the computational domain. The experiment settings and results are listed in Table 1. From the table, we can see that the values of the optimized objective functionals obtained by Fireshape with different numbers of triangles are almost the same, and all of them are higher than the value obtained by AONN-2. Besides, AONN-2 keeps the volume of the domain unchanged during the process of optimization, while Fireshape slightly changes the volume of the domain. To make a further comparison, we show the initial and optimized shapes obtained by AONN-2 and Fireshape in Fig. 7. It can be observed from Fig. 7(a) that, when using Fireshape, the meshes are severely squeezed at the junctions of the free boundary and the fixed boundary, despite the introduction of regularization terms. As seen in Fig. 7(b), AONN-2 does not suffer from this problem, thanks to its mesh-free property, which leads to a lower optimized value.

Figure 6: The initial shape of the pipe optimization problem. \(\Gamma_{i}\) is the inflow boundary and \(\Gamma_{o}\) is the outflow boundary. \(\Gamma_{w}\) and \(\Gamma_{f}\) are both no-slip boundaries, and \(\Gamma_{f}\) is the free boundary that needs to be optimized to minimize the energy dissipation.

### Obstacle optimization constrained by Stokes equations Next, we consider the obstacle optimization constrained by Stokes equations in a pipe flow, which is frequently encountered in aerodynamic applications. The aim of the obstacle optimization is to search for the shape that minimizes the drag on the obstacle, which is equivalent to searching for the shape that minimizes the energy dissipation in the fluid due to the shear forces [17; 55]. We take the same objective functional and PDE constraint as the pipe optimization given in equation (20), with a viscosity value of \(\nu=1/80\). This setting leads to the same formulations of the shape derivative and regularization equation. Two different scenarios of the obstacle are established, as in Fig. 8. In the first case, the initial shape of the obstacle is a circle with a radius of \(0.5\), placed in the center of the pipe.
In the second case, the initial obstacle is located below and to the right of the pipe center, and there is a fixed circular obstacle at the symmetric position with respect to the center. In these two cases, the barycenter of the obstacle is not fixed. The geometry parameters of the computational domain are marked in Fig. 8. To implement the AONN-2 algorithm, three neural networks are employed to represent the velocity field \(\mathbf{u}\), the pressure field \(p\), and the descent direction \(\mathbf{\Phi}\), respectively, which are all comprised of 2 residual blocks with 15 neurons per hidden layer. Different numbers of collocation points and shape representing points are adopted, which are listed in Table 2. For comparison, Fireshape is also used to solve the problem, with the P2-P1 Taylor-Hood finite element method for the spatial discretization and the Krylov subspace method in PETSc as the linear solver. We use the number of triangles as the indicator of the shape representation for the finite element method. The corresponding experiment settings and results are listed in Table 2. In Figs. 9 and 10, we show the flow fields before and after the optimization for test cases I and II, respectively. By comparing Fig. 9(c) and (d), we can observe that AONN-2 outperforms Fireshape and obtains an optimized obstacle with a flatter shape, leading to a lower value of the objective functional. From Fig. 10(c) and (d), it can be found that the optimal shape of the obstacle obtained by AONN-2 is positioned relatively close to the boundary, which makes the flow smoother and thus reduces the energy dissipation. In Fireshape, by contrast, the quality of the meshes surrounding the obstacle needs to be guaranteed (e.g., by adding regularization terms), which limits the deformation of the obstacle itself. The shape deformation in AONN-2 is more flexible, and thus a lower value of the objective functional is attained.

\begin{table} \begin{tabular}{l c c c c} \hline \hline & \multicolumn{3}{c}{Fireshape} & AONN-2 \\ \cline{2-5} Initial volume & 6.3562 & 6.3562 & 6.3562 & 6.3562 \\ Optimized volume & 6.3564 & 6.3556 & 6.3554 & 6.3562 \\ Initial objective & 0.0851 & 0.0851 & 0.0851 & 0.0851 \\ Optimized objective & 0.0785 & 0.0785 & 0.0785 & 0.0778 \\ Shape representation & 1566 (triangles) & 6264 (triangles) & 25056 (triangles) & 450 (points) \\ Collocation points \((M,N)\) & - & - & - & (1450,6000) \\ Network parameters & - & - & - & 5665 \\ Iteration number & 36 & 48 & 36 & 20 \\ \hline \hline \end{tabular} \end{table} Table 1: The experiment settings and test results for the pipe optimization.

Figure 7: Pipe optimization results by: (a) Fireshape and (b) AONN-2. The color represents the velocity magnitude.
\begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline Case I & \multicolumn{4}{c}{Fireshape} & \multicolumn{4}{c}{AONN-2} \\ \cline{2-9} Initial objective & 0.4574 & 0.4574 & 0.4574 & 0.4574 & 0.4574 & 0.4574 & 0.4574 & 0.4574 \\ Optimized objective & 0.4264 & 0.4260 & 0.4259 & 0.4258 & 0.4187 & 0.4190 & 0.4188 & 0.4190 \\ Shape representation & 872 & 3488 & 13952 & 55808 & 600 & 600 & 1200 & 1200 \\ Collocation points \((M,N)\) & - & - & - & - & (3800,600) & (3800,6000) & (4400,12000) & (4400,12000) \\ Network parameters & - & - & - & - & 3172 & 5092 & 3172 & 5092 \\ Iteration number & 70 & 83 & 68 & 70 & 30 & 30 & 30 & 30 \\ \hline \hline Case II & \multicolumn{4}{c}{Fireshape} & \multicolumn{4}{c}{AONN-2} \\ \cline{2-9} Initial objective & 0.5507 & 0.5507 & 0.5507 & 0.5507 & 0.5507 & 0.5507 & 0.5507 & 0.5507 \\ Optimized objective & 0.4751 & 0.4744 & 0.4742 & 0.4744 & 0.4121 & 0.3676 & 0.3810 & 0.3638 \\ Shape representation & 1404 & 5616 & 22464 & 89856 & 600 & 600 & 1200 & 1200 \\ Collocation points \((M,N)\) & - & - & - & - & (4400,6000) & (4400,6000) & (5000,12000) & (5000,12000) \\ Network parameters & - & - & - & - & 3172 & 5092 & 3172 & 5092 \\ Iteration number & 68 & 78 & 86 & 88 & 100 & 100 & 100 & 100 \\ \hline \hline \end{tabular} \end{table} Table 2: The experiment settings and test results for the obstacle optimization.

Figure 8: The initial geometries of the two cases in the obstacle optimization problem. \(\Gamma_{i}\) is the inflow boundary and \(\Gamma_{o}\) is the outflow boundary. \(\Gamma_{w}\) and \(\Gamma_{f}\) are no-slip boundaries, where \(\Gamma_{f}\) with red color (the red circle) is the free boundary that needs to be optimized.

Figure 9: Obstacle optimization results for test case I by Fireshape and AONN-2. The color represents the velocity magnitude.

Figure 10: Obstacle optimization results for test case II by Fireshape and AONN-2. The color represents the velocity magnitude.

### Channel optimization constrained by Navier-Stokes equations As the final example, the shape optimization constrained by the 2D incompressible Navier-Stokes equations [56] is studied. In this example, an L2-tracking type objective functional is minimized as follows: \[\left\{\begin{aligned} \min_{\mathbf{u},\Gamma_{f}}J(\mathbf{u}, \Omega):=\int_{\Omega}\left\|\mathbf{u}-\mathbf{u}_{d}\right\|^{2}\,\mathrm{d} \mathbf{x},\\ \text{subject to}\left\{\begin{aligned} -\nabla p-(\mathbf{u}\cdot\nabla)\mathbf{u}+\nu \Delta\mathbf{u}&=\mathbf{0}&\text{in }\Omega,\\ \text{div}\,\mathbf{u}&=0&\text{in }\Omega,\\ \mathbf{u}&=(u_{in},0)^{\top}&\text{on }\Gamma_{i},\\ \mathbf{u}&=\mathbf{0}&\text{on }\Gamma_{w}\cup\Gamma_{f},\\ p\mathbf{n}-\nu\partial_{\mathbf{n}}\mathbf{u}&= \mathbf{0}&\text{on }\Gamma_{o},\end{aligned}\right.\end{aligned}\right. \tag{23}\] where \(\nu=1/50\) is the reciprocal of the Reynolds number, \(u_{in}(x_{1},x_{2})=2.5(1+x_{2})(1-x_{2})\), and \(\mathbf{u}_{d}=(u_{in},0)^{\top}\). Two initial shapes are constructed based on \(\overline{\Omega}=[-1,1]\times[-1,1]\) with one piece of its boundary \(\{(x_{1},x_{2})|-0.5\leq x_{1}\leq 0.5,x_{2}=1\}\) being replaced by two Bezier curves, as shown in Fig. 11(a) and (b). The optimal shape corresponding to both initial shapes is exactly the same square domain \(\overline{\Omega}\). Under such a setting, this problem tests whether the shape optimization method is able to turn the curve on the top into a straight line.
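For illustration, boundary points on one such Bezier piece can be generated as follows; the control points below are hypothetical, since the paper does not list the ones actually used, and a quadratic curve is assumed for simplicity.

```python
import numpy as np

def bezier(p0, p1, p2, n=75):
    """Points on a quadratic Bezier curve with control points p0, p1, p2."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    return (1 - t) ** 2 * p0 + 2 * (1 - t) * t * p1 + t ** 2 * p2

p0 = np.array([-0.5, 1.0])          # joins the square boundary at x2 = 1
p2 = np.array([0.5, 1.0])
p1 = np.array([0.0, 1.4])           # hypothetical bump above the square
gamma_f = bezier(p0, p1, p2, n=75)  # 75 shape representing points on Γ_f
```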
According to problem (23), the adjoint variables \(\boldsymbol{\lambda}\) and \(q\) are defined by the following adjoint equation: \[\left\{\begin{aligned} -\nu\Delta\boldsymbol{\lambda}-\nabla \boldsymbol{\lambda}\cdot\mathbf{u}+(\nabla\mathbf{u})^{\top}\boldsymbol{ \lambda}+\nabla q&=\mathbf{u}-\mathbf{u}_{d}&\text{in }\Omega,\\ \nabla\cdot\boldsymbol{\lambda}&=0&\text{in }\Omega,\\ q\mathbf{n}-\nu\partial_{\mathbf{n}}\boldsymbol{\lambda}-(\mathbf{u} \cdot\mathbf{n})\boldsymbol{\lambda}&=\mathbf{0}&\text{on }\Gamma_{o},\\ \boldsymbol{\lambda}&=\mathbf{0}&\text{on }\Gamma_{w}\cup\Gamma_{f}\cup\Gamma_{i}.\end{aligned}\right. \tag{24}\]

Figure 11: Two initial shapes in the channel optimization problem. A prescribed horizontal velocity profile \(\mathbf{u}=(u_{in},0)^{\top}\) is assigned on the inlet boundary \(\Gamma_{i}\). The flow leaves the domain at the outflow boundary \(\Gamma_{o}\), and the remaining boundaries \(\Gamma_{w}\), \(\Gamma_{f}\) are no-slip walls, where \(\Gamma_{f}\) needs to be optimized to minimize the objective functional.

Once the state and adjoint variables are obtained, the descent direction \(\mathbf{\Phi}\) can be calculated by solving the following regularization equation: \[\begin{cases}\begin{aligned} -\Delta\mathbf{\Phi}+\mathbf{\Phi}& =\mathbf{0}&\text{in }\Omega,\\ \partial_{\mathbf{n}}\mathbf{\Phi}+\nabla\hat{J}\mathbf{n}& =\mathbf{0}&\text{on }\Gamma_{f},\\ \mathbf{\Phi}&=\mathbf{0}&\text{on }\Gamma_{i}\cup\Gamma_{w}\cup \Gamma_{o},\end{aligned}\end{cases} \tag{25}\] where \[\nabla\hat{J}=\frac{1}{2}\|\mathbf{u}-\mathbf{u}_{d}\|^{2}+\nu(\partial_{ \mathbf{n}}\mathbf{u})\cdot(\partial_{\mathbf{n}}\boldsymbol{\lambda}). \tag{26}\] In AONN-2, three neural networks that contain 3 residual blocks with 20 neurons per hidden layer are employed to represent the velocity field \(\mathbf{u}\), the adjoint variable \(\boldsymbol{\lambda}\) and the descent direction \(\mathbf{\Phi}\), respectively, and two neural networks that contain 3 residual blocks with 15 neurons per hidden layer are employed to represent the pressure field \(p\) and the adjoint variable \(q\), respectively. For training the neural networks to approximate the solutions of the state, adjoint and regularization equations, 424 and 849 points on the domain boundary, and 4000 and 8000 points inside the computational domain are sampled for the two cases, respectively. Among these boundary collocation points, 75 and 150 points on \(\Gamma_{f}\) are used as the shape representing points for each of the two cases. When using Fireshape, the P2-P1 Taylor-Hood finite element method is applied for spatial discretization and the resultant nonlinear equations are solved with PETSc. The corresponding experiment settings and results are listed in Table 3. As seen from Table 3, AONN-2 is able to achieve lower objective functional values than Fireshape. Fig. 12 further shows the flow fields before and after the optimization for the two initial shapes. Analogous to the previous examples, due to the restriction of the moving mesh, Fireshape usually sacrifices part of the flexibility of the deformation to maintain the mesh quality (e.g., by adding regularization terms). In comparison, AONN-2 allows the computational domain to be squeezed or stretched more freely, thanks to the considerable flexibility provided by the shape representing points on the free boundary.
\begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline Case I & \multicolumn{4}{c}{Fireshape} & \multicolumn{4}{c}{AONN-2} \\ \cline{2-9} Initial objective & 6.78 & 6.78 & 6.78 & 6.78 & 6.78 & 6.78 & 6.78 & 6.78 \\ Optimized objective & 0.0036 & 0.0031 & 0.0025 & 0.0024 & 0.0017 & 0.00068 & 0.0010 & 0.0011 \\ Shape representation & 256 & 1024 & 16384 & 65536 & 75 & 75 & 150 & 150 \\ Collocation points \((M,N)\) & - & - & - & - & (424,4000) & (424,4000) & (849,8000) & (849,8000) \\ Network parameters & - & - & - & - & 5013 & 9128 & 5013 & 9128 \\ Iteration number & 4 & 4 & 5 & 5 & 10 & 10 & 10 & 10 \\ \hline Case II & \multicolumn{4}{c}{Fireshape} & \multicolumn{4}{c}{AONN-2} \\ \cline{2-9} Initial objective & 6.60 & 6.60 & 6.60 & 6.60 & 6.60 & 6.60 & 6.60 & 6.60 \\ Optimized objective & 0.0023 & 0.0021 & 0.0018 & 0.0019 & 0.00033 & 0.00031 & 0.00032 & 0.00026 \\ Shape representation & 288 & 1152 & 4608 & 18432 & 75 & 75 & 150 & 150 \\ Collocation points \((M,N)\) & - & - & - & - & (424,4000) & (424,4000) & (849,8000) & (849,8000) \\ Network parameters & - & - & - & - & 5013 & 9128 & 5013 & 9128 \\ Iteration number & 12 & 12 & 16 & 14 & 20 & 20 & 20 & 20 \\ \hline \hline \end{tabular} \end{table} Table 3: The experiment settings and results for the channel optimization.

Figure 12: Channel optimization results by Fireshape and AONN-2. The color represents the velocity magnitude.

## 5 Conclusion An adjoint-oriented neural network method for PDE-constrained shape optimization (AONN-2) is proposed in this paper. AONN-2 not only inherits the characteristics of AONN but also takes advantage of the shape derivative to optimize the shape represented by discrete boundary points. Due to its mesh-free nature, AONN-2 can naturally avoid the issues caused by mesh deformation, which are often encountered in mesh-dependent optimization methods. In a series of numerical experiments, we apply AONN-2 to tackle shape optimization problems constrained by the Poisson, Stokes, and Navier-Stokes equations, and compare its performance with the widely-used shape optimization toolbox, Fireshape. The optimization results demonstrate that AONN-2 is able to obtain more desirable results with lower values of the objective functionals, and is more robust to various initial shapes. To further improve the performance of AONN-2, we plan to investigate methods that can reasonably determine the initial shape and position of the object to be optimized. In addition, we plan to extend the applicability of AONN-2 to topology optimization in future work. **CRediT authorship contribution statement** **Xili Wang:** Conceptualization, Methodology, Programming, Investigation, Writing - Original draft. **Pengfei Yin:** Conceptualization, Methodology, Programming, Investigation, Writing - Original draft. **Bo Zhang:** Conceptualization, Validation, Investigation, Writing - reviewing and editing. **Chao Yang:** Conceptualization, Validation, Writing - reviewing and editing, Supervision. ## Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. ## Acknowledgments This study was supported in part by the National Natural Science Foundation of China (No. 12131002, No. 62306018), and the China Postdoctoral Science Foundation (No. 2022M710211).
2309.10510
Logic Design of Neural Networks for High-Throughput and Low-Power Applications
Neural networks (NNs) have been successfully deployed in various fields. In NNs, a large number of multiply-accumulate (MAC) operations need to be performed. Most existing digital hardware platforms rely on parallel MAC units to accelerate these MAC operations. However, under a given area constraint, the number of MAC units in such platforms is limited, so MAC units have to be reused to perform MAC operations in a neural network. Accordingly, the throughput in generating classification results is not high, which prevents the application of traditional hardware platforms in extreme-throughput scenarios. Besides, the power consumption of such platforms is also high, mainly due to data movement. To overcome this challenge, in this paper, we propose to flatten and implement all the operations at neurons, e.g., MAC and ReLU, in a neural network with their corresponding logic circuits. To improve the throughput and reduce the power consumption of such logic designs, the weight values are embedded into the MAC units to simplify the logic, which can reduce the delay of the MAC units and the power consumption incurred by weight movement. The retiming technique is further used to improve the throughput of the logic circuits for neural networks. In addition, we propose a hardware-aware training method to reduce the area of logic designs of neural networks. Experimental results demonstrate that the proposed logic designs can achieve high throughput and low power consumption for several high-throughput applications.
Kangwei Xu, Grace Li Zhang, Ulf Schlichtmann, Bing Li
2023-09-19T10:45:46Z
http://arxiv.org/abs/2309.10510v1
# Logic Design of Neural Networks for High-Throughput and Low-Power Applications ###### Abstract **Neural networks (NNs) have been successfully deployed in various fields. In NNs, a large number of multiply-accumulate (MAC) operations need to be performed. Most existing digital hardware platforms rely on parallel MAC units to accelerate these MAC operations. However, under a given area constraint, the number of MAC units in such platforms is limited, so MAC units have to be reused to perform MAC operations in a neural network. Accordingly, the throughput in generating classification results is not high, which prevents the application of traditional hardware platforms in extreme-throughput scenarios. Besides, the power consumption of such platforms is also high, mainly due to data movement. To overcome this challenge, in this paper, we propose to flatten and implement all the operations at neurons, e.g., MAC and ReLU, in a neural network with their corresponding logic circuits. To improve the throughput and reduce the power consumption of such logic designs, the weight values are embedded into the MAC units to simplify the logic, which can reduce the delay of the MAC units and the power consumption incurred by weight movement. The retiming technique is further used to improve the throughput of the logic circuits for neural networks. In addition, we propose a hardware-aware training method to reduce the area of logic designs of neural networks. Experimental results demonstrate that the proposed logic designs can achieve high throughput and low power consumption for several high-throughput applications.** ## I Introduction Neural networks (NNs) have been successfully applied in various fields, e.g., pattern recognition and natural language processing. In NNs, a large number of multiply-accumulate (MAC) operations need to be performed. Traditional digital hardware platforms such as GPU and TPU use parallel MAC units consisting of multipliers and adders to perform such MAC operations. Due to area constraints, the number of MAC units is limited. Accordingly, MAC units on such platforms have to be reused to implement all the MAC operations in a neural network. Therefore, the throughput of generating classification results on such platforms is not high, which prevents their adoption in extremely high-throughput applications, such as signal compensation in optical fiber communications [1], data collection from physics experiments [2] and malicious packet filtering for network detection [2]. In addition, large power consumption is another issue when using such platforms to accelerate NNs, mainly resulting from data movement, e.g., loading weights from external DRAM to MAC units [3]. MAC operations themselves also account for a part of the power consumption. Thus it remains challenging to perform high-throughput tasks on those resource-constrained platforms requiring low power consumption. Various methods, from the software to the hardware levels, have been proposed to address the throughput and power consumption issues in traditional digital hardware platforms. On the software level, pruning [4] and quantization [5] of NNs have been explored to reduce the number of MAC operations and the complexity of performing MAC operations, respectively. In addition, different dataflows, e.g., weight stationary [6], output stationary [7], and row stationary [8], have been proposed to reduce data movement and power consumption.
On the hardware level, traditional digital hardware platforms using parallel MAC units are modified, e.g., by inserting multiplexers [9] to improve the throughput, and by power/clock gating [10][11] to reduce power consumption. Another perspective to improve throughput and reduce power consumption in accelerating a neural network is to convert it into a logic circuit or a look-up table (LUT)-based design, where weights are embedded. For example, LogicNets [2] quantizes the inputs and outputs of neurons with low bit widths and implements such neurons with LUTs, which are subsequently deployed on FPGAs. NullaNets [12] trains a neural network to produce binary activations and treats the operations at a neuron as multi-input multi-output Boolean functions. Such Boolean functions of neurons are further considered as truth tables and synthesized with logic synthesis tools. Although converting a neural network into a logic/LUT design is promising to improve throughput and reduce power consumption in accelerating this neural network, the existing methods either incur a large number of LUTs for a high inference accuracy or cannot guarantee the feasibility of logic synthesis of truth tables of neurons due to their complexity. To address this challenge, in this paper, we propose to directly flatten and implement all the operations in a neural network with their corresponding logic circuits and embed weights into such circuits. The key contributions of this paper are summarized as follows. * All operations, including MAC and activation functions, in a neural network are flattened and implemented with logic circuits. Flip-flops are inserted at the end of neurons in each layer to improve the throughput of such circuits. * In implementing MAC operations with logic circuits, pre-determined weights after training are used to simplify the logic of MAC units to reduce their delay, power and area. Since weights are not required to be moved from external DRAM, power consumption can be reduced significantly. * After the MAC units are simplified with weights in a layer, some logic is shared between the layers. Therefore, the whole neural network circuit is further simplified with EDA tools to reduce area and delay. * Different weight values affect the logic complexity of the resulting simplified MAC units. Accordingly, the traditional training is adjusted to select those weight values leading to smaller circuit sizes. * We present comprehensive evaluations on three high-throughput tasks, demonstrating that the proposed method can achieve high throughput and low power consumption. Furthermore, the proposed method outperforms the state-of-the-art approaches by achieving an average 2.19% increase in inference accuracy and an average 13.26% increase in throughput while reducing area overheads by an average of 38.76%. The rest of this paper is organized as follows. Section II gives the background and motivation of this work. Section III explains the details of the proposed method. The experimental results are given in Section IV. Section V concludes the paper. ## II Background and Motivation Neural networks have multiple layers of neurons. Synapses connect neurons with individual weights. The weights of a layer form a weight matrix. Computing a layer in an NN requires the multiplication of input data and the weight matrix, incurring a large number of MAC operations. The results of MAC operations are further processed by activation functions, e.g., ReLU.
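As a minimal illustration of this cost, computing one fully connected layer \(y=\mathrm{ReLU}(W\mathbf{x})\) performs one MAC per weight, i.e., \(n_{out}\cdot n_{in}\) MACs; the sizes below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 128, 64
W = rng.integers(-128, 128, size=(n_out, n_in))   # e.g., 8-bit weights
x = rng.integers(-128, 128, size=n_in)            # 8-bit input activations

y = np.maximum(W @ x, 0)                          # matrix-vector product + ReLU
print("MAC operations:", n_out * n_in)            # 8192 MACs for this layer
```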
Traditional digital hardware platforms such as GPU and TPU adopt parallel MAC units to accelerate such MAC operations. Due to the area constraint of such platforms, the number of MAC units is limited, so those MAC units have to be reused to perform all the MAC operations. Therefore, the throughput of generating classification results is low and not well suited for extreme-throughput applications. In addition, such platforms suffer from large power consumption due to data movement and MAC computation, limiting their application in resource-constrained platforms, e.g., edge devices. Various techniques have been proposed to address throughput and power consumption issues in accelerating NNs. One of the promising techniques is to convert NNs into LUT-based designs and logic circuits. We will explain their basic concepts as follows. #### II-1 LogicNets [2] proposes to map quantized neurons in a neural network to LUTs and implement such LUTs on FPGAs. The concept of LogicNets is shown in Fig. 1, where the \(M\) inputs and one output of a neuron are quantized to \(n\) bits. To convert this neuron into a LUT, the function of the neuron can be expressed by a truth table that enumerates all the combinations of input bits (i.e., the fan-in of this neuron) with their corresponding outputs. For an entry in the truth table, the output can be evaluated according to the input bits and predetermined weights. The weights themselves do not appear in the truth table directly. Once the truth table is established, it can be mapped to LUTs and implemented on FPGAs. Since neurons in NNs may have many inputs and their quantization bits should be large enough to maintain a high inference accuracy, the truth table becomes large, and the above mapping may lead to a large number of LUTs. This is because the hardware cost of implementing truth tables of neurons with LUTs grows exponentially with the neuron fan-in. For example, implementing a 32:1 truth table requires about a hundred million 6:1 LUTs, which is much larger than even the largest FPGAs available today and makes it impractical for direct mapping to LUTs [2]. Although extreme pruning and quantization can limit the neuron fan-in, this degrades the inference accuracy. #### II-2 NullaNets [12] converts the truth table of a neuron into a logic circuit with logic synthesis. To reduce the complexity in logic synthesis, they only evaluate the output values for input combinations in the training dataset, and the output values for the remaining input combinations are set as "don't care". The resulting synthesized logic circuit may not generate the correct output for those input data in the test dataset, degrading the inference accuracy. Besides, even though only partial input combinations are considered in logic synthesis, the logic complexity might still be high, preventing the feasibility of logic synthesis of the truth table of a neuron. In contrast to the previous work on converting NNs into LUTs and logic circuits, the proposed method directly flattens all the operations in a neural network into their corresponding logic circuits, with weights embedded in such circuits. ## III Logic Design of Neural Networks In this section, we first introduce the logic implementations of neural networks. Retiming is then used to improve the throughput of the logic designs of NNs. Afterwards, a hardware-aware training technique is proposed to reduce the area overheads of such logic designs. Fig. 1: The concept of implementing a neuron with LUTs in LogicNets [2].
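Before detailing the logic design, the LUT blow-up discussed above can be made concrete with a toy example: a neuron with \(M\) inputs quantized to \(n\) bits has \(2^{M\cdot n}\) truth-table entries. The sketch below is illustrative only (not the LogicNets tool flow); the tiny weights are assumptions, and note that they are embedded in the evaluation and never appear in the table itself.

```python
import itertools

M, n = 3, 2                          # 3 inputs, 2-bit quantization (tiny example)
weights = [1, -2, 1]                 # fixed weights after training (illustrative)

def neuron(inputs):
    acc = sum(w * x for w, x in zip(weights, inputs))
    acc = max(acc, 0)                # ReLU
    return min(acc, 2 ** n - 1)      # clamp to an n-bit unsigned output

levels = range(2 ** n)               # each input takes one of 2**n levels
table = {combo: neuron(combo)
         for combo in itertools.product(levels, repeat=M)}
print(len(table))                    # 2**(M*n) = 64 entries; at M=8, n=4: 2**32
```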
### _Logic Implementations of NNs with Fixed Weights_

To implement all the operations in NNs with logic circuits, we first use quantization-aware training to train a neural network while maintaining the inference accuracy. 8-bit quantization of weights and input activations is used during this process [13]. In addition, unstructured pruning [14] is further used to remove unimportant weights and reduce computational costs. Afterwards, the neural network is fine-tuned to improve the inference accuracy.

After training, the MAC operations in a neural network can be directly implemented with MAC units and the logic realizing the activation functions. The fixed weights after training are used to simplify the MAC operations at neurons. The simplified MAC units are appended with the logic circuit implementing the activation function at a neuron. All the neurons in this network are processed similarly. The resulting logic circuits are concatenated to generate the complete logic of the neural network. Then, flip-flops are inserted at the end of each layer to synchronize data propagation, after which logic redundancy within and across layers is removed by EDA tools. In the following paragraphs, we introduce each part of the logic design of a neural network.

#### III-A1 Multiplier

To reduce the delay, power and area of multipliers, the fixed weights after training are used to simplify the logic of the multipliers. Fig. 2 illustrates the logic simplification of a 2-bit signed multiplier with the fixed quantized weight \(-2_{d}\) (binary \(10_{b}\)). After this simplification, the delay of the multiplier circuit is reduced by 57.72%, the power consumption is reduced by 68.23%, and the number of transistors is reduced by 60%. All the multipliers at a neuron are processed in this way.

#### III-A2 Adder

The simplified multipliers at a neuron are appended with an adder realizing the addition operation. Due to the weights embedded in the multipliers, the resulting logic circuit of the MAC operation can be further simplified. Fig. 3 illustrates the comparison of the MAC unit circuit before and after weight embedding, where the multipliers and the adder are 2-bit and 4-bit, respectively. With weight embedding, the delay of the MAC unit is reduced by 70.07%, the power consumption is reduced by 71.64%, and the number of transistors is reduced by 65%.

In this work, we use an 8-bit neural network to maintain the inference accuracy of the hardware implementation. However, the bit width of the adder of a MAC unit is larger than 8 bits and increases with the increasing number of inputs at a neuron. In general, the bit width of the MAC operation result should be reduced to 8 bits before this result enters the subsequent layer [13]. Two techniques are proposed to maintain the inference accuracy during this process. First, all the addition results at neurons in NNs are profiled to examine the actual bit width needed to represent such addition results. Fig. 4 presents an example of the distribution of addition results according to the training dataset. After removing the outliers, the maximum magnitudes of the remaining results are used to define the output bit width of the adder. Second, a requantizer is used to convert the high-bit value to 8 bits by multiplying it with a scaling factor [13]. By embedding the scaling factor into the requantizer, the delay, power and area of the requantizer circuit can be reduced.
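As an illustration of the two techniques just described, here is a small Python sketch (our own, with a hypothetical outlier rule and scaling factor) of profiling accumulator values to size the adder and requantizing a wide result back to signed 8-bit:

```python
import numpy as np

def profile_adder_bits(acc_values, outlier_quantile=0.001):
    """Profile MAC accumulator values (from the training set) to size the adder.

    Values beyond the given quantiles are treated as outliers and dropped;
    the bit width covers the largest remaining magnitude plus a sign bit.
    """
    lo, hi = np.quantile(acc_values, [outlier_quantile, 1 - outlier_quantile])
    max_mag = max(abs(lo), abs(hi))
    return int(np.ceil(np.log2(max_mag + 1))) + 1  # +1 for the sign bit

def requantize(acc, scale):
    """Map a wide accumulator back to signed 8-bit with an embedded scale."""
    return int(np.clip(round(acc * scale), -128, 127))

accs = np.random.default_rng(1).normal(0.0, 1500.0, size=1500)
print(profile_adder_bits(accs), requantize(4096, scale=1 / 64))
```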
Fig. 2: Logic circuit of a 2-bit signed multiplier. (a) The original circuit; (b) the logic circuit simplified with a fixed quantized weight (decimal: -2, binary: 10).

Fig. 3: (a) MAC operations at a neuron; (b) 2-bit signed multipliers simplified with the fixed quantized weights; (c) 4-bit signed adder circuit before logic simplification, where FA is a 1-bit full adder; (d) circuit of the simplified MAC unit.

Fig. 4: Distribution of 1500 samples of addition values from the adder output in a neuron for the OFC task. This example illustrates that after removing the outlier \(-8256_{d}\) (15-bit), the output bit width of the adder needs only 14 bits.

#### III-A3 The Circuit for Implementing the Activation Function

An activation function outputs the corresponding activation state by thresholding the input to determine whether the neuron should be activated or not. The activation function ReLU generates 0 if the input is smaller than 0 and keeps the original value if it is larger than 0. The circuit for performing ReLU is generated by describing its function with a hardware description language such as Verilog and synthesizing it with the EDA tool.

The logic circuit of each neuron is generated with the techniques described above. The resulting circuits for all neurons are concatenated together to produce the complete logic design of the neural network. The logic circuits of neurons might share some common logic since they result from the simplification of a MAC unit. To remove this logic redundancy, the complete circuit of a neural network is optimized by EDA tools.

### _Retiming with Cascaded Flip-Flops_

To synchronize data propagation in each layer, flip-flops are inserted at the end of the logic circuit implementing the activation function. To further improve the throughput of the logic circuit of a neural network, cascaded flip-flops are inserted into the original circuits, and the retiming technique is then deployed to reduce the maximum delay between flip-flop stages. In the retiming example at a neuron shown in Fig. 5(a), combinational logic block 1 has a critical path delay of 9, limiting the whole circuit's performance. Assume the clock-to-q delay and the setup time of a flip-flop are 3 and 1, respectively. The minimum clock period of this circuit is then equal to 13. In this example, two stages of flip-flops (IFF1 and IFF2) are inserted after the circuit implementation of ReLU. Afterwards, retiming is realized by EDA tools automatically. As shown in Fig. 5(b), with the retiming technique, the performance can be optimized by reallocating IFF1 and IFF2 inside combinational logic block 1, resulting in a minimum clock period equal to 7, which reduces the clock period by 46.15%.

Fig. 5: (a) Insertion of two-stage flip-flops after the original ReLU circuit; (b) circuit after retiming.

### _Hardware-Aware Training_

Different weight values affect the logic complexity of the resulting simplified multiplier. For example, as shown in Fig. 6, the quantized 8-bit weight '107' corresponds to a larger multiplier area of 111 units, while the quantized 8-bit weight '-16' leads to an area overhead of only 16 units. To take advantage of this property, we propose to train the neural network with selected weights that lead to a smaller multiplier area. The weight selection and the modified training are explained as follows.

Fig. 6: Area of multipliers simplified with 8-bit quantized weights.

_1) Weight selection:_ We first rank the weight values according to the area of the resulting simplified multipliers. Then, we select the top \(n\) weights that lead to the smallest multiplier area. In the experiments, \(n\) was set to 40.
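As a toy illustration of this selection step (the area-cost table below is a hypothetical stand-in; the paper obtains real per-weight multiplier areas from synthesis), the ranking and the nearest-value projection used later in training might look as follows:

```python
import numpy as np

def select_weights(area_cost, n):
    """Keep the n weight values whose simplified multipliers are smallest.

    `area_cost` maps weight value -> area units, as obtained by synthesizing
    a constant-weight multiplier for every candidate 8-bit value.
    """
    ranked = sorted(area_cost, key=area_cost.get)
    return np.array(sorted(ranked[:n]), dtype=float)

def project_to_selected(w, selected):
    """Snap every weight to the closest value from the selected set
    (applied in the forward pass during hardware-aware training)."""
    idx = np.abs(w[..., None] - selected).argmin(axis=-1)
    return selected[idx]

# Toy cost model: area grows with the number of non-zero bits of |w|.
costs = {w: bin(abs(w)).count("1") for w in range(-128, 128)}
selected = select_weights(costs, n=40)
print(project_to_selected(np.array([107.0, -17.0, 3.0]), selected))
```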
The neural network is trained only with such weight values, and the validation accuracy is verified. If the validation accuracy is much lower than that of the original training, more weight values, e.g., 50, are selected and used to train the neural network. In each iteration, 10 more weight values leading to a small area are added to the previously selected weight value set and used to train the network. The iteration continues until the validation accuracy recovers almost to that of the original training.

_2) Training:_ During training, the weights are forced to take the selected values in the forward propagation. In the backward propagation, the straight-through estimator is applied to skip the selection operation. In each epoch, all weights in the model are traversed and replaced with the closest value from the selected weight set. The training continues until the loss function converges or a given number of epochs is finished.

## IV Experimental Results

In this section, we demonstrate the results of the proposed method in terms of accuracy, throughput, power consumption and area overheads on 3 different extreme-throughput applications:

_1) Optical Fiber Communications (OFC):_ Transmission of optical signals in fibers suffers from chromatic dispersion (CD). Neural networks are used as nonlinear equalizers to compensate for CD, where the input is the optical signal affected by CD, and the output is the compensated optical signal. We use the formulation from [1] for OFC as a 21-input 1-output prediction task.

_2) Jet Substructure Classification (JSC):_ In large-scale physics experiments, terabytes of instrumentation data are generated every second. Neural networks with 16 inputs and 5 outputs are employed to filter out the most interesting results [2].

_3) Network Intrusion Detection (NID):_ Neural networks can identify malicious network packets to strengthen network security. For this task, we use the UNSW-NB15 dataset [2], which consists of packets labeled as either bad (0) or normal (1) with a total of 593 input features.

During training, the Adam optimizer was used with a step-decay learning rate schedule starting from 0.001. The mini-batch size was set to 1024. The quantization-aware training stops after 300 epochs and the hardware-aware training stops after 100 epochs. Each experiment for a particular task was repeated 10 times and the average is reported. The neural network training was implemented with PyTorch on Nvidia Quadro RTX 6000 GPUs. The logic circuits implementing the NNs were synthesized with Synopsys Design Compiler using the Nangate 45nm open-cell library. The correctness of the circuits is verified by performing simulation and ensuring that the same results as those of the original PyTorch network are returned.

Table I demonstrates the performance results. The first and second columns show the names of the neural networks and their structures. "A", "B" and "C" represent different versions of the neural networks. The third, fourth and fifth columns represent the inference performance of the original floating-point NNs, the traditionally quantized and pruned NNs, and our proposed NNs, respectively. According to these columns, the proposed hardware-aware training method can maintain a low bit error rate (BER) in the OFC application and high inference accuracy in the JSC and NID applications.
Compared with the baseline, where all the operations in a neural network are flattened without embedding the fixed weights, the proposed method achieves higher throughput and lower power consumption, as shown in the eighth and eleventh columns. The last two columns show the numbers of selected weight values in the hardware-aware training. Compared with the original 8-bit set of 256 weight values used for training, we train the NNs with only a small number of weight values (e.g., 70 in OFC-A) to reduce the size of their logic implementations.

To demonstrate the advantages of the proposed method, we compared it with LogicNets [2] in terms of inference accuracy, the number of transistors and maximum frequency. In this comparison, the network size, the pruning ratio, and the quantization bits of the activations were set to be the same as in LogicNets. The number of LUTs in LogicNets is converted to an equivalent number of transistors for the area comparison. According to the results illustrated in Fig. 7, the proposed method achieves higher accuracy than LogicNets on all tasks with a much smaller number of transistors. In addition, in most cases, the maximum frequency with the proposed method is higher than that of LogicNets, even though LogicNets uses a 16nm technology while the proposed method uses a 45nm technology.

To balance the maximum frequency and the number of inserted flip-flops, we iteratively inserted more flip-flops after the circuit implementing ReLU and used retiming to improve the clock frequency. The results are illustrated in Fig. 8, where the histogram denotes the maximum frequency (MHz), and the line represents the number of flip-flops after retiming. According to this figure, the maximum frequency can be improved significantly by inserting more flip-flops initially, and it later remains stable even when more flip-flops are inserted into the logic circuit.

Fig. 8: Comparison of max frequency before & after FF insertion and retiming.

To demonstrate the reduction in power and area with weight embedding and hardware-aware training, we compared the power consumption and area overheads before and after using these techniques. As shown in Fig. 9(a), for the OFC task, when weights are embedded in the circuit, the power consumption of the synthesized circuit is 137mW. After exploiting the simplification between different logic, the power consumption is reduced to 115mW. With the hardware-aware training, the power consumption is further reduced to 102mW. According to this figure, weight embedding and hardware-aware training can significantly reduce the power consumption and the corresponding area overheads.

In the proposed hardware-aware training, a certain number of weights resulting in a small multiplier area were selected to train the network. The inference accuracy with the proposed method can still be maintained, as illustrated in Fig. 10(a). To demonstrate the tradeoff between the number of selected weights and the inference performance, we selected different numbers of weights and used them to train the network. The results are illustrated in Fig. 10(b-d). According to these figures, as the number of selected weights is reduced, the BER in OFC increases and the inference accuracy in JSC and NID decreases. Besides, the area overheads decrease with a decreasing number of selected weights.

## V Conclusion

In this paper, we have proposed an efficient logic design of neural networks for extremely high-throughput applications.
Instead of relying on parallel MAC units to accelerate the MAC operations, all the operations in NNs are implemented through logic circuits with embedded weights. Weight embedding not only reduces the delay of the logic circuit but also reduces the power consumption. The retiming technique is used to further improve the circuit throughput. A hardware-aware training method is proposed to reduce the area of the logic designs of NNs. Experimental results on three tasks demonstrate that the proposed logic design achieves high throughput and low power consumption.

## Acknowledgement

This work is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - Project-ID 504518248 and supported by the TUM International Graduate School of Science and Engineering (_IGSSE_).
2308.16664
What can we learn from quantum convolutional neural networks?
We can learn from analyzing quantum convolutional neural networks (QCNNs) that: 1) working with quantum data can be perceived as embedding physical system parameters through a hidden feature map; 2) their high performance for quantum phase recognition can be attributed to generation of a very suitable basis set during the ground state embedding, where quantum criticality of spin models leads to basis functions with rapidly changing features; 3) pooling layers of QCNNs are responsible for picking those basis functions that can contribute to forming a high-performing decision boundary, and the learning process corresponds to adapting the measurement such that few-qubit operators are mapped to full-register observables; 4) generalization of QCNN models strongly depends on the embedding type, and that rotation-based feature maps with the Fourier basis require careful feature engineering; 5) accuracy and generalization of QCNNs with readout based on a limited number of shots favor the ground state embeddings and associated physics-informed models. We demonstrate these points in simulation, where our results shed light on classification for physical processes, relevant for applications in sensing. Finally, we show that QCNNs with properly chosen ground state embeddings can be used for fluid dynamics problems, expressing shock wave solutions with good generalization and proven trainability.
Chukwudubem Umeano, Annie E. Paine, Vincent E. Elfving, Oleksandr Kyriienko
2023-08-31T12:12:56Z
http://arxiv.org/abs/2308.16664v2
# What can we learn from quantum convolutional neural networks?

###### Abstract

We can learn from analyzing quantum convolutional neural networks (QCNNs) that: 1) working with quantum data can be perceived as embedding physical system parameters through a hidden feature map; 2) their high performance for quantum phase recognition can be attributed to generation of a very suitable basis set during the ground state embedding, where quantum criticality of spin models leads to basis functions with rapidly changing features; 3) pooling layers of QCNNs are responsible for picking those basis functions that can contribute to forming a high-performing decision boundary, and the learning process corresponds to adapting the measurement such that few-qubit operators are mapped to full-register observables; 4) generalization of QCNN models strongly depends on the embedding type, and that rotation-based feature maps with the Fourier basis require careful feature engineering; 5) accuracy and generalization of QCNNs with readout based on a limited number of shots favor the ground state embeddings and associated physics-informed models. We demonstrate these points in simulation, where our results shed light on classification for physical processes, relevant for applications in sensing. Finally, we show that QCNNs with properly chosen ground state embeddings can be used for fluid dynamics problems, expressing shock wave solutions with good generalization and proven trainability.

## I Introduction

Quantum computing offers a paradigm for solving computational problems in a distinct way [1; 2]. It has been considered for addressing challenges in chemistry [3; 4], material science [5; 6; 7; 8], high-energy physics [9; 10], optimization and finances [11; 12; 13], and recently, for solving machine learning problems on quantum computers [14; 15; 16; 17; 18]. The latter is the task of quantum machine learning (QML). Quantum machine learning is a rapidly progressing field of research, which comprises different techniques that may offer a speedup [15], as well as a range of other advantages that have not been considered before [19; 20]. To date, this includes examples from supervised learning (represented by classification [21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32] and regression [33; 34; 35; 36; 37; 38]), reinforcement learning [39; 40; 41; 42], and unsupervised learning with a strong effort in generative modelling [43; 44; 45; 16]. By far the strongest effort concerns classification [51]. A typical workflow starts with loading classical datasets \(\mathcal{D}=\{\mathbf{x}_{m},y_{m}\}_{m=1}^{M}\) comprising \(M\) data samples. The features \(\mathbf{x}\) can be embedded into parametrized gates or quantum state amplitudes, and by processing the corresponding quantum states one can match the class labels \(y_{m}\) from the training subset [16]. The goal is to predict labels of unseen samples. Here, the workflow involves quantum circuits for _embedding_ classical data (known as quantum feature maps \(\hat{\mathcal{U}}_{\varphi}(\mathbf{x})\) [52; 36; 53]), which generate a state in the Hilbert space of the processing device, \(\mathbf{x}\mapsto|\psi(\mathbf{x})\rangle=\hat{\mathcal{U}}_{\varphi}(\mathbf{x})|\phi\rangle\), where \(|\phi\rangle\) is an initial register state. The states are then processed by variational circuits (aka ansätze) \(\hat{\mathcal{V}}_{\mathbf{\theta}}\) [16; 54], and measured for some observable \(\hat{\mathcal{O}}\).
This distantly resembles deep learning [55], with the model formed in a quantum latent space, and is referred to as a quantum neural network (QNN) [55; 56; 57]. While QNNs were successfully applied to many proof-of-principle tasks [51; 54], some issues remain to be resolved before seeing their utility in practice. These issues include limited trainability for models of increased size, corresponding to barren plateaus (BPs) of vanishing gradients of QNNs [58; 59], and a generally rugged optimization landscape [60; 61]. Also, despite the increased expressive power of QNN-based models [56; 62; 63], they remain heuristic in nature, and a clear separation between quantum and classical model performance can only be established for very peculiar datasets [64; 65]. Another strongly related issue is that the high expressivity of quantum models implies limited trainability [66]. This may lead to overfitting in cases where the chosen basis set does not match the required problem [67].

Recently, an increasing number of works have considered quantum machine learning models for training on _quantum_ data [68; 69; 70; 71; 72]. In this case a quantum dataset \(\hat{\mathcal{Q}}=\{\hat{\rho}_{\alpha},y_{\alpha}\}_{\alpha=1}^{M}\) corresponds to the collection of \(M\) quantum states \(\hat{\rho}_{\alpha}\) (pure or mixed) coming from some quantum process, and associated labels \(y_{\alpha}\) that mark their distinct class or property. When learning on quantum data, quantum computers have shown excellent results in sample complexity [68; 69; 73; 74]. A striking example corresponds to quantum convolutional neural networks (QCNNs). Introduced in Ref. [75] by Cong _et al._, this type of network was designed to take quantum states and, using _convolution_ (translationally-invariant variational quantum circuits) and _pooling_ (measurement with conditional operations), prepare an efficient model for predicting the labels. This was used for quantum phase classification at increasing scale, applied to the spin-1/2 cluster model and the Ising model in one-dimensional (1D) systems. The results for phase classification were reproduced and extended in other studies [69; 76; 77], showing that few samples are needed for training, generalization is excellent [69], and accuracy is superior to other approaches [77]. Moreover, the scaling of gradients was shown to decrease polynomially in the system size (thus, efficiently), such that QCNNs avoid barren plateaus [78]. This is in contrast to the exponentially decreasing gradient variance of generic deep circuits [58; 59]. The current intuition is that working with quantum data leads to better generalization, as QCNNs inherently "analyze" the structure of entanglement of quantum states [75], and reduced entanglement leads to better gradients [79; 80]. However, despite the success of QCNNs, a full understanding of them is missing, which keeps them aside from mainstream QML. This hinders the development and use of QCNN-type models beyond the explored classification examples.

In this work, we aim to demystify the inner workings of quantum convolutional neural networks, specifically highlighting why they are so successful in quantum phase recognition. We show that the supplied quantum states (features) can be understood in terms of _hidden_ feature maps: quantum processes that prepare states depending on classical parameters \(\mathbf{x}\) (being a feature vector or scalar; see Fig. 1).
During the mapping process, we observe that the ground state preparation supplies a very beneficial basis set, which allows building a nonlinear quantum model for the decision boundary (the ultimate goal of classification) that is sharp and generalizes from few data points and with a smaller number of measurement shots. This is compared to the rotation-based Fourier embedding, which requires feature engineering for sufficient generalization. We connect the developed QCNN description with the error correction and multi-scale entanglement renormalization ansatz (MERA) explanations from Ref. [75], and show that single-qubit observables can be used to "pick up" the most suitable basis functions, while leading to a sampling advantage. Motivated by classification, we apply the described QCNN workflow with the ground state embedding to solve regression problems arising in fluid dynamics. This can offer QML tools to deal with critical phenomena with improved generalization.

## II Background

### Quantum neural networks as Fourier-type models

Quantum machine learning has evolved from being perceived as a linear algebra accelerator [15] into a versatile tool for building quantum models. The typical workflow corresponds to mapping (embedding) a classical dataset \(\mathcal{D}\) with a feature map as a quantum circuit \(\hat{\mathcal{U}}_{\varphi}(x)\) such that features \(x\mapsto|\psi(x)\rangle=\hat{\mathcal{U}}_{\varphi}(x)|\phi\rangle\) are represented in the latent space of quantum states [53; 37; 57]. The next step corresponds to adapting the generated states with a variational circuit \(\hat{\mathcal{V}}_{\theta}\), and reading out the model as an expectation value of an observable \(\hat{\mathcal{O}}\). The last two steps can also be seen as a measurement adaptation process [81]. Summarizing the workflow, we build QML models as

\[f_{\theta}(x)=\langle\phi|\hat{\mathcal{U}}_{\varphi}(x)^{\dagger}\hat{\mathcal{V}}_{\theta}^{\dagger}\hat{\mathcal{O}}\hat{\mathcal{V}}_{\theta}\hat{\mathcal{U}}_{\varphi}(x)|\phi\rangle, \tag{1}\]

where \(|\phi\rangle\coloneqq|0\rangle^{\otimes N}\) is the computational zero state, and the corresponding approach for instantiating \(f_{\theta}(x)\) is often referred to as the quantum neural network approach. Usually, the embedding is performed by circuits that involve single-qubit rotations \(\hat{R}_{\beta}=\exp[-ix\hat{P}_{\beta}/2]\) [82; 36] (for Pauli matrices \(\hat{P}_{\beta}=\hat{X},\hat{Y},\hat{Z}\)) or the evolution of some multi-qubit Hamiltonian \(\hat{G}\) (being the generator of dynamics) such that the map is \(\hat{\mathcal{U}}_{\varphi}(x)=\exp[-ix\hat{G}/2]\). Following this convention, it is easy to see that under the spectral decomposition \(\hat{G}=\sum_{j}\lambda_{j}|s_{j}\rangle\langle s_{j}|\) (with eigenvalues \(\{\lambda_{j}\}_{j}\) and eigenstates \(\{|s_{j}\rangle\}_{j}\)) the feature map includes complex exponents that depend on \(x\) [83; 53]. Importantly, for the specified structure of the feature map the transformation to the diagonal basis is \(x\)-independent. When accounting for the structure of expectation values, the differences of eigenvalues (spectral gaps) \(\{\lambda_{j}-\lambda_{j^{\prime}}\}_{j,j^{\prime}}\) appear as frequencies of the underlying Fourier basis. The action of the variational circuit is then to "weight" the Fourier components, but not the features directly.
The resulting model can be written as [53]

\[f_{\theta}(x)=\sum_{\omega\in\Omega}c_{\omega}(\theta)e^{i\omega x}, \tag{2}\]

where \(c_{\omega}(\theta)\) are coefficients that depend on the matrix elements of \(\hat{\mathcal{O}}\) and \(\hat{\mathcal{V}}_{\theta}\), and \(\Omega\) denotes a finite-bandwidth spectrum of frequencies generated by the feature map. Its degree depends on the generator \(\hat{G}\) and its eigenvalues. In Ref. [53] the authors mention that rescaling \(x\) as \(\varphi(x)\) does not change the model qualitatively, and we can see QNNs as Fourier-type models of large size [81]. Within this picture one can even imagine randomized Fourier models that can be treated classically and have similar performance [84].

Figure 1: Visualization of the classification process for physical phases with a quantum convolutional neural network (QCNN). We highlight that the input states \(\hat{\rho}(\mathbf{x})\) for the network come from actual physical processes, e.g. the preparation of ground states for spin lattices. The analyzed states depend on underlying classical features \(\mathbf{x}\) of the system, being the physical parameters (externally controlled, like magnetic field and temperature, or internal parameters, like the degree of anisotropy). This can be seen as a _hidden_ feature map (left). The goal of the QCNN is then building a model based on a simple few-qubit observable \(\hat{\mathcal{O}}\), with its expectation \(\langle\hat{\mathcal{O}}\rangle(\mathbf{x})\) representing a nonlinear decision boundary with respect to the system features.

Looking back at the steps leading to Eq. (2), we highlight that the presented description is by no means a complete guide to QNNs and the building of quantum machine learning models in general. We observe that it implies the crucial assumption of the unitarity of the feature map and of its form \(\exp(-ix\hat{G}/2)\), which generates exponents as basis functions. Recently, embeddings based on linear combinations of unitaries (LCU) were proposed that break this assumption [85]. This is represented by the orthogonal Chebyshev feature map \(\hat{\mathcal{U}}_{\varphi}(x)\), which generates states of the form \(|\tau(x)\rangle=\sum_{k}c_{k}T_{k}(x)|k\rangle\), where \(T_{k}(x)\) are amplitudes corresponding to Chebyshev polynomials of the first kind and degree \(k\), and \(c_{k}\) are constant factors. Importantly, here the amplitudes are \(x\)-dependent and form the basis for future quantum modelling. Another counterexample to the Fourier-type models is the embedding of the type

\[\hat{\mathcal{U}}_{\varphi}(x)=\exp\left(-\frac{i}{2}\hat{G}_{0}-\frac{ix}{2}\hat{G}_{1}\right), \tag{3}\]

where \([\hat{G}_{0},\hat{G}_{1}]\neq 0\), as the generators do not commute. In this case the spectral representation of the generator \(\hat{G}(x)\coloneqq\hat{G}_{0}+x\hat{G}_{1}\) requires a basis transformation \(\hat{W}(x)\) that is \(x\)-dependent. This leads to a feature dependence appearing in the coefficients \(c_{\omega}(\theta,x)\) in Eq. (2), and departs from Fourier modelling in a more nontrivial way than a simple rescaling \(x\rightarrow\varphi(x)\). In the cases of LCU and noncommuting embeddings, or indeed any other case that does not fit the evolution embedding, the generated quantum states shall be seen as feature-dependent states

\[|\psi(x)\rangle=\sum_{j}\phi_{j}(x)|j\rangle, \tag{4}\]

with amplitudes \(\{\phi_{j}(x)\}\) of orthonormal states \(\{|j\rangle\}\) being functions of \(x\), and representing the basis for building quantum models.
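To illustrate this Fourier structure numerically, the following single-qubit sketch (our own toy example, with an arbitrary fixed "variational" unitary standing in for a trained circuit) builds the model of Eq. (1) with a rotation feature map and confirms via an FFT that only the frequencies allowed by the spectral gaps of the generator appear, as in Eq. (2):

```python
import numpy as np
from scipy.linalg import expm

# Toy single-qubit QNN: feature map exp(-i x G / 2) with generator G = Y,
# an arbitrary fixed "variational" unitary V, and readout observable O = X.
Y = np.array([[0.0, -1.0j], [1.0j, 0.0]])
X = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
Z = np.diag([1.0 + 0.0j, -1.0])
V = expm(-1.0j * 0.7 * Z)               # stands in for a trained circuit
zero = np.array([1.0 + 0.0j, 0.0])

def f(x):
    psi = V @ expm(-1.0j * x * Y / 2) @ zero
    return float(np.real(psi.conj() @ (X @ psi)))

# G = Y has eigenvalues +-1, so the spectral gaps allow only the frequency
# |(+1) - (-1)| / 2 = 1, i.e. f(x) = c0 + c1 cos(x) + c2 sin(x).
xs = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
spectrum = np.abs(np.fft.rfft([f(x) for x in xs])) / 64
print(np.round(spectrum[:4], 3))        # all weight sits in bins 0 and 1
```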
Given this layout of quantum neural networks, an unsettling question arises: how does the Fourier model description fit the quantum data story and QCNN-based models, which look quite distinct from what we just described?

### Quantum convolutional neural networks: prior art

In the seminal paper by Cong _et al._ [75] the authors put forward a model with convolution and pooling layers, suggested as an analog of classical convolutional neural networks. They used it for processing quantum states \(|\psi_{\alpha}\rangle\) (or \(\hat{\rho}_{\alpha}=|\psi_{\alpha}\rangle\langle\psi_{\alpha}|\)) as ground states of spin models. The circuit consists of convolution unitaries \(\{\hat{U}_{i}\}\) and controlled unitaries \(\{\hat{V}_{i}\}\) for pooling (both considered adjustable), followed by measuring \(\hat{O}\) as a few-qubit observable, while discarding the rest of the qubit register (collapsed on some measurement outcomes). The model then becomes

\[f_{\{\hat{U}_{i},\hat{V}_{i},\hat{O}\}}(|\psi_{\alpha}\rangle)=\langle\psi_{\alpha}|\prod_{i=L}^{1}(\hat{U}_{i}^{\dagger}\hat{V}_{i}^{\dagger})\,\hat{O}\,\prod_{i=1}^{L}(\hat{V}_{i}\hat{U}_{i})|\psi_{\alpha}\rangle, \tag{5}\]

where we again stress that \(\hat{O}\) corresponds to measuring \(\tilde{N}\ll N\) qubits, while tracing out the rest. Here, the unitaries \(\{\hat{U}_{i},\hat{V}_{i}\}\) can be varied with \(O(1)\) variational parameters, and the layers are translationally invariant. Then the QCNN has only \(O(\log N)\) variational parameters for \(N\)-qubit models, and is trainable [78]. The task is to take the model in Eq. (5) and fit it to the label values \(\{y_{\alpha}\}\) of the quantum dataset. This can be achieved by optimizing the mean squared error (MSE) loss

\[\mathcal{L}_{\text{MSE}}=\frac{1}{M}\sum_{\alpha=1}^{M}\left\{y_{\alpha}-f_{\{\hat{U}_{i},\hat{V}_{i},\hat{O}\}}(|\psi_{\alpha}\rangle)\right\}^{2}. \tag{6}\]

The loss can be minimized via gradient descent or any other method suitable for non-convex optimization. As for the dataset, it was suggested to use ground states of spin-1/2 Hamiltonians. In particular, in most QCNN studies the cluster Hamiltonian with magnetic field and Ising terms was considered [69; 75], corresponding to

\[\hat{\mathcal{H}}_{\text{QCNN}}=-J\sum_{i=1}^{N-2}\hat{Z}_{i}\hat{X}_{i+1}\hat{Z}_{i+2}-h_{\text{x}}\sum_{i=1}^{N}\hat{X}_{i}-J_{\text{xx}}\sum_{i=1}^{N-1}\hat{X}_{i}\hat{X}_{i+1}, \tag{7}\]

where open boundary conditions are considered. This specific Hamiltonian is chosen as an example of non-trivial spin order in 1D, being uniquely related to measurement-based quantum computing [86]. In Eq. (7) the first term corresponds to the spin-1/2 cluster Hamiltonian with three-body terms, the second term represents the transverse field, and the third term contains the Ising interaction terms. The point \(h_{\text{x}}=J_{\text{xx}}=0\) corresponds to a \(\mathbb{Z}_{2}\times\mathbb{Z}_{2}\) symmetric Hamiltonian that hosts a symmetry-protected topological (SPT) phase in its ground state. For \(J_{\text{xx}}=0\) we can study the transition from the SPT order at \(J>h_{\text{x}}\) to the staggered ferromagnetic order at \(J<h_{\text{x}}\), with \(h_{\text{x}}/J=1\) corresponding to the critical point. In the SPT phase and for the open boundary the ground state is four-fold degenerate, and it is characterized by the string order parameter and a nonzero topological entanglement entropy [87].
Examples of QCNN-based classification show that by measuring the string order \(\hat{O}=\hat{Z}_{1}\hat{X}_{2}\hat{X}_{3}\cdots\hat{X}_{N-1}\hat{Z}_{N}\) one can distinguish the SPT phase from other phases (open boundary). It is also known that string order implies the utility of states for performing measurement-based quantum computing [86], for which the ground state subspace (cluster states) represents the resource. From the perspective of ground state analysis, the QCNN circuit is designed to measure the effective decision boundary as an expanded string order [75], and the pre-defined QCNN circuit can do this exactly if it satisfies certain criteria (see Ref. [75], section "Construction of QCNN" in Methods), suggested as guiding principles for building QCNNs. The first guiding principle is referred to as the fixed point criterion, where the exact cluster state \(|\psi_{0}\rangle_{N}\) for \(N\) qubits has to be convoluted and pooled into the \(|\psi_{0}\rangle_{N/3}\) cluster state, with the measurements of \(2N/3\) qubits deterministically giving \(0\) bits upon measurement. The second guiding principle is the error correction criterion, where the pooling layers have to be designed such that errors that commute with the global symmetry are fixed during measurements. This procedure is related to MERA. However, from the point of view of QML, it is yet to be understood how entanglement plays a role in this workflow.

## III Model

Here, we take the QCNN structure for quantum phase recognition and connect it with the standard QML workflow. For this we note that the quantum states \(|\psi_{\alpha}\rangle\) are not abstract quantum "data points", but in fact are the result of quantum processes that correspond to thermalization or ground state (GS) preparation. Specifically, they depend on the properties of the underlying Hamiltonian, with its parameters representing features \(x\). Namely, for the cluster-type Hamiltonian used in QCNNs so far these are the couplings \(J\), \(J_{\rm xx}\) and the magnetic field \(h_{\rm x}\). These system parameters represent a feature vector \(x=\{J,h_{\rm x},J_{\rm xx}\}\) (and each concrete realization is denoted by the index \(\alpha\)). Even though we may not have access to them, they define the underlying behavior of the system we attempt to classify. The process of embedding these features as a part of the state preparation is what we call a _hidden_ feature map (Fig. 1). The circuits for preparing the QCNN input as ground states of models \(|\psi_{0}(x)\rangle\) may differ, and can come both from experiments (e.g. sensors) or from a specifically designed GS preparation schedule. In the following we assume that there exists a unitary \(\hat{\mathcal{U}}_{\varphi}(x)\) or a completely positive trace-preserving map \(\hat{\mathcal{E}}_{\varphi}(x)\) for the ground state preparation of the studied Hamiltonians, and we summarize different options in Appendix A.

### Cluster ground state embedding

Let us now consider an example that can shed light on the internal structure of quantum convolutional neural networks. For this we take a spin-1/2 cluster Hamiltonian with a periodic boundary. In this case the ground state in the SPT phase is unique and non-degenerate. From the technical perspective this allows working at smaller system sizes without being harmed by finite-size effects, and avoids thermal ensemble state preparation (a pure ground state suffices in this case).
Specifically, we consider the Hamiltonian of the form

\[\hat{\mathcal{H}}_{\rm t-cluster}=-J\sum_{i=1}^{N}\hat{Z}_{i}\hat{X}_{i+1}\hat{Z}_{i+2}-h_{\rm x}\sum_{i=1}^{N}\hat{X}_{i}-h_{\rm z}\sum_{i=1}^{N}\hat{Z}_{i}, \tag{8}\]

where we have introduced an additional weak symmetry-breaking term with a longitudinal field \(h_{\rm z}\ll h_{\rm x},J\). We mostly care about the \(J/h_{\rm x}\) transition, and for practical reasons keep \(h_{\rm z}=10^{-2}J\). This ensures that we break the degeneracy between states that have \(\mathbb{Z}_{2}\) symmetry. Note that in the Hamiltonian \(\hat{\mathcal{H}}_{\rm t-cluster}\) [Eq. (8)] we use periodic boundary conditions such that \(\hat{X}_{N+1}\equiv\hat{X}_{1}\), \(\hat{Z}_{N+1}\equiv\hat{Z}_{1}\), \(\hat{Z}_{N+2}\equiv\hat{Z}_{2}\) etc. Finally, it is convenient to reparametrize the transverse cluster Hamiltonian in the form

\[\hat{\mathcal{H}}(x)=-\cos\left(\frac{\pi x}{2}\right)\sum_{i=1}^{N}\hat{Z}_{i}\hat{X}_{i+1}\hat{Z}_{i+2}-\sin\left(\frac{\pi x}{2}\right)\sum_{i=1}^{N}\hat{X}_{i}-\varepsilon\sum_{i=1}^{N}\hat{Z}_{i}, \tag{9}\]

where we introduced the effective parameters

\[x\coloneqq\frac{2}{\pi}\arcsin\left(\frac{h_{\rm x}}{\sqrt{J^{2}+h_{\rm x}^{2}}}\right),\ \ \text{and}\ \ \varepsilon\coloneqq\frac{h_{\rm z}}{\sqrt{J^{2}+h_{\rm x}^{2}}}\ll 1. \tag{10}\]

We assume that \(x\) changes from \(0\) to \(1\) via control of \(h_{\rm x}\), while \(h_{\rm z}\) is adjusted to keep \(\varepsilon\) constant and small. Next, we prepare the ground state (GS) of \(\hat{\mathcal{H}}(x)\) assuming one of the preparation strategies (Appendix A). The corresponding circuit, which we denote \(\hat{\mathcal{U}}_{\varphi}(x)\), represents our feature map, acting such that \(\hat{\mathcal{U}}_{\varphi}(x)|\phi\rangle=|\psi_{0}(x)\rangle\) is the \(x\)-dependent ground state. Tuning \(x\) from zero to one, we go from the SPT phase with the cluster GS to a trivial product state \(|\psi_{0}(1)\rangle=|+\rangle^{\otimes N}\), through the critical point \(x_{\rm cr}=1/2\) where the GS develops nontrivial correlations. We highlight that one of the possible preparation circuits can be based on the Hamiltonian variational ansatz (HVA) [88]. Due to the favorable dynamical Lie algebra scaling for this model [89] and the large gradients of HVA in general [90], the corresponding feature map is efficient, meaning that the depth of the preparation circuit scales at most quadratically with the system size. However, for the purpose of numerical tests it is also instructive to use exact diagonalization, as it allows for clean studies of the ground state embedding and its properties irrespective of the variational preparation.

Finally, keeping in mind the Hamiltonian of interest, we stress that our goal is to distinguish the SPT phase from the trivial phase. For this we need to define the string order parameter \(\hat{\mathcal{O}}\). Given that the 1D cluster model is based on \(N\) 3-body stabilizers \(\{\hat{S}_{i}\}=\{\hat{Z}_{i}\hat{X}_{i+1}\hat{Z}_{i+2}\}_{i=1}^{N}\), the string order corresponds to their product over the periodic boundary,

\[\hat{\mathcal{O}}=\prod_{i=1}^{N}\hat{S}_{i}=(-1)^{N}\hat{X}_{1}\hat{X}_{2}...\hat{X}_{N}, \tag{11}\]

and it corresponds to the parity operator with the reverted sign. Taking the expectation value of \(\hat{\mathcal{O}}\) is our proxy for the definition of SPT order [86], while other independent checks also include topological entanglement entropy estimation.

Figure 2: Analysis of a simple \(N=3\) cluster model.
**(a)** Sketch of the spin-1/2 chain with ZXZ couplings, periodic boundary conditions, and an applied transverse magnetic field. **(b)** Basis functions of the ground state embedding for the \(N=3\) cluster model, shown as squared projections \(|\phi_{j}(x)|^{2}\) onto the computational basis states \(j=000,100,...,111\). Superscript indices indicate degeneracies for one-hot and two-hot states. **(c)** Products of the basis functions \(\phi_{\bar{j}}^{*}(x)\phi_{j}(x)\) corresponding to the antidiagonal components, and their sum, which represents the expectation value of the string order operator \(\langle\hat{\mathcal{O}}\rangle(x)\).

### Analysis of the ground state embedding from the QNN perspective

Once we have established the system Hamiltonian \(\hat{\mathcal{H}}(x)\), the mapping procedure \(\hat{\mathcal{U}}_{\varphi}(x)\), and the measurement (cost) operator \(\hat{\mathcal{O}}\) [Eq. (11)], we proceed to classification from the point of view of quantum model building. For this, we study the basis set of our feature-dependent ground states \(|\psi_{0}(x)\rangle\). Namely, we assume the embedded states to be decomposed in the computational basis, \(|\psi_{0}(x)\rangle\coloneqq\sum_{j}\phi_{j}(x)|j\rangle\), written over the \(2^{N}\) computational basis states \(\{|j\rangle\}\) with the corresponding binary/integer representation, weighted by the \(x\)-dependent basis functions \(\{\phi_{j}(x)\}_{j=0}^{2^{N}-1}\). It is instructive to visualize this basis in some form. We choose to project the feature states onto the computational basis states (much like in the case of QCBMs [45; 48]), reading out the probabilities \(|\phi_{j}(x)|^{2}=|\langle j|\psi_{0}(x)\rangle|^{2}\), which can be seen as the diagonal matrix elements of the corresponding density operator \(\hat{\rho}_{0}(x)=|\psi_{0}(x)\rangle\langle\psi_{0}(x)|\). We choose a minimal example with \(N=3\), shown in Fig. 2(a), and plot the corresponding squared basis functions in Fig. 2(b). We observe that all basis functions undergo a drastic change exactly at the critical point \(x_{\rm cr}\), introducing an implicit bias for building models with an inherent criticality. We observe that different behavior is associated with the matrix elements that involve the ferromagnetic bitstrings \(j=000\) and \(j=111\), the one-hot bitstrings (\(j=100,010,001\)), and the two-hot states \(j=110,101,011\). Admittedly, these basis functions are shown in the computational basis, while the string order lies in the X Pauli plane.

We next proceed to show how the required basis functions are "picked up" by the string order observable \(\hat{\mathcal{O}}=(-1)^{3}\hat{X}_{1}\hat{X}_{2}\hat{X}_{3}\). This specific string contains only anti-diagonal matrix elements (of \(-1\) entries), such that the opposite (i.e. bitwise conjugated, \(j\leftrightarrow\bar{j}\)) pairs of states are connected. In Fig. 2(c) we plot the products of basis functions \(\phi_{\bar{j}}^{*}(x)\cdot\phi_{j}(x)\) that are collected when estimating the expectation. One can see that all contributions represent sigmoid-like functions, with slightly increasing or decreasing fronts. Importantly, there are multiple degeneracies, such that one can pick multiple contributions when building the model (thus, success does not depend on specific projections and unique elements). Finally, we see that the sum of all antidiagonal products, being the expectation \(\langle\psi_{0}(x)|\hat{\mathcal{O}}|\psi_{0}(x)\rangle\), represents a sharp decision boundary even at small system size.
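The \(N=3\) analysis above is easy to reproduce with exact diagonalization; the sketch below (our own minimal NumPy implementation) builds \(\hat{\mathcal{H}}(x)\) of Eq. (9), extracts the ground state as the hidden feature map output, and evaluates the string order expectation of Eq. (11):

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

def place(paulis, sites, n=3):
    """Tensor product with the given Paulis on the given sites, identity elsewhere."""
    ops = [I2] * n
    for p, s in zip(paulis, sites):
        ops[s] = p
    return reduce(np.kron, ops)

def hamiltonian(x, n=3, eps=1e-2):
    """Eq. (9): periodic ZXZ cluster terms + transverse field + small Z bias."""
    h = np.zeros((2 ** n, 2 ** n))
    for i in range(n):
        h -= np.cos(np.pi * x / 2) * place([Z, X, Z], [i, (i + 1) % n, (i + 2) % n], n)
        h -= np.sin(np.pi * x / 2) * place([X], [i], n)
        h -= eps * place([Z], [i], n)
    return h

string_order = -place([X, X, X], [0, 1, 2])        # Eq. (11) for N = 3
for x in [0.1, 0.5, 0.9]:
    _, vecs = np.linalg.eigh(hamiltonian(x))
    gs = vecs[:, 0]                                # hidden feature map output
    # gs**2 gives the squared basis functions |phi_j(x)|^2 of Fig. 2(b)
    print(x, np.round(gs @ string_order @ gs, 3))  # sigmoid-like boundary
```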
We proceed to see how this basis analysis plays out in the full QCNN workflow.

### Analysis of the QCNN basis transformation from the QNN perspective

Next, we want to see how the decision boundary is built by the fixed quantum convolutional neural network with the pre-defined convolution and pooling introduced in Ref. [75]. We recall that these are built based on the two criteria described previously in the text. To compose the fixed QCNN circuit we impose the criteria using tools for lattice spin models. Specifically, one can observe that the pure ZXZ cluster Hamiltonian \(\hat{\mathcal{H}}_{\rm ZXZ}=-J\sum\limits_{i=1}^{N}\hat{Z}_{i}\hat{X}_{i+1}\hat{Z}_{i+2}\) can be unitarily transformed into the trivial X Hamiltonian \(\hat{\mathcal{H}}_{\rm X}=-h_{\rm x}\sum\limits_{i=1}^{N}\hat{X}_{i}\), with the transformation generated by the sum of nearest-neighbour Ising terms \(\hat{\mathcal{H}}_{\rm ZZ}=J_{\rm zz}\sum\limits_{i=1}^{N}\hat{Z}_{i}\hat{Z}_{i+1}\). This recently introduced procedure is referred to as _pivoting_ [91]. We observe that, considering the pivot unitary

\[\hat{U}_{\rm pivot}=\exp\left(-i\frac{\pi}{4}\sum\limits_{j=1}^{N}(-1)^{j}\hat{Z}_{j}\hat{Z}_{j+1}\right), \tag{12}\]

and assuming \(J=h_{\rm x}\), we can perform the Hamiltonian transformation

\[\hat{\mathcal{H}}_{\rm X}=\hat{U}_{\rm pivot}\hat{\mathcal{H}}_{\rm ZXZ}\hat{U}_{\rm pivot}^{\dagger}, \tag{13}\]

and a similar transformation applies to change the basis of the corresponding eigenstates.

Figure 3: Analysis of the nine-qubit quantum convolutional neural network for the cluster state Hamiltonian. **(a)** QCNN circuit with a fixed predefined structure, motivated by the criteria for SPT phase recognition. It comprises convolution and pooling layers, which represent unitaries that perform a basis change from the cluster state basis into a product basis (UNPREPARE operation corresponding to the pivoting unitary \(\hat{U}_{\rm pivot}\) and a layer of Hadamards), and conditional operations that ensure that in the lowest order of symmetry-breaking terms the product state remains trivial (CORRECT operation). We also label different stages of the protocol (stage A to D) that help in understanding how the underlying basis changes during QCNN processing. **(b)** Visualization of the basis set for the \(N=9\) QCNN at stage B, where squared amplitudes projected on the computational basis states are shown as a function of the Hamiltonian parameter \(x\). Multiple degenerate solutions of the sigmoid type are generated. **(c)** The resulting models \(\langle\hat{\mathcal{O}}\rangle(x)\) of the fixed QCNN read out at different stages. The orange curve corresponds to the full QCNN operation with the pooling procedure, representing a sharp decision boundary at stage D for \(\tilde{N}=1\). This coincides with the string order parameter measured over the full width of 9 qubits, \(\langle\hat{X}_{1}\hat{X}_{2}...\hat{X}_{9}\rangle\), prior to the QCNN action, and with the \(\tilde{N}=3\) measurement of the order parameter \(\langle\hat{\mathcal{O}}_{\tilde{N}=3}\rangle\simeq\langle\hat{X}_{2}\hat{X}_{5}\hat{X}_{8}\rangle\) at stage C. The cases of measuring the \(\langle\hat{\mathcal{O}}_{\tilde{N}=1}\rangle\simeq\langle\hat{X}_{5}\rangle\) and \(\langle\hat{\mathcal{O}}_{\tilde{N}=3}\rangle\) observables in the absence of pooling (i.e. with the CORRECT layers removed) are shown with blue and magenta dashed curves. We note that here decision boundaries can be shifted by \(\pm 1\) and/or trivially scaled by \(1/2\) to aid the comparison where needed.
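This unitary connection is simple to check numerically. The following sketch (our own; a small ring of \(N=4\) qubits) prepares the cluster state from \(|+\rangle^{\otimes N}\) with a layer of CZ gates, which is how the \(\pi/2\) pivot acts up to single-qubit phases, and verifies that every ZXZ stabilizer returns \(+1\):

```python
import numpy as np
from functools import reduce

N = 4  # a small ring
I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])
P1 = np.diag([0.0, 1.0])  # projector on |1>

def place(paulis, sites):
    ops = [I2] * N
    for p, s in zip(paulis, sites):
        ops[s] = p
    return reduce(np.kron, ops)

def cz(i, j):
    """CZ on qubits (i, j): phase -1 when both qubits are in |1>."""
    return np.eye(2 ** N) - 2.0 * place([P1, P1], [i, j])

plus = reduce(np.kron, [np.ones(2) / np.sqrt(2)] * N)                 # |+>^N
cluster = reduce(np.matmul, [cz(i, (i + 1) % N) for i in range(N)]) @ plus

# The ring cluster state carries eigenvalue +1 for every ZXZ stabilizer,
# i.e. it is the x = 0 ground state reached from |+>^N by the CZ layer.
for i in range(N):
    S = place([Z, X, Z], [i, (i + 1) % N, (i + 2) % N])
    print(i, np.allclose(S @ cluster, cluster))
```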
The corresponding SPT ground state of the ZXZ cluster model is thus unitarily connected with the trivial ground state, \(|+\rangle^{\otimes N}\cong\hat{U}_{\rm pivot}|\psi_{0}^{\rm SPT}\rangle\) (up to a global phase). Given the specific \(\pi/2\) angle of the two-qubit rotations, we can see pivoting as an effective layer of commuting CZ gates. Indeed, this is how cluster states are prepared in one-dimensional systems [86].

We visualize the steps of the fixed QCNN in Fig. 3(a) as the full circuit for \(N=9\) qubits. The step-by-step description is presented in Appendix B, and here we summarize the main points. The fixed circuit structure represents the QCNN targeting SPT phase recognition. For this, starting from the ideal cluster state at \(x=0\), the goal is to de-entangle qubits via pivoting [91] (UNPREPARE layer in Fig. 3a), and to make sure that the expected value of the string order parameter is maximized. Additionally, assuming defects (bitflips \(\hat{X}_{i}\)) in the cluster state generated by the transverse field for \(x>0\), the pooling layer is designed to test the values of the qubits that are traced out, and to correct possible errors on the qubits chosen for building the model (CORRECT layer in Fig. 3a).

Now, let us analyze the action of the fixed QCNN circuit in terms of building quantum models \(f(x)\) and track how the basis set of the function changes. We already know that initially (i.e. at stage A, Fig. 3a) the basis corresponds to functions that peak and dip around \(x=x_{\rm cr}\). Once we have performed the first pivot and arrived in the Z basis (stage B), we can plot the basis set, again represented as squared projections on the computational basis states. This is shown in Fig. 3(b). We see that the basis consists of functions which qualitatively match the switching-on and -off behavior, and it contains many degenerate basis functions. Next, we proceed to the first pooling layer that corrects errors that may have arisen on the subset of qubits (here qubits 2, 5, and 8). If we are to stop at the effective 3-qubit model, we shall return to the SPT ground state for this reduced register by applying the PREPARE layer and measuring \(\hat{\mathcal{O}}_{\tilde{N}=3}=-\hat{X}_{2}\hat{X}_{5}\hat{X}_{8}\) at stage C in Fig. 3a. We show the corresponding decision boundary as the \(x\)-dependent expectation \(\langle\hat{\mathcal{O}}\rangle\) leading to the sharp transition in Fig. 3(c), labeled as "\(\tilde{N}=1\) and 3 pooling". This overlays with the decision boundary based on the full \(N=9\)-qubit string order parameter measured prior to the action of the QCNN (blue dashed curve). Next, we continue to another QCNN pooling layer, such that the model is reduced from 9 to 1 qubit. In this case we unprepare the 3-qubit SPT ground state, and perform checks on qubits 2 and 8 such that the state of qubit 5 matches the expected value. Essentially, it corresponds to measuring \(\hat{\mathcal{O}}_{\tilde{N}=1}=-\hat{X}_{5}\) (stage D in Fig. 3a). We plot the corresponding decision boundary and see that it again matches the sharp transition in Fig. 3(c) labeled as "\(\tilde{N}=1\) and 3 pooling". However, what if no pooling layers are applied to the input data state, and we measure the reduced-size operators directly after the convolutional layers? The models for \(\tilde{N}=3\) and \(\tilde{N}=1\) are shown in Fig. 3(c) as dashed violet and magenta curves, labeled with the "(no pooling)" tag.
We see that in this case the decision boundary is blurred, and cannot offer the same degree of accuracy, mostly due to deviations in the critical region.

Finally, let us offer another understanding of the fixed QCNN workflow. The action of the 9-3-1 QCNN leads to the model \(\langle\psi(x)|\hat{\mathcal{U}}_{\rm QCNN}^{\dagger}\hat{X}_{5}\hat{\mathcal{U}}_{\rm QCNN}|\psi(x)\rangle\), where \(\hat{\mathcal{U}}_{\rm QCNN}\) collects all layers (notice that no mid-circuit measurements are required, and we can simply trace out everything but the middle qubit). We can then see the action of the QCNN as measuring an effective "dressed" operator \(\hat{\mathcal{U}}_{\rm QCNN}^{\dagger}\hat{X}_{5}\hat{\mathcal{U}}_{\rm QCNN}\cong\hat{X}_{1}\hat{X}_{2}...\hat{X}_{9}\), which is equal to the string order parameter up to a global phase. This simply means that pooling makes sure we pick up the relevant basis states from our embedding, while building the model on a small subset of qubits. This is very important for trainability, as we reduce the shot noise (a smaller number of measurements is required for models based on single-qubit sampling), and still enjoy access to the full basis. We believe that the very same strategy can be applied for building other QML models, and QCNN-like measurement adaptation can improve solvers beyond classification.

## IV Variational QCNN Training

Departing from the fixed QCNN structure, we now allow for adjustable elements and compose a variational quantum convolutional neural network keeping the same overall QCNN layout.

### QCNN trained on cluster ground state embedding

We proceed to compose the variational QCNN circuit. The goal is to build a variational ansatz inspired by the fixed QCNN structure to see whether we can recover a suitable basis for classification via training. We start by dealing with the convolutional layer. First, we change the layer of Hadamard gates into a layer of trainable SU(2)\({}_{\theta}\) gates, where we implement an arbitrary one-qubit rotation as a sequence of three fixed-axis rotations, SU(2)\({}_{\theta}=\hat{R}_{\text{X}}(\theta_{1})\hat{R}_{\text{Z}}(\theta_{2})\hat{R}_{\text{X}}(\theta_{3})\). We also make the pivot unitary adjustable. This corresponds to changing the pivot unitary based on ZZ(\(\pi/2\)) gates to arbitrary-angle two-qubit operations ZZ(\(\theta\)). We decide to place an SU(2) layer before and after the pivot layer to allow for more control over the basis set. The same three parameters are used for each of the SU(2)\({}_{\theta}\) gates in a layer to ensure translational invariance, and the same applies to each ZZ(\(\theta\)) gate, as Fig. 4(a) shows. Hence, the convolutional layer acting on the full nine-qubit register has seven independent parameters. We also note that the convolutional layers acting on three qubits depicted in Fig. 3(a) can be cancelled out, and the final Hadamard is swapped out for an SU(2)\({}_{\theta}\) gate.

Figure 4: **(a)** Parameterized elements of the variational QCNN circuit that substitute the fixed QCNN elements for the pivoting and correction layers. The layers are translationally invariant, but the SU(2) and ZZ unitaries have different angles. **(b)** Examples of the decision boundary for the 9-qubit variational QCNN during training. We show decision boundaries starting from random initialization (curve 1; 42% test accuracy), and with an increasing number of epochs: 25, 50, 100, and 200 (curves 2-5, respectively).

We then turn to the pooling layer, where we again use controlled operations to avoid the use of mid-circuit measurements and show that the QCNN effectively corresponds to an efficient basis adaptor. The fixed Toffoli gates are changed into trainable gates, where the target is now acted on by the SU(2)\({}_{\theta}\) unitary. To enable bit flipping of the control gates, we place one-qubit SU(2)\({}_{\theta}\) gates before each control qubit, and the corresponding SU(2)\({}_{\theta}^{\dagger}\) gate after each control qubit. We again keep translational invariance by ensuring that each trainable Toffoli gate within the pooling layer has the same parameters. Therefore, the pooling layer has nine independent parameters.

The input state is the \(x\)-dependent GS of the cluster Hamiltonian, \(|\psi(x)\rangle\). The QCNN circuit is applied to this state, and a final measurement of \(\hat{\mathcal{O}}=\hat{X}_{5}\) determines the output of the model. Hence our model \(f_{\theta}(x)\) can be written in the form given by Eq. (5). Our goal is to perform the binary classification of the two phases corresponding to the symmetry-protected topological order (class A) and the staggered ferromagnetic order (class B). To train this model, we sample training data points \(\{x_{\alpha}\}\) uniformly between 0 and 1, with labels determined by measuring the SPT order parameter, and assign the labels \(y_{\alpha}=-1\) and 1 to class A and class B, respectively. Our loss function is the MSE as defined in Eq. (6), and we minimize this loss using stochastic gradient descent. Specifically, we use the Adam optimizer with a learning rate of 0.05. We then test the trained model on data \(\{x_{\beta}\}\) randomly sampled from a normal distribution. This ensures proper testing of the trained QCNN around the critical point, where classification is most challenging. When testing the binary classification, we evaluate the model for each \(x_{\beta}\). If \(f_{\theta}(x_{\beta})<0\), then we place \(x_{\beta}\) in class A, corresponding to the SPT phase, and if \(f_{\theta}(x_{\beta})>0\), then \(x_{\beta}\) is in class B, corresponding to the trivial phase. Hence, even if \(f_{\theta}(x_{\beta})\) is not close to the actual test label \(y_{\beta}\), as long as the sign of the model output equals the sign of the test label, the data point \(x_{\beta}\) is considered correctly classified. Test accuracy is the proportion of test data points that are placed into the correct class. This is demonstrated in Fig. 4(b), where we trained a model on 4 training points (shown as dots) and tested it on 100 test data points, from which we plotted the curves \(f_{\theta}(x)\). Curve 1 depicts the QCNN output at initialization, i.e. with random untrained parameters. Even at this initial stage, we can observe a small jump around \(x=0.5\), hinting at some change of behaviour at the critical point. Hence, even with non-optimized QCNN parameters, the GS feature map contains sufficient information to indicate certain critical behavior. Curves 2 and 3 in Fig. 4(b) are plotted for the QCNN model trained for 25 and 50 epochs, respectively. We can see that these decision boundaries do not match what we require (the fixed QCNN boundary seen in Fig. 3c), but the shape of the curves and the behaviour at criticality become more accurate with increased training. So much so that the test accuracy for curve 3 in Fig. 4(b) is almost at 100%: the values of \(f_{\theta}(x_{\beta})\) are incorrect, but most of the test points still fall on the correct side of 0, which is sufficient for the binary classification task.
4(b) is almost at 100%; the values of \(f_{\theta}(x_{\beta})\) are incorrect, but most of the test points still fall on the correct side of 0, which is sufficient for the binary classification task. Next, curve 4 (Fig. 4b) is plotted for the trained QCNN model after 100 epochs. At this stage 100% accuracy has been achieved, but the boundary still does not coincide with the fixed QCNN boundary. Finally, after 200 training epochs we obtain curve 5. With enough training we do recover the correct, sharp decision boundary. Therefore, the GS feature map with a suitably chosen variational QCNN performs very well for state classification, generalizing to unseen data even with very few training data points. ### QCNN trained on rotation-based embedding We have seen that we can train a QCNN model to effectively perform the classification of ground states for the chosen Hamiltonians. But what happens if we go from the GS feature map to a more generic embedding protocol? We demonstrate this by using a rotation-based embedding, which leads to the conventional Fourier-type quantum model. We apply Pauli rotation gates acting on the zero state as the new feature map (Fig. 5a). The resulting input state reads \(|\psi(x)\rangle=\prod_{i=1}^{N}\hat{R}_{\text{Y}}^{i}(\eta x)|0\rangle^{\otimes N}\), where \(\eta\) is a fixed parameter. Changing \(\eta\) changes the frequency range that the model has access to, and this manifests itself in the basis sets depicted in Fig. 5(b). As we did for the GS feature map to produce Fig. 3(b), we prepare \(|\psi(x)\rangle\) before projecting onto the computational basis states to measure the \(x\)-dependent diagonal components of the input density operator, \(|\phi_{j}(x)|^{2}\). Figure 5: **(a)** Here, we substitute the ground state embedding by the rotation-based embedding with a Fourier basis. \(\eta\) represents a parameter that defines the frequency range. **(b)** Basis sets shown for the Fourier feature map with \(\eta=2\) (top panel) and \(\eta=8\) (bottom panel), visualized as squared projections on the computational basis, \(|\phi_{j}(x)|^{2}\). The state is taken after the embedding is performed, just before the variational QCNN circuit. **(c)** Decision boundary coming from the QCNN with the Fourier feature map based on \(\hat{R}_{\text{Y}}(\eta x)\) rotations, shown for \(\eta=2,4,6,8\). The corresponding test accuracy for each embedding is shown on the right, ranging from 65% (\(\eta=8\)) to 100% (\(\eta=2\) and 4). In contrast to the GS basis, we do not observe any rapid change in behaviour at the critical point; rather, this Fourier embedding induces a sinusoidal behaviour in each of the basis functions. There is a significant difference observed as \(\eta\) increases. At \(\eta=2\) there are fewer "dominant" functions (top panel in Fig. 5b), meaning that low-frequency models can be constructed. However, at \(\eta=8\) the basis looks more oscillatory, with many peaks appearing due to the high frequency of the feature map (bottom panel in Fig. 5b). These feature-map-induced basis changes have a clear impact on the QCNN classification, as Fig. 5(c) shows. We train the QCNN circuit as we did previously, with the same training and test data sets, loss, and optimizer choices. We can see that for a low-frequency Fourier map (\(\eta=2,4\)), the QCNN is roughly able to find the fixed QCNN boundary.
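As an aside, since this feature map is a product of single-qubit rotations, the basis functions \(|\phi_{j}(x)|^{2}\) of Fig. 5(b) can be reproduced classically in a few lines; a minimal NumPy sketch (the helper names are ours, the \(|0\rangle^{\otimes N}\) start state as in the text):

```python
import numpy as np

def ry(theta):
    # R_Y(theta) = exp(-i theta Y / 2); real-valued in the computational basis
    return np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                     [np.sin(theta / 2),  np.cos(theta / 2)]])

def fourier_state(x, eta, n_qubits=9):
    # |psi(x)> = prod_i R_Y^i(eta * x) |0...0>, assembled as a tensor product
    single = ry(eta * x) @ np.array([1.0, 0.0])
    state = np.array([1.0])
    for _ in range(n_qubits):
        state = np.kron(state, single)
    return state

xs = np.linspace(0.0, 1.0, 101)
# Rows: grid of x values; columns: basis functions |phi_j(x)|^2 over 2^9 bitstrings
basis = np.array([fourier_state(x, eta=2.0) ** 2 for x in xs])
# Repeating this with eta=8.0 yields the more oscillatory basis of the bottom panel
```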
The transition is less sharp at the critical point, and \(f_{\theta}(x)\) does not quite reach -1/+1 for \(x\) close to 0/1, but the decision boundary gives a clear separation between the two phases, and hence ensures 100% accuracy on the classification task. However, as \(\eta\) increases further, the impact of the high-frequency basis set becomes clearer, as it becomes harder to pick the basis functions for fitting the correct boundary. This culminates in the result for \(\eta=8\): the many "dominant" sinusoidal peaks in the basis cause the QCNN boundary to also resemble a sinusoidal curve. Naturally, this has an impact on test accuracy. In this case many test points are situated in the upper left and bottom right quadrants of the plot, indicating incorrect classification. What does the study of QCNN models with Fourier embedding tell us? One take-home message is the importance of setting the frequencies. We can also see it as the necessity of feature pre-processing or, in the context of quantum kernels, of adjusting the kernel bandwidth [92]. In cases which allow feature engineering, QCNNs with rotation-based embeddings may still form relatively high-performing models. However, for tasks that do not have this option the generalization can be poor. We also stress that our analysis considers the test accuracy as a success measure for classification, as typically adopted in quantum machine learning [51]. However, there are various measures beyond the test accuracy that can provide a stronger separation between models based on different embeddings. For instance, these include precision-recall scores, confusion matrices, receiver operating characteristic curves, and many others. In this sense, our results show that even for a largely forgiving metric the separation is significant, and it is only expected to grow for other parameters and training regimes. ### Generalization Next, we study the generalization properties of classification for the different embeddings. Specifically, we show that QCNNs with the ground state embedding can generalize well (have a small generalization error [69]) as compared to the rotation-based embedding. The results so far assumed access to an infinite number of shots when measuring expectation values, enabling us to evaluate \(f_{\theta}(x)\) exactly. The separation in performance between the GS and Fourier embeddings is even more apparent when we move to the finite-shot regime. The error \(\epsilon\) in the estimation of \(\langle\hat{O}\rangle\) is given by \(\epsilon=\sqrt{\text{var}(\hat{O})/N_{s}}\), where \(\text{var}(\hat{O})=\langle\hat{O}^{2}\rangle-\langle\hat{O}\rangle^{2}\) and \(N_{s}\) is the number of shots. Fig. 6(a) depicts the decision boundaries for the GS feature map and the Fourier feature map (\(\eta=2\)), with the blue and red curves representing \(f_{\theta}(x)\) in the infinite-shot limit (ground state and Fourier embeddings, respectively). The shaded blue and red areas show the region [\(f_{\theta}(x)-\text{var}(f_{\theta}(x))\), \(f_{\theta}(x)+\text{var}(f_{\theta}(x))\)] for both types of embedding. We can see that for both embeddings, the variance away from the critical point is relatively low compared to the variance around the critical point. In these low-variance regions, an accurate measurement of \(\langle\hat{O}\rangle\) can be found even with very few shots. Towards the critical point the variance becomes increasingly large, and many more measurement shots are required.
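The quoted scaling is straightforward to verify numerically: measuring a \(\pm 1\)-valued observable such as \(\hat{X}\) on the pooled qubit amounts to Bernoulli sampling, so the empirical spread of the estimator should match \(\epsilon=\sqrt{\text{var}(\hat{O})/N_{s}}\). A small self-contained sketch, independent of any quantum SDK:

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate(expectation, n_shots):
    # Single-qubit X outcomes are +/-1 with p(+1) = (1 + <X>) / 2
    p_plus = (1.0 + expectation) / 2.0
    samples = rng.choice([1.0, -1.0], size=n_shots, p=[p_plus, 1.0 - p_plus])
    return samples.mean()

exact = 0.1  # e.g., a model output near the critical point
for n_shots in (10, 100, 10_000):
    trials = [estimate(exact, n_shots) for _ in range(1000)]
    eps_empirical = np.std(trials)
    eps_theory = np.sqrt((1.0 - exact**2) / n_shots)  # var(X) = 1 - <X>^2
    print(n_shots, round(eps_empirical, 4), round(eps_theory, 4))
```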
We note that the shaded regions for the QCNN with Fourier embedding are much larger than those of the QCNN with GS embedding. The Fourier model has a region of error that spans both sides of the \(f_{\theta}(x)=0\) threshold for a much wider range of \(x\). This manifests itself in the results seen in Fig. 6(b). For this, we consider a minimal scenario for showing the separation between the two QCNN model types. Specifically, we train a model assuming access to an infinite number of shots, but when calculating the test accuracy we measure \(f_{\theta}(x)\) by averaging over a fixed number of measurements. We repeat this procedure 10 times for both the GS and Fourier embeddings, and we plot the average test accuracy across the 10 trials as a function of the number of shots. We see that test accuracies still remain high for both feature maps, but a real separation is observed for low numbers of measurement shots. Even for points away from the critical point, the high variance of the Fourier QCNN model means that 10 shots is not enough to evaluate \(f_{\theta}(x)\) with any real degree of accuracy, making the model susceptible to incorrect classifications. Finally, 10,000 shots are required to return to the average test accuracy across the 10 trials in the infinite-shot regime (99.6%). Figure 6: **(a)** Decision boundaries for the two types of embeddings (ground state and Fourier rotation-based), for a finite number of shots. The shaded area shows the corresponding variance, where large variance leads to less accurate results. **(b)** Generalization measure as a function of the number of measurement shots. Meanwhile, the GS QCNN model only struggles with points very close to the critical point in the low-shot regime. In fact, 100 shots is already enough to return to the 100% test accuracy found across all 10 trials with an infinite number of shots. These results further demonstrate the benefit of the GS feature map. The sharp transition at criticality ensures that errors arising from access to a finite number of shots have a minimal impact on classification. We expect the separation in generalization between the two feature maps to be even more apparent if we also train the circuit with a finite number of shots for each expectation value estimation. This is likely to transform the smooth decision boundary into a noisy transition line, and the lower sharpness of the Fourier QCNN would be further exposed in this setting. Even more importantly, considering cases where data is limited (e.g., when only a few runs of quantum hardware can be analyzed due to time constraints), the ability to generalize from few training examples and to read out with few samples largely determines whether classification is feasible on actual devices. ### Solving regression tasks with QCNNs Given what we have learnt from classification with quantum convolutional neural networks, we proceed to investigate the performance of QCNNs for specific regression tasks. In particular, we test them on problems containing sharp transitions, using QCNNs with both the ground state feature map and the Fourier feature map. To solve a regression problem we consider a target function \(f(x)\) that describes a relationship we want to find in a dataset. A trial function, or "surrogate model", \(f_{\theta}(x)\) expresses a family of functions determined by the variational parameters \(\theta\).
The model can be trained by minimizing the MSE loss function \[\mathcal{L}(\theta)=\frac{1}{M}\sum_{\alpha=1}^{M}\left\{f_{\theta}(x_{\alpha })-f(x_{\alpha})\right\}^{2}, \tag{14}\] where \(\{x_{\alpha}\}_{\alpha=1}^{M}\) is the set of training points and \(M\) is the cardinality of this set. Similarly to the previous section, we build the quantum model \(f_{\theta}(x)\) based on the QCNN expectation value, and additionally include classical variational parameters to rescale and shift the expectation. First, we consider an example coming from fluid dynamics. As a target function for training we choose a solution of the Burgers equation, in a regime that features critical behaviour. The corresponding partial differential equation reads [93] \[\frac{\partial u}{\partial t}+u\frac{\partial u}{\partial x}=\nu\frac{ \partial^{2}u}{\partial x^{2}}, \tag{15}\] with the boundary conditions \[u(0,x) =-2\frac{\nu}{\phi(0,x)}\frac{d\phi}{dx}+4,\quad x\in[0,2\pi], \tag{16}\] \[u(t,0) =u(t,2\pi),\quad t\in[0,T], \tag{17}\] \[\phi(t,x) =\exp\left\{-\frac{(x-4t)^{2}}{4\nu(t+1)}\right\}+\exp\left\{- \frac{(x-4t-2\pi)^{2}}{4\nu(t+1)}\right\}, \tag{18}\] and the solution can be found as \(u(t,x)=-2[\nu/\phi(t,x)]\,d\phi/dx+4\). We rescale this solution and consider the initial time \(t=0\), setting the target function as \(f(x)=[u(0,\pi+1/2-x)-4]/3\) with \(\nu=0.05\). We stress that our goal is to learn the known solution of the differential equation, thus performing regression by QNN training, also known as quantum circuit learning [36]. Additionally, we note that one can potentially adapt the workflow to match a physics-informed approach with feature map differentiation [37], and we leave the question of differentiating hidden feature maps for future work. We perform regression selecting \(M=21\) equally spaced training points over \([0,1]\), for both the ground state embedding based on the cluster Hamiltonian (same as in the previous section) and the Fourier feature map based on rotations \(\hat{R}_{\text{Y}}^{i}(2\pi x)\) for each qubit \(i\). Training is performed via the Adam optimizer. The results are shown in Fig. 7(a). We observe that the original QCNN with the GS embedding is able to represent the transition with a high degree of generalization, while the Fourier model experiences oscillations and only roughly represents the trend. We assign an accuracy metric to each model, corresponding to the percentage of test points for which the square error between the trained function and the target function is within an error bound \(\epsilon\). The test points are chosen as 100 uniformly spaced points and the error bound as \(\epsilon=0.05\). The accuracy of the QCNN model with the cluster Hamiltonian embedding is then equal to 91%, mostly losing points due to a slightly different scaling of the tails, while the Fourier-based QCNN has only 62% test accuracy due to poor generalization. We conclude that a ground-state-based embedding with a known region of criticality and known critical exponents can be translated to problems in other domains of physics, offering distinct model-building capabilities with good generalization.
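For completeness, the rescaled Burgers target \(f(x)\) follows directly from Eqs. (16)-(18); the sketch below evaluates it with a numerical derivative of \(\phi\), which is our simplification (the closed-form derivative works equally well):

```python
import numpy as np

NU = 0.05  # viscosity as chosen in the text

def phi(t, x):
    # Eq. (18)
    return (np.exp(-(x - 4.0 * t) ** 2 / (4.0 * NU * (t + 1.0)))
            + np.exp(-(x - 4.0 * t - 2.0 * np.pi) ** 2 / (4.0 * NU * (t + 1.0))))

def u(t, x, dx=1e-6):
    # u(t, x) = -2 [nu / phi] dphi/dx + 4, with a central finite difference
    dphi_dx = (phi(t, x + dx) - phi(t, x - dx)) / (2.0 * dx)
    return -2.0 * NU / phi(t, x) * dphi_dx + 4.0

def target(x):
    # f(x) = [u(0, pi + 1/2 - x) - 4] / 3, the rescaled solution at t = 0
    return (u(0.0, np.pi + 0.5 - x) - 4.0) / 3.0

x_train = np.linspace(0.0, 1.0, 21)  # M = 21 equally spaced training points
y_train = target(x_train)            # exhibits the sharp transition around x = 0.5
```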
The second example that we consider corresponds to a transfer function of a single degree of freedom oscillator with damping [94]. Specifically, the transfer function reads \[H(x)=\frac{1}{\sqrt{\left(1-x^{2}\right)^{2}+\left(2\zeta x\right)^{2}}}, \tag{19}\] where \(x\) denotes the frequency of the oscillator normalized by its natural frequency, and \(\zeta\) is the damping ratio. Figure 7: **(a)** Viscous flow solution of the Burgers equation learnt by the quantum convolutional neural network with the ground state embedding and the rotation-based Fourier embedding. Test accuracy, calculated as the percentage of points with square error less than 0.05, is estimated to be 91% for the GS model and 62% for the Fourier model. **(b)** Transfer function of a single degree of freedom oscillator learnt by QCNNs with the two feature maps, together with the corresponding accuracies. We rescale the transfer function as \(f(x)=H(1.6[x+0.1])\) to match the range of QCNN-based models, and set \(\zeta=0.05\) such that the transition behaviour is centred around \(x=0.5\). Regression and test accuracy estimation are performed in the same way as for the viscous Burgers case. The results are shown in Fig. 7(b). We observe that both the ground state feature map and the Fourier feature map are able to approximately fit the shape, with the GS embedding outperforming the Fourier one due to better generalization in the end-point regions. Overall, from the use of QCNNs for regression we can see that it is important that the basis functions induced by the chosen encoding match the problem. This is particularly important for embedding choices based on ground states, where all the basis functions have a strong particular behaviour at the critical point. Additionally, one can make the GS feature map variational as well, such that the critical region and sharpness are altered as needed. Once a suitable embedding choice is set, the QCNN can be used to represent the target function efficiently and be learnt with proven trainability [78], an important advantage over fully-connected quantum neural networks. Finally, we highlight that there are many physical problems based on quantum measurements that go beyond classification and assume inference of some system properties, thus favoring regression. For instance, this can be the estimation of magnetization and specific heat for magnetic materials, charge order in superconductors, etc. In this case we suggest that QCNNs can play an important role due to their naturally emerging basis set with implicit bias and high generalization. ## V Discussion There are several crucial aspects to be recalled and discussed. First, we stress that QCNNs are indeed very successful in solving problems for cases where input states are generated with a basis set that suits classification. Here, we can say that the advantage in both machine learning and quantum machine learning is underpinned by data: while functions that represent decision boundaries are not necessarily difficult to fit, the generalization comes from embeddings that are natural to the considered problems (e.g., classification of physical system behaviour). Thus, we suggest that a similar advantage in terms of generalization can be attained when solving regression tasks for physics-informed problems. We made the observation that QCNNs working with quantum data are typically built on feature maps with an implicit bias [95] that favors the problem.
Given that QCNN circuits have a number of independent adjustable parameters at the post-processing stage that is not prohibitively large, they are practically trainable, and in conjunction with geometric QML approaches [71, 72, 96] they can provide superior performance in terms of classification, anomaly detection [97], and regression. The second aspect concerns addressing problems with multiple features. This assumes embedding multivariate functions as decision boundaries. To date, mostly few-variable cases have been considered for physical systems with the ground state embedding [69], while rotation-based feature maps were routinely used for datasets with a large number of features [51]. Once we want to use ground state embeddings for multiple features, we face the task of assigning each feature \(x_{\ell}\) to some Hamiltonian parameter. One option is to use different types of Hamiltonian terms (interaction range, coupling, etc.) to encode the features \(x_{\ell}\), but this implies that the critical behaviour for different features is distinct. Another option is composing Hamiltonians with inhomogeneous couplings, for instance with 1D cluster models (each described by \(x_{\ell}\)) collated into a ladder. How can we use QCNNs to process data with many features? This question has to be answered to extend the use of QCNNs beyond a few physical examples. The third aspect concerns the type of measurement adaptation circuit. If we consider pooling as a way to adapt the measurement basis from few-body to effectively \(N\)-body operators, can we do it with only a limited gate set? And if we follow this route, is it sufficient to use matchgate circuits, which offer both control and classical simulability [98]? Or can we do the adaptation with Clifford-only circuits [99]? If this is the case, the advantage of QCNNs has to come directly from data, assuming that the embedding of features corresponds to non-trivial quantum processes that cannot be probed otherwise. For such processes one can consider a highly beneficial scenario where pre-training of QCNNs is performed _in silico_ classically for a limited number of states. Then, we can run QCNNs as measurement adaptation circuits for physical devices with imperfections, performing classification with a small number of samples and enjoying the corresponding advantage. Another important question regarding the power of QCNNs and possible embeddings arises when we consider extensions from simple one-dimensional models to two dimensions. Namely, every feature can in principle be embedded into ground states of 2D spin models with distinct topological properties. In this case, the entanglement structure, the correlations, and the nature of the phase transition change, and so do the critical exponents of the transition [87]. For instance, considering the toric code model or models with fractional statistics, one can enrich the basis for describing critical phenomena [100]. Can we match them with corresponding behaviour in hydrodynamical systems, thus ensuring quantum "simulation" of phase transitions in fluid dynamics? Can we extend it to chaotic systems? Extending the understanding of QCNNs to embeddings with long-range topological order may well offer capabilities in classification and regression beyond the currently explored states with limited entanglement. Finally, QCNNs have been shown to have a strong connection to error correction, where pooling layers can be used to perform syndrome extraction and denoising [75].
At the same time, we suggested looking at the process via the explicit modelling paradigm. Can we extend it to some of the known error correction protocols, explaining them as a "sharpening" of models and feature engineering? These are the questions that can be addressed by adopting the proposed strategy. ## VI Conclusion In this work we have developed a better understanding of quantum convolutional neural networks, specifically highlighting why they are so successful in quantum phase recognition. We have shown that supplied quantum states (features) can be understood in terms of _hidden_ feature maps -- quantum processes that prepare states depending on classical parameters \(\mathbf{x}\). During the mapping process, we observed that the ground state preparation supplies a very beneficial basis set, which allows the building of a nonlinear quantum model for classification with decision boundaries that are sharp and generalize from only a few data points and measurement samples. We showed that single-qubit observables in QCNNs can be used to "pick up" the most beneficial basis functions, while leading to a sampling advantage. The developed understanding also opens another perspective on quantum sensing aided by convolution-based quantum processing. Finally, motivated by classification, we applied the QCNN-based workflow with the cluster model ground state embedding to solve problems in fluid dynamics (the viscous Burgers equation) and wave physics (a damped oscillator). This suggests quantum convolutional neural networks as a provably trainable tool for data-driven modelling of critical phenomena with quantum computers. ## Appendix A Ground state preparation Here, we briefly summarize different strategies to prepare QCNN inputs as ground states or low-temperature ensembles of states. First, we note that one can prepare \[\ket{\psi_{0}(x)}=\mathcal{T}\left\{\exp\Big{[}-i\int ds\,\hat{\mathcal{H}}(s;x) \Big{]}\right\}\ket{\psi_{\text{init}}} \tag{10}\] using an appropriate adiabatic preparation path for the Hamiltonian \(\hat{\mathcal{H}}(x)\), starting from a suitable initial state \(\ket{\psi_{\text{init}}}\) (here \(\mathcal{T}\) denotes time-ordering) [101]. This, however, depends on the properties of the Hamiltonian and gap closing. Second, one can consider the effective thermal state preparation process \[\hat{\rho}_{\text{th}}[\hat{\mathcal{H}}(x)]=\lim_{\beta\to\infty}\frac{\exp(- 2\beta\hat{\mathcal{H}}(x))}{\text{tr}\{\exp(-2\beta\hat{\mathcal{H}}(x))\}}, \tag{11}\] where \(\beta\) is an effective inverse temperature. This can be simulated approximately with quantum imaginary time evolution (QITE) techniques [102]. Third, given that the cluster model can be overparametrized in linear (or at most quadratic) depth, we can assign a ground state preparation unitary with pre-optimized angles \(\Phi_{\text{GS}}\) such that there is a unitary \(\hat{\mathcal{U}}_{\Phi_{\text{GS}}}[\hat{\mathcal{H}}(x)]\) that prepares \(\ket{\psi_{0}(x)}\) up to a sufficient pre-specified precision. We suggest using the QAOA-type preparation [89], which corresponds to the Hamiltonian Variational Ansatz (HVA) [88] for the considered cluster Hamiltonian. Let us set \(J_{\text{xx}}=0\) for brevity, and use the transverse field cluster model as an example.
The feature map for the ground state embedding can be composed as \[\ket{\psi_{0}(x)}=\prod_{d=1}^{D}\left(e^{-i\Phi_{d,1}^{(\text{opt})}(x)\sum_{i}\hat{Z}_{i}\hat{X}_{i+1}\hat{Z}_{i+2}}\,e^{-i\Phi_{d,2}^{(\text{opt})}(x)\sum_{i}\hat{X}_{i}}\right)\ket{0}^{\otimes N}, \tag{12}\] assuming that we apply a sufficient number of layers \(D\), and that optimal angles \(\{\Phi_{d,1}^{(\text{opt})}(x),\Phi_{d,2}^{(\text{opt})}(x)\}\) are recovered for all system features \(x\). We note that this is possible for \(D\) that enables overparametrization [103], and was shown to give exact ground state preparation for integrable models [104, 89]. ## Appendix B Fixed QCNN analysis We visualize the steps of the fixed QCNN in Fig. 3(a) of the main text as the full circuit for \(N=9\) qubits. Let us go step by step, aiming to understand the underlying machine learning operations. Starting with the prepared ground state for the pure cluster model (the \(x=0\) point), we observe that pivoting brings the system to \(\ket{+}^{\otimes N}\) and the layer of Hadamard gates makes it the computational zero state \(\ket{0}^{\otimes N}\). We call this the UNPREPARE operation layer. Thus at \(x=0\) we satisfy the second QCNN criterion (discussed in Section II.B of the main text), meaning that at the pooling stage (Fig. 3a, stage B) we would get \(0\) bit measurements deterministically on \(2N/3\) qubits, while keeping \(\tilde{N}=N/3\) qubits (we choose the pool of qubits \(2,5,8\)). Criterion \(1\) (i.e., the fixed point one) can be satisfied by re-preparing the cluster state on the three selected qubits by applying a Hadamard and a \(\hat{U}_{\text{pivot}}^{\dagger}\) layer. We get the smaller version \(\ket{\psi_{0}}_{N/3}\). Next we need to consider the case of \(x>0\), where the deviation from zero magnetic field generates "errors" as bitflips. Namely, operations \(\{\hat{X}_{i}\}\) can be applied on any qubit line on top of the ideal symmetry protected topological (SPT) ground state \(\ket{\psi_{0}(0)}\), and propagate through the circuit. That is what we want to correct when performing the pooling procedure, trying to keep a clean copy of the SPT phase as long as possible. We observe that \(\hat{U}_{\text{pivot}}\hat{X}_{i}\ket{\psi_{0}}=\hat{Z}_{i-1}\hat{X}_{i}\hat{Z}_{i+1}\hat{U}_{\text{pivot}}\ket{\psi_{0}}\) and \(\hat{H}^{\otimes N}\hat{U}_{\text{pivot}}\hat{X}_{i}\ket{\psi_{0}}=\hat{X}_{i-1}\hat{Z}_{i}\hat{X}_{i+1}\hat{H}^{\otimes N}\hat{U}_{\text{pivot}}\ket{\psi_{0}}\) (with \(\hat{H}\) denoting the Hadamard gate), and these are the effective errors that we need to correct. Given that in the absence of errors the pivoting and the X-to-Z basis map generate \(\ket{0}^{\otimes N}\), we are required to correct states of the type \(\ket{0...01010...0}\) (and cyclic shifts), and make sure that after pooling the \(\ket{0}^{\otimes N/3}\) state is recovered. To avoid mid-circuit measurements we suggest using the recipe of deferred measurements and assigning controlled operations that are compatible with measuring (and tracing out/discarding) qubits that are not pooled through. This can be achieved by applying Toffoli gates \(\text{CCX}_{i,j,k}:=\mathbb{1}_{i,j,k}-\ket{1}_{i}\bra{1}\otimes\ket{1}_{j}\bra{1}\otimes(\mathbb{1}_{k}-\hat{X}_{k})\), where \(i\) and \(j\) are control qubit indices, and \(k\) represents the target. We also define the bit-flipped version of this gate, \(\text{CCX}_{i,j,k}^{(01)}:=\hat{X}_{i}\text{CCX}_{i,j,k}\hat{X}_{i}\), with the \(i\)-th control now corresponding to the \(0\) state. This gate is denoted by an open circle when conditioned on \(0\) and not \(1\).
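As a sanity check, the projector form of CCX above can be assembled explicitly for three qubits; a minimal NumPy sketch, where the ordering (control \(i\), control \(j\), target \(k\)) and big-endian indexing are our conventions:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
P1 = np.array([[0, 0], [0, 1]], dtype=complex)  # projector |1><1|

def ccx():
    # CCX = 1 - |1><1| x |1><1| x (1 - X): flips the target iff both controls are |1>
    return np.eye(8, dtype=complex) - np.kron(np.kron(P1, P1), I2 - X)

def ccx_01():
    # CCX^(01) = X_i CCX X_i: the first control now triggers on |0> (open circle)
    x_i = np.kron(np.kron(X, I2), I2)
    return x_i @ ccx() @ x_i

T = ccx()
assert np.allclose(T @ T.conj().T, np.eye(8))  # unitary
assert np.allclose(T @ T, np.eye(8))           # self-inverse, as a Toffoli should be
assert np.isclose(abs(T[7, 6]), 1.0)           # |110> -> |111>
```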
Our idea is to detect patterns of bits in next-nearest neighbors, such that when a distance-one flipped pair is detected, several layers of Toffoli gates correct qubits \(2\), \(5\), and \(8\) (chosen as targets; see Fig. 3a, CORRECT layers). One can then check that errors propagating to the measurement do not impact the observable, to lowest order.
2310.20349
A Low-cost Strategic Monitoring Approach for Scalable and Interpretable Error Detection in Deep Neural Networks
We present a highly compact run-time monitoring approach for deep computer vision networks that extracts selected knowledge from only a few (down to merely two) hidden layers, yet can efficiently detect silent data corruption originating from both hardware memory and input faults. Building on the insight that critical faults typically manifest as peak or bulk shifts in the activation distribution of the affected network layers, we use strategically placed quantile markers to make accurate estimates about the anomaly of the current inference as a whole. Importantly, the detector component itself is kept algorithmically transparent to render the categorization of regular and abnormal behavior interpretable to a human. Our technique achieves up to ~96% precision and ~98% recall of detection. Compared to state-of-the-art anomaly detection techniques, this approach requires minimal compute overhead (as little as 0.3% with respect to non-supervised inference time) and contributes to the explainability of the model.
Florian Geissler, Syed Qutub, Michael Paulitsch, Karthik Pattabiraman
2023-10-31T10:45:55Z
http://arxiv.org/abs/2310.20349v1
A Low-cost Strategic Monitoring Approach for Scalable and Interpretable Error Detection in Deep Neural Networks ###### Abstract We present a highly compact run-time monitoring approach for deep computer vision networks that extracts selected knowledge from only a few (down to merely two) hidden layers, yet can efficiently detect silent data corruption originating from both hardware memory and input faults. Building on the insight that critical faults typically manifest as peak or bulk shifts in the activation distribution of the affected network layers, we use strategically placed quantile markers to make accurate estimates about the anomaly of the current inference as a whole. Importantly, the detector component itself is kept algorithmically transparent to render the categorization of regular and abnormal behavior interpretable to a human. Our technique achieves up to \(\sim\)96% precision and \(\sim\)98% recall of detection. Compared to state-of-the-art anomaly detection techniques, this approach requires minimal compute overhead (as little as 0.3% with respect to non-supervised inference time) and contributes to the explainability of the model. ## 1 Introduction Deep neural networks (DNNs) have reached impressive performance in computer vision problems such as object detection, making them a natural choice for problems like automated driving [1]. However, DNNs are known to be highly vulnerable to faults. For example, even small changes to the input, such as adding a customized noise pattern that remains invisible to the human eye, can stimulate silent prediction errors [8]. Similarly, modifying a single parameter out of millions, in the form of a bit flip, is sufficient to cause severe accuracy drops [14]. Because DNNs are being deployed in safety-critical applications such as autonomous vehicles (AVs), we need efficient mechanisms to detect errors that cause such silent data corruptions (SDC). Beyond the functional part, trust in the safety of the application requires that the error detectors are interpretable by the user, so that he/she can develop an intuitive understanding of the regular and irregular behavior of the network [2]. In an AV, for example, a user who does not trust an automated perception component due to its opaque decision-making will not trust a black-box fault monitor either. Therefore, it is important to build interpretable error detectors for DNNs. The goal of error detection is to supervise a small, yet representative subset of activations - during a given network inference - for comparison with a previously extracted fault-free baseline. This leads to three key challenges: **(1)** How can one compress the relevant information into efficient abstractions? **(2)** How can one efficiently perform the anomaly detection process for complex patterns? **(3)** Can the anomaly detection decision be understandable to a human, so that insights are gained about the inner workings of the network? Unfortunately, no existing approach satisfactorily addresses all three of the above challenges (Sec. 2). This paper presents a solution using a monitoring architecture that taps into the intermediate activations only at selected strategic points and interprets those signatures in a transparent way, see Fig. 1. Our approach is designed to detect SDC-causing errors due to input corruptions _or_ hardware faults in the underlying platform memory. Figure 1: Monitoring architecture for quantile shift detection.
Our main observation that underpins the method is that an SDC occurs when a fault _either_ increases the values of a few activations by a large margin (referred to here as an _activation peak shift_), or the values of many activations each by a small margin (_activation bulk shift_). As Fig. 2 shows, the former is observed typically for platform faults, while the latter is observed for input faults. We then use discrete quantile markers to distill the knowledge about the variation of the activation distribution in a given layer. Conceptually, within a faulty layer, we can expect a large change of only the top quantiles for activation peak shifts, and small changes of the lower and medium quantiles for bulk shifts (Fig. 2). This idea allows us to produce discriminative features for anomaly detection from a small number of monitored elements, with a single detector. Figure 1: Monitoring architecture for quantile shift detection. In summary, we make the following contributions in this paper: * We demonstrate that even for complex object detection networks, we can identify anomalous behavior from quantile shifts in only a few layers. * We identify minimal sets of relevant features and discuss their universality across models. * We efficiently differentiate input and hardware fault classes with a single detector. * We show that the anomaly detection process can be achieved with algorithmically transparent components, such as decision trees. The article is structured as follows: Sec. 2 discusses related work, while Sec. 3 describes our experimental setup. We present our method in Sec. 4, and the results of our evaluation in Sec. 5. Figure 2: **(a)** The feature map appearance is slightly changed with noise and massively affected by the memory FI. **(b)** Noise causes a small shift of multiple quantiles from the affected layer onwards (activation bulk shift). **(c)** The layer with the memory FI shows a large shift of the maximum quantile (activation peak shift), which then propagates to other quantiles. ## 2 Related Work There are three main categories of related work. **Image-level Techniques**: Input faults can be detected from the image itself (i.e., before network inference), in comparison with known fault-free data, resulting for example in specialized blur detectors [15]. However, these techniques do not necessarily relate to SDC in the network, as image-level corruptions may be tolerated by the model. **Activation Patterns**: Methods to extract activation patterns range from activation vectors [5] to feature traces [24, 25]. However, these techniques do not scale well to deeper models as they result in a massive number of monitored features and large overheads. Zhao et al. [26] attempt to reduce the monitoring effort by leveraging only activations from selected layers and compressing them with customized convolution and pooling operations. This leads to a rather complex, non-interpretable detector component, and the selection of monitored layers remains empirical. **Anomaly Detection** techniques establish clusters of regular and anomalous data to efficiently categorize new input. In single-label problems, such as image classification, fault-free clusters are typically formed by samples that belong to the same individual label [12], suggesting that those samples also share common attributes in the space of intermediate activations. 
This technique does not generalize to multi-label problems though, such as object detection, as many objects (in the form of bounding boxes and labels) are represented in the same image. More abstracted clustering rules such as the maximum activation range per layer have been proposed [4, 18]. However, these detectors miss more subtle errors within the activation spectrum, for example those resulting from input faults. In other work [24, 25, 26], a secondary neural network is trained to perform the detection process. This comes at the cost that the detector then does not feature algorithmic transparency [2], and hence the anomaly decision is not understandable to a human. The same limitations are found in the context of detector subnetworks that are trained to identify adversarial perturbations [19]. **Summary**: We see that none of the prior techniques satisfactorily address the challenges outlined earlier. We present a new technique to overcome this problem in this paper. ## 3 Experimental Setup and Preliminary Study **Models and Datasets:** We use the three classic object detection networks Yolo(v3), Single Shot Detector (SSD), and RetinaNet from the _open-mmlab_[3] framework, as well as the two standard image classification networks ResNet50 and AlexNet from _torchvision_[21]. Object detection networks are pretrained on Coco [20] and were retrained on Kitti [6], with the following AP50 baseline performances: Yolo+Coco: 55.5%, Yolo+Kitti: 72.8%, SSD+Coco: 52.5%, SSD+Kitti: 66.5%, RetinaNet+Coco: 59.0%, RetinaNet+Kitti: 81.8%. Image classification models were pretrained on ImageNet [17], providing accuracies of 78.0% (ResNet) and 60.0% (AlexNet) for the test setup. The data was split in a ratio of 2:1 for detector training and testing. All models are run in _Pytorch_ with the IEEE-standard FP32 data precision [16]. **Fault Modes:** Input faults are modeled using _torchvision_[21] transform functions and are applied in three different magnitudes to the raw RGB images. We select three perturbation patterns that are popular in computer vision benchmarks such as ImageNet-C [11] for our analysis: i) _Gaussian noise_, due to low lighting conditions or noise in the electronic signal of the sensor device. Low (0.1), medium (1), and high (10) noise is tested. ii) _Gaussian blur_, reflecting for example a camera lens being out of focus. We choose a kernel size of \((5,9)\) and a symmetric, variable standard deviation \((0.3,1,3)\). iii) _Contrast reductions_ simulate poor lighting conditions or foggy weather. We adjust the contrast by a factor between zero (no contrast, gray image) and one (original image). The selected models have different vulnerabilities to input faults: for example, the two image classification models ResNet and AlexNet are highly sensitive to contrast adjustments, but are rather robust to noise and blur faults. For the remaining models, the trend is reversed. Hardware faults are modeled as single bit flips in the underlying memory and injected using _PytorchAlfi_[9]. Such flips can occur randomly either in the buffers holding temporary activation values (_neuron_ faults), or in dedicated memory which holds the parameters of the network (_weight_ faults). We group both neuron and weight faults into a single class of _memory_ faults. This approach is in line with previous work [23, 24, 25, 13, 7, 18, 4]. We target all convolutional layers.
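To make the two fault classes tangible, the sketch below emulates them in PyTorch. The torchvision calls are standard; the noise scaling and the single-bit-flip helper are our own minimal stand-ins (the actual PytorchAlfi interface is not reproduced here):

```python
import numpy as np
import torch
import torchvision.transforms as T
import torchvision.transforms.functional as TF

# --- Input faults, applied to a float image tensor (C, H, W) in [0, 1] ---
def gaussian_noise(img, magnitude=1.0):
    # Severity values 0.1 / 1 / 10 from the text; the pixel-scale mapping is assumed
    return (img + torch.randn_like(img) * magnitude / 255.0).clamp(0.0, 1.0)

blur = T.GaussianBlur(kernel_size=(5, 9), sigma=1.0)  # sigma in {0.3, 1, 3}

def low_contrast(img, factor=0.3):
    # factor = 0 gives a gray image, factor = 1 returns the original
    return TF.adjust_contrast(img, factor)

# --- Memory fault: flip a single bit of one FP32 parameter ---
def flip_bit(param, index, bit=30):
    # View the value as uint32 and XOR one bit; bit 30 is the highest exponent
    # bit, where flips most often lead to SDC
    with torch.no_grad():
        flat = param.view(-1)
        val = np.float32(flat[index].item())
        flipped = (val.view(np.uint32) ^ np.uint32(1 << bit)).view(np.float32)
        flat[index] = float(flipped)
```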
**Fault Metrics:** First, detectable uncorrectable errors (DUE) can occur when invalid symbols such as _NaN_ or _Inf_ are found among the activations at inference time. During fault injection, we observe DUE events only for memory faults, with rates \(<\)1% across all models. DUEs can also be generated at the detector stage, in the process of adding up feature map sums that contain platform errors. The rates for such events vary between 0.2% and 5.1% with our method. While DUE errors may affect the system's availability, they are considered less critical as they are readily detectable and there is no need for further activation monitoring [7]. In this article, we are therefore concerned only with silent data corruption (SDC): events that lead to a silent alteration of the predicted outcome. For image classification networks, this is represented by a change in the top-1 class prediction. For object detection systems, we use an asymmetric version of the IVMOD metric [23] as the SDC criterion, i.e., an image-wise increment in the FP or FN object numbers is counted as SDC. Each experiment was done with a subset of 100 images of the test data set, running 100 random FIs on each image individually. For hardware faults, SDC rates are typically low (\(\sim\)1\(-\)3%) since drastic changes will result only from bit flips in the high exponent bits of the FP32 data type [18, 7]. Therefore, an additional 500 epochs with accelerated FI only into the three highest exponent bits are performed for both flavors of memory faults. Overall, the amounts of faulty and fault-free data are found to be at a ratio of about 2:1. ## 4 Model **Notational Remarks:** We use the range index convention, i.e., a vector is given as \(\mathbf{x}=(x_{i})=(x^{i})\), a matrix reads \(\mathbf{A}=(A_{ij})\), and similarly for higher-dimensional tensors. **Monitoring Approach:** Let us denote a four-dimensional activation tensor that represents an intermediate state of a convolutional neural network as \(\mathbf{T}=(T_{n,c,h,w})\in\mathds{R}^{N\times C\times H\times W}\), where \(N\) is the sample number, \(C\) the number of channels, \(H\) the height, and \(W\) the width. We list \(n\) as the running global sample index, where samples may be further grouped in batches. An output tensor of a specific layer \(l\in[1,\dots L]\) shall be given as \(\mathbf{T}^{l}\), with \(L\) being the total number of monitored layers of the model. Subsets of a tensor with fixed \(n,c\) are called feature maps. Our monitoring approach first performs the summation of individual feature maps and subsequently calculates quantile values over the remaining kernels, see Fig. 1, \[\left(F_{n,c}\right)^{l} =\sum_{h,w}(T_{n,c,h,w})^{l}, \tag{1}\] \[\left(q_{n}\right)_{p}^{l} =Q_{p}\left((F_{n,c})^{l}\right)_{n}. \tag{2}\] Here \(Q_{p}\) is the quantile function for the percentile \(p\), which acts on the \(n\)-th row of \((F_{n,c})^{l}\). In other words, \(Q_{p}\) reduces the kernel dimensions \(c\) to a set of discrete values, where we use the 10-percentiles, i.e., \(p\in[0,10,20,30,\dots,90,100]\). The result is a quantile value set, \(q_{p}\), for a given image index \(n\) and layer \(l\). Note that both the summation and the quantile operations (and hence the detector) are invariant under input perturbations such as image rotations. **Supervised Layers:** We intercept the output activations of all convolutional layers, as those layers provide the vast majority of operations in the selected computer vision DNNs. Yet, the same technique can be applied to any neural network layer.
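Eqs. (1)-(2) translate into a few lines of PyTorch. The sketch below extracts the eleven 10-percentile markers for one monitored layer; attaching it via a forward hook is one possible integration, shown here as an assumption:

```python
import torch

def quantile_markers(t: torch.Tensor) -> torch.Tensor:
    # t: activation tensor (N, C, H, W) of one monitored layer
    feature_sums = t.sum(dim=(2, 3))                            # Eq. (1), shape (N, C)
    p = torch.linspace(0.0, 1.0, 11, dtype=feature_sums.dtype)  # 10-percentiles
    return torch.quantile(feature_sums, p, dim=1).T             # Eq. (2), shape (N, 11)

markers = {}

def monitor_hook(module, inputs, output):
    markers[module] = quantile_markers(output.detach())

# Example wiring (the layer choice depends on the supervised model):
# model.backbone.conv1.register_forward_hook(monitor_hook)
```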
**Reference Bound Extraction:** Applied to a separate data set \(D_{\text{bnds}}\), the above technique is used pre-runtime to extract reference bounds which represent the minimum and maximum quantile values of the feature map sums during fault-free operation: \[\begin{split} q_{p,\min}^{l}&=\min_{n\in D_{\text{ bnds}}}\left((q_{n})_{p}^{l}\right),\\ q_{p,\max}^{l}&=\max_{n\in D_{\text{bnds}}}\left(( q_{n})_{p}^{l}\right).\end{split} \tag{3}\] For \(D_{\text{bnds}}\), we randomly select 20% of the training data [4]. **Anomaly Feature Extraction:** For a given input during runtime, Eqs. (1) to (2) are used to obtain the quantile markers of the current activation distribution. Those are further processed to a so-called _anomaly feature vector_ which quantifies the similarity of the observed patterns with respect to the baseline references of Eq. 3, \[q_{p}^{l}\rightarrow\frac{1}{2}\left(f_{\text{norm}}(q_{p}^{l},q_{p,\min}^{l}, q_{p,\max}^{l})+1\right). \tag{4}\] Here, \(f_{\rm norm}\) normalizes the monitored quantiles to a range of \((-1,1)\) by applying element-wise (\(\epsilon=10^{-8}\) is a regularization offset) \[f_{\rm norm}(a,a_{\rm min},a_{\rm max})=\begin{cases}\tanh\left(\frac{a-a_{\rm max }}{|a_{\rm max}|+\epsilon}\right)&\text{if $a\geq a_{\rm min}$},\\ \tanh\left(\frac{a_{\rm min}-a}{|a_{\rm min}|+\epsilon}\right)&\text{if $a<a_{\rm min }$}.\end{cases} \tag{5}\] Intuitively, the result of Eq. 5 will be positive if \(a\) is outside the defined minimum (\(a_{\rm min}\)) and maximum (\(a_{\rm max}\)) bounds (approaching \(+1\) for very large positive or negative values). The function is negative if \(a\) is within the bounds (lowest when barely above the minimum), and will become zero when \(a\) is of the order of the thresholds. In Eq. 4, a shift brings the features to a range of \((0,1)\) to facilitate the interpretation of feature importance. Finally, all extracted features are unrolled into a single anomaly feature vector \(\mathbf{q}=((q^{l})_{p})=[q^{1}_{0},q^{2}_{0},\ldots q^{L}_{0},q^{1}_{10}, \ldots q^{L}_{100}]\), which will be the input to the anomaly detector component. **Anomaly Detector:** We use a decision tree [10] approach to train an interpretable classifier, leveraging the _sklearn_ package [22]. The class weights are inversely proportional to the number of samples in the respective class to compensate for imbalances in the training data. As a measure of the split quality of a decision node we use the Gini index [10]. To avoid overfitting of the decision tree, we perform cost-complexity pruning [22] with a factor varying between \(1\times 10^{-5}\) and \(2\times 10^{-5}\), which is optimized for the respective model.
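Putting Eqs. (3)-(5) and the detector together, a condensed NumPy/scikit-learn sketch; the variable names are ours, and the pruning factor is one value from the stated range:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

EPS = 1e-8

def f_norm(a, a_min, a_max):
    # Eq. (5): positive outside the fault-free bounds, negative inside
    return np.where(a >= a_min,
                    np.tanh((a - a_max) / (np.abs(a_max) + EPS)),
                    np.tanh((a_min - a) / (np.abs(a_min) + EPS)))

def anomaly_features(q, q_min, q_max):
    # Eq. (4): shift the normalized quantiles into (0, 1)
    return 0.5 * (f_norm(q, q_min, q_max) + 1.0)

def fit_detector(q_train, y_train, q_min, q_max):
    # q_train: unrolled quantile vectors (n_samples, L * 11); bounds from Eq. (3)
    clf = DecisionTreeClassifier(
        criterion="gini",         # split quality measure
        class_weight="balanced",  # inversely proportional to class frequencies
        ccp_alpha=1.5e-5,         # cost-complexity pruning, tuned per model
        random_state=0,
    )
    return clf.fit(anomaly_features(q_train, q_min, q_max), y_train)
```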
To investigate fault class identification, we study three different detector modes with varying levels of fault class abstraction and quantify each mode \(x\in\{cls,cat,sdc\}\) by precision, \(\rm P_{x}=TP_{x}/(TP_{x}+FP_{x})\), and recall, \(\rm R_{x}=TP_{x}/(TP_{x}+FN_{x})\). Here we abbreviate true positives (TP), false positives (FP), and false negatives (FN). In the class mode (\(cls\)), we consider only those detections as true positives where the predicted and actual fault modes (see Sec. 3) coincide exactly. Cases where SDC is detected correctly but the fault class does not match will be counted as either FP or FN in this setting. In the category mode (\(cat\)), those SDC detections are considered true positives where the predicted and actual fault class fall into the same category of either _memory fault_ or _input fault_\(=\{noise,blur,contrast\}\). That means that fault class confusions within a category will not reduce the performance in this mode. The final precision and recall values for the class and category modes are given as the average over all classes or categories, respectively. Finally, in the mode \(sdc\), we consider all cases as true positives where SDC was correctly identified, regardless of the specific fault class. This reflects a situation where one is only interested in the presence of SDC overall, rather than the specific fault class. ## 5 Results ### Detector Performance **Error Detection:** Tab. 1 shows the precision, recall, and decision tree complexity for the studied detectors and models. When all extracted features are leveraged by the decision tree classifier (referred to as the _full_ model), the average class-wise detection precision varies between 93.9% (ResNet) and 97.5% (RetinaNet+Kitti), while the recall is between 97.1% (RetinaNet+Coco) and 99.1% (Yolo+Kitti). If only the fault category needs to be detected correctly, we find \(\text{P}_{\text{cat}}>95\%\) and \(\text{R}_{\text{cat}}>94\%\). Correct decisions about the presence of SDC only are made with \(\text{P}_{\text{sdc}}>96\%\) and \(\text{R}_{\text{sdc}}\geq 98\%\). Across models, we observe (not shown in Tab. 1) that the most common confusions are false positive noise detections, leading to a reduced precision in the individual _noise_ class (worst case 75.8% for ResNet). The recall is most affected by memory faults (lowest individual class recall 90.6% for RetinaNet+Coco). The detection rates of the full model in Tab. 1 outperform the ones reported in the comparable approach of Schorn et al.
[25] (using feature map tracing) and the blur detection in Huang et al. [15] in terms of precision and recall.

| **Model** | \(\text{P}_{\text{cls}}\) (%) | \(\text{P}_{\text{cat}}\) (%) | \(\text{P}_{\text{sdc}}\) (%) | \(\text{R}_{\text{cls}}\) (%) | \(\text{R}_{\text{cat}}\) (%) | \(\text{R}_{\text{sdc}}\) (%) | DT: \(\text{N}_{\text{ft}}/\text{N}_{\text{l}}\) |
|---|---|---|---|---|---|---|---|
| Yolo+Coco, full | 95.8 | 96.4 | 96.1 | 98.2 | 98.6 | 98.4 | 825/75 |
| Yolo+Coco, red (avg) | 93.3 | 94.6 | 93.4 | 97.4 | 96.3 | 96.7 | 2/2 |
| Yolo+Kitti, full | 97.3 | 97.5 | 97.4 | **99.1** | 99.3 | 99.2 | 825/75 |
| Yolo+Kitti, red (avg) | 92.6 | 92.1 | 92.0 | 97.3 | 96.4 | 96.8 | 3/2 |
| SSD+Coco, full | 96.6 | 97.2 | 96.6 | 98.2 | 98.5 | 98.3 | 429/39 |
| SSD+Coco, red (avg) | 95.2 | 96.3 | 94.9 | 96.5 | 94.5 | 95.9 | 3/3 |
| SSD+Kitti, full | 96.0 | 97.1 | 96.2 | 98.4 | 98.7 | 98.6 | 429/39 |
| SSD+Kitti, red (avg) | 92.8 | 94.6 | 92.1 | 98.0 | 97.7 | 98.2 | 2/2 |
| RetinaNet+Coco, full | 96.6 | 95.7 | 96.9 | 97.1 | 94.9 | 98.0 | 781/71 |
| RetinaNet+Coco, red (avg) | **96.6** | 96.6 | 96.5 | 97.0 | 94.6 | 98.2 | 2/2 |
| RetinaNet+Kitti, full | **97.5** | 97.3 | 97.5 | 98.6 | 98.2 | 98.7 | 781/71 |
| RetinaNet+Kitti, red (avg) | 96.2 | 96.6 | 95.9 | **98.6** | 97.8 | 98.9 | 2/2 |
| ResNet+ImageNet, full | 93.9 | **98.3** | **97.6** | 98.1 | **99.6** | **99.4** | 583/53 |
| ResNet+ImageNet, red (avg) | 92.1 | **97.6** | **96.7** | 98.3 | **99.6** | **99.5** | 3/3 |
| AlexNet+ImageNet, full | 96.1 | **98.3** | 97.3 | 98.4 | 99.2 | 99.0 | 55/5 |
| AlexNet+ImageNet, red (avg) | 93.2 | 96.8 | 95.0 | 98.0 | 99.0 | 98.8 | 4/3 |

Table 1: Precision (P), Recall (R), and decision tree (DT) complexity, given as the number of used features (\(\text{N}_{\text{ft}}\)) and monitored layers (\(\text{N}_{\text{l}}\)), for different setups. Every detector was retrained 10 times with different random seeds, and the averages across all runs are given. Errors are shown when relevant at the given rounding. We list both the classifiers making use of all extracted quantiles (_full_) and the averaged reduced (_red_) detector models, where guided feature reduction was applied, see Fig. 3. Best-in-class detectors are highlighted in each column.

When using alternative metrics (not shown in Tab. 1) for comparison with other detector designs, we find that our method achieves class-wise misclassification rates ranging between 0.7% and 2.0%, depending on the model, which is on par with the results, for example, in Cheng et al. [5]. Similarly, the calculated class-wise true negative rates vary between 99.6% and 99.8%, reaching or exceeding the classifier performance in Zhao et al. [26]. Note that all mentioned references are limited to image classification networks. **Feature Reduction:** The number of monitored features can be drastically reduced without significantly affecting the detection performance. This means that many quantiles represent similar information and further distillation can be applied. For feature reduction, we follow two steps: First, all quantile features of the full model are ranked according to their Gini importance [22] in the decision tree. Then, we retrain the classifier with a successive number of features, starting from the most important one only, then the two most important ones, etc. A reduced model is accepted as efficient if it recovers at least 95% of both the precision and recall performance of the original model with all features.
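This two-step reduction loop can be condensed as follows (a sketch assuming the fitted full detector from above; macro-averaged scores stand in for the class-wise averaging):

```python
import numpy as np
from sklearn.base import clone
from sklearn.metrics import precision_score, recall_score

def reduce_features(full_clf, X_tr, y_tr, X_te, y_te, keep_ratio=0.95):
    # Rank features by Gini importance of the full model, then retrain with
    # a growing prefix of the ranking until 95% of full performance is kept
    order = np.argsort(full_clf.feature_importances_)[::-1]
    y_full = full_clf.predict(X_te)
    p_full = precision_score(y_te, y_full, average="macro")
    r_full = recall_score(y_te, y_full, average="macro")
    for k in range(1, len(order) + 1):
        cols = order[:k]
        clf = clone(full_clf).fit(X_tr[:, cols], y_tr)
        y_hat = clf.predict(X_te[:, cols])
        if (precision_score(y_te, y_hat, average="macro") >= keep_ratio * p_full
                and recall_score(y_te, y_hat, average="macro") >= keep_ratio * r_full):
            return cols, clf  # minimal accepted feature subset
    return order, full_clf
```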
Fig. 3 shows the results of the feature reduction. Inspecting performance trends from larger to smaller feature numbers, we observe that the detection rate stagnates over most of the elimination process, before dropping abruptly when the number of used features falls below a limit. On average, the number of monitored features and layers that are required to maintain close-to-original performance (as defined above) are as few as 2 to 4 and 2 to 3, respectively. For a model like Yolo, this means that only 2 out of the 75 convolution layers have to be supervised. The average characteristics of the resulting detector models are shown in Tab. 1 as the reduced (_red_) models. ### Minimal Monitoring Features **Minimal Feature Search:** The feature reduction process in Sec. 5.1 demonstrates that only a few strategic monitoring markers are needed to construct an efficient detector model. Figure 3: Precision and recall of class-wise SDC detection when reducing the number of monitored features (average of 10 independent runs). In this section, we elaborate further on to what extent the model can be compressed, and which features are the most relevant. We apply the following strategy, starting from a full classifier model using all quantile features: 1) Apply the feature reduction technique described in Sec. 5.1 to identify minimal monitoring features that maintain at least 95% of the original precision and recall. This combination of features is added to a pool of minimal model candidates. 2) A new instance of the full model is initiated and all feature candidates from the pool are eliminated. Return to the first step to find alternative candidates until a certain search depth (we choose 24) is exhausted. **Universal Trends:** The identified minimal feature combinations are shown in Fig. 4. We find that just 2 features from 2 different layers are sufficient to constitute an efficient error detector for all studied models except for AlexNet (4 features from 3 layers). Almost universally, one of the monitored layers needs to be among the very last layers of the deep neural network. Since memory faults are injected randomly across the network, errors in the last layers would go unnoticed otherwise. Only for SSD models, it turns out that most of the SDC faults occur in earlier layers, so that a supervision of the last layers is less crucial to achieve a similar statistical detection performance. We observe that it is favorable to supervise a higher percentile (e.g., \(q_{100}\)) in the later layers, especially in more shallow networks (AlexNet and SSD). This is because in shallow networks, peak shifts have a shorter propagation path and hence it is more important to intercept faults directly. This can only be achieved by the highest percentiles. In models with ReLU activation functions (all except Yolo here), the minimum quantile does not serve as a meaningful peak shift monitor as negative activations are clipped automatically. A second monitoring marker should be set in the first half of the network layer stack. This helps to identify input faults (which are interceptable from the very first layer) and discriminate them from memory faults.
**Explainability:** Given the above generalizable trends and the fully transparent nature of the classifier, we can make statements about the inner workings of the DNN that correlate a given input with an anomalous or regular outcome. Those statements can be interpreted intuitively by a human as a proxy of a decision, and hence qualify as an explanation [2]. ### Overhead We measure the average inference time per image when running the supervised model on random input, using the _Torch profiler_[21]. The profiled overall self compute time in Fig. 5 is shared between CPU and GPU. Compared to the feature map tracing method of Schorn et al. [24, 25], the quantile operation introduces additional compute, but at the same time saves the time of storing large tensors, due to the compression of many feature sums into only a few quantiles. Between these two opposing trends, full quantile monitoring turns out to be _faster_ than feature map tracing for all the studied models except for the shallow AlexNet, as shown in Fig. 5. If only selected layers are monitored to create a reduced classifier, the overhead can be decreased significantly. We find that the impact of minimal quantile monitoring on the overall inference time is between +0.3% and +1.6% for all studied object detection DNNs. For the image classification networks, on the other hand, quantile monitoring imposes a more significant overhead of +10.7% (ResNet) and +53.8% (AlexNet). This is because those networks have a much smaller number of parameters, such that the relative impact of quantile extraction with respect to the total number of operations is higher. Across all models, minimal quantile monitoring is \(>10\%\) faster than feature map tracing. In absolute numbers, the respective saving in inference time can be up to \(\sim\)10ms, which is a significant improvement for applications operating in real time, for example object detection in a self-driving vehicle. Figure 4: Minimal combinations of features as identified by the search process in Sec. 5.2. All combinations in (a)-(e) constitute a reduced classifier model with at least 95% of the performance of the respective full model. Inset numbers designate the percentile numbers (or combinations thereof if multiple combinations are equally valid). ### Comparison with Other Detector Approaches As an alternative to a decision tree, we can deploy a linear machine learning model for error detection (similar to [24]). We study the feasibility of doing so in this section. For this setup, we select Yolo+Kitti to train a classifier for 1000 epochs using the Adam optimizer and the cross-entropy loss. A batch size of 100 and learning rates optimized between \(1\times 10^{-4}\) and \(5\times 10^{-3}\) were chosen. In the simplest form, a perceptron without hidden layers, the algorithmic transparency is preserved and we find \(\mathrm{P_{cls}}=86.0\%\) and \(\mathrm{R_{cls}}=95.7\%\). If more hidden linear layers are added, higher detection rates can be achieved at the cost of explainability. For example, including one extra hidden layer with 64 neurons [24], we find a performance of \(\mathrm{P_{cls}}=88.9\%\) and \(\mathrm{R_{cls}}=96.3\%\); with three such extra layers we obtain \(\mathrm{P_{cls}}=91.7\%\) and \(\mathrm{R_{cls}}=95.1\%\). Compared to decision trees, however, this strategy suffers from more complex hyperparameter tuning and larger training times. Therefore, decision trees are a better fit for our use case.
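For reference, the linear and MLP baselines of this comparison can be set up in a few lines of PyTorch; the 64-neuron hidden width follows the text, while the class count and feature dimension (here the 825 Yolo features from Tab. 1) are placeholders:

```python
import torch
import torch.nn as nn

def make_detector(n_features, n_classes, n_hidden=0, width=64):
    # n_hidden = 0: plain linear detector (still transparent);
    # n_hidden = 1 or 3: the deeper variants discussed above
    layers, d = [], n_features
    for _ in range(n_hidden):
        layers += [nn.Linear(d, width), nn.ReLU()]
        d = width
    layers.append(nn.Linear(d, n_classes))
    return nn.Sequential(*layers)

model = make_detector(n_features=825, n_classes=6, n_hidden=1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # lr tuned in [1e-4, 5e-3]
loss_fn = nn.CrossEntropyLoss()
# Training: 1000 epochs over the anomaly feature vectors with batch size 100
```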
## 6 Summary and Future Work

In this paper, we show that critical silent data corruptions in computer vision DNNs (originating either from hardware memory faults or input corruptions) can be efficiently detected by monitoring the quantile shifts of the activation distributions in specific layers. In most studied cases, it is sufficient to supervise two layers with one quantile marker each to achieve high error detection rates of up to \(\sim\)96% precision and \(\sim\)98% recall. We also show that the strategic monitoring location can be associated with the concept of intercepting bulk and peak activation shifts, which gives a _novel, unifying perspective on the dependability of DNNs_. Due to the large degree of information compression in this approach, the compute overhead is in most cases only between 0.3% and 1.6% compared to the original inference time, and outperforms the comparable state of the art. In addition, we show that the method contributes to the model's explainability, as the error detection decision is interpretable and transparent. For future work, we can further guide the search for optimized minimal feature combinations, for example, by taking into account specifics of the model architecture.

Figure 5: Average inference time per image accumulated over CPU and GPU. We compare the original inference, reduced and full quantile monitoring, and feature map tracing (method of [24, 25]). In the setup, we run 100 random images with a batch size of 10 (with GPU enabled) and repeat 100 independent runs. System specifications: Intel® Core™ i9-12900K, Nvidia GeForce RTX 3090.

**Acknowledgement:** We thank Neslihan Kose Cihangir and Yang Peng for helpful discussions. This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 956123. This work was partially funded by the Federal Ministry for Economic Affairs and Climate Action of Germany, as part of the research project Safe-Wahr (Grant Number: \(19A21026C\)), and the Natural Sciences and Engineering Research Council of Canada (NSERC).
2309.06212
Long-term drought prediction using deep neural networks based on geospatial weather data
The problem of high-quality drought forecasting up to a year in advance is critical for agricultural planning and insurance. Yet, it remains unsolved with reasonable accuracy due to data complexity and the stochastic nature of aridity. We tackle drought data by introducing an end-to-end approach that adopts a spatio-temporal neural network model with accessible open monthly climate data as the input. Our systematic research employs diverse proposed models and five distinct environmental regions as a testbed to evaluate the efficacy of Palmer Drought Severity Index (PDSI) prediction. Key aggregated findings are the exceptional performance of a Transformer model, EarthFormer, in making accurate short-term (up to six months) forecasts, while the Convolutional LSTM excels in longer-term forecasting.
Alexander Marusov, Vsevolod Grabar, Yury Maximov, Nazar Sotiriadi, Alexander Bulkin, Alexey Zaytsev
2023-09-12T13:28:06Z
http://arxiv.org/abs/2309.06212v6
# Long Term Drought Prediction using Deep Neural Networks based on Geospatial Weather Data

###### Abstract

The accurate prediction of drought probability in specific regions is crucial for informed decision-making in agricultural practices. In particular, for long-term decisions, it is important to make predictions up to one year ahead. However, forecasting this probability presents challenges due to the complex interplay of various factors within the region of interest and neighboring areas. In this study, we propose an end-to-end solution based on various spatio-temporal neural networks to address this issue. The considered models predict drought intensity based on the Palmer Drought Severity Index (PDSI) for subregions of interest, leveraging intrinsic factors and insights from climate models to enhance drought predictions. Comparative evaluations demonstrate the superior accuracy of Convolutional LSTM and Transformer models compared to baseline Gradient Boosting and Logistic Regression solutions. These two models achieved impressive \(ROC\ AUC\) scores from 0.90 to 0.70 for forecast horizons from 1 to 6 months, outperforming the baseline models. The transformer showed superiority for shorter horizons, while ConvLSTM took the lead for longer horizons; thus, we recommend selecting the model according to the forecast horizon. To ensure the broad applicability of the considered models, we conduct extensive validation across regions worldwide, considering different environmental conditions. We also run several ablation and sensitivity studies to challenge our findings and provide additional information on an appropriate way to solve the problem at hand.

keywords: weather, climate, drought forecasting, deep learning, long-term forecasting

## 1 Introduction

Drought forecasting is one of the crucial problems nowadays [1]. Indeed, monitoring and forecasting droughts hold significant importance due to their high occurrence across diverse terrains [2]. These natural climate events are costly and have far-reaching impacts on populations and various economic sectors [3]. Additionally, it is worth noting that global climate change will likely increase the probability of droughts [4]. So, drought forecasting is an essential but complicated task, since a drought's beginning, ending, and duration are often hard to predict [5]. A quantitative measure of drought is needed to solve the task described above. However, it is well-known that drought depends on many other climatic factors, such as temperature and precipitation [6]. Hence, drought measures (indices) can be calculated in many ways. Among the most fundamental and widely known indices [7] are the Standardized Precipitation Index (SPI) [8] and the Palmer Drought Severity Index (PDSI) [9]. In our studies, we use monthly PDSI for several reasons. First of all, our forecasting horizon is quite long - 12 months. According to recent work [10], PDSI is a good choice for long-term forecasting. Also, PDSI has a long recording history and allows us to consider the effect of global warming, which is a pressing problem today [11]. More and more machine learning algorithms are being applied to drought forecasting tasks.
Regarding classic approaches, in [5] prominent stochastic models, AutoRegressive Integrated Moving Average (ARIMA) and multiplicative Seasonal AutoRegressive Integrated Moving Average (SARIMA), were exploited to predict drought via the SPI index. Multivariate regression, taking a history of PDSI and several other global climate indices as input, was used to predict the PDSI index in South-Central Oklahoma [10]. The gradient boosting algorithm has shown effectiveness for geospatial data [12; 13] and can handle imbalanced problems [14] encountered in drought prediction. Besides, this algorithm has proved its power in drought classification for Turkish regions [15]. The authors of [16] used a logistic regression model for a binary drought classification task via the SPI index. Moving on to deep learning methods, the authors of [17] proposed to use a recursive multistep neural network and a direct multistep neural network for drought forecasting via the SPI index. One of the most prominent approaches for time series data is the Recurrent Neural Network (RNN) [18]. It is known that RNNs are often better than classical approaches [19]. Indeed, research [20] shows that Long Short-Term Memory (LSTM) [21] outperforms ARIMA for long-term forecasting of the SPI index, while for short-term prediction ARIMA yields quite good results. To exploit the advantages of both ARIMA and LSTM, the authors of [22] proposed a hybrid ARIMA-LSTM model. The approaches described above used only historical (temporal) information about the data. Nevertheless, besides temporal information, the data has spatial dependencies. One of the most famous methods to account for both temporal and spatial information is ConvLSTM [23]. This method applies to many tasks, such as earthquake prediction [24]. The authors of [7] used ConvLSTM for short-term drought forecasting (8 days) of satellite-based drought indices - the Scaled Drought Condition Index (SDCI) and SPI. Another idea is to use Transformer architectures. Initially developed for Natural Language Processing (NLP) tasks [25], these models are now used in a wide variety of domains, e.g., processing video frames in Computer Vision (CV) [26]. Consequently, adopting attention-based architectures for spatio-temporal modeling is a natural idea. Two of the best-known modern weather transformers are EarthFormer [27] and FourCastNet [28]. These models solve different regression tasks (e.g., precipitation nowcasting) and show superior performance for various spatio-temporal problems. We adapted these architectures to our task. The applications of deep learning models in conjunction with geospatial data extend beyond drought forecasting and can be effectively employed in diverse weather scenarios. Unlike computationally intensive numerical weather prediction methods based on solving partial differential equations, recent research by [29] demonstrates the enhanced capabilities of transformer-based models in delivering accurate medium-range predictions for various upper-air and surface variables. Furthermore, [30] successfully combined multiple deep-learning models to rectify short-term wind predictions derived from conventional numerical algorithms. In the realm of meteorological forecasting complexity, [31] harnessed the power of Generative Adversarial Networks (GANs) and Variational Autoencoder GANs (VAE-GANs) to generate spatial forecasts of precipitation in the UK.
Similarly, in the context of the contiguous United States, [32] showcased the superior performance of a 3D convolutional network in short-term precipitation predictions. The existing literature highlights the strengths of deep learning models for generating short-term and medium-term forecasts, spanning durations from several hours to several days or even weeks. This diverse range of applications underscores the adaptability and potential of deep learning in enhancing meteorological prediction methodologies. Besides weather forecasting problems, deep learning models for geospatial data have found many applications. To begin with, this type of data can be transformed into images and then used to forecast traffic with CNNs, as in [33]. Naturally, one would exploit spatial dependencies to predict epidemic propagation, as in the case of the Madrid region and COVID-19 in the paper [34]. More recently, [35] reviewed various machine learning methods for forecasting dengue risk, including CNNs and RNNs. Next, in the paper by [36], geospatial LSTMs were exploited to predict invasive species outbreaks. Finally, various convolutional and recurrent neural network architectures were successfully exploited in crop yield forecasts, thus immediately providing economic value for the agricultural sector.

Our paper considers different classic (gradient boosting, logistic regression) and deep-learning (ConvLSTM, EarthFormer, and FourCastNet) methods for long-term drought forecasting using PDSI values. The main contributions of our research are:

1. Deep learning models solve the problem of medium-term (up to 12 months) drought forecasting. We formulate the problem as predicting the probability of a drought.
2. _Medium-term_ forecasting (up to 6 months ahead) is best served by **EarthFormer**, while _long-term_ forecasting is better suited to **ConvLSTM**; we recommend using them accordingly.
3. Surprisingly, we observed quite good performance of logistic regression and gradient boosting models, especially in _short-term_ forecasting, if the correct features are used. Detailed explanations and intuition on the achieved results are given in Section 4.1.

The list of key novelties of our work is the following:

1. We adopted and explored traditional spatio-temporal methods (ConvLSTM, gradient boosting, logistic regression) as well as modern approaches (EarthFormer, FourCastNet). To our knowledge, we are the first to use these models for long-term forecasting of PDSI values.
2. Unlike many previous works, which tried to predict the PDSI value directly (a regression task), we treated drought forecasting as a binary classification task. We chose \(-2\) as the threshold according to [10]: all PDSI values below \(-2\) are interpreted as a drought. A model thus reports the probability of a drought, making it more suitable for estimating financial risks for particular regions.
3. We provided extensive experiments using freely accessible data from diverse regions and continents.

## 2 Data

We have used publicly available geospatial data from Google Earth Engine [37]. Specifically, for obtaining the PDSI data, we employed the TerraClimate Monthly dataset [38]. Our PDSI data encompasses a comprehensive range of climatic values spanning from 1958 to 2022, covering the entirety of Earth. To test the consistency of the considered models, we look at regions across continents and various climate zones - from the state of Missouri and Eastern Europe to India. The considered regions are depicted in Figure 1.
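As a minimal illustration of the problem setup (the \(-2\) PDSI threshold and the out-of-time validation used later), the following sketch turns a monthly PDSI stack into binary drought labels with a 70/30 temporal split; the array shapes are those of a small region, and the synthetic data stands in for the TerraClimate values.

```
import numpy as np

# pdsi: 3D tensor (time, height, width) of monthly PDSI values for one region
rng = np.random.default_rng(0)
pdsi = rng.normal(0.0, 2.5, size=(754, 30, 60))  # synthetic stand-in for TerraClimate

# binary drought labels: 1 where PDSI <= -2 (moderate dry spell or worse)
labels = (pdsi <= -2.0).astype(np.int8)

# out-of-time split: first 70% of months for training, last 30% for testing
split = int(0.7 * pdsi.shape[0])
train_x, test_x = pdsi[:split], pdsi[split:]
train_y, test_y = labels[:split], labels[split:]
print(f"drought share (train): {train_y.mean():.2%}")
```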
Their characteristics are given in Table 1. The input data is downloaded as a tif file and later restructured as a 3D tensor, with one dimension representing the time scale (monthly intervals), while the other two dimensions correspond to the spatial coordinates (x and y) of the grid. The resolution of the regional dataset varies, with grid dimensions ranging from 30 by 60 up to 128 by 192 (the spatial resolution of a single cell of TerraClimate data is roughly 5 km), accommodating different levels of granularity and spatial detail.

Figure 1: Regions chosen for PDSI forecast

## 3 Methods

We compare deep learning approaches, including Convolutional LSTM and novel transformer models, such as FourCastNet from Nvidia and EarthFormer from Amazon, with classic methods, including a baseline model, gradient boosting, and logistic regression.

### Baseline

As a global baseline and a coherence check, we took the most prevalent class from the training data and compared it with the actual targets from the test subset. We also checked a rolling baseline - i.e., the most frequent class from recent history (from 6 to 24 months) - but the results were almost indistinguishable from the global baseline, so we kept them out of our tables and graphs.

### Basic methods: Logistic regression and Gradient boosting

Neither logistic regression nor gradient boosting can work with the raw data as is. Hence, we created a data module that treats each grid cell as an individual value and transforms our task into a typical time series forecasting problem. To benefit from spatial correlations, we incorporate values from neighboring cells. For example, if we consider a 3x3 neighborhood, this includes eight additional time series. It is important to note that for "edge" cells, some of the neighboring cells may contain all zeros due to data limitations.

\begin{table} \begin{tabular}{l c c c c} \hline Region & Span, & \% of normal & \% of drought & Spatial \\ & months & PDSI \(\geq-2\) & PDSI \(\leq-2\) & dimensions, km \\ \hline Missouri, USA & 754 & 74.91 & 25.09 & 416x544 \\ Madhya Pradesh, India & 754 & 70.66 & 29.34 & 512x768 \\ Goias, Brazil & 754 & 68.97 & 31.03 & 640x640 \\ Northern Kazakhstan & 742 & 68.70 & 31.30 & 256x480 \\ Eastern Europe & 742 & 66.28 & 33.72 & 352x672 \\ \hline \end{tabular} \end{table}

Table 1: Regions' summary statistics

**Logistic regression:** Logistic regression is usually the natural choice for tasks with linear dependence or as a baseline model. Recent research [39] shows that linear models are a good choice for time series.

**Gradient boosting:** We adopted gradient boosting of decision trees, implemented using the well-established XGBoost library [40]. XGBoost, renowned for its speed and efficiency across a wide range of predictive modeling tasks, has consistently been favored by data science competition winners. It operates as an ensemble of decision trees, with new trees aimed at rectifying errors made by the existing trees in the model. Trees are successively added until no further improvements can be made.

### Transformer-based methods

**FourCastNet:** FourCastNet (short for Fourier ForeCasting Neural Network) is a weather forecasting model developed at Nvidia [28]. It is part of their Modulus Sym deep learning framework for solving various applied physics tasks.
It combines the Adaptive Fourier Neural Operator (AFNO) from [41] with a Vision Transformer; the model is both compute-efficient, reducing the spatial mixing complexity to \(O(N\log N)\) for sequence length \(N\), and memory-efficient. In the original papers, the authors produced high-resolution short-term forecasts of wind speed and precipitation. We modified the last layer of the model to switch it from a regression to a classification task and evaluated it on the long-term forecasting problem.

**EarthFormer:** Vanilla transformers have \(O(N^{2})\) complexity, and consequently it is hard to apply them to spatio-temporal weather data because of its large dimensionality. The authors of [27] propose a "divide and conquer" method: they divide the original data into non-intersecting parts (called cuboids) and apply the self-attention mechanism to each cuboid separately, in parallel. Such an approach significantly reduces the complexity. The authors introduced EarthFormer for regression tasks, so we adapted it to a classification problem.

### ConvLSTM

This model draws inspiration from the work presented in the paper by Kail et al. (2021) [24], and it represents a modified version of the Convolutional LSTM architecture proposed by Shi et al. (2015) [23]. To capture temporal dependencies, we adopt Recurrent Neural Networks (RNNs). Specifically, we employ Long Short-Term Memory (LSTM), a type of RNN that utilizes an additional state cell to facilitate long-term memory retention. We extend the traditional one-dimensional hidden states of LSTM to two-dimensional feature maps, enabling grid-to-grid transformations, which are crucial for our task. We process grids of PDSI values with various dimensions, ranging from 9x16 to 40x200. To exploit spatial dependencies among drought severity values, we incorporate Convolutional Neural Networks (CNNs), well-suited for image and two-dimensional signal processing, including geospatial data. This approach combines the strengths of both RNN and CNN architectures by passing information through the RNN component as a feature map obtained using CNN operations. This integration allows us to capture and utilize temporal and spatial information within the following architecture.

**Details of architecture:** The Convolutional LSTM follows this pipeline:

1. We represent the data as a sequence of grids: for each cell, we specify the value of PDSI for a particular month; the input grid at each time moment has dimension \(grid_{h}\times grid_{w}\) (varying from 50x50 to 200x200 for different regions of interest).
2. We pass the input grid through a convolutional network to create an embedding of grid dimensionality with 16 channels. As the output of the LSTM at each time moment, we have a hidden representation (short-term memory) of size \(h\times grid_{h}\times grid_{w}\), a cell (long-term memory) representation of a similar size, and an output of size \(h\times grid_{h}\times grid_{w}\).
3. We transform the output to size \(1\times grid_{h}\times grid_{w}\) using a \(1\times 1\) convolution to obtain the probability of drought for each cell as the final prediction (or to \(k\times grid_{h}\times grid_{w}\), where \(k>2\) is the number of drought-condition classes we are trying to predict).

As an additional hyperparameter, we vary the forecasting horizon - i.e., we can forecast PDSI for the next month or for the \(k\)-th month ahead.
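The following is a minimal PyTorch sketch of the grid-to-grid pipeline above, assuming a single ConvLSTM cell with 16-channel hidden maps and a \(1\times 1\) output convolution; the exact layer counts and kernel sizes of the paper's model may differ, and all names are illustrative.

```
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Convolutional LSTM cell: hidden and cell states are 2D feature maps."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.hid_ch = hid_ch
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, h, c):
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

class DroughtConvLSTM(nn.Module):
    def __init__(self, hid_ch=16):
        super().__init__()
        self.embed = nn.Conv2d(1, hid_ch, 3, padding=1)   # grid embedding
        self.cell = ConvLSTMCell(hid_ch, hid_ch)
        self.head = nn.Conv2d(hid_ch, 1, 1)               # 1x1 conv -> per-cell logit

    def forward(self, x):                                  # x: (B, T, H, W) of PDSI
        B, T, H, W = x.shape
        h = x.new_zeros(B, self.cell.hid_ch, H, W)
        c = torch.zeros_like(h)
        for t in range(T):                                 # unroll over months
            h, c = self.cell(self.embed(x[:, t:t + 1]), h, c)
        return torch.sigmoid(self.head(h))                 # (B, 1, H, W) drought probs

probs = DroughtConvLSTM()(torch.randn(2, 12, 30, 60))      # 12 months of history
```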
## 4 Results

**Description of experiment:** For the binary classification problem, we set the drought threshold at \(-2\), which can be justified by the PDSI bins in Table 2; it is adjusted later for further experiments. For each cell in a considered region, we solve a binary classification problem whose true label indicates whether PDSI falls below \(-2\). By construction, the model outputs the probability of a serious drought. For validation purposes, we have divided our dataset into train (\(70\%\)) and test (\(30\%\)) subsets for all five world regions (Missouri, Northern Kazakhstan, Madhya Pradesh, Eastern Europe, and Goias). The model is trained on the train subset of the data, and the quality metrics are calculated on an independent test subset. The validation is out-of-time: the test subset follows the training subset in time.

**Evaluation procedure:** We use the \(ROC\,AUC\), \(PR\,AUC\), and \(F1\) scores to evaluate the model. During validation for early stopping and hyperparameter optimization, we chose the \(ROC\,AUC\) score. All these scores are medians - i.e., concatenating the spatial predictions, we receive a temporal vector at every cell, compute a single score for each cell, so that we end up with a grid full of metrics, and finally we compute the median. Higher values of all scores correspond to better models. The \(ROC\,AUC\) score equals \(1\) for a perfect predictor and \(0.5\) for a random one.

**Compared methods:** For our main experiments, we evaluated the performance of the baseline (most frequent class from historical data), XGBoost (gradient boosting of decision trees), LogReg (logistic regression), Convolutional LSTM (ConvLSTM), FourCastNet (a transformer model from Nvidia), and EarthFormer (a transformer model from Amazon). XGBoost and LogReg are two basic algorithms often used as strong baselines. The last three are variants of neural networks that showed strong performance in various geospatial problems. They represent the two dominating architectures in geospatial modeling: ConvLSTM is a combination of recurrent and convolutional neural networks, while FourCastNet and EarthFormer are attention-based transformers.

### Main results

**Analysis of results:** Our findings indicate that the ConvLSTM model outperformed the competitors, including the gradient boosting tree algorithm, achieving an impressive \(ROC\,AUC\) score of \(0.9\) for a one-month prediction. Notably, ConvLSTM exhibits a gradual decline in performance, reaching \(0.6-0.65\) as the forecasting horizon extends from \(1\) month to \(12\) months. The standard gradient boosting approach initially yielded a similar \(ROC\,AUC\) score of \(0.9\) but sharply dropped to \(0.5\) as the forecasting horizon extended. EarthFormer outperforms the other approaches for shorter horizons while falling short of ConvLSTM at longer horizons of 9-12 months. These results are visually depicted in Figure 2. Additionally, we present the results for six-month prediction by region in Figure 4, where the variation in scores for different geographies across models is visible.

_Why does the transformer fail in long-term prediction?_ We attribute this behavior to the permutation invariance of the attention mechanism. Despite positional encoding, transformers cannot effectively extract temporal information from long input sequences. Since a long input sequence is essential for long-term forecasting, transformers do not show good results. Similar results for different time-series tasks were observed in [39].
ConvLSTM, by contrast, naturally extracts temporal information via its LSTM component.

_Why is logistic regression impressive?_ Since the gradient boosting results are almost identical to those of logistic regression, we discuss only the logistic regression performance. First of all, the power of linear models was already shown in [39], where they beat modern Transformer architectures on almost all datasets. In our experiments, linear models are worse than the other models in _long-term_ prediction, but on the _short-term_ scale, we see quite comparable results. We tried different history lengths, but our results show that it is enough to take only the history element nearest to the forecast horizon. Our intuition is that the near-future target variables are closely (in particular, linearly) related to this history element. For example, PDSI in July is close to PDSI in August but far away from PDSI in December. Hence, linear models are good at _short-term_ predictions but poor at _long-term_ forecasting.

\begin{table} \begin{tabular}{l l} \hline PDSI value & Drought severity class \\ \hline 4.00 and above & Extreme wet spell \\ 3.00-3.99 & Severe wet spell \\ 2.00-2.99 & Moderate wet spell \\ 1.00-1.99 & Mild wet spell \\ -0.99 to 0.99 & Normal \\ -1.00 to -1.99 & Mild dry spell \\ -2.00 to -2.99 & Moderate dry spell \\ -3.00 to -3.99 & Severe dry spell \\ -4.00 and below & Extreme dry spell \\ \hline \end{tabular} \end{table}

Table 2: Classification of various PDSI values, source [42]

\begin{table} \begin{tabular}{r r r r r r} \hline \hline Horizon, months & 1 & 3 & 6 & 9 & 12 \\ \hline \multicolumn{6}{l}{Median ROC AUC:} \\ \hline Baseline & 0.5 & 0.5 & 0.5 & 0.5 & 0.5 \\ LogReg & 0.886 & 0.774 & 0.640 & 0.546 & 0.518 \\ XGBoost & 0.878 & 0.754 & 0.628 & 0.568 & 0.542 \\ FourCastNet & 0.881 & 0.711 & 0.624 & 0.561 & 0.536 \\ EarthFormer & **0.948** & **0.840** & 0.690 & 0.556 & 0.480 \\ ConvLSTM & 0.887 & 0.802 & **0.693** & **0.650** & **0.617** \\ \hline \multicolumn{6}{l}{Median PR AUC:} \\ \hline Baseline & 0.355 & 0.354 & 0.354 & 0.355 & 0.356 \\ LogReg & 0.766 & 0.598 & 0.470 & 0.394 & 0.360 \\ XGBoost & 0.752 & 0.574 & 0.44 & 0.39 & 0.372 \\ FourCastNet & 0.776 & 0.546 & 0.455 & 0.402 & 0.382 \\ EarthFormer & **0.880** & 0.650 & 0.514 & 0.438 & 0.362 \\ ConvLSTM & 0.772 & **0.689** & **0.565** & **0.505** & **0.452** \\ \hline \multicolumn{6}{l}{Median F1:} \\ \hline Baseline & 0.645 & 0.646 & **0.646** & **0.645** & **0.644** \\ LogReg & **0.846** & **0.698** & 0.480 & 0.226 & 0.094 \\ XGBoost & 0.836 & 0.674 & 0.480 & 0.366 & 0.292 \\ FourCastNet & 0.831 & 0.603 & 0.460 & 0.366 & 0.314 \\ EarthFormer & 0.816 & 0.604 & 0.448 & 0.226 & 0.178 \\ ConvLSTM & 0.784 & **0.698** & 0.600 & 0.558 & 0.543 \\ \hline \hline \end{tabular} \end{table}

Table 3: Median Metrics vs Forecast Horizon, binary classification; best values are in bold, second best are underlined

Figure 2: Median metrics (ROC-AUC, PR-AUC, F1 from left to right) for different forecast horizons averaged over the five considered regions, binary drought classification

### Predictions and errors for a particular region

To assess the performance of the Convolutional LSTM algorithm (which proved to be the most stable and promising for drought forecasting), we focused on the state of Missouri, where we run several ablation studies. As an illustration, the spatial distribution of \(ROC\,AUC\) scores is depicted in Figure 3. Notably, we observed a non-uniform distribution of \(ROC\,AUC\) values across the cells within the region.
The standard deviation of the scores is substantial, and individual values range from those close to random predictors (\(ROC\ AUC=0.6\)) to near-perfect scores approaching 1.0. This variability highlights the diverse predictive capabilities of our algorithm across different spatial locations within Missouri.

Figure 3: Spatial distribution of ROC AUC for the 6-month forecast, Missouri, ConvLSTM

\begin{table} \begin{tabular}{r c c c c c} \hline \hline Region & Northern & Eastern & Madhya & Goias & Missouri \\ & Kazakhstan & Europe & Pradesh & & \\ \hline median ROC AUC: & & & & & \\ \hline Baseline & 0.5 & 0.5 & 0.5 & 0.5 & 0.5 \\ LogReg & 0.60 & 0.63 & 0.61 & 0.61 & **0.75** \\ XGBoost & 0.59 & 0.64 & 0.59 & 0.61 & 0.71 \\ FourCastNet & 0.60 & 0.55 & 0.65 & 0.61 & 0.69 \\ EarthFormer & 0.54 & **0.69** & **0.84** & **0.65** & 0.73 \\ ConvLSTM & **0.71** & 0.68 & 0.71 & 0.60 & **0.75** \\ \hline median PR AUC: & & & & & \\ \hline Baseline & 0.37 & 0.43 & 0.35 & 0.39 & 0.23 \\ LogReg & 0.46 & **0.67** & 0.24 & **0.55** & 0.43 \\ XGBoost & 0.46 & 0.63 & 0.20 & 0.54 & 0.37 \\ FourCastNet & 0.46 & 0.55 & 0.36 & 0.54 & 0.37 \\ EarthFormer & 0.47 & 0.22 & **0.77** & 0.52 & **0.59** \\ ConvLSTM & **0.55** & 0.65 & 0.57 & 0.54 & 0.50 \\ \hline median F1: & & & & & \\ \hline Baseline & 0.63 & 0.57 & 0.65 & **0.61** & **0.77** \\ LogReg & 0.42 & 0.62 & 0.35 & 0.41 & 0.60 \\ XGBoost & 0.45 & 0.58 & 0.30 & 0.54 & 0.53 \\ FourCastNet & 0.42 & 0.39 & 0.46 & 0.51 & 0.52 \\ EarthFormer & 0.03 & 0.18 & **0.80** & 0.55 & 0.65 \\ ConvLSTM & **0.65** & **0.64** & 0.64 & 0.52 & 0.56 \\ \hline \hline \end{tabular} \end{table}

Table 4: Median Metrics vs Region, binary classification, 6 months horizon; best values are in bold, second best are underlined

#### 4.2.1 Performance Evaluation for Cropped Region

**Description of experiment:** As is typical with ROC AUC maps, the worst predictions are found on the edges and in some corners. We have observed that this behavior is consistent regardless of the region being studied. Consequently, making predictions for a larger region and cropping the desired region of interest may be advantageous. We have conducted a study to test this hypothesis for the attached map, and the results are shown in Table 5.

\begin{table} \begin{tabular}{l c c c c c} \hline \hline Percent of map cropped & 0 & 10 & 20 & 30 & 40 \\ median ROC AUC & 0.7525 & 0.7592 & 0.7665 & 0.7749 & 0.7834 \\ \hline Percent of map cropped & 50 & 60 & 70 & 80 & 90 \\ median ROC AUC & 0.7886 & 0.7899 & 0.7880 & 0.7838 & 0.7825 \\ \hline \hline \end{tabular} \end{table}

Table 5: ROC AUC score vs crop percentage, 6-month forecast, ConvLSTM model for Missouri

**Analysis of results:** Based on the findings of this specific experiment, we deduce that cropping approximately 40-50% of the initially selected region maximizes our score. In other words, choosing a region that is initially 1.6-2 times larger than our target region is advisable. However, the precise amount of zoom required for optimal results has to be determined through further experiments; a sketch of this crop-and-evaluate procedure is given below. Next, we conducted several similar experiments to investigate how the model predictions change as the total covered area decreases. For this, we took the same geographic region, Missouri, and examined various combinations of history length and forecast horizon. We trained a new checkpoint for each variant of history length, forecast horizon, and region area (varying from the entire state to about a quarter of it).
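The crop-and-evaluate procedure referenced above can be summarized in a few lines: given the per-cell ROC AUC map, remove a symmetric border so that a given percentage of the area is cropped, and take the median of the remaining cells. The map below is synthetic, and the helper name is ours, not the authors'.

```
import numpy as np

def median_roc_auc_after_crop(auc_map: np.ndarray, crop_percent: float) -> float:
    """Drop a symmetric border so that `crop_percent` of the map area is removed,
    then return the median ROC AUC of the remaining interior cells."""
    h, w = auc_map.shape
    keep = np.sqrt(1.0 - crop_percent / 100.0)     # linear scale of the kept area
    dh, dw = int(h * (1 - keep) / 2), int(w * (1 - keep) / 2)
    inner = auc_map[dh:h - dh, dw:w - dw]
    return float(np.median(inner))

# synthetic per-cell ROC AUC map: weaker on the edges, as observed in Fig. 3
rng = np.random.default_rng(1)
auc_map = np.clip(0.78 + 0.05 * rng.standard_normal((30, 60)), 0.5, 1.0)
auc_map[:3, :] -= 0.1; auc_map[-3:, :] -= 0.1      # degraded border rows

for p in (0, 20, 40, 60):
    print(p, round(median_roc_auc_after_crop(auc_map, p), 4))
```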
The results are summarized in Table 6 and presented in more detail in Tables 8, 9, 10, and 11. We observed that predicting on a smaller area with a model pre-trained on a larger area generally works better. However, the degree of improvement is marginal, usually not exceeding 0.5-1%. Figure 4 presents the evolution of the spatial maps. convex [44], and gradient boosting is an ensemble per se [15]. The obtained ensembles can also be used to assess the uncertainty of predictions by machine learning models, improving the decision-making process [45].

## 5 Conclusion

Droughts have emerged as severe natural disasters that significantly impact the economy, influencing sectors such as agriculture, and the well-being of populations. The frequency and intensity of droughts have been amplified worldwide by climate change, as exemplified by the summer of 2022 in the Northern Hemisphere. Accurate prediction of future drought occurrences and adequate preparation for their consequences are crucial for mitigating the adverse impacts of climate change. Our research has shown substantial improvements in forecasting capabilities for drought indicators, in particular the Palmer Drought Severity Index (PDSI), by combining convolutional and recurrent neural networks. This Convolutional LSTM model surpasses the performance of standard algorithms, such as gradient boosting, and even more advanced transformer models for long-term predictions. By enhancing our understanding and predictive abilities related to droughts, we can proactively address the challenges of climate change. We hope that the insights gained from this study will contribute to better preparedness and mitigation strategies to alleviate the adverse effects of droughts on the economy and society.
2309.09108
Neural Network-based Fault Detection and Identification for Quadrotors using Dynamic Symmetry
Autonomous robotic systems, such as quadrotors, are susceptible to actuator faults, and for the safe operation of such systems, timely detection and isolation of these faults are essential. Neural networks can be used for verification of actuator performance via online actuator fault detection with high accuracy. In this paper, we develop a novel model-free fault detection and isolation (FDI) framework for quadrotor systems using long-short-term memory (LSTM) neural network architecture. The proposed framework only uses system output data and the commanded control input and requires no knowledge of the system model. Utilizing the symmetry in quadrotor dynamics, we train the FDI for a fault in just one of the motors (e.g., motor $\#2$), and the trained FDI can predict faults in any of the motors. This reduction in search space enables us to design an FDI for partial fault as well as complete fault scenarios. Numerical experiments illustrate that the proposed NN-FDI correctly verifies the actuator performance and identifies partial as well as complete faults with over $90\%$ prediction accuracy. We also illustrate that model-free NN-FDI performs at par with model-based FDI, and is robust to model uncertainties as well as distribution shifts in input data.
Kunal Garg, Chuchu Fan
2023-09-16T22:59:09Z
http://arxiv.org/abs/2309.09108v1
# Neural Network-based Fault Detection and Identification for Quadrotors using Dynamic Symmetry

###### Abstract

Autonomous robotic systems, such as quadrotors, are susceptible to actuator faults, and for the safe operation of such systems, timely detection and isolation of these faults are essential. Neural networks can be used for verification of actuator performance via online actuator fault detection with high accuracy. In this paper, we develop a novel model-free fault detection and isolation (FDI) framework for quadrotor systems using long-short-term memory (LSTM) neural network architecture. The proposed framework only uses system output data and the commanded control input and requires no knowledge of the system model. Utilizing the symmetry in quadrotor dynamics, we train the FDI for a fault in just one of the motors (e.g., motor \(\#2\)), and the trained FDI can predict faults in any of the motors. This reduction in search space enables us to design an FDI for partial fault as well as complete fault scenarios. Numerical experiments illustrate that the proposed NN-FDI correctly verifies the actuator performance and identifies partial as well as complete faults with over \(90\%\) prediction accuracy. We also illustrate that model-free NN-FDI performs at par with model-based FDI, and is robust to model uncertainties as well as distribution shifts in input data.

## I Introduction

Safety-critical systems are those where violation of safety constraints could result in loss of lives, significant property damage, or damage to the environment. In real-life applications, many cyber-physical control systems are safety-critical, including autonomous cars, unmanned aerial vehicles (UAVs), and aircraft, where safety pertains to keeping the autonomous agent in a predefined safe set, away from obstacles and other agents in its environment. In this context, safe control requires finding a control policy that keeps the system within the safe region at all times. As autonomous systems become more complex (thus increasing the likelihood of faults [1]), it becomes necessary to explicitly consider the possibility of faults in their actuators, which can make it difficult (or even impossible in certain cases) to keep the system safe. Many real-world flight incidents have been attributed to actuator failures such as runaway, sticking, and floating [2]. Such failures have been studied in the field of fault-tolerant control (FTC), which has been applied extensively to applications such as aircraft [3, 4, 5, 6] and spacecraft attitude control [7, 8]. There is a long history of work in control theory dealing with adaptation to system faults and verification of safe controllers. Due to space limits, we only discuss common FTC techniques for actuator faults. Classical methods include robust control [9]; more recent works include robust MPC [10] and quantitative resilience [11]. In this paper, we focus on safe fault-tolerant control, where the salient issue is ensuring that the system will avoid entering an unsafe set despite actuator faults such as loss of control authority or unknown input disturbances. FTC methods are, in general, classified into two categories: active and passive. Active FTC uses detection techniques and a supervisory system to detect the fault and modify the control structure accordingly. Passive FTC relies on a robust compensator to reduce the effects of faults. For a more in-depth explanation of the passive and active FTC theory, see [2].
Many FTC approaches have been presented in the literature to accommodate actuator faults. Fuzzy logic control [12] and data-driven approaches such as [1] are used for compensating unknown nonlinear dynamics and actuator faults in multi-agent systems. Feedback linearizing control [13, 5], model predictive control [3, 10], sliding mode control [4], and adaptive sliding mode control [6] have been implemented on various nonlinear systems under faults, such as a quadrotor subject to one or more motor failures. Adaptive control [7] and robust adaptive control [8] were studied for linear systems under actuator fault, model uncertainty, and external disturbance. Adaptive fuzzy FTC is presented in [14] for actuator faults in Markov jump systems. However, FTC-based approaches can be conservative without accurate identification of the source of the fault. Thus, it is essential to design a highly reliable fault detection and identification (FDI) framework that can predict a fault with high accuracy. There is a plethora of work on FDI; we refer interested readers to the survey articles [15, 16, 17] that discuss various approaches to FDI used in the literature. In particular, the residual-based method has been used very commonly in prior work, where the expected state or output (under the commanded input and a known system model) and the actual state or output of the system are compared for fault detection. The authors in [18] study FDI for a linear parameter-varying (LPV) system and use an observer-based FDI built on the residual data. Such _residual_ information requires knowledge of the system model and is thus model-dependent. Another example of a model-based approach is [19], where model-based output-residuals are used instead of state-residuals. The work in [20] is capable of handling partial faults but not complete faults in an actuator. Most of the work on adaptation-based FDI uses a linearized model for the system [21, 22]. Neural network (NN)-based verification and system monitoring have been successfully used for FDI. The architecture of the considered NN is very important for such verification problems. Fault detection using system trajectory data can be interpreted as anomaly detection in time-series data, and hence, long-short-term memory (LSTM)-based NNs become a natural choice for FDI as they can effectively handle time-series data [23, 24, 25]. There is some work on using LSTM-based FDI, e.g., [26, 27], but it is limited to a very narrow class of faults. As noted in [28], prior work on neural network-based model-free FDI relies on reconstruction of the model using artificial neural networks (e.g., [29]), or on generating the residual information using Kalman filtering (see, e.g., [30]). The method in [28] also estimates a reduced-order model of the system as an intermediate step. The main disadvantage of model-based FDI methods is that their performance can degrade significantly due to imperfections and uncertainties between the model used for designing the FDI mechanism and the actual system model. To overcome this limitation, in this paper, we design a _model-free_ FDI mechanism that only uses the output of the system and the commanded input to the system, and does not use the residual information. The paper's contributions are summarized below:

* We present a _truly_ model-free approach, where we do not need to learn the system model or create a reduced-order representation of it.
Instead, we use the system output and the commanded input as the features of a neural network, which directly predicts whether there is an actuator fault.
* We consider a variety of partial fault scenarios and leverage the symmetry in quadrotor dynamics to reduce the search space for training, designing an NN-FDI trained on the failure of just one motor that is capable of predicting a fault in any of the quadrotor motors.
* We illustrate through numerical experiments that the model-free FDI mechanism performs at par with (and in some cases even better than) the model-based mechanisms.
* We also illustrate the robustness of the proposed method against modeling uncertainties and demonstrate through numerical examples that while the performance of the model-based FDI mechanism drops significantly under model uncertainties, the performance of the designed model-free approach remains the same.

## II Problem formulation

We start by presenting the quadrotor dynamics, which can be written compactly as:
\[\dot{x}=f(x)+g(x)u, \tag{1a}\]
\[y=\rho(x), \tag{1b}\]
for state \(x\in\mathcal{X}\), control input \(u\in\mathcal{U}\), and state and control sets \(\mathcal{X}\subset\mathbb{R}^{n}\) and \(\mathcal{U}\subset\mathbb{R}^{m}\), respectively. Here, \(\rho:\mathbb{R}^{12}\rightarrow\mathbb{R}^{6}\) is the output map consisting of the position and attitude vectors of the quadrotor, i.e., \(y=(p_{x},p_{y},p_{z},\phi,\theta,\psi)\). Such an output model is realized using a 6-DOF Inertial Measurement Unit (IMU) output. The 6-DOF quadrotor dynamics are given in [31], with \(x\in\mathbb{R}^{12}\) consisting of positions, velocities, angular positions, and angular velocities, and \(u\in\mathbb{R}^{4}\) the input vector:
\[\dot{p}_{x}=\big(c(\phi)c(\psi)s(\theta)+s(\phi)s(\psi)\big)w-\big(s(\psi)c(\phi)-c(\psi)s(\phi)s(\theta)\big)v+u\,c(\psi)c(\theta) \tag{2a}\]
\[\dot{p}_{y}=\big(s(\phi)s(\psi)s(\theta)+c(\phi)c(\psi)\big)v-\big(c(\psi)s(\phi)-s(\psi)c(\phi)s(\theta)\big)w+u\,s(\psi)c(\theta) \tag{2b}\]
\[\dot{p}_{z}=w\,c(\psi)c(\phi)-u\,s(\theta)+v\,s(\phi)c(\theta) \tag{2c}\]
\[\dot{u}=r\,v-q\,w+g\,s(\theta) \tag{2d}\]
\[\dot{v}=p\,w-r\,u-g\,s(\phi)c(\theta) \tag{2e}\]
\[\dot{w}=q\,u-p\,v+\frac{U_{1}}{m}-g\,c(\theta)c(\phi) \tag{2f}\]
\[\dot{\phi}=r\,\frac{c(\phi)}{c(\theta)}+q\,\frac{s(\phi)}{c(\theta)} \tag{2g}\]
\[\dot{\theta}=q\,c(\phi)-r\,s(\phi) \tag{2h}\]
\[\dot{\psi}=p+r\,c(\phi)t(\theta)+q\,s(\phi)t(\theta) \tag{2i}\]
\[\dot{r}=\frac{1}{I_{zz}}\big(U_{2}-pq(I_{yy}-I_{xx})\big) \tag{2j}\]
\[\dot{q}=\frac{1}{I_{yy}}\big(U_{3}-pr(I_{xx}-I_{zz})\big) \tag{2k}\]
\[\dot{p}=\frac{1}{I_{xx}}\big(U_{4}+qr(I_{zz}-I_{yy})\big) \tag{2l}\]
where \(m,I_{xx},I_{yy},I_{zz},k_{r},k_{t}>0\) are system parameters, \(g=9.8\) is the gravitational acceleration, \(c(\cdot),s(\cdot),t(\cdot)\) denote \(\cos(\cdot),\sin(\cdot),\tan(\cdot)\), respectively, \((p_{x},p_{y},p_{z})\) denotes the position of the quadrotor, \((\phi,\theta,\psi)\) its Euler angles, and \(u=(U_{1},U_{2},U_{3},U_{4})\) the input vector consisting of the thrust \(U_{1}\) and moments \(U_{2},U_{3},U_{4}\).
The relation between the vector \(u\) and the individual motor speeds is given as
\[\begin{bmatrix}U_{1}\\ U_{2}\\ U_{3}\\ U_{4}\end{bmatrix}=\begin{bmatrix}C_{T}&C_{T}&C_{T}&C_{T}\\ -dC_{T}\sqrt{2}&-dC_{T}\sqrt{2}&dC_{T}\sqrt{2}&dC_{T}\sqrt{2}\\ -dC_{T}\sqrt{2}&dC_{T}\sqrt{2}&dC_{T}\sqrt{2}&-dC_{T}\sqrt{2}\\ -C_{D}&C_{D}&-C_{D}&C_{D}\end{bmatrix}\begin{bmatrix}\omega_{1}^{2}\\ \omega_{2}^{2}\\ \omega_{3}^{2}\\ \omega_{4}^{2}\end{bmatrix}, \tag{3}\]
where \(\omega_{i}\) is the angular speed of the \(i\)-th motor for \(i\in\{1,2,3,4\}\), \(C_{D}\) is the drag coefficient, and \(C_{T}\) is the thrust coefficient. These parameters are given as \(I_{xx}=I_{yy}=1.395\times 10^{-5}\) kg-m\({}^{2}\), \(I_{zz}=2.173\times 10^{-5}\) kg-m\({}^{2}\), \(m=0.0299\) kg, \(C_{T}=3.1582\times 10^{-10}\) N/rpm\({}^{2}\), \(C_{D}=7.9379\times 10^{-12}\) N/rpm\({}^{2}\), and \(d=0.03973\) m (see [31]). In this paper, we consider an actuator fault occurring at some unknown time \(t_{F}\geq 0\):
\[u(t,x)=\begin{cases}\pi(t,x)&\text{if}\quad t\leq t_{F};\\ \text{diag}(\Theta)\,\pi(t,x)&\text{if}\quad t>t_{F},\end{cases} \tag{4}\]
where \(\Theta\in[0,1]^{m}\) is the vector denoting whether each actuator is faulty, and \(\text{diag}:\mathbb{R}^{m}\rightarrow\mathbb{R}^{m\times m}\) maps a vector in \(\mathbb{R}^{m}\) to a diagonal matrix in \(\mathbb{R}^{m\times m}\). If the \(i\)-th actuator is faulty, then \(\Theta_{i}\in[0,1)\), and the rest of the elements of \(\Theta\) are 1. The problem statement is to design an NN-based FDI \(\Theta_{NN}\) that correctly predicts which actuator has a fault and identifies the degree of the fault.

## III Neural Fault-detection and Isolation

### _Model-free FDI_

The faults must be detected correctly and promptly for the safe recovery of the system. We use a learning-based approach to design a fault-detection mechanism. Let \(\Theta\in[0,1]^{m}\) denote the fault vector, where \(\Theta_{i}<1\) indicates that the \(i\)-th actuator is faulty, while \(\Theta_{i}=1\) denotes it is not faulty. Let \(\Theta_{NN}:\mathbb{Y}\times\mathbb{U}\rightarrow\mathbb{R}^{m}\) be the _predicted_ fault vector, parameterized as a neural network. Here, \(\mathbb{Y}=\{y(\cdot)\mid y(\cdot)=\rho(x(\cdot))\}\) is a function space consisting of output trajectories, and \(\mathbb{U}=\{u(\cdot)\mid u(\cdot)\in\mathcal{U}\}\) is a function space consisting of input signals. To generate the residual data, knowledge of the system model is essential, which makes the residual-based approach model-dependent. This is the biggest limitation of this approach, as modeling errors can lead to severe performance issues in fault detection due to model uncertainties. To overcome this, we propose a model-free NN-based FDI mechanism that only uses \((y,u)\) as the feature data, i.e., it does not require the model-based residual information. For a given time length \(T>0\), at any given time instant \(t\geq T\), the NN function \(\Theta_{NN}\) takes a finite trace of the system output \(y(t-\tau)|_{\tau=0}^{T}\) and the _commanded_ input signal \(u(t-\tau)|_{\tau=0}^{T}\) as input, and outputs the vector of predicted faults. Using the symmetry of the quadrotor (i.e., \(I_{xx}=I_{yy}\)), it is possible to learn the fault detector for just one of the actuators and detect which motor is faulty using rotational invariance.
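As an illustration of the fault model (4) together with the motor mixing (3), the following sketch injects a partial fault into the commanded per-motor inputs during a rollout. Applying \(\text{diag}(\Theta)\) at the level of squared motor speeds is our interpretation for illustration; the coefficient values are the Crazyflie numbers quoted above, and the function names are ours.

```
import numpy as np

C_T, C_D, d = 3.1582e-10, 7.9379e-12, 0.03973  # thrust/drag coeffs, arm length

# mixing matrix from Eq. (3): squared motor speeds -> thrust U1 and moments U2..U4
MIX = np.array([
    [ C_T,               C_T,               C_T,               C_T             ],
    [-d*C_T*np.sqrt(2), -d*C_T*np.sqrt(2),  d*C_T*np.sqrt(2),  d*C_T*np.sqrt(2)],
    [-d*C_T*np.sqrt(2),  d*C_T*np.sqrt(2),  d*C_T*np.sqrt(2), -d*C_T*np.sqrt(2)],
    [-C_D,               C_D,              -C_D,               C_D             ],
])

def faulty_input(u_cmd: np.ndarray, t: float, t_F: float, theta: np.ndarray):
    """Fault model of Eq. (4): after the unknown fault time t_F, the commanded
    per-motor input is scaled by diag(theta), e.g. theta = [1, 0.4, 1, 1]
    for a 60% loss of effectiveness in motor #2."""
    return u_cmd if t <= t_F else np.diag(theta) @ u_cmd

theta = np.array([1.0, 0.4, 1.0, 1.0])   # partial fault in motor #2
omega_sq = np.full(4, 2.0e7)             # commanded squared motor speeds (rpm^2)
u = faulty_input(omega_sq, t=1.2, t_F=1.0, theta=theta)
U = MIX @ u                              # resulting thrust and moments
print(U)
```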
Let us define a color coding for the four motors in the original configuration (i.e., case 1):
\[\#1\rightarrow\text{Black}\quad\#2\rightarrow\text{Green}\quad\#3\rightarrow\text{Red}\quad\#4\rightarrow\text{Blue}\]
During training, without loss of generality, we assume that the green motor is faulty. Now, if instead another motor is faulty, then a state-transformation map can be defined as
\[\Phi(n)=\begin{bmatrix}R_{\theta}&\mathbf{0}&\mathbf{0}&\mathbf{0}\\ \mathbf{0}&R_{\theta}&\mathbf{0}&\mathbf{0}\\ \mathbf{0}&\mathbf{0}&R_{\theta}&\mathbf{0}\\ \mathbf{0}&\mathbf{0}&\mathbf{0}&R_{\theta}\end{bmatrix},\quad\text{with }\theta=\frac{\pi}{2},\pi,\frac{3\pi}{2}\text{ for }n=3,4,1,\text{ respectively},\]
where \(R_{\theta}=\begin{bmatrix}\cos(\theta)&\sin(\theta)&0\\ -\sin(\theta)&\cos(\theta)&0\\ 0&0&1\end{bmatrix}\). Thus, for case #4, the black motor acts as motor #2 in the original configuration (see Figure 1). As a result, if the black motor is faulty and the fault predictor is trained to detect that motor #2 is faulty (i.e., the green motor in the original configuration), then case 4 will give the correct prediction (see Figure 3).

### _Model-based FDI_

For model-based FDI mechanisms, the residual data is also required as an additional feature to the NN. The error vector \(\tilde{y}\) (commonly known as the residual in the FDI literature) is defined as the stepwise error between the actual state of the system with potentially faulty actuators and the state of the system assuming no faults, i.e., \(\tilde{y}(t)=y(t)-\bar{y}(t)\), where \(y\) is the output of the actual system (1) with faulty input and \(\bar{y}\) is the output of the reference model without fault, \(\dot{\bar{x}}=f(\bar{x})+g(\bar{x})u\), \(\bar{y}(t)=\rho(\bar{x}(t))\), with \(\bar{y}(k\tau)=y(k\tau)\), \(k=0,1,2,\dots\), where \(\tau>0\) is the sampling period for data collection. In most of the prior literature, either \(\tilde{x}\) (in state-based methods) or \(\tilde{y}\) (in output-based methods) is used for designing the FDI. However, that requires the availability of the model for computing the residuals, which makes such approaches very restrictive.

### _Training data_

For training, the trajectory data is collected where actuator \(\#2\) is partially faulty with \(\Theta_{2}\in\{0,0.1,0.2,\cdots,0.9,1\}\). Let \(d=11\) denote the number of fault scenarios for motor \(\#2\). At each time instant \(t\geq T\), it is possible that only a portion of the trajectory is generated under a faulty actuator. That is, the possible input to the system is \(u(t-\tau,T_{f})_{\tau=0}^{T}\coloneqq[u(t-T),u(t-T+1),\cdots,u_{f}(t-T+T_{f}),u_{f}(t-T+T_{f}+1),\cdots,u(t)]\) (with the corresponding system output \(y(t-\tau,T_{f})_{\tau=0}^{T}\) and residual \(\tilde{y}(t-\tau,T_{f})_{\tau=0}^{T}\)), where \(T_{f}\in[0,T]\) dictates the time instant when the fault occurs. Thus, the NN for fault prediction must be trained on all possible combinations of fault occurrence times. Hence, our training data includes \(\bigcup\limits_{T_{f}=0}^{T}\big(y(t-\tau,T_{f})_{\tau=0}^{T},\tilde{y}(t-\tau,T_{f})_{\tau=0}^{T},u(t-\tau,T_{f})_{\tau=0}^{T}\big)\) (see Figure 2). In every training iteration, we generate \(N_{traj}=N_{1}\times d\times T_{f}\) trajectories, so that we have \(N_{1}>0\) trajectories for each of the \(d\) fault values in motor \(\#2\) with all possible lengths of trajectories under one faulty actuator in \([0,T_{f}]\).
In particular, we consider discrete fault values \(\Theta_{2}\in\{0,0.1,0.2,\cdots,0.9\}\), and thus \(d=11\), with 10 values for faults and 1 for the non-faulty case. The training data is generated by randomly choosing initial conditions \(\{x(0)\}_{1}^{N_{1}}\) and rolling out \(d\) trajectories for each of the sampled conditions under the \(d\) possible \(\Theta_{2}\) values. This enables the NN to distinguish between various kinds of fault scenarios, since it obtains trajectories under all considered fault scenarios with the same initialization.

Fig. 1: The four cases used in fault prediction. Each next case is generated through a 90-degree anti-clockwise axes rotation from the previous model (resulting in a 90-degree clockwise configurational rotation).

Fig. 2: General neural-network architecture for failure prediction. The training data includes all possible trajectories with different lengths of faulty input (the violet color represents the portion of the trajectory with faulty input).

The loss function for model-free FDI training is defined as
\[\mathcal{L}_{\Theta}^{MF}=\frac{1}{N_{traj}}\sum_{j=1}^{N_{traj}}\Big[\|\Theta_{j,NN}\big(y_{j}(\cdot),u_{j}(\cdot)\big)-\Theta_{j}\|-\epsilon\Big]_{+}, \tag{5}\]
while that for model-based FDI is given as
\[\mathcal{L}_{\Theta}^{MB}=\frac{1}{N_{traj}}\sum_{j=1}^{N_{traj}}\Big[\|\Theta_{j,NN}\big(y_{j}(\cdot),u_{j}(\cdot),\tilde{y}_{j}(\cdot)\big)-\Theta_{j}\|-\epsilon\Big]_{+}, \tag{6}\]
where \(\Theta_{j}\in\{1\}\times[0,1]\times\{1\}\times\{1\}\) is the fault vector used for generating the data for the \(j\)-th trajectory and \(0<\epsilon\ll 1\). In each training epoch, we generate \(N=200\times 11\times 100\) trajectories of length \(T_{f}\) and maintain a buffer of \(1.5\) M trajectories. We train the NN until the loss drops below \(10^{-3}\). We use a Linear-Quadratic Regulator (LQR) input to generate the training data. In our experiments, we illustrate that the trained NN is highly robust to the kind of input used for trajectory generation and can predict faults with the same accuracy on trajectories generated by Control Barrier Function (CBF)-based quadratic programs (QPs), which are commonly used for maintaining safety [32]. During training, we optimize the loss function using stochastic gradient descent, and we train the pre- and post-fault networks separately. The number of trajectories in the buffer is capped at \(N_{buf}\), so that once the maximum number of trajectories is collected, the earliest trajectories are dropped from the buffer. The training is performed either until the number of iterations reaches \(N_{M}>0\), or until the loss drops below \(10^{-3}\) after at least \(N_{m}<N_{M}\) training epochs. During each training epoch, we use a batch size of 50000 trajectories and perform 500 iterations of training on all the buffer data.
The learning algorithm is summarized in Algorithm 1.

```
Data: iter_M, N, N_bs, N_buf, N_m
Result: Theta_NN
Initialize Theta_NN as an NN;              /* LSTM-NN */
{y, u, y~}_buf = ∅
while iter <= iter_M or (loss < 1e-3 and iter >= N_m) do
    Sample {x_0}_1^{N_1}
    for i in N_1 do
        Roll out d trajectories under Theta_2 in [0, 1]
        {y, u, y~}_buf = {y, u, y~}_buf ∪ {y, u, y~}_1^{N_1}
    end for
    {z} = {y, u, y~}_buf
    {z}_train = {z}[-N_buf:]
    while {z}_train ≠ ∅ do
        {z}_train,bs = Sample N_bs trajectories from {z}_train
        loss = L_Theta^MF                   /* L_Theta^MB for model-based */
        train Theta_NN
        {z}_train = {z}_train \ {z}_train,bs
    end while
end while
```
**Algorithm 1:** Learning framework for FDI

## IV Numerical evaluations

The primary objective of our numerical experiments is to evaluate the effectiveness of our method in terms of fault detection. We consider an experimental case study involving the Crazyflie quadrotor with a fault in motor \(\#2\). First, we evaluate the correctness of the fault prediction using the 4 cases explained in Figure 1. A fault is predicted if \(\min\limits_{i}\Theta_{i,NN}(\Phi_{n}(x))<\Theta_{tol}\), where \(\Theta_{tol}=0.2\). In this case, we only consider the case when \(\Theta_{i}=0\). The predicted faulty actuator is given by the \(\arg\min\) of \(\Theta_{NN}(\Phi_{n^{*}}(x))\), where \(n^{*}\) is the rotation index for which \(\Theta_{i,NN}\) is below the tolerance. The experiments are run to check the prediction accuracy of the NN-based FDI mechanism for various lengths of data with failed actuators between 0 and \(T_{f}=100\). We compute the prediction accuracy both when there is a fault and when there is no fault. We sample 10000 initial conditions randomly from the safe set \(\mathcal{X}_{safe}\) to generate trajectories for test data, where 2000 trajectories are generated for each of the faults and 2000 trajectories are generated without any fault. Each trajectory is generated for 200 epochs, with the fault occurring at \(t=100\). We feed the moving trajectory data \((x(k-100,k),u(k-100,k),\tilde{x}(k-100,k))\) to the trained NN-based FDI starting from \(k=100\). For a given \(k\in[100,200]\), the portion of the trajectory data with a faulty actuator is \(k-100\).

Fig. 3: Failure prediction accuracy for faults in different motors. **Top-left**: failure in motor #1; **Top-right**: failure in motor #2; **Bottom-right**: failure in motor #3; **Bottom-left**: failure in motor #4. This illustrates that with one trained FDI, it is possible to predict failure in any of the actuators with high prediction accuracy.

Figure 3 illustrates that the prediction accuracy for correct fault prediction in each of the cases is above 80%. This illustrates that it is possible to effectively predict faults in all motors with an FDI trained on faults in just one of the motors. Next, we evaluate the prediction accuracy for different fault values \(\Theta_{2}\in[0,1]\). For this experiment, we compare the performance of two different types of NN architectures for model-free FDIs, namely, a multi-layer perceptron (MLP) with 1 input layer, 4 hidden layers, and 1 output layer, and a long-short-term memory (LSTM) network where the LSTM layer is followed by 2 linear layers.
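For concreteness, the following is a minimal PyTorch sketch of the LSTM-based model-free FDI just described: a window of outputs and commanded inputs is encoded by an LSTM layer, followed by the two linear layers (128 to 64 to \(m\)) mentioned in the text. The window length and hidden size are assumptions for illustration.

```
import torch
import torch.nn as nn

class LSTMFDI(nn.Module):
    """Model-free FDI: maps a (y, u) window to a predicted fault vector Theta."""
    def __init__(self, p=6, m=4, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(input_size=p + m, hidden_size=hidden, batch_first=True)
        self.head = nn.Sequential(
            nn.Linear(hidden, 64), nn.ReLU(),
            nn.Linear(64, m), nn.Sigmoid(),   # predicted Theta_i in [0, 1]
        )

    def forward(self, y, u):                  # y: (B, T, p), u: (B, T, m)
        feats, _ = self.lstm(torch.cat([y, u], dim=-1))
        return self.head(feats[:, -1])        # predict from the last hidden state

# fault declared when min_i Theta_i < 0.2, matching the evaluation above
fdi = LSTMFDI()
theta_hat = fdi(torch.randn(8, 100, 6), torch.randn(8, 100, 4))
fault_detected = theta_hat.min(dim=1).values < 0.2
```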
In each of the NN architectures, we use \(N\times\)128 as the size of the input layer, with \(N\) being the size of the features, hidden layer(s) of size 128\(\times\)128 followed by a hidden layer of size 128\(\times\)64, and an output layer of size 64\(\times m\). Note that \(N=(2p+m)\times T_{f}\) for the FDI with all the data, \(N=p\times T_{f}\) for the FDI with just the residual data, and \(N=(p+m)\times T_{f}\) for the model-free FDI mechanism. Figure 4 plots the prediction accuracy for various values of \(\Theta_{2}\) for the two considered NN architectures. It can be observed that the LSTM-based NN-FDI accurately identifies each of the faults, while the MLP-based FDI has very low prediction accuracy. We use an LQR input to generate the training data since solving a CBF-based QP is relatively slow for collecting a sufficient amount of training data. In our experiments, we illustrate that the trained NN is highly robust to the kind of input used for trajectory generation and can predict faults with the same accuracy for trajectories generated by CBF-based QPs. For this experiment, we compare the prediction accuracy of the model-free NN-FDI (\(\Theta_{NN}(y,u)\)) and the model-based FDIs (\(\Theta_{NN}(y,u,\tilde{y})\)). Figure 5 shows the prediction accuracy of the model-based FDIs. It can be seen that the model-free FDI mechanism performs on par with (and even better than) the model-based FDI mechanism with features \((y,u,\tilde{y})\). Based on this observation, we can infer that a model-free FDI mechanism can be used with very high confidence. In this case, it is crucial to note that the trained fault predictor is highly robust with respect to the input data. In particular, to accelerate the learning process, a very simple LQR controller is used, in which the nonlinear system dynamics are linearized about the origin and a constant LQR gain is applied. However, as can be seen from Figure 5, the prediction generalizes to the CBF-based QP controller just as well and has a similarly high prediction accuracy. Finally, we study the effect of changes in model parameters (such as the inertia matrix, etc.) on the prediction accuracy of the FDI mechanisms. For this experiment, we changed the system parameters by more than 40% (see Table I). As can be seen from Figure 6, the prediction accuracy of the model-free FDI mechanism is unaffected by a change in the model parameters, while that of the model-based FDI mechanism drops significantly. Thus, in scenarios where an accurate system model is not known or the system dynamics change during operation, a model-based FDI mechanism might not remain reliable. Figures 5 and 6 illustrate that the proposed model-free NN-FDI is agnostic to the type of input used for data generation as well as to perturbations in model parameters.

## V Conclusion

In this paper, we propose a method for effectively learning a model-free output-based FDI for the prediction of a variety of partial losses of actuation in quadrotors. The proposed NN-based FDI can verify actuator performance with very high accuracy and correctly predicts a variety of faults. The numerical experiments demonstrated that the applicability of a model-based FDI mechanism is very limited, while that of the proposed model-free mechanism is quite broad and general.
Additionally, the proposed model-free mechanism can be easily trained on simple input data (e.g., LQR inputs), does not require model information, and generalizes both to out-of-distribution input data and to changes in model parameters (or modeling uncertainties). As part of future work, we will explore methods that can incorporate more general fault models, where the faulty actuator can take an arbitrary signal and more than one actuator can undergo failure simultaneously. We will also explore applications of this framework to resilient control of networked and distributed control systems, which introduce additional notions of system failure, including loss of entire nodes or communication links in addition to input disturbances and loss of control authority.
2310.00517
Assessing the Generalizability of Deep Neural Networks-Based Models for Black Skin Lesions
Melanoma is the most severe type of skin cancer due to its ability to cause metastasis. It is more common in black people, often affecting acral regions: palms, soles, and nails. Deep neural networks have shown tremendous potential for improving clinical care and skin cancer diagnosis. Nevertheless, prevailing studies predominantly rely on datasets of white skin tones, neglecting to report diagnostic outcomes for diverse patient skin tones. In this work, we evaluate supervised and self-supervised models in skin lesion images extracted from acral regions commonly observed in black individuals. Also, we carefully curate a dataset containing skin lesions in acral regions and assess the datasets concerning the Fitzpatrick scale to verify performance on black skin. Our results expose the poor generalizability of these models, revealing their favorable performance for lesions on white skin. Neglecting to create diverse datasets, which necessitates the development of specialized models, is unacceptable. Deep neural networks have great potential to improve diagnosis, particularly for populations with limited access to dermatology. However, including black skin lesions is necessary to ensure these populations can access the benefits of inclusive technology.
Luana Barros, Levy Chaves, Sandra Avila
2023-09-30T22:36:51Z
http://arxiv.org/abs/2310.00517v2
# Assessing the Generalizability of Deep Neural Networks-Based Models for Black Skin Lesions ###### Abstract Melanoma is the most severe type of skin cancer due to its ability to cause metastasis. It is more common in black people, often affecting acral regions: palms, soles, and nails. Deep neural networks have shown tremendous potential for improving clinical care and skin cancer diagnosis. Nevertheless, prevailing studies predominantly rely on datasets of white skin tones, neglecting to report diagnostic outcomes for diverse patient skin tones. In this work, we evaluate supervised and self-supervised models in skin lesion images extracted from acral regions commonly observed in black individuals. Also, we carefully curate a dataset containing skin lesions in acral regions and assess the datasets concerning the Fitzpatrick scale to verify performance on black skin. Our results expose the poor generalizability of these models, revealing their favorable performance for lesions on white skin. Neglecting to create diverse datasets, which necessitates the development of specialized models, is unacceptable. Deep neural networks have great potential to improve diagnosis, particularly for populations with limited access to dermatology. However, including black skin lesions is necessary to ensure these populations can access the benefits of inclusive technology. Keywords:Self-supervision Skin cancer Black skin Image classification Out-of-distribution ## 1 Introduction Skin cancer is the most common type, with melanoma being the most aggressive and responsible for 60% of skin cancer deaths. Early diagnosis is crucial to improve patient survival rates. People of color have a lower risk of developing melanoma than those with lighter skin tones [1]. However, melanin does not entirely protect individuals from developing skin cancer. In fact, acral melanoma, or acrolentiginous melanoma, is the rarest and most aggressive type and occurs more frequently in people with darker skin [2]. This subtype is not related to sun exposure, as it tends to develop in areas with low sun exposure, such as the soles, palms, and nails [3]. When melanoma occurs in individuals with darker skin tones, it is often diagnosed later, making it more challenging to treat and associated with a high mortality rate. This can be partly explained by the fact that acral areas, especially the feet, are often neglected by dermatologists in physical evaluations because they are not exposed to the sun, leading to misdiagnoses [4]. Therefore, it is common for melanoma to be confused by patients with fungal infections, injuries, or other benign conditions [3]. This is related to the lack of representation of cases of black skin in medical education. Most textbooks do not include images of skin diseases as they appear in black people, or when they do, the number is no more than 10% [5]. This absence can lead to a racial bias in the evaluation of lesions by dermatologists since the same lesion may have different characteristics depending on the patient's skin color1, significantly affecting the diagnosis and treatment of these lesions [5]. Footnote 1: If you have skin, you can get skin cancer. Deep neural networks (DNNs) have revolutionized skin lesion analysis by automatically extracting visual patterns for lesion classification and segmentation tasks. However, training DNNs requires a substantial amount of annotated data, posing challenges in the medical field due to the cost and complexity of data collection and annotation. 
Transfer learning has emerged as a popular alternative. It involves pre-training a neural network, the encoder, on a large unrelated dataset to establish a powerful pattern extractor. The encoder is fine-tuned using a smaller dataset specific to the target task, enabling it to adapt to skin lesion analysis. Despite the advantages of transfer learning, there is a risk that the pre-trained representations may not fully adapt to the target dataset [6]. Self-supervised learning (SSL) has emerged as a promising solution. In SSL, the encoder is trained in a self-supervised manner on unlabeled data using pretext tasks with synthetic labels. The pretext task is only used to stimulate the network to create transformations in the images and learn the best (latent) representations in the feature space that describe them. This way, we have a powerful feature extractor network that can be used in some other target task of interest, i.e., downstream task. Furthermore, applying SSL models for diagnosing skin lesions has proven advantageous, especially in scenarios with scarce training data [7]. However, deep learning models encounter challenges related to generalization. The effectiveness of machine learning models heavily relies on the quality and quantity of training data available. Unfortunately, in the current medical landscape, skin lesion datasets often suffer from a lack of diversity, predominantly comprising samples from individuals with white skin or lacking explicit labels indicating skin color. This presents a significant challenge as it can lead to models demonstrating racial biases, performing better in diagnosing lesions that are well-represented in the training data from white individuals while potentially encountering difficulties in accurately diagnosing lesions on black skin. Evaluating skin cancer diagnosis models on black skin lesions is one step towards ensuring inclusivity and accuracy across diverse populations [8]. Most available datasets suffer from insufficient information regarding skin tones, such as the Fitzpatrick scale -- a classification of skin types from 1 to 6 based on a person's ability to tan and their sensitivity and redness when exposed to the sun [9] (Figs. 1 and 2). Consequently, we had to explore alternative approaches to address this issue, leading us to conduct the evaluation based on both skin tone and lesion location. We performed two distinct analyses: one focused on directly assessing the impact of skin color using the Fitzpatrick scale, and another centered around evaluating lesions in acral regions, which are more commonly found in individuals with black skin [10]. The primary objective of this work is to assess the performance of skin cancer classification models, which have performed well in white individuals, specifically on black skin lesions. Our contribution is threefold: * We carefully curate a dataset comprising clinical and dermoscopic images of skin lesions in acral areas (e.g., palms, soles, and nails). * We evaluate deep neural network models previously trained in a self-supervised and supervised manner to diagnose melanoma and benign lesions regarding two types of analysis: * Analysis #1 - Skin Lesions on Acral Regions: We select images from existing datasets focusing on acral regions. * Analysis #2 - Skin Lesions in People of Color: We evaluate datasets that contain Fitzpatrick skin type information. 
* We have made the curated sets of data and source code available at [https://github.com/httplug/black-acral-skin-lesion-detection](https://github.com/httplug/black-acral-skin-lesion-detection).

## 2 Related Work

The accurate diagnosis of skin lesions in people of color, particularly those with dark skin, has been a long-standing challenge in dermatology. One major contributing factor to this issue is the underrepresentation of dark skin images in skin lesion databases. Consequently, conventional diagnostic tools may exhibit reduced accuracy when applied to this specific population, leading to disparities in healthcare outcomes. We present a pioneering effort to extensively curate and evaluate the performance of supervised and self-supervised pre-trained models, specifically on black skin lesions and acral regions.

Figure 1: The Fitzpatrick skin type scale. (a) Type 1 (light): pale skin, always burns, and never tans; (b) Type 2 (white): fair, usually burns, tans with difficulty; (c) Type 3 (medium): white to olive, sometimes mild burn, gradually tans to olive; (d) Type 4 (olive): moderate brown, rarely burns, tans with ease to moderate brown; (e) Type 5 (brown): dark brown, very rarely burns, tans very easily; (f) Type 6 (black): very dark brown to black, never burns, tans very easily, deeply pigmented.

While skin lesion classification on acral regions has been explored in previous literature, the focus is largely on general skin types, with limited attention given to black skin tones. Works such as [12, 13, 14] investigated classification performance on acral regions, but they do not specifically address the challenges posed by black skin tones. Addressing the crucial issue of skin type diversity, Alipour et al. [15] conducted a comprehensive review of publicly available skin lesion datasets and their metadata. They observed that only the PAD-UFES-20 [16], DDI [17], and Fitzpatrick 17k [11] datasets provide the Fitzpatrick scale as metadata, highlighting the need for improved representation of diverse skin types in skin lesion datasets. However, the authors did not conduct model evaluations on these datasets. Existing works explored the application of the Fitzpatrick scale in various areas, such as debiasing [18, 19] and image generation [20]. However, these studies have not adequately addressed the specific challenge of skin lesion classification on black skin tones. To bridge this research gap, our study evaluates the performance of supervised and self-supervised pre-trained models exclusively on black skin lesion images and acral regions. By systematically exploring and benchmarking different pre-training models, we aim to contribute valuable insights and advancements to the field of dermatology, particularly in the context of underrepresented skin types.

## 3 Materials and Methods

In this work, we assess how six models pre-trained on white skin perform on black skin. We pre-train all models as described in Chaves et al. [7]. First, we take a model backbone pre-trained on ImageNet [21] and fine-tune it on the ISIC dataset [22]. The ISIC (_International Skin Imaging Collaboration_) is a common choice in this domain [6; 11; 23], presenting only white skin images.

Figure 2: Each image corresponds to a melanoma sample and is associated with a specific Fitzpatrick scale value, representing a range of skin tones. The images are organized from left to right, following the Fitzpatrick scale (1 to 6). Images retrieved from the Fitzpatrick 17k dataset [11].
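As a rough illustration of this transfer-learning setup (not the authors' exact training script), the sketch below fine-tunes an ImageNet-pretrained ResNet-50 for binary melanoma classification; the optimizer and hyperparameters are illustrative assumptions, and SSL encoders such as SimCLR or BYOL would instead be loaded from their own checkpoints.

```python
import torch
import torch.nn as nn
from torchvision import models

# ImageNet-pretrained ResNet-50 backbone (the supervised baseline case).
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, 2)  # benign vs. melanoma head

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

def finetune_step(images, labels):
    """One fine-tuning step on a batch of ISIC images (illustrative)."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```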
Next, we evaluate the fine-tuned model on several **out-of-distribution datasets**, where the distribution of the test data diverges from the training one. We also use the same six pre-trained models as Chaves et al. [7] because they have the code and checkpoints publicly available to reproduce their results. The authors compared the performance of five self-supervised models against a supervised baseline and showed that self-supervised pre-training outperformed traditional transfer learning techniques using the ImageNet dataset. We use the ResNet-50 [24] network as the feature extractor backbone. The self-supervised approaches vary mainly in the choice of pretext tasks, which are BYOL (_Bootstrap Your Own Latent_) [25], InfoMin [26], MoCo (_Momentum Contrast_) [27], SimCLR (_Simple Framework for Contrastive Learning of Visual Representations_) [28], and SwAV (_Swapping Assignments Between Views_) [29]. We assessed all six models using two different analyses on compound datasets. The first analysis focused on skin lesions in acral regions, while the second considered variations in skin tone. Next, we detail the datasets we curated.

### Datasets

#### 3.1.1 Analysis #1: Skin Lesions on Acral Regions.

To create a compound dataset of acral skin lesions, we extensively searched for datasets and dermatological atlases available on the Internet that provided annotations indicating the location of the lesions. We analyzed 17 datasets listed on SkinIA's website2, then filtered the datasets to include only images showcasing lesions in acral regions, such as the palms, soles, and nails. As a result, we identified three widely recognized datasets in the literature, namely the International Skin Imaging Collaboration (ISIC Archive) [22], the 7-Point Checklist Dermatology Dataset (Derm7pt) [30], and the PAD-UFES-20 dataset [16]. We also included three dermatological atlases: Dermatology Atlas (DermAtlas) [31], DermIS [32], and DermNet [33]. Footnote 2: [https://www.medicalimageanalysis.com/data/skinia](https://www.medicalimageanalysis.com/data/skinia)

We describe the steps followed for each dataset in the following. Table 1 shows the number of lesions for each dataset.

**ISIC Archive [22]:** We filtered images from the ISIC Archive based on clinical attributes, focusing on lesions on palms and soles, resulting in 773 images. We excluded images classified as carcinoma or unknown, reducing the dataset to 400. As we trained our models using the ISIC Archive, we removed all images appearing in the models' training set to avoid data leakage between training and testing data and ensure an unbiased evaluation, resulting in a final dataset with 149 images.

**Derm7pt [30]:** It consists of 1011 lesions, each with a clinical and a dermoscopic image3. It offers valuable metadata such as visual patterns, lesion location, patient sex, difficulty level, and 7-point rule scores [34]. We applied a filter based on lesion location, selecting acral images through the region attribute. This filter resulted in a total of 62 images, comprising only benign and melanoma lesions. We conducted separate evaluations using the clinical and dermoscopic images, labeling the datasets as _derm7pt-clinic_ and _derm7pt-derm_, respectively. Footnote 3: Clinical images can be captured with standard cameras, while dermoscopic images are captured with a device called a dermatoscope, which normalizes the influence of light on the lesion, allowing deeper details to be captured.
**PAD-UFES-20 [16]:** It comprises 2298 clinical images collected from patients using smartphones. It also includes metadata related to the Fitzpatrick scale, providing additional information about skin tone. We focused on the hand and foot region lesions, which yielded 142 images. We also excluded images classified as carcinoma (malignant), resulting in a final set of 98 images.

**Atlases (DermAtlas, DermIS, DermNet):** The dataset included images obtained from dermatological atlas sources such as DermAtlas [31], DermIS [32], and DermNet [33]. We used specific search terms, such as _hand_, _hands_, _foot_, _feet_, _acral_, _finger_, _nail_, and _nails_, to target the lesion location. We conducted a manual selection to identify images meeting the melanoma or benign lesion criteria. This dataset comprised 8 images from DermAtlas (including 1 melanoma), 12 images from DermIS (comprising 10 melanomas), and 34 images from DermNet, all melanomas. Finally, we combined all images into a set referenced as Atlases, containing 54 images.

\begin{table} \begin{tabular}{l r r r} \hline \hline & \multicolumn{3}{c}{Number of Lesions} \\ Dataset & Melanoma & Benign & Total \\ \hline ISIC Archive [22] & 72 & 77 & 149 \\ Derm7pt [30] & 3 & 59 & 62 \\ PAD-UFES-20 [16] & 2 & 96 & 98 \\ Atlases [31; 32; 33] & 45 & 9 & 54 \\ \hline \hline \end{tabular} \end{table} Table 1: Number of benign and melanoma lesions for the acral areas datasets.

**Analysis #2: Skin Lesions in People of Color.** We focused on selecting datasets that provided metadata indicating skin tone to analyze skin cancer diagnosis performance for darker-skinned populations. Specifically, datasets containing skin lesions with darker skin tones (Fitzpatrick scales 4, 5, and 6) allow us to evaluate the performance of the models on these populations. For this purpose, we evaluated three datasets: PAD-UFES-20 [16], which was previously included in the initial analysis, as well as Diverse Dermatology Images (DDI) [17], and Fitzpatrick 17k [11]. Table 2 shows the number of lesions for each dataset, considering the Fitzpatrick scale.

**PAD-UFES-20*:** We filtered images using the Fitzpatrick scale, including lesions from all regions rather than solely acral areas. We specifically selected melanoma cases from the malignant lesions category, excluding basal and squamous cell carcinomas. Also, we excluded images lacking Fitzpatrick scale information. Consequently, the dataset was refined to 457 images, including 52 melanoma cases. Notably, within this dataset, there were only five images with a Fitzpatrick scale of 5 and one image with a Fitzpatrick scale of 6.

**Diverse Dermatology Images (DDI) [17]:** The primary objective of DDI is to address the lack of diversity in existing datasets by actively incorporating a wide range of skin tones. For that, the dataset was curated by experienced dermatologists who assessed each patient's skin tone based on the Fitzpatrick scale. The initial dataset comprised 656 clinical images, categorized into different Fitzpatrick scale ranges. For malignant lesions, we filtered to focus on melanoma samples.
As a result, we excluded benign conditions that do not fall under benign skin lesions, such as inflammatory conditions, scars, and hematomas. This process led to a refined dataset of 461 skin lesions, comprising 440 benign lesions and 21 melanomas. Regarding the distribution based on the Fitzpatrick scale, the dataset includes 160 images from scales 1 to 2, 160 images from scales 3 to 4, and 141 images from scales 5 to 6. The DDI dataset represents a notable improvement in diversity compared to previous datasets, but it still exhibits an unbalanced representation of melanoma images across different skin tones.

\begin{table} \begin{tabular}{l r r r r} \hline \hline & Fitzpatrick & \multicolumn{3}{c}{Number of Lesions} \\ Dataset & Scale & Melanoma & Benign & Total \\ \hline \multirow{4}{*}{PAD-UFES-20* [16]} & 1–2 & 38 & 246 & 284 \\ & 3–4 & 14 & 153 & 167 \\ & 5–6 & 0 & 6 & 6 \\ \cline{2-5} & Total & 52 & 405 & 457 \\ \hline \multirow{4}{*}{DDI [17]} & 1–2 & 7 & 153 & 160 \\ & 3–4 & 7 & 153 & 160 \\ & 5–6 & 7 & 134 & 141 \\ \cline{2-5} & Total & 21 & 440 & 461 \\ \hline \multirow{4}{*}{Fitzpatrick 17k [11]} & 1–2 & 331 & 1115 & 1446 \\ & 3–4 & 168 & 842 & 1010 \\ & 5–6 & 47 & 203 & 250 \\ \cline{2-5} & Total & 546 & 2160 & 2706 \\ \hline \hline \end{tabular} \end{table} Table 2: Number of benign and melanoma lesions grouped by Fitzpatrick scale for the skin tone analysis datasets.

**Fitzpatrick 17k [11]:** It comprises 16,577 clinical images, including skin diagnostic labels and skin tone information based on the Fitzpatrick scale. The dataset was compiled by sourcing images from two online open-source dermatology atlases: 12,672 images from DermAmin [35] and 3,905 images from Atlas Dermatologico [36]. To ensure the analysis specifically targeted benign and melanoma skin lesion conditions, we applied a filter based on the "nine_partition_attribute". This filter allowed us to select images that fell into the benign dermal, benign epidermal, benign melanocyte, and malignant melanoma partitions. After removing images with an unknown Fitzpatrick value, the refined dataset consists of 2,706 images, including 191 images corresponding to a Fitzpatrick scale of 5 and 59 images corresponding to a Fitzpatrick scale of 6.

### Evaluation Pipeline

Our pipeline to evaluate skin lesion image classification models is divided into two main stages: pre-processing and model inference. Fig. 3 shows the pipeline.

Figure 3: Evaluation pipeline for all models. Given a test image, we adopt the final confidence score as the average confidence over a batch of 50 augmented copies of the input image.

**Pre-processing:** We apply data augmentation techniques to the test data, which have been shown to enhance performance on classification problems [23]. The test set is evaluated in batches, and a batch of 50 copies is created for each image. Each copy undergoes various data augmentations, including resizing, flipping, rotations, and color changes. Additionally, we normalize the images using the mean and standard deviation values from the ImageNet dataset.

**Model inference:** The batch of augmented images is fed into the selected model for evaluation. The model generates representations or features specific to its pre-training method. These representations are then passed through a softmax layer, which produces the probability values for the lesion being melanoma, the positive class of interest.
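A minimal PyTorch-style sketch of this test-time-augmentation step is given below, including the probability averaging detailed next; the specific augmentation parameters are illustrative assumptions rather than the authors' exact configuration.

```python
import torch
from torchvision import transforms

# Illustrative augmentations matching the description above (resize, flips,
# rotations, color changes), followed by ImageNet normalization.
tta = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(),
    transforms.RandomVerticalFlip(),
    transforms.RandomRotation(45),
    transforms.ColorJitter(brightness=0.1, contrast=0.1),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def melanoma_probability(model, pil_image, n_copies=50):
    """Average melanoma probability over a batch of augmented copies.

    model is assumed to be in eval() mode and to output two logits
    (benign, melanoma) per image; melanoma is the positive class.
    """
    batch = torch.stack([tta(pil_image) for _ in range(n_copies)])
    probs = torch.softmax(model(batch), dim=1)[:, 1]
    return probs.mean().item()
```

Balanced accuracy, precision, recall, and F1-score can then be computed from these averaged probabilities at a 0.5 threshold, e.g., with scikit-learn's `balanced_accuracy_score` and related functions.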
We calculate the average of the probabilities obtained from all 50 augmented copies to obtain a single probability value for each image in the batch. The evaluation process consisted of two analyses: Analysis #1 (skin lesions on acral regions), which considered acral images, and Analysis #2 (skin lesions in people of color), which considered images with diverse skin tones according to the Fitzpatrick scale. For each analysis, we assessed each dataset individually using the six models: BYOL, InfoMin, MoCo, SimCLR, SwAV, and the Supervised baseline. In each evaluation, the dataset was passed to the respective model, and the probability of melanoma was obtained for all images. Metrics such as balanced accuracy, precision, recall, and F1-score were calculated based on these probabilities. We computed balanced accuracy using a threshold of 0.5.

## 4 Results

### Skin Lesion Analysis on Acral Regions

Table 3 shows the classification metrics of the SSL models and the supervised baseline, grouped by dataset, for skin lesions in acral regions, such as palms, soles, and nails. In the following, we discuss the results for each dataset.

ISIC Archive: We observed consistent results between balanced accuracy and F1-score, both averaging around 87%. The evaluation metrics exhibit high performance due to the fine-tuning process of the evaluated models using the ISIC 2019 dataset. The distribution of the ISIC Archive dataset closely resembles that of the training data, distinguishing it from the other datasets and contributing to the favorable evaluation metrics observed, even though we excluded training samples from our evaluation set. Furthermore, in the ISIC 2019 dataset, all results were above 90% [7]. This indicates that even on an external dataset whose distribution is more akin to the training data, the performance for lesions in acral regions is significantly inferior to that in other regions.

Derm7pt: We analyzed two types of images: dermoscopic (derm7pt-derm) and clinical (derm7pt-clinical). When examining the F1-score results for clinical images, the models (SwAV, BYOL, and Supervised) encountered challenges in accurately classifying melanoma lesions. However, the evaluation was performed on a limited sample size of only three melanoma images. This scarcity of data for melanoma evaluation has contributed to the observed zero precision and recall scores. On average, dermoscopic images demonstrated better classification performance than clinical images, with dermoscopic images achieving an F1-score of 26% and clinical images achieving an F1-score of 16%. We attribute this disparity to the models being trained on dermoscopic images from the ISIC 2019 dataset. Additionally, using different image capture devices (dermatoscope vs. cell phone camera) can introduce variations in image quality and the level of detail captured, affecting the overall data distribution. Given that the models were trained with dermoscopic images and the test images were captured using a dermatoscope, the training and test data distributions are expected to be more similar. In general, the results for this dataset demonstrated low F1-score and balanced accuracy, indicating unsatisfactory performance, especially for clinical images.

Atlases: The performance varies across the different models. MoCo and InfoMin achieved balanced accuracies of approximately 72%, indicating relatively better performance.
Other models, such as Supervised and BYOL, exhibited poor results. This dataset is considered challenging as it consists of non-standardized skin lesions collected from online atlases, which may introduce variability in the capture process. Still, the models performed better on these acral images than on the previous datasets, specifically when considering F1-score values.

PAD-UFES-20: The models achieved an average balanced accuracy of around 90%. The SwAV model performed best, with a balanced accuracy of 95.8% and an F1-score of 33.3%. All models showed similar patterns: the F1-score and precision were relatively low, while recall was high (100%). The high recall was mainly due to the correct prediction of the two melanoma samples in the dataset, which inflated the balanced accuracy score. This indicates that relying solely on balanced accuracy can lead to a misleading interpretation of the results. Also, the small number of positive class samples limits the generalizability of the results and reduces confidence in the evaluation.

\begin{table} \begin{tabular}{l l c c c c} \hline \hline Samples & & Balanced & & & \\ (\#Mel/\#Ben) & Model & Accuracy (\%) & Precision (\%) & Recall (\%) & F1-score (\%) \\ \hline \multirow{6}{*}{(89/336)} & SwAV & 78.3 & **77.9** & 64.3 & 70.4 \\ & MoCo & 79.9 & 72.0 & **71.4** & 71.7 \\ & SimCLR & 76.1 & 70.2 & 63.5 & 66.7 \\ & BYOL & 77.9 & 70.8 & 67.5 & 69.1 \\ & InfoMin & **80.2** & 74.2 & 70.6 & **72.4** \\ & Supervised & 78.4 & 73.7 & 66.7 & 70.0 \\ \cline{2-6} & Mean & 78.5 & 73.1 & 67.3 & 70.1 \\ \hline \hline \end{tabular} \end{table} Table 3: Evaluation metrics for acral skin lesions. We grouped ISIC Archive, Derm7pt, Atlases, and PAD-UFES-20 due to some datasets' low number of melanoma samples. #Mel and #Ben indicate the number of melanoma and benign skin lesions, respectively.

### Skin Lesion Analysis in People of Color

Table 4 shows the evaluation results of the SSL models and the Supervised baseline for datasets containing melanoma and benign black skin lesions.

DDI: All models revealed poor results regarding balanced accuracy and F1-score. The supervised baseline model performed the worst, with an F1-score of only 3.4%, while MoCo achieved a slightly higher F1-score of 12.5%. Although most of the DDI dataset consisted of benign lesions, the performance of all models was considered insufficient.
\begin{table} \begin{tabular}{c l c c c c} \hline \hline Dataset & & Balanced & & & \\ (\#Mel/\#Ben) & Model & Accuracy (\%) & Precision (\%) & Recall (\%) & F1-score (\%) \\ \hline \multirow{8}{*}{DDI (21/440)} & SwAV & 52.9 & 7.5 & 14.3 & 9.8 \\ & MoCo & **55.8** & **8.5** & **23.8** & **12.5** \\ & SimCLR & 54.2 & 7.8 & 19.0 & 11.1 \\ & BYOL & 54.3 & 8.0 & 19.0 & 11.3 \\ & InfoMin & 54.4 & 8.2 & 19.0 & 11.4 \\ & Supervised & 48.2 & 2.6 & 4.8 & 3.4 \\ \cline{2-6} & Mean & 53.3 & 7.1 & 16.7 & 9.9 \\ \hline \multirow{8}{*}{Fitzpatrick 17k (546/2160)} & SwAV & 57.6 & 40.7 & 24.2 & 30.3 \\ & MoCo & 59.8 & 38.4 & 32.1 & 34.9 \\ & SimCLR & 59.3 & 40.7 & 29.5 & 34.2 \\ & BYOL & 59.3 & 38.4 & 32.1 & 34.9 \\ & InfoMin & 60.1 & 36.5 & **35.9** & 36.2 \\ & Supervised & **63.4** & **51.2** & 35.3 & **41.8** \\ \cline{2-6} & Mean & 60.0 & 40.4 & 32.4 & 35.6 \\ \hline \multirow{8}{*}{PAD-UFES-20* (52/405)} & SwAV & 57.1 & 25.0 & 23.1 & 24.0 \\ & MoCo & 59.1 & 23.9 & 30.8 & 26.9 \\ & SimCLR & 58.4 & 21.0 & **32.7** & 25.6 \\ & BYOL & **59.2** & **26.3** & 28.8 & **27.5** \\ & InfoMin & 54.3 & 16.9 & 23.1 & 19.5 \\ & Supervised & 58.5 & 23.8 & 28.8 & 26.1 \\ \cline{2-6} & Mean & 57.8 & 22.8 & 27.9 & 24.9 \\ \hline \hline \end{tabular} \end{table} Table 4: Evaluation metrics for skin tone analysis. #Mel and #Ben indicate the number of melanoma and benign skin lesions, respectively.

This underscores the significance of a pre-training process incorporating diverse training data, as it enables the models to
Therefore, for DNNs-based models to be robust concerning different visual patterns of lesions, training them with datasets that represent the real clinical scenario, including patients with diverse lesion characteristics and skin tones, is necessary. There is an urgent need for the creation of datasets that guarantee data transparency regarding the source, collection process, and labeling of lesions, as well as the reliability of data descriptions and the ethnic and racial diversity of patients, in order to ensure high confidence in the diagnoses made by the models. The current state of skin cancer datasets is concerning as it impacts the performance of models and can further reinforce biases in diagnosing skin cancer in people of color. Currently, these models cannot be used in a general sense, as they only perform well on lesions in white skin on common regions affected, and their performance may vary significantly for people with different skin tones. Crafting models that are discriminative for diagnoses, yet discriminate against patients' skin tones, is unacceptable. Deep neural networks have great potential to improve diagnosis, especially for populations with limited access to dermatology. However, including black skin lesions is extremely necessary for these populations to access the benefits of inclusive technology. ###### Acknowledgements. L. Chaves is funded by Becas Santander/Unicamp - HUB 2022, Google LARA 2021, in part by the Coordenacao de Aperfeicoamento de Pessoal de Nivel Superior - Brasil (CAPES) - Finance Code 001. S. Avila is funded by CNPq 315231/2020-3, FAEPEX, FAPESP 2013/08293-7, 2020/09838-0, H.IAAC 01245.013778/2020-21, and Google Award for Inclusion Research Program 2022 ("Dark Skin Matters: Fair and Unbiased Skin Lesion Models").
2307.16792
Classification with Deep Neural Networks and Logistic Loss
Deep neural networks (DNNs) trained with the logistic loss (i.e., the cross entropy loss) have made impressive advancements in various binary classification tasks. However, generalization analysis for binary classification with DNNs and logistic loss remains scarce. The unboundedness of the target function for the logistic loss is the main obstacle to deriving satisfactory generalization bounds. In this paper, we aim to fill this gap by establishing a novel and elegant oracle-type inequality, which enables us to deal with the boundedness restriction of the target function, and using it to derive sharp convergence rates for fully connected ReLU DNN classifiers trained with logistic loss. In particular, we obtain optimal convergence rates (up to log factors) only requiring the H\"older smoothness of the conditional class probability $\eta$ of data. Moreover, we consider a compositional assumption that requires $\eta$ to be the composition of several vector-valued functions of which each component function is either a maximum value function or a H\"older smooth function only depending on a small number of its input variables. Under this assumption, we derive optimal convergence rates (up to log factors) which are independent of the input dimension of data. This result explains why DNN classifiers can perform well in practical high-dimensional classification problems. Besides the novel oracle-type inequality, the sharp convergence rates given in our paper also owe to a tight error bound for approximating the natural logarithm function near zero (where it is unbounded) by ReLU DNNs. In addition, we justify our claims for the optimality of rates by proving corresponding minimax lower bounds. All these results are new in the literature and will deepen our theoretical understanding of classification with DNNs.
Zihan Zhang, Lei Shi, Ding-Xuan Zhou
2023-07-31T15:58:46Z
http://arxiv.org/abs/2307.16792v2
# Classification with Deep Neural Networks and Logistic Loss

###### Abstract

Deep neural networks (DNNs) trained with the logistic loss (also known as the cross entropy loss) have made impressive advancements in various binary classification tasks. Despite the considerable success in practice, generalization analysis for binary classification with deep neural networks and the logistic loss remains scarce. The unboundedness of the target function for the logistic loss in binary classification is the main obstacle to deriving satisfactory generalization bounds. In this paper, we aim to fill this gap by developing a novel theoretical analysis and using it to establish tight generalization bounds for training fully connected ReLU DNNs with logistic loss in binary classification. Our generalization analysis is based on an elegant oracle-type inequality which enables us to deal with the boundedness restriction of the target function. Using this oracle-type inequality, we establish generalization bounds for fully connected ReLU DNN classifiers \(\hat{f}_{n}^{\mathbf{FNN}}\) trained by empirical logistic risk minimization with respect to i.i.d. samples of size \(n\), which lead to sharp rates of convergence as \(n\rightarrow\infty\). In particular, we obtain optimal convergence rates for \(\hat{f}_{n}^{\mathbf{FNN}}\) (up to some logarithmic factor) only requiring the Holder smoothness of the conditional class probability \(\eta\) of data. Moreover, we consider a compositional assumption that requires \(\eta\) to be the composition of several vector-valued multivariate functions of which each component function is either a maximum value function or a Holder smooth function only depending on a small number of its input variables. Under this assumption, we can even derive optimal convergence rates for \(\hat{f}_{n}^{\mathbf{FNN}}\) (up to some logarithmic factor) which are independent of the input dimension of data. This result explains why in practice DNN classifiers can overcome the curse of dimensionality and perform well in high-dimensional classification problems. Furthermore, we establish dimension-free rates of convergence under other circumstances such as when the decision boundary is piecewise smooth and the input data are bounded away from it. Besides the novel oracle-type inequality, the sharp convergence rates presented in our paper are also due to a tight error bound for approximating the natural logarithm function near zero (where it is unbounded) by ReLU DNNs. In addition, we justify our claims for the optimality of rates by proving corresponding minimax lower bounds. All these results are new in the literature and will deepen our theoretical understanding of classification with deep neural networks.

**Keywords and phrases:** deep learning; deep neural networks; binary classification; logistic loss; generalization analysis

## 1 Introduction

In this paper, we study the binary classification problem using deep neural networks (DNNs) with the rectified linear unit (ReLU) activation function. Deep learning based on DNNs has recently achieved remarkable success in a wide range of classification tasks, including text categorization [19], image classification [27], and speech recognition [17], and has become a cutting-edge learning method. ReLU is one of the most popular activation functions, as scalable computing and stochastic optimization techniques can facilitate the training of ReLU DNNs [15, 23]. Let \(d\) be a positive integer.
Consider the binary classification problem where we regard \([0,1]^{d}\) as the input space and \(\{-1,1\}\) as the output space representing the two labels of input data. Let \(P\) be a Borel probability measure on \([0,1]^{d}\times\{-1,1\}\), regarded as the data distribution (i.e., the joint distribution of the input and output data). The goal of classification is to learn a real-valued function from a _hypothesis space_\(\mathcal{F}\) (i.e., a set of candidate functions) based on the sample of the distribution \(P\). The predictive performance of any (deterministic) real-valued function \(f\) which has a Borel measurable restriction to \([0,1]^{d}\) (i.e., the domain of \(f\) contains \([0,1]^{d}\), and \([0,1]^{d}\ni x\mapsto f(x)\in\mathbb{R}\) is Borel measurable) is measured by the _misclassification error_ of \(f\) with respect to \(P\), given by \[\mathcal{R}_{P}(f):=P\left(\left\{\left.(x,y)\in[0,1]^{d}\times\{-1,1\}\right| y\neq\mathrm{sgn}(f(x))\right\}\right), \tag{1.1}\] or equivalently, the _excess misclassification error_ \[\mathcal{E}_{P}(f):=\mathcal{R}_{P}(f)-\inf\left\{\mathcal{R}_{P}(g)\left|g:[ 0,1]^{d}\to\mathbb{R}\text{ is Borel measurable}\right.\right\}. \tag{1.2}\] Here \(\circ\) means function composition, and \(\mathrm{sgn}(\cdot)\) denotes the sign function which is defined as \(\mathrm{sgn}(t)=1\) if \(t\geq 0\) and \(\mathrm{sgn}(t)=-1\) otherwise. The misclassification error \(\mathcal{R}_{P}(f)\) characterizes the probability that the binary classifier \(\mathrm{sgn}\circ f\) makes a wrong prediction, where by a _binary classifier_ (or _classifier_ for short) we mean a \(\{-1,1\}\)-valued function whose domain contains the input space \([0,1]^{d}\). Since any real-valued function \(f\) with its domain containing \([0,1]^{d}\) determines a classifier \(\mathrm{sgn}\circ f\), we in this paper may call such a function \(f\) a _classifier_ as well. Note that the function we learn in a classification problem is based on the sample, meaning that it is not deterministic but a random function. Thus we take the expectation to measure its efficiency using the (excess) misclassification error. More specifically, let \(\{(X_{i},Y_{i})\}_{i=1}^{n}\) be an independent and identically distributed (i.i.d.) sample of the distribution \(P\) and the hypothesis space \(\mathcal{F}\) be a set of real-valued functions which have a Borel measurable restriction to \([0,1]^{d}\). We desire to construct an \(\mathcal{F}\)-valued statistic \(\hat{f}_{n}\) from the sample \(\{(X_{i},Y_{i})\}_{i=1}^{n}\) and the classification performance of \(\hat{f}_{n}\) can be characterized by upper bounds for the expectation of the excess misclassification error \(\mathbb{E}\left[\mathcal{E}_{P}(\hat{f}_{n})\right]\). One possible way to produce \(\hat{f}_{n}\) is the _empirical risk minimization_ with some _loss function_\(\phi:\mathbb{R}\to[0,\infty)\), which is given by \[\hat{f}_{n}\in\operatorname*{arg\,min}_{f\in\mathcal{F}}\frac{1}{n}\sum_{i=1}^ {n}\phi\left(Y_{i}f(X_{i})\right). \tag{1.3}\] If \(\hat{f}_{n}\) satisfies (1.3), then we will call \(\hat{f}_{n}\) an _empirical \(\phi\)-risk minimizer_ (ERM with respect to \(\phi\), or \(\phi\)-ERM) over \(\mathcal{F}\). 
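To make (1.3) concrete, here is a schematic PyTorch sketch of empirical \(\phi\)-risk minimization with the logistic loss over a small fully connected ReLU network; the architecture, optimizer, and hyperparameters are illustrative assumptions, and gradient descent only approximates the global minimizer that (1.3) requires.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def phi(t):
    # logistic loss phi(t) = log(1 + e^{-t}), computed stably via softplus
    return F.softplus(-t)

# A small fully connected ReLU network as the hypothesis space F (illustrative sizes).
d = 5
f = nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(f.parameters(), lr=1e-3)

def erm_step(X, Y):
    """One gradient step on the empirical phi-risk (1/n) * sum_i phi(Y_i f(X_i)).

    X: (n, d) float tensor of inputs in [0, 1]^d; Y: (n,) float tensor in {-1, +1}.
    Note: (1.3) asks for a global minimizer over F; this loop only approximates it.
    """
    risk = phi(Y * f(X).squeeze(-1)).mean()
    opt.zero_grad()
    risk.backward()
    opt.step()
    return risk.item()
```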
For any real-valued function \(f\) which has a Borel measurable restriction to \([0,1]^{d}\), the \(\phi\)-_risk_ and _excess \(\phi\)-risk_ of \(f\) with respect to \(P\), denoted by \(\mathcal{R}_{P}^{\phi}(f)\) and \(\mathcal{E}_{P}^{\phi}(f)\) respectively, are defined as \[\mathcal{R}_{P}^{\phi}(f):=\int_{[0,1]^{d}\times\{-1,1\}}\phi(yf(x))\mathrm{d} P(x,y) \tag{1.4}\] and \[\mathcal{E}_{P}^{\phi}(f):=\mathcal{R}_{P}^{\phi}(f)-\inf\left\{\mathcal{R}_{ P}^{\phi}(g)\Big{|}g:[0,1]^{d}\to\mathbb{R}\text{ is Borel measurable}\right\}. \tag{1.5}\] To derive upper bounds for \(\mathbb{E}\left[\mathcal{E}_{P}(\hat{f}_{n})\right]\), we can first establish upper bounds for \(\mathbb{E}\left[\mathcal{E}_{P}^{\phi}(\hat{f}_{n})\right]\), which are typically controlled by two parts, namely the sample error and the approximation error (e.g., cf. Chapter 2 of [6]). Then we are able to bound \(\mathbb{E}\left[\mathcal{E}_{P}(\hat{f}_{n})\right]\) by \(\mathbb{E}\left[\mathcal{E}_{P}^{\phi}(\hat{f}_{n})\right]\) through the so-called _calibration inequality_ (also known as the _Comparison Theorem_, see, e.g., Theorem 10.5 of [6] and Theorem 3.22 of [40]). In this paper, we will call any upper bound for \(\mathbb{E}\left[\mathcal{E}_{P}(\hat{f}_{n})\right]\) or \(\mathbb{E}\left[\mathcal{E}_{P}^{\phi}(\hat{f}_{n})\right]\) a _generalization bound_. Note that \(\lim\limits_{n\to\infty}\frac{1}{n}\sum_{i=1}^{n}\phi\left(Y_{i}f(X_{i})\right)= \mathcal{R}_{P}^{\phi}(f)\) almost surely for all measurable \(f\). Therefore, the empirical \(\phi\)-risk minimizer \(\hat{f}_{n}\) defined in (1.3) can be regarded as an estimate of the so-called _target function_ which minimizes the \(\phi\)-risk \(\mathcal{R}_{P}^{\phi}\) over all Borel measurable functions \(f\). The target function can be defined pointwise. Rigorously, we say a measurable function \(f^{*}:[0,1]^{d}\to[-\infty,\infty]\) is a target function of the \(\phi\)-risk under the distribution \(P\) if for \(P_{X}\)-almost all \(x\in[0,1]^{d}\) the value of \(f^{*}\) at \(x\) minimizes \(\int_{\{-1,1\}}\phi(yz)\mathrm{d}P(y|x)\) over all \(z\in[-\infty,\infty]\), i.e., \[f^{*}(x)\in\operatorname*{arg\,min}_{z\in[-\infty,\infty]}\int_{\{-1,1\}}\phi (yz)\mathrm{d}P(y|x)\text{ for }P_{X}\text{-almost all }x\in[0,1]^{d}, \tag{1.6}\] where \(\phi(yz):=\varlimsup_{t\to yz}\phi(t)\) if \(z\in\{-\infty,\infty\}\), \(P_{X}\) is the marginal distribution of \(P\) on \([0,1]^{d}\), and \(P(\cdot|x)\) is the _regular conditional distribution_ of \(P\) on \(\{-1,1\}\) given \(x\) (cf. Lemma A.3.16 in [40]). In this paper, we will use \(f^{*}_{\phi,P}\) to denote the target function of the \(\phi\)-risk under \(P\). Note that \(f^{*}_{\phi,P}\) may take values in \(\{-\infty,\infty\}\), and \(f^{*}_{\phi,P}\) minimizes \(\mathcal{R}_{P}^{\phi}\) in the sense that \[\mathcal{R}_{P}^{\phi}(f^{*}_{\phi,P}):=\int_{[0,1]^{d}\times\{ -1,1\}}\phi(yf^{*}_{\phi,P}(x))\mathrm{d}P(x,y) \tag{1.7}\] \[=\inf\left\{\left.\mathcal{R}_{P}^{\phi}(g)\right|g:[0,1]^{d}\to \mathbb{R}\text{ is Borel measurable}\right\},\] where \(\phi(yf^{*}_{\phi,P}(x)):=\varlimsup_{t\to yf^{*}_{\phi,P}(x)}\phi(t)\) if \(yf^{*}_{\phi,P}(x)\in\{-\infty,\infty\}\) (cf. Lemma A.1). In practice, the choice of the loss function \(\phi\) varies, depending on the classification method used. For neural network classification, the _logistic loss_ \(\phi(t)=\log(1+\mathrm{e}^{-t})\), also known as the _cross entropy loss_, is commonly used. We now explain why the logistic loss is related to cross entropy.
Let \(\mathcal{X}\) be an arbitrary nonempty countable set equipped with the sigma algebra consisting of all its subsets. For any two probability measures \(Q_{0}\) and \(Q\) on \(\mathcal{X}\), the _cross entropy_ of \(Q\) relative to \(Q_{0}\) is defined as \(\mathrm{H}(Q_{0},Q):=-\sum_{z\in\mathcal{X}}Q_{0}(\{z\})\cdot\log Q(\{z\})\), where \(\log 0:=-\infty\) and \(0\cdot(-\infty):=0\) (cf. (2.112) of [32]). One can show that \(\mathrm{H}(Q_{0},Q)\geq\mathrm{H}(Q_{0},Q_{0})\geq 0\) and \[\{Q_{0}\}=\operatorname*{arg\,min}_{Q}\mathrm{H}(Q_{0},Q)\text{ if }\mathrm{H}(Q_{ 0},Q_{0})<\infty.\] Therefore, roughly speaking, the cross entropy \(\mathrm{H}(Q_{0},Q)\) characterizes how close is \(Q\) to \(Q_{0}\). For any \(a\in[0,1]\), let \(\mathscr{M}_{a}\) denote the probability measure on \(\{-1,1\}\) with \(\mathscr{M}_{a}(\{1\})=a\) and \(\mathscr{M}_{a}(\{-1\})=1-a\). Recall that any real-valued Borel measurable function \(f\) defined on the input space \([0,1]^{d}\) can induce a classifier \(\mathrm{sgn}\circ f\). We can interpret the construction of the classifier \(\mathrm{sgn}\circ f\) from \(f\) as follows. Consider the _logistic function_ \[\bar{l}:\mathbb{R}\to(0,1),\;z\mapsto\frac{1}{1+\mathrm{e}^{-z}}, \tag{1.8}\] which is strictly increasing. For each \(x\in[0,1]^{d}\), \(f\) induces a probability measure \(\mathscr{M}_{\bar{l}(f(x))}\) on \(\{-1,1\}\) via \(\bar{l}\), which we regard as a prediction made by \(f\) of the distribution of the output data (i.e., the two labels \(+1\) and \(-1\)) given the input data \(x\). Observe that the larger \(f(x)\) is, the closer the number \(\bar{l}(f(x))\) gets to \(1\), and the more likely the event \(\{1\}\) occurs under the distribution \(\mathscr{M}_{\bar{l}(f(x))}\). If \(\mathscr{M}_{\bar{l}(f(x))}(\{+1\})\geq\mathscr{M}_{\bar{l}(f(x))}(\{-1\})\), then \(+1\) is more likely to appear given the input data \(x\) and we thereby think of \(f\) as classifying the input \(x\) as class \(+1\). Otherwise, when \(\mathscr{M}_{\bar{l}(f(x))}(\{+1\})<\mathscr{M}_{\bar{l}(f(x))}(\{-1\})\), \(x\) is classified as \(-1\). In this way, \(f\) induces a classifier given by \[x\mapsto\begin{cases}+1,&\text{ if }\mathscr{M}_{\bar{l}(f(x))}(\{1\})\geq \mathscr{M}_{\bar{l}(f(x))}(\{-1\}),\\ -1,&\text{ if }\mathscr{M}_{\bar{l}(f(x))}(\{1\})<\mathscr{M}_{\bar{l}(f(x))}(\{-1\}). \end{cases} \tag{1.9}\] Indeed, the classifier in (1.9) is exactly \(\operatorname{sgn}\circ f\). Thus we can also measure the predictive performance of \(f\) in terms of \(\mathscr{M}_{\bar{l}(f(\cdot))}\) (instead of \(\operatorname{sgn}\circ f\)). To this end, one natural way is to compute the average "extent" of how close is \(\mathscr{M}_{\bar{l}(f(x))}\) to the true conditional distribution of the output given the input \(x\). If we use the cross entropy to characterize this "extent", then its average, which measures the classification performance of \(f\), will be \(\int_{[0,1]^{d}}\operatorname{H}(\mathscr{Y}_{x},\mathscr{M}_{\bar{l}(f(x))}) \mathrm{d}\mathscr{X}(x)\), where \(\mathscr{X}\) is the distribution of the input data, and \(\mathscr{Y}_{x}\) is the conditional distribution of the output data given the input \(x\). However, one can show that this quantity is just the logistic risk of \(f\). 
Indeed, \[\int_{[0,1]^{d}}\operatorname{H}(\mathscr{Y}_{x},\mathscr{M}_{ \bar{l}(f(x))})\mathrm{d}\mathscr{X}(x)\] \[=\int_{[0,1]^{d}}\left(-\mathscr{Y}_{x}(\{1\})\cdot\log(\mathscr{ M}_{\bar{l}(f(x))}(\{1\}))-\mathscr{Y}_{x}(\{-1\})\log(\mathscr{M}_{\bar{l}(f(x))}( \{-1\}))\right)\mathrm{d}\mathscr{X}(x)\] \[=\int_{[0,1]^{d}}\left(-\mathscr{Y}_{x}(\{1\})\cdot\log(\bar{l}(f (x)))-\mathscr{Y}_{x}(\{-1\})\log(1-\bar{l}(f(x)))\right)\mathrm{d}\mathscr{X }(x)\] \[=\int_{[0,1]^{d}}\left(\mathscr{Y}_{x}(\{1\})\cdot\log(1+\mathrm{ e}^{-f(x)})+\mathscr{Y}_{x}(\{-1\})\log(1+\mathrm{e}^{f(x)})\right)\mathrm{d} \mathscr{X}(x)\] \[=\int_{[0,1]^{d}}\left(\mathscr{Y}_{x}(\{1\})\cdot\phi(f(x))+ \mathscr{Y}_{x}(\{-1\})\phi(-f(x))\right)\mathrm{d}\mathscr{X}(x)\] \[=\int_{[0,1]^{d}}\int_{\{-1,1\}}\phi(yf(x))\mathrm{d}\mathscr{Y}_ {x}(y)\mathrm{d}\mathscr{X}(x)=\int_{[0,1]^{d}\times\{-1,1\}}\phi(yf(x)) \mathrm{d}P(x,y)=\mathcal{R}_{P}^{\phi}(f),\] where \(\phi\) is the logistic loss and \(P\) is the joint distribution of the input and output data, i.e., \(\mathrm{d}P(x,y)=\mathrm{d}\mathscr{Y}_{x}(y)\mathrm{d}\mathscr{X}(x)\). Therefore, the average cross entropy of the distribution \(\mathscr{M}_{\bar{l}(f(x))}\) induced by \(f\) to the true conditional distribution of the output data given the input data \(x\) is equal to the logistic risk of \(f\) with respect to the joint distribution of the input and output data, which explains why the logistic loss is also called the cross entropy loss. Compared with the misclassification error \(\mathcal{R}_{P}(f)\), which measures the performance of the classifier \(f\) in correctly generating the class label \(\mathrm{sgn}(f(x))\) that equals the most probable class label of the input data \(x\) (i.e., the label \(y_{x}\in\{-1,+1\}\) such that \(\mathscr{Y}_{x}(\{y_{x}\})\geq\mathscr{Y}_{x}(\{-y_{x}\})\)), the logistic risk \(\mathcal{R}_{P}^{\phi}(f)\) measures how close the induced distribution \(\mathscr{M}_{\bar{l}(f(x))}\) is to the true conditional distribution \(\mathscr{Y}_{x}\). Consequently, in comparison with the (excess) misclassification error, the (excess) logistic risk is also a reasonable quantity for characterizing the performance of classifiers, but from a different angle. When classifying with the logistic loss, we are essentially learning the conditional distribution \(\mathscr{Y}_{x}\) through the cross entropy and the logistic function \(\bar{l}\). Moreover, for any classifier \(\hat{f}_{n}:[0,1]^{d}\to\mathbb{R}\) trained with the logistic loss, the composite function \(\bar{l}\circ\hat{f}_{n}(x)=\mathscr{M}_{\bar{l}\circ\hat{f}_{n}(x)}(\{1\})\) yields an estimate of the _conditional class probability function_ \(\eta(x):=P(\{1\}\,|x)=\mathscr{Y}_{x}(\{1\})\). Therefore, classifiers trained with the logistic loss essentially capture more information about the exact value of the conditional class probability function \(\eta(x)\) than we actually need to minimize the misclassification error \(\mathcal{R}_{P}(\cdot)\), since knowledge of the sign of \(2\eta(x)-1\) is already sufficient for minimizing \(\mathcal{R}_{P}(\cdot)\) (see (2.49)). In addition, we point out that the excess logistic risk \(\mathcal{E}_{P}^{\phi}(f)\) is actually the average _Kullback-Leibler divergence_ (_KL divergence_) from \(\mathscr{M}_{\bar{l}(f(x))}\) to \(\mathscr{Y}_{x}\).
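The pointwise identity underlying the derivation above, namely \(\mathrm{H}(\mathscr{Y}_{x},\mathscr{M}_{\bar{l}(f(x))})=\eta(x)\phi(f(x))+(1-\eta(x))\phi(-f(x))\), can be verified numerically; the following is a small self-contained check we add for illustration, not part of the paper's analysis.

```python
import math

def phi(t):
    # logistic loss phi(t) = log(1 + e^{-t})
    return math.log(1.0 + math.exp(-t))

def cross_entropy(eta, fx):
    """H(Y_x, M_{l(f(x))}) when Y_x({1}) = eta and M({1}) = l(f(x))."""
    p1 = 1.0 / (1.0 + math.exp(-fx))  # the logistic function l applied to f(x)
    return -eta * math.log(p1) - (1.0 - eta) * math.log(1.0 - p1)

eta, fx = 0.7, 1.3  # arbitrary test values
lhs = cross_entropy(eta, fx)
rhs = eta * phi(fx) + (1.0 - eta) * phi(-fx)  # integrand of the logistic risk
assert abs(lhs - rhs) < 1e-12  # the two quantities coincide pointwise
```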
Here, for any two probability measures \(Q_{0}\) and \(Q\) on some countable set \(\mathcal{X}\), the KL divergence from \(Q\) to \(Q_{0}\) is defined as \(\mathrm{KL}(Q_{0}\|Q):=\sum_{z\in\mathcal{X}}Q_{0}(\{z\})\cdot\log\frac{Q_{0}( \{z\})}{Q(\{z\})}\), where \(Q_{0}(\{z\})\cdot\log\frac{Q_{0}(\{z\})}{Q(\{z\})}:=0\) if \(Q_{0}(\{z\})=0\) and \(Q_{0}(\{z\})\cdot\log\frac{Q_{0}(\{z\})}{Q(\{z\})}:=\infty\) if \(Q_{0}(\{z\})>0=Q(\{z\})\) (cf. (2.11) of [32] or Definition 2.5 of [44]). In this work, we focus on the generalization analysis of binary classification with empirical risk minimization over ReLU DNNs. That is, the classifiers under consideration are produced by algorithm (1.3) in which the hypothesis space \(\mathcal{F}\) is generated by deep ReLU networks. Based on recent studies in complexity and approximation theory of DNNs (e.g., [5, 33, 47]), several researchers have derived generalization bounds for \(\phi\)-ERMs over DNNs in binary classification problems [9, 22, 38]. However, to the best of our knowledge, the existing literature fails to establish satisfactory generalization analysis if the target function \(f_{\phi,P}^{*}\) is unbounded. In particular, take \(\phi\) to be the logistic loss, i.e., \(\phi(t)=\log(1+\mathrm{e}^{-t})\). The target function is then explicitly given by \(f_{\phi,P}^{*}=\log\frac{\eta}{1-\eta}\) (\(P_{X}\)-almost surely) with \(\eta(x):=P(\{1\}\,|x)\) (\(x\in[0,1]^{d}\)) being the conditional class probability function of \(P\) (cf. Lemma A.2), where recall that \(P(\cdot|x)\) denotes the conditional probability of \(P\) on \(\{-1,1\}\) given \(x\). Hence \(f_{\phi,P}^{*}\) is unbounded if \(\eta\) can be arbitrarily close to \(0\) or \(1\), which happens in many practical problems (see Section 3 for more details). For instance, we have \(\eta(x)=0\) or \(\eta(x)=1\) for a noise-free distribution \(P\), implying \(f_{\phi,P}^{*}(x)\in\{-\infty,\infty\}\) for \(P_{X}\)-almost all \(x\in[0,1]^{d}\), where \(P_{X}\) is the marginal distribution of \(P\) on \([0,1]^{d}\). DNNs trained with the logistic loss perform efficiently in various image recognition applications, as the smoothness of the loss function can further simplify the optimization procedure [11, 27, 39]. However, due to the unboundedness of \(f_{\phi,P}^{*}\), the existing generalization analysis for classification with DNNs and the logistic loss either results in slow rates of convergence (e.g., the logarithmic rate in [38]) or can only be conducted under very restrictive conditions (e.g., [22, 9]) (cf. the discussions in Section 3). The unboundedness of the target function brings several technical difficulties to the generalization analysis. Indeed, if \(f_{\phi,P}^{*}\) is unbounded, it cannot be approximated uniformly by continuous functions on \([0,1]^{d}\), which poses extra challenges for bounding the approximation error. Besides, previous sample error estimates based on concentration techniques are no longer valid because these estimates usually require the random variables involved to be bounded or to satisfy strong tail conditions (cf. Chapter 2 of [45]). Therefore, in contrast to empirical studies, the previous strategies for generalization analysis could not demonstrate the efficiency of classification with DNNs and the logistic loss. To fill this gap, in this paper we develop a novel theoretical analysis to establish tight generalization bounds for training DNNs with the ReLU activation function and logistic loss in binary classification.
Our main contributions are summarized as follows. * For \(\phi\) being the logistic loss, we establish an oracle-type inequality to bound the excess \(\phi\)-risk without using the explicit form of the target function \(f_{\phi,P}^{*}\). Through constructing a suitable bivariate function \(\psi:[0,1]^{d}\times\{-1,1\}\rightarrow\mathbb{R}\), generalization analysis based on this oracle-type inequality can remove the boundedness restriction of the target function. Similar results hold even for the more general case when \(\phi\) is merely Lipschitz continuous (see Theorem 2.1 and related discussions in Section 2.1). * By using our oracle-type inequality, we establish tight generalization bounds for fully connected ReLU DNN classifiers \(\hat{f}_{n}^{\mathbf{FNN}}\) trained by empirical logistic risk minimization (see (2.14)) and obtain sharp convergence rates in various settings: * We establish optimal convergence rates for the excess logistic risk of \(\hat{f}_{n}^{\mathbf{FNN}}\) only requiring the Holder smoothness of the conditional probability function \(\eta\) of the data distribution. Specifically, for Holder-\(\beta\) smooth \(\eta\), we show that the convergence rates of the excess logistic risk of \(\hat{f}_{n}^{\mathbf{FNN}}\) can achieve \(\mathcal{O}(\left(\frac{(\log n)^{5}}{n}\right)^{\frac{\beta}{\beta+d}})\), which is optimal up to the logarithmic term \((\log n)^{\frac{5\beta}{\beta+d}}\). From this we obtain the convergence rate \(\mathcal{O}(\left(\frac{(\log n)^{5}}{n}\right)^{\frac{\beta}{2\beta+2d}})\) of the excess misclassification error of \(\hat{f}_{n}^{\mathbf{FNN}}\), which is very close to the optimal rate, by using the calibration inequality (see Theorem 2.2). As a by-product, we also derive a new tight error bound for the approximation of the natural logarithm function (which is unbounded near zero) by ReLU DNNs (see Theorem 2.4). This bound plays a key role in establishing the aforementioned optimal rates of convergence. * We consider a compositional assumption which requires the conditional probability function \(\eta\) to be the composition \(h_{q}\circ h_{q-1}\circ\cdots\circ h_{1}\circ h_{0}\) of several vector-valued multivariate functions \(h_{i}\), satisfying that each component function of \(h_{i}\) is either a Holder-\(\beta\) smooth function only depending on a small number \(d_{*}\) of its input variables or the maximum value function among some of its input variables. We show that under this compositional assumption the convergence rate of the excess logistic risk of \(\hat{f}_{n}^{\mathbf{FNN}}\) can achieve \(\mathcal{O}(\left(\frac{(\log n)^{5}}{n}\right)^{\frac{\beta\cdot(1\wedge\beta)^{q}}{d_{*}+\beta\cdot(1\wedge\beta)^{q}}})\), which is optimal up to the logarithmic term \((\log n)^{\frac{5\beta\cdot(1\wedge\beta)^{q}}{d_{*}+\beta\cdot(1\wedge\beta)^{q}}}\). We then use the calibration inequality to obtain the convergence rate \(\mathcal{O}(\left(\frac{(\log n)^{5}}{n}\right)^{\frac{\beta\cdot(1\wedge\beta)^{q}}{2d_{*}+2\beta\cdot(1\wedge\beta)^{q}}})\) of the excess misclassification error of \(\hat{f}_{n}^{\mathbf{FNN}}\) (see Theorem 2.3).
Note that the derived convergence rates \(\mathcal{O}\big{(}\big{(}\frac{(\log n)^{5}}{n}\big{)}^{\frac{\beta\cdot(1\wedge\beta)^{q}}{d_{*}+\beta\cdot(1\wedge\beta)^{q}}}\big{)}\) and \(\mathcal{O}\big{(}\big{(}\frac{(\log n)^{5}}{n}\big{)}^{\frac{\beta\cdot(1\wedge\beta)^{q}}{2d_{*}+2\beta\cdot(1\wedge\beta)^{q}}}\big{)}\) are independent of the input dimension \(d\), thereby circumventing the well-known curse of dimensionality. It can be shown that the above compositional assumption is likely to be satisfied in practice (see comments before Theorem 2.3). Thus this result helps to explain the huge success of DNNs in practical classification problems, especially high-dimensional ones. * We derive convergence rates of the excess misclassification error of \(\hat{f}_{n}^{\mathbf{FNN}}\) under the piecewise smooth decision boundary condition combined with the noise and margin conditions (see Theorem 2.5). As a special case of this result, we show that when the input data are bounded away from the decision boundary almost surely, the derived rates can also be dimension-free. * We demonstrate the optimality of the convergence rates stated above by presenting corresponding minimax lower bounds (see Theorem 2.6 and Corollary 2.1). The rest of this paper is organized as follows. In the remainder of this section, we first introduce some conventions and notations that will be used in this paper. Then we describe the mathematical modeling of fully connected ReLU neural networks which defines the hypothesis spaces in our setting. At the end of this section, we provide a symbol glossary for the convenience of readers. In Section 2, we present our main results in this paper, including the oracle-type inequality, several generalization bounds for classifiers obtained from empirical logistic risk minimization over fully connected ReLU DNNs, and two minimax lower bounds. Section 3 provides discussions and comparisons with related works and Section 4 concludes the paper. In the Appendix, we first present covering number bounds and some approximation bounds for the space of fully connected ReLU DNNs, and then give detailed proofs of results in the main body of this paper. ### Conventions and Notations Throughout this paper, we follow the conventions that \(0^{0}:=1\), \(1^{\infty}:=1\), \(\frac{z}{0}:=\infty=:\infty^{c}\), \(\log(\infty):=\infty\), \(\log 0:=-\infty\), \(0\cdot w:=0=:w\cdot 0\) and \(\frac{a}{\infty}:=0=:b^{\infty}\) for any \(a\in\mathbb{R},b\in[0,1),c\in(0,\infty)\), \(z\in(0,\infty]\), \(w\in[-\infty,\infty]\), where we denote by \(\log\) the natural logarithm function (i.e. the base-e logarithm function). The terminology "measurable" means "Borel measurable" unless otherwise specified. Any Borel subset of some Euclidean space \(\mathbb{R}^{m}\) is equipped with the Borel sigma algebra by default. Let \(\mathcal{G}\) be an arbitrary measurable space and \(n\) be a positive integer. We call any sequence of \(\mathcal{G}\)-valued random variables \(\left\{Z_{i}\right\}_{i=1}^{n}\) a _sample_ in \(\mathcal{G}\) of size \(n\).
Furthermore, for any measurable space \(\mathcal{F}\) and any sample \(\left\{Z_{i}\right\}_{i=1}^{n}\) in \(\mathcal{G}\), an \(\mathcal{F}\)-valued statistic on \(\mathcal{G}^{n}\) from the sample \(\left\{Z_{i}\right\}_{i=1}^{n}\) is a random variable \(\hat{\theta}\) together with a measurable map \(\mathcal{T}:\mathcal{G}^{n}\to\mathcal{F}\) such that \(\hat{\theta}=\mathcal{T}(Z_{1},\ldots,Z_{n})\), where \(\mathcal{T}\) is called _the map associated with_ the statistic \(\hat{\theta}\). Let \(\hat{\theta}\) be an arbitrary \(\mathcal{F}\)-valued statistic from some sample \(\left\{Z_{i}\right\}_{i=1}^{n}\) and \(\mathcal{T}\) is the map associated with \(\hat{\theta}\). Then for any measurable space \(\mathcal{D}\) and any measurable map \(\mathcal{T}_{0}:\mathcal{F}\to\mathcal{D}\), \(\mathcal{T}_{0}(\hat{\theta})=\mathcal{T}_{0}(\mathcal{T}(Z_{1},\ldots,Z_{n}))\) is a \(\mathcal{D}\)-valued statistic from the sample \(\left\{Z_{i}\right\}_{i=1}^{n}\), and \(\mathcal{T}_{0}\circ\mathcal{T}\) is the map associated with \(\mathcal{T}_{0}(\hat{\theta})\). Next we will introduce some notations used in this paper. We denote by \(\mathbb{N}\) the set of all positive integers \(\left\{1,2,3,4,\ldots\right\}\). For \(d\in\mathbb{N}\), we use \(\mathcal{F}_{d}\) to denote the set of all Borel measurable functions from \([0,1]^{d}\) to \((-\infty,\infty)\), and use \(\mathcal{H}_{0}^{d}\) to denote the set of all Borel probability measures on \([0,1]^{d}\times\left\{-1,1\right\}\). For any set \(A\), the indicator function of \(A\) is given by \[\mathbb{1}_{A}(x):=\begin{cases}0,&\text{if }x\notin A,\\ 1,&\text{if }x\in A,\end{cases} \tag{1.10}\] and the number of elements of \(A\) is denoted by \(\#(A)\). For any finite dimensional vector \(v\) and any positive integer \(l\) less than or equal to the dimension of \(v\), we denote by \((v)_{l}\) the \(l\)-th component of \(v\). More generally, for any nonempty subset \(I=\left\{i_{1},i_{2},\ldots,i_{m}\right\}\) of \(\mathbb{N}\) with \(1\leq i_{1}<i_{2}<\cdots<i_{m}\leq\) the dimension of \(v\), we denote \((v)_{I}:=\big{(}(v)_{i_{1}},(v)_{i_{2}},\ldots,(v)_{i_{m}}\big{)}\), which is a \(\#(I)\)-dimensional vector. For any function \(f\), we use \(\mathbf{dom}(f)\) to denote the domain of \(f\), and use \(\mathbf{ran}(f)\) to denote the range of \(f\), that is, \(\mathbf{ran}(f):=\left\{f(x)\big{|}x\in\mathbf{dom}(f)\right\}\). If \(f\) is a \([-\infty,\infty]^{m}\)-valued function for some \(m\in\mathbb{N}\) with \(\mathbf{dom}(f)\) containing a nonempty set \(\Omega\), then the uniform norm of \(f\) on \(\Omega\) is given by \[\|f\|_{\Omega}:=\sup\left\{\left|\big{(}f(x)\big{)}_{i}\right|\Big{|}x\in \Omega,\,i\in\{1,2,\dots,m\}\right\}. \tag{1.11}\] For integer \(m\geq 2\) and real numbers \(a_{1},\cdots,a_{m}\), define \(a_{1}\lor a_{2}\vee\cdots\lor a_{m}=\max\{a_{1},a_{2},\cdots a_{m}\}\) and \(a_{1}\wedge a_{2}\wedge\cdots\wedge a_{m}=\min\{a_{1},a_{2},\cdots a_{m}\}.\) Given a real matrix \(\boldsymbol{A}=(a_{i,j})_{i=1,\dots,m,j=1,\dots,l}\) and \(t\in[0,\infty]\), the \(\ell^{t}\)-norm of \(\boldsymbol{A}\) is defined by \[\|\boldsymbol{A}\|_{t}:=\left\{\begin{aligned} &\sum_{i=1}^{m}\sum_{j=1}^{l} \mathbbm{1}_{(0,\infty)}(|a_{i,j}|),&\text{if }t=0,\\ &\left|\sum_{i=1}^{m}\sum_{j=1}^{l}|a_{i,j}|^{t}\right|^{1/t},& \text{if }0<t<\infty,\\ &\sup\left\{|a_{i,j}|\left|i\in\{1,\cdots,m\},j\in\{1,\cdots,l\} \right.\right\},&\text{if }t=\infty.\end{aligned}\right. 
\tag{1.12}\] Note that a vector is exactly a matrix with only one column or one row. Consequently, (1.12) with \(l=1\) or \(m=1\) actually defines the \(\ell^{t}\)-norm of a real vector \(\boldsymbol{A}\). Let \(\mathcal{G}\) be a measurable space, \(\left\{Z_{i}\right\}_{i=1}^{n}\) be a sample in \(\mathcal{G}\) of size \(n\), \(\mathcal{P}_{n}\) be a probability measure on \(\mathcal{G}^{n}\), and \(\hat{\theta}\) be a \([-\infty,\infty]\)-valued statistic on \(\mathcal{G}^{n}\) from the sample \(\left\{Z_{i}\right\}_{i=1}^{n}\). Then we denote \[\boldsymbol{E}_{\mathcal{P}_{n}}[\hat{\theta}]:=\int\mathcal{T}\mathrm{d} \mathcal{P}_{n} \tag{1.13}\] provided that the integral \(\int\mathcal{T}\mathrm{d}\mathcal{P}_{n}\) exists, where \(\mathcal{T}\) is the map associated with \(\hat{\theta}\). Therefore, \[\boldsymbol{E}_{\mathcal{P}_{n}}[\hat{\theta}]=\mathbb{E}\left[\mathcal{T}(Z _{1},\dots,Z_{n})\right]=\mathbb{E}[\hat{\theta}]\] if the joint distribution of \((Z_{1},\dots,Z_{n})\) is exactly \(\mathcal{P}_{n}\). Let \(P\) be a Borel probability measure on \([0,1]^{d}\times\{-1,1\}\) and \(x\in[0,1]^{d}\). We use \(P(\cdot|x)\) to denote the regular conditional distribution of \(P\) on \(\{-1,1\}\) given \(x\), and \(P_{X}\) to denote the marginal distribution of \(P\) on \([0,1]^{d}\). For short, we will call the function \([0,1]^{d}\ni x\mapsto P(\{1\}\,|x)\in[0,1]\) the _conditional probability function_ (instead of the _conditional class probability function_) of \(P\). For any probability measure \(\mathscr{Q}\) defined on some measurable space \((\Omega,\mathcal{F})\) and any \(n\in\mathbb{N}\), we use \(\mathscr{Q}^{\otimes n}\) to denote the product measure \(\underbrace{\mathscr{Q}\times\mathscr{Q}\times\cdots\mathscr{Q}}_{n}\) defined on the product measurable space \((\underbrace{\Omega\times\Omega\times\cdots\Omega}_{n},\,\underbrace{ \mathcal{F}\otimes\mathcal{F}\otimes\cdots\mathcal{F}}_{n})\). ### Spaces of fully connected neural networks In this paper, we restrict ourselves to neural networks with the ReLU activation function. Consequently, hereinafter, for simplicity, we sometimes omit the word "ReLU" and the terminology "neural networks" will always refer to "ReLU neural networks". The ReLU function is given by \(\sigma:\mathbb{R}\rightarrow[0,\infty),\ t\mapsto\max\left\{t,0\right\}\). For any vector \(v\in\mathbb{R}^{m}\) with \(m\) being some positive integer, the \(v\)-shifted ReLU function is defined as \(\sigma_{v}:\mathbb{R}^{m}\rightarrow[0,\infty)^{m},\ x\mapsto\sigma(x-v)\), where the function \(\sigma\) is applied componentwise. Neural networks considered in this paper can be expressed as a family of real-valued functions which take the form \[f:\mathbb{R}^{d}\rightarrow\mathbb{R},\quad x\mapsto\boldsymbol{W}_{L}\sigma _{v_{L}}\boldsymbol{W}_{L-1}\sigma_{v_{L-1}}\cdots\boldsymbol{W}_{1}\sigma_{v _{1}}\boldsymbol{W}_{0}x, \tag{1.14}\] where the depth \(L\) denotes the number of hidden layers, \(\mathrm{m}_{k}\) is the width of \(k\)-th layer, \(\boldsymbol{W}_{k}\) is an \(\mathrm{m}_{k+1}\times\mathrm{m}_{k}\) weight matrix with \(\mathrm{m}_{0}=d\) and \(\mathrm{m}_{L+1}=1\), and the shift vector \(v_{k}\in\mathbb{R}^{\mathrm{m}_{k}}\) is called a bias. The architecture of a neural network is parameterized by weight matrices \(\left\{\boldsymbol{W}_{k}\right\}_{k=0}^{L}\) and biases \(\{v_{k}\}_{k=1}^{L}\), which will be estimated from data. 
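To make the matrix-vector form (1.14) concrete, here is a minimal Python sketch (the function name, array names, and toy sizes are ours, purely for illustration) that evaluates such a network on an input \(x\):

```python
import numpy as np

def relu_net(x, weights, biases):
    """Evaluate f(x) = W_L sigma_{v_L} W_{L-1} ... W_1 sigma_{v_1} W_0 x as in (1.14).

    weights: list [W_0, ..., W_L], where W_k has shape (m_{k+1}, m_k),
             with m_0 = d and m_{L+1} = 1.
    biases:  list [v_1, ..., v_L], where v_k has shape (m_k,).
    """
    z = weights[0] @ x                      # W_0 x
    for W, v in zip(weights[1:], biases):   # pairs (W_k, v_k) for k = 1, ..., L
        z = W @ np.maximum(z - v, 0.0)      # W_k applied to sigma_{v_k}(z) = sigma(z - v_k)
    return z.item()                         # scalar output since m_{L+1} = 1

# toy instance: d = 2, L = 1 hidden layer of width m_1 = 3
rng = np.random.default_rng(0)
W0, W1 = rng.standard_normal((3, 2)), rng.standard_normal((1, 3))
v1 = rng.standard_normal(3)
print(relu_net(np.array([0.4, 0.7]), [W0, W1], [v1]))
```

Note that, as in (1.14), the shifted ReLU \(\sigma_{v_{k}}\) is applied componentwise between consecutive affine maps.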
Throughout the paper, whenever we talk about a neural network, we will explicitly associate it with a function \(f\) of the form (1.14) generated by \(\{\boldsymbol{W}_{k}\}_{k=0}^{L}\) and \(\{v_{k}\}_{k=1}^{L}\). The space of fully connected neural networks is characterized by their depth and width, as well as the number of nonzero parameters in weight matrices and bias vectors. In addition, the complexity of this space is also determined by the \(\|\cdot\|_{\infty}\)-bounds of neural network parameters and \(\|\cdot\|_{[0,1]^{d}}\)-bounds of associated functions of form (1.14). Concretely, let \((G,N)\in[0,\infty)^{2}\) and \((S,B,F)\in[0,\infty]^{3}\); the space of fully connected neural networks is defined as
\[\mathcal{F}_{d}^{\mathbf{FNN}}(G,N,S,B,F):=\left\{f:\mathbb{R}^{d}\to\mathbb{R}\,\middle|\,\begin{aligned}&f\text{ is of the form (1.14) with }L\leq G,\ \max\{\mathrm{m}_{1},\ldots,\mathrm{m}_{L}\}\leq N,\\&\sum_{k=0}^{L}\|\boldsymbol{W}_{k}\|_{0}+\sum_{k=1}^{L}\|v_{k}\|_{0}\leq S,\\&\max_{0\leq k\leq L}\|\boldsymbol{W}_{k}\|_{\infty}\vee\max_{1\leq k\leq L}\|v_{k}\|_{\infty}\leq B,\text{ and }\|f\|_{[0,1]^{d}}\leq F\end{aligned}\right\}. \tag{1.15}\]
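The quantities constrained by \(G\), \(N\), \(S\) and \(B\) in (1.15) can be read off directly from a parameterization \(\{\boldsymbol{W}_{k}\}_{k}\), \(\{v_{k}\}_{k}\); the following sketch of how one might compute them is ours, not from the paper (the bound \(F\) on \(\|f\|_{[0,1]^{d}}\) is deliberately not computed here, since it requires bounding the associated function over the whole cube, e.g. via interval arithmetic):

```python
import numpy as np

def complexity(weights, biases):
    # weights = [W_0, ..., W_L], biases = [v_1, ..., v_L], as in (1.14)
    L = len(biases)                                   # depth: number of hidden layers
    width = max(W.shape[0] for W in weights[:-1])     # max over m_1, ..., m_L
    params = weights + biases
    nonzero = sum(int(np.count_nonzero(A)) for A in params)  # summed ell^0-norms
    sup_par = max(np.max(np.abs(A)) for A in params)         # ell^infty bound on entries
    return L, width, nonzero, sup_par

rng = np.random.default_rng(0)
W0, W1 = rng.standard_normal((3, 2)), rng.standard_normal((1, 3))
v1 = rng.standard_normal(3)
# membership in F_d^FNN(G, N, S, B, F) then amounts to L <= G, width <= N,
# nonzero <= S, sup_par <= B, plus the separate check sup |f| <= F on [0,1]^d
print(complexity([W0, W1], [v1]))
```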
### Symbol glossary

\begin{tabular}{c l l}
\(\|\cdot\|_{\Omega}\) & The uniform norm on a set \(\Omega\). & Eq. (1.11) \\
\(\|\cdot\|_{t}\) & The \(\ell^{t}\)-norm. & Eq. (1.12) \\
\(\|\cdot\|_{\mathcal{C}^{k,\lambda}(\Omega)}\) & The Holder norm. & Eq. (2.12) \\
\(\operatorname{sgn}(\cdot)\) & The sign function. & Below Eq. (1.2) \\
\(\sigma\) & The ReLU function, that is, \(\mathbb{R}\ni t\mapsto\max\left\{0,t\right\}\in[0,\infty)\). & Above Eq. (1.14) \\
\(\sigma_{v}\) & The \(v\)-shifted ReLU function. & Above Eq. (1.14) \\
\(\mathscr{M}_{a}\) & The probability measure on \(\{-1,1\}\) with \(\mathscr{M}_{a}(\{1\})=a\). & Above Eq. (1.8) \\
\(P_{X}\) & The marginal distribution of \(P\) on \([0,1]^{d}\). & Below Eq. (1.6) \\
\(P(\cdot|x)\) & The regular conditional distribution of \(P\) on \(\{-1,1\}\) given \(x\). & Below Eq. (1.6) \\
\(P_{\eta,\mathscr{Q}}\) & The probability on \([0,1]^{d}\times\{-1,1\}\) of which the marginal distribution on \([0,1]^{d}\) is \(\mathscr{Q}\) and the conditional probability function is \(\eta\). & Eq. (2.57) \\
\(P_{\eta}\) & The probability on \([0,1]^{d}\times\{-1,1\}\) of which the marginal distribution on \([0,1]^{d}\) is the Lebesgue measure and the conditional probability function is \(\eta\). & Below Eq. (2.57) \\
\(\boldsymbol{E}_{\mathcal{P}_{n}}[\hat{\theta}]\) & The expectation of a statistic \(\hat{\theta}\) when the joint distribution of the sample is \(\mathcal{P}_{n}\). & Eq. (1.13) \\
\(\mathscr{Q}^{\otimes n}\) & The product measure \(\underbrace{\mathscr{Q}\times\mathscr{Q}\times\cdots\times\mathscr{Q}}_{n}\). & Below Eq. (1.13) \\
\(\mathcal{R}_{P}(f)\) & The misclassification error of \(f\) with respect to \(P\). & Eq. (1.1) \\
\(\mathcal{E}_{P}(f)\) & The excess misclassification error of \(f\) with respect to \(P\). & Eq. (1.2) \\
\(\mathcal{R}_{P}^{\phi}(f)\) & The \(\phi\)-risk of \(f\) with respect to \(P\). & Eq. (1.4) \\
\(\mathcal{E}_{P}^{\phi}(f)\) & The excess \(\phi\)-risk of \(f\) with respect to \(P\). & Eq. (1.5) \\
\(f_{\phi,P}^{*}\) & The target function of the \(\phi\)-risk under some distribution \(P\). & Eq. (1.6) \\
\(\mathcal{N}(\mathcal{F},\gamma)\) & The covering number of a class of real-valued functions \(\mathcal{F}\) with radius \(\gamma\) in the uniform norm. & Eq. (2.1) \\
\(\mathcal{B}_{r}^{\beta}(\Omega)\) & The closed ball of radius \(r\) centered at the origin in the Holder space of order \(\beta\) on \(\Omega\). & Eq. (2.13) \\
\(\mathcal{G}_{d}^{\mathbf{M}}(d_{\star})\) & The set of all functions from \([0,1]^{d}\) to \(\mathbb{R}\) which compute the maximum value of up to \(d_{\star}\) components of their input vectors. & Eq. (2.27) \\
\(\mathcal{G}_{d}^{\mathbf{H}}(d_{*},\beta,r)\) & The set of all functions in \(\mathcal{B}_{r}^{\beta}([0,1]^{d})\) whose output values depend on exactly \(d_{*}\) components of their input vectors. & Eq. (2.28) \\
\(\mathcal{G}_{\infty}^{\mathbf{H}}(d_{*},\beta,r)\) & \(\mathcal{G}_{\infty}^{\mathbf{H}}(d_{*},\beta,r):=\bigcup_{d=1}^{\infty}\mathcal{G}_{d}^{\mathbf{H}}(d_{*},\beta,r)\). & Above Eq. (2.30) \\
\(\mathcal{G}_{\infty}^{\mathbf{M}}(d_{\star})\) & \(\mathcal{G}_{\infty}^{\mathbf{M}}(d_{\star}):=\bigcup_{d=1}^{\infty}\mathcal{G}_{d}^{\mathbf{M}}(d_{\star})\). & Above Eq. (2.31) \\
\(\mathcal{G}_{d}^{\mathbf{CH}}(\cdots)\) & \(\mathcal{G}_{d}^{\mathbf{CH}}(q,K,d_{*},\beta,r)\) consists of compositional functions \(h_{q}\circ\cdots\circ h_{0}\) satisfying that each component function of \(h_{i}\) belongs to \(\mathcal{G}_{\infty}^{\mathbf{H}}(d_{*},\beta,r)\). & Eq. (2.31) \\
\(\mathcal{G}_{d}^{\mathbf{CHOM}}(\cdots)\) & \(\mathcal{G}_{d}^{\mathbf{CHOM}}(q,K,d_{\star},d_{*},\beta,r)\) consists of compositional functions \(h_{q}\circ\cdots\circ h_{0}\) satisfying that each component function of \(h_{i}\) belongs to \(\mathcal{G}_{\infty}^{\mathbf{H}}(d_{*},\beta,r)\cup\mathcal{G}_{\infty}^{\mathbf{M}}(d_{\star})\). & Eq. (2.32) \\
\(\mathcal{C}^{d,\beta,r,I,\Theta}\) & The set of binary classifiers \(\mathsf{C}:[0,1]^{d}\rightarrow\{-1,+1\}\) such that \(\left\{x\in[0,1]^{d}\,|\,\mathsf{C}(x)=+1\right\}\) is the union of some disjoint closed regions with piecewise Holder smooth boundary. & Eq. (2.48) \\
\(\Delta_{\mathsf{C}}(x)\) & The distance from some point \(x\in[0,1]^{d}\) to the decision boundary of some classifier \(\mathsf{C}\in\mathcal{C}^{d,\beta,r,I,\Theta}\). & Eq. (2.48) \\
\(\mathcal{F}_{d}\) & The set of all Borel measurable functions from \([0,1]^{d}\) to \((-\infty,\infty)\). & Above Eq. (1.10) \\
\(\mathcal{F}_{d}^{\mathbf{FNN}}(\cdots)\) & The class of ReLU neural networks defined on \(\mathbb{R}^{d}\). & Eq. (1.15) \\
\(\mathcal{H}_{0}^{d}\) & The set of all Borel probability measures on \([0,1]^{d}\times\{-1,1\}\). & Above Eq. (1.10) \\
\(\mathcal{H}_{1}^{d,\beta,r}\) & The set of all probability measures \(P\in\mathcal{H}_{0}^{d}\) whose conditional probability function coincides with some function in \(\mathcal{B}_{r}^{\beta}\left([0,1]^{d}\right)\) \(P_{X}\)-a.s.. & Eq. (2.15) \\
\(\mathcal{H}_{2,s_{1},c_{1},t_{1}}^{d,\beta,r}\) & The set of all probability measures \(P\in\mathcal{H}_{1}^{d,\beta,r}\) satisfying the noise condition (2.24). & Eq. (2.26) \\
\(\mathcal{H}_{3,A}^{d,\beta,r}\) & The set of all probability measures \(P\in\mathcal{H}_{0}^{d}\) whose marginal distribution on \([0,1]^{d}\) is the Lebesgue measure and whose conditional probability function is in \(\mathcal{B}_{r}^{\beta}\left([0,1]^{d}\right)\) and bounded away from \(\frac{1}{2}\) almost surely. & \\
\(\mathcal{H}_{4,q,K,d_{\star},d_{*}}^{d,\beta,r}\) & The set of all probability measures \(P\in\mathcal{H}_{0}^{d}\) whose conditional probability function coincides with some function in \(\mathcal{G}_{d}^{\mathbf{CHOM}}(q,K,d_{\star},d_{*},\beta,r)\) \(P_{X}\)-a.s.. & \\
\(\mathcal{H}_{5,A,q,K,d_{*}}^{d,\beta,r}\) & The set of all probability measures \(P\in\mathcal{H}_{0}^{d}\) whose marginal distribution on \([0,1]^{d}\) is the Lebesgue measure and whose conditional probability function is in \(\mathcal{G}_{d}^{\mathbf{CH}}(q,K,d_{*},\beta,r)\) and bounded away from \(\frac{1}{2}\) almost surely. & \\
\hline \end{tabular}

## 2 Main results

### Main Upper Bounds

In this subsection, we state our main results about upper bounds for the (excess) logistic risk or (excess) misclassification error of empirical logistic risk minimizers.
The first result, given in Theorem 2.1, is an oracle-type inequality which provides upper bounds for the logistic risk of empirical logistic risk minimizers. Oracle-type inequalities have been extensively studied in the literature of nonparametric statistics (see [21] and references therein). As one of the main contributions in this paper, this inequality deserves special attention in its own right, allowing us to establish a novel strategy for generalization analysis. Before we state Theorem 2.1, we introduce some notations. For any pseudometric space \(\left(\mathcal{F},\rho\right)\) (cf. Section 10.5 of [1]) and \(\gamma\in\left(0,\infty\right)\), the covering number of \(\left(\mathcal{F},\rho\right)\) with radius \(\gamma\) is defined as
\[\mathcal{N}\left(\left(\mathcal{F},\rho\right),\gamma\right):=\inf\left\{\#\left(\mathcal{A}\right)\,\middle|\,\begin{aligned}&\mathcal{A}\subset\mathcal{F},\text{ and for any }f\in\mathcal{F}\text{ there}\\&\text{exists }g\in\mathcal{A}\text{ such that }\rho(f,g)\leq\gamma\end{aligned}\right\},\]
where we recall that \(\#\left(\mathcal{A}\right)\) denotes the number of elements of the set \(\mathcal{A}\). When the pseudometric \(\rho\) on \(\mathcal{F}\) is clear and no confusion arises, we write \(\mathcal{N}(\mathcal{F},\gamma)\) instead of \(\mathcal{N}\left(\left(\mathcal{F},\rho\right),\gamma\right)\) for simplicity. In particular, if \(\mathcal{F}\) consists of real-valued functions which are bounded on \([0,1]^{d}\), we will use \(\mathcal{N}(\mathcal{F},\gamma)\) to denote
\[\mathcal{N}\left(\left(\mathcal{F},\rho:(f,g)\mapsto\sup_{x\in[0,1]^{d}}|f(x)-g(x)|\,\right),\gamma\right) \tag{2.1}\]
unless otherwise specified. Recall that the \(\phi\)-risk of a measurable function \(f:[0,1]^{d}\to\mathbb{R}\) with respect to a distribution \(P\) on \([0,1]^{d}\times\{-1,1\}\) is denoted by \(\mathcal{R}_{P}^{\phi}(f)\) and defined in (1.4).

**Theorem 2.1**.: _Let \(\{(X_{i},Y_{i})\}_{i=1}^{n}\) be an i.i.d. sample of a probability distribution \(P\) on \([0,1]^{d}\times\{-1,1\}\), \(\mathcal{F}\) be a nonempty class of uniformly bounded real-valued functions defined on \([0,1]^{d}\), and \(\hat{f}_{n}\) be an ERM with respect to the logistic loss \(\phi(t)=\log(1+\mathrm{e}^{-t})\) over \(\mathcal{F}\), i.e.,_
\[\hat{f}_{n}\in\operatorname*{arg\,min}_{f\in\mathcal{F}}\frac{1}{n}\sum_{i=1}^{n}\phi\left(Y_{i}f(X_{i})\right). \tag{2.2}\]
_If there exists a measurable function \(\psi:[0,1]^{d}\times\{-1,1\}\to\mathbb{R}\) and a constant triple \((M,\Gamma,\gamma)\in(0,\infty)^{3}\) such that_
\[\int_{[0,1]^{d}\times\{-1,1\}}\psi(x,y)\mathrm{d}P(x,y)\leq\inf_{f\in\mathcal{F}}\int_{[0,1]^{d}\times\{-1,1\}}\phi(yf(x))\mathrm{d}P(x,y), \tag{2.3}\]
\[\sup\left\{\phi(t)\,\middle|\,|t|\leq\sup_{f\in\mathcal{F}}\|f\|_{[0,1]^{d}}\right\}\vee\sup\left\{|\psi(x,y)|\,\middle|\,(x,y)\in[0,1]^{d}\times\{-1,1\}\right\}\leq M, \tag{2.4}\]
\[\int_{[0,1]^{d}\times\{-1,1\}}\left(\phi(yf(x))-\psi(x,y)\right)^{2}\mathrm{d}P(x,y)\leq\Gamma\cdot\left(\mathcal{R}_{P}^{\phi}(f)-\int_{[0,1]^{d}\times\{-1,1\}}\psi(x,y)\mathrm{d}P(x,y)\right),\ \forall\ f\in\mathcal{F}, \tag{2.5}\]
_and_
\[W:=\max\left\{3,\,\mathcal{N}\left(\mathcal{F},\gamma\right)\right\}<\infty,\]
_then for any \(\varepsilon\in(0,1)\), there holds_
\[\begin{split}&\mathbb{E}\left[\mathcal{R}_{P}^{\phi}\left(\hat{f}_{n}\right)-\int_{[0,1]^{d}\times\{-1,1\}}\psi(x,y)\mathrm{d}P(x,y)\right]\\&\leq 80\cdot\frac{(1+\varepsilon)^{2}}{\varepsilon}\cdot\frac{\Gamma\log W}{n}+(20+20\varepsilon)\cdot\frac{M\log W}{n}+(20+20\varepsilon)\cdot\sqrt{\gamma}\cdot\sqrt{\frac{\Gamma\log W}{n}}\\&\quad+4\gamma+(1+\varepsilon)\cdot\inf_{f\in\mathcal{F}}\left(\mathcal{R}_{P}^{\phi}(f)-\int_{[0,1]^{d}\times\{-1,1\}}\psi(x,y)\mathrm{d}P(x,y)\right).\end{split} \tag{2.6}\]
According to its proof in Appendix A.3.2, Theorem 2.1 remains true when the logistic loss is replaced by any nonnegative function \(\phi\) satisfying
\[\left|\phi(t)-\phi(t^{\prime})\right|\leq\left|t-t^{\prime}\right|,\;\forall\;t,t^{\prime}\in\left[-\sup_{f\in\mathcal{F}}\|f\|_{[0,1]^{d}},\sup_{f\in\mathcal{F}}\|f\|_{[0,1]^{d}}\right].\]
Then by rescaling, Theorem 2.1 can be further generalized to the case when \(\phi\) is any nonnegative locally Lipschitz continuous loss function such as the exponential loss or the LUM (large-margin unified machine) loss (cf. [29]). Generalization analysis for classification with these loss functions based on oracle-type inequalities similar to Theorem 2.1 has been studied in our forthcoming work [48]. Let us give some comments on conditions (2.3) and (2.5) of Theorem 2.1. To our knowledge, these two conditions are introduced for the first time in this paper, and will play pivotal roles in our estimates. Let \(\phi\) be the logistic loss and \(P\) be a probability measure on \([0,1]^{d}\times\{-1,1\}\). Recall that \(f_{\phi,P}^{*}\) denotes the target function of the logistic risk. If
\[\int_{[0,1]^{d}\times\{-1,1\}}\psi(x,y)\mathrm{d}P(x,y)=\inf\left\{\left.\mathcal{R}_{P}^{\phi}(f)\right|f:[0,1]^{d}\to\mathbb{R}\text{ is measurable}\right\}, \tag{2.7}\]
then condition (2.3) is satisfied and the left hand side of (2.6) is exactly \(\mathbb{E}\left[\mathcal{E}_{P}^{\phi}\left(\hat{f}_{n}\right)\right]\). Therefore, Theorem 2.1 can be used to establish excess \(\phi\)-risk bounds for the \(\phi\)-ERM \(\hat{f}_{n}\). In particular, one can take \(\psi(x,y)\) to be \(\phi(yf_{\phi,P}^{*}(x))\) to ensure the equality (2.7) (recalling (1.7)). It should be pointed out that if \(\psi(x,y)=\phi(yf_{\phi,P}^{*}(x))\), inequality (2.5) is of the same form as the following inequality with \(\tau=1\), which asserts that there exist \(\tau\in[0,1]\) and \(\Gamma>0\) such that
\[\int_{[0,1]^{d}\times\{-1,1\}}\Big{(}\phi(yf(x))-\phi\left(yf_{\phi,P}^{*}(x)\right)\Big{)}^{2}\mathrm{d}P(x,y)\leq\Gamma\cdot\Big{(}\mathcal{E}_{P}^{\phi}(f)\Big{)}^{\tau},\,\forall\,\,f\in\mathcal{F}. \tag{2.8}\]
This inequality appears naturally when bounding the sample error by using concentration inequalities, which is of great importance in previous generalization analysis for binary classification (cf. condition (A4) in [22] and Definition 10.15 in [6]).
In [9], the authors actually prove that if the target function \(f_{\phi,P}^{*}\) is bounded and the functions in \(\mathcal{F}\) are uniformly bounded by some \(F>0\), the inequality (2.5) holds with \(\psi(x,y)=\phi(yf_{\phi,P}^{*}(x))\) and
\[\Gamma=\frac{2}{\inf\left\{\phi^{\prime\prime}(t)\,\middle|\,t\in\mathbb{R},\,\,|t|\leq\max\left\{F,\left\|f_{\phi,P}^{*}\right\|_{[0,1]^{d}}\right\}\right\}}.\]
Here \(\phi^{\prime\prime}(t)\) denotes the second order derivative of \(\phi(t)=\log(1+\mathrm{e}^{-t})\), which is given by \(\phi^{\prime\prime}(t)=\frac{\mathrm{e}^{t}}{(1+\mathrm{e}^{t})^{2}}\). The boundedness of \(f_{\phi,P}^{*}\) is a key ingredient leading to the main results in [9] (see Section 3 for more details). However, \(f_{\phi,P}^{*}\) is explicitly given by \(\log\frac{\eta}{1-\eta}\) with \(\eta(x)=P(\{1\}\,|x)\), whose absolute value tends to infinity as \(\eta\) approaches \(0\) or \(1\). In some cases, the uniform boundedness assumption on \(f_{\phi,P}^{*}\) is too restrictive. When \(f_{\phi,P}^{*}\) is unbounded, i.e., \(\|f_{\phi,P}^{*}\|_{[0,1]^{d}}=\infty\), condition (2.5) will not be satisfied by simply taking \(\psi(x,y)=\phi(yf_{\phi,P}^{*}(x))\). Since in this case we have \(\inf_{t\in(-\infty,+\infty)}\phi^{\prime\prime}(t)=0\), one cannot find a finite constant \(\Gamma\) to guarantee the validity of (2.5); i.e., the inequality (2.8) cannot hold for \(\tau=1\), which means the previous strategy for generalization analysis in [9] fails to work. In Theorem 2.1, the requirement for \(\psi(x,y)\) is much more flexible: we do not require \(\psi(x,y)\) to be \(\phi(yf_{\phi,P}^{*}(x))\) or even to satisfy (2.7). In this paper, by resorting to Theorem 2.1, we carefully construct \(\psi\) to avoid using \(f_{\phi,P}^{*}\) directly in the subsequent estimates. Based on this strategy, under some mild regularity conditions on \(\eta\), we can develop a more general analysis to demonstrate the performance of neural network classifiers trained with the logistic loss regardless of the unboundedness of \(f_{\phi,P}^{*}\). The derived generalization bounds and rates of convergence are stated in Theorem 2.2, Theorem 2.3, and Theorem 2.5, which are new in the literature and constitute the main contributions of this paper. It is worth noticing that in Theorem 2.2 and Theorem 2.3, we use Theorem 2.1 to obtain optimal rates of convergence (up to some logarithmic factor), which demonstrates the tightness and power of the inequality (2.6) in Theorem 2.1. To obtain these optimal rates from Theorem 2.1, a delicate construction of \(\psi\) which allows small constants \(M\) and \(\Gamma\) in (2.4) and (2.5) is necessary. One frequently used form of \(\psi\) in this paper is
\[\begin{split}\psi:\,&[0,1]^{d}\times\{-1,1\}\to\mathbb{R},\\&(x,y)\mapsto\begin{cases}\phi\left(y\log\frac{\eta(x)}{1-\eta(x)}\right),&\eta(x)\in[\delta_{1},1-\delta_{1}],\\ 0,&\eta(x)\in\{0,1\},\\ \eta(x)\log\frac{1}{\eta(x)}+(1-\eta(x))\log\frac{1}{1-\eta(x)},&\eta(x)\in(0,\delta_{1})\cup(1-\delta_{1},1),\end{cases}\end{split} \tag{2.9}\]
which can be regarded as a truncated version of \(\phi(yf_{\phi,P}^{*}(x))=\phi\left(y\log\frac{\eta(x)}{1-\eta(x)}\right)\), where \(\delta_{1}\) is some suitable constant in \((0,1/2]\). However, in Theorem 2.5 we use a different form of \(\psi\), which will be specified later.
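As an illustration of the truncation in (2.9), the following Python sketch (ours; the threshold \(\delta_{1}=0.05\) is an arbitrary illustrative choice) evaluates \(\psi\) as a function of \(\eta(x)\) and shows that it stays bounded where \(\phi(yf_{\phi,P}^{*}(x))\) would blow up:

```python
import numpy as np

def psi(eta_x, y, delta1=0.05):
    """Truncated surrogate psi from (2.9), written as a function of eta(x) and y.

    On [delta1, 1 - delta1] it equals phi(y * f*(x)) with f*(x) = log(eta/(1-eta));
    near 0 and 1 it is replaced by the (bounded) conditional entropy; at eta in
    {0, 1} it is 0.
    """
    e = eta_x
    if e in (0.0, 1.0):
        return 0.0
    if delta1 <= e <= 1.0 - delta1:
        f_star = np.log(e / (1.0 - e))
        return np.log1p(np.exp(-y * f_star))              # phi(t) = log(1 + e^{-t})
    return -e * np.log(e) - (1.0 - e) * np.log(1.0 - e)   # entropy branch

# phi(y f*(x)) diverges as eta -> 1 for the "wrong" label y = -1, while psi stays bounded:
for e in (1e-6, 0.3, 0.999999):
    print(e, psi(e, y=-1))
```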
The proof of Theorem 2.1 is based on the following error decomposition \[\mathbb{E}\left[\mathcal{R}_{P}^{\phi}\left(\hat{f}_{n}\right)-\Psi\right]\leq \mathrm{T}_{\varepsilon,\psi,n}+(1+\varepsilon)\cdot\inf_{g\in\mathcal{F}} \left(\mathcal{R}_{P}^{\phi}(g)-\Psi\right),\,\forall\,\,\varepsilon\in[0,1), \tag{2.10}\] where \(\mathrm{T}_{\varepsilon,\psi,n}:=\mathbb{E}\left[\mathcal{R}_{P}^{\phi}\left(\hat{f}_ {n}\right)-\Psi-(1+\varepsilon)\cdot\frac{1}{n}\sum_{i=1}^{n}\left(\phi\left(Y_ {i}\hat{f}_{n}(X_{i})\right)-\psi(X_{i},Y_{i})\right)\right]\) and \(\Psi=\int_{[0,1]^{d}\times\{-1,1\}}\psi(x,y)\mathrm{d}P(x,y)\) (see (A.17)). Although (2.10) is true for \(\varepsilon=0\), it's better to take \(\varepsilon>0\) in (2.10) to obtain sharp rates of convergence. This is because bounding the term \(\mathrm{T}_{\varepsilon,\psi,n}\) with \(\varepsilon\in(0,1)\) is easier than bounding \(\mathrm{T}_{0,\psi,n}\). To see this, note that for \(\varepsilon\in(0,1)\) we have \[\mathrm{T}_{\varepsilon,\psi,n}=(1+\varepsilon)\cdot\mathrm{T}_{0,\psi,n}- \varepsilon\cdot\mathbb{E}\left[\mathcal{R}_{P}^{\phi}\left(\hat{f}_{n} \right)-\Psi\right]\leq(1+\varepsilon)\cdot\mathrm{T}_{0,\psi,n},\] meaning that we can always establish tighter upper bounds for \(\mathrm{T}_{\varepsilon,\psi,n}\) than for \(\mathrm{T}_{0,\psi,n}\) (up to the constant factor \(1+\varepsilon<2\)). Indeed, \(\varepsilon>0\) is necessary in establishing Theorem 2.1, as indicated in its proof in Appendix A.3.2. We also point out that, setting \(\varepsilon=0\) and \(\psi\equiv 0\) (hence \(\Psi=0\)) in (2.10), and subtracting \(\inf\left\{\mathcal{R}_{P}^{\phi}(g)\middle|g:[0,1]^{d}\to\mathbb{R}\text{ measurable}\right\}\) from both sides, we will obtain a simpler error decomposition \[\begin{split}&\mathbb{E}\left[\mathcal{E}_{P}^{\phi}\left(\hat{f}_ {n}\right)\right]\leq\mathbb{E}\left[\mathcal{R}_{P}^{\phi}\left(\hat{f}_{n} \right)-\frac{1}{n}\sum_{i=1}^{n}\left(\phi\left(Y_{i}\hat{f}_{n}(X_{i}) \right)\right)\right]+\inf_{g\in\mathcal{F}}\mathcal{E}_{P}^{\phi}(g)\\ &\leq\mathbb{E}\left[\sup_{g\in\mathcal{F}}\left|\mathcal{R}_{P}^ {\phi}\left(g\right)-\frac{1}{n}\sum_{i=1}^{n}\left(\phi\left(Y_{i}g(X_{i}) \right)\right)\right|\right]+\inf_{g\in\mathcal{F}}\mathcal{E}_{P}^{\phi}(g), \end{split} \tag{2.11}\] which is frequently used in the literature (see e.g., Lemma 2 in [25] and the proof of Proposition 4.1 in [31]). Note that (2.11) does not require the explicit form of \(f_{\phi,P}^{*}\), which means that we can also use this error decomposition to establish rates of convergence for \(\mathbb{E}\left[\mathcal{E}_{P}^{\phi}\{\hat{f}_{n}\}\right]\) regardless of the unboundedness of \(f_{\phi,P}^{*}\). However, in comparison with Theorem 2.1, using (2.11) may result in slow rates of convergence because of the absence of the positive parameter \(\varepsilon\) and a carefully constructed function \(\psi\). We now state Theorem 2.2 which establishes generalization bounds for empirical logistic risk minimizers over DNNs. In order to present this result, we need the definition of Holder spaces [7]. The Holder space \(\mathcal{C}^{k,\lambda}(\Omega)\), where \(\Omega\subset\mathbb{R}^{d}\) is a closed domain, \(k\in\mathbb{N}\cup\{0\}\) and \(\lambda\in(0,1]\), consists of all those functions from \(\Omega\) to \(\mathbb{R}\) which have continuous derivatives up to order \(k\) and whose \(k\)-th partial derivatives are Holder-\(\lambda\) continuous on \(\Omega\). 
Here we say a function \(g:\Omega\to\mathbb{R}\) is Holder-\(\lambda\) continuous on \(\Omega\) if
\[\left|g\right|_{\mathcal{C}^{0,\lambda}(\Omega)}:=\sup_{\Omega\ni x\neq z\in\Omega}\frac{\left|g(x)-g(z)\right|}{\left\|x-z\right\|_{2}^{\lambda}}<\infty.\]
Then the Holder space \(\mathcal{C}^{k,\lambda}(\Omega)\) can be assigned the norm
\[\left\|f\right\|_{\mathcal{C}^{k,\lambda}(\Omega)}:=\max_{\left\|\boldsymbol{m}\right\|_{1}\leq k}\left\|\mathrm{D}^{\boldsymbol{m}}f\right\|_{\Omega}+\max_{\left\|\boldsymbol{m}\right\|_{1}=k}\left|\mathrm{D}^{\boldsymbol{m}}f\right|_{\mathcal{C}^{0,\lambda}(\Omega)}, \tag{2.12}\]
where \(\boldsymbol{m}=(m_{1},\cdots,m_{d})\in(\mathbb{N}\cup\{0\})^{d}\) ranges over multi-indices (hence \(\left\|\boldsymbol{m}\right\|_{1}=\sum_{i=1}^{d}m_{i}\)) and \(\mathrm{D}^{\boldsymbol{m}}f(x_{1},\ldots,x_{d})=\frac{\partial^{m_{1}}}{\partial x_{1}^{m_{1}}}\cdots\frac{\partial^{m_{d}}}{\partial x_{d}^{m_{d}}}f(x_{1},\ldots,x_{d})\). Given \(\beta\in(0,\infty)\), we say a function \(f:\Omega\to\mathbb{R}\) is Holder-\(\beta\) smooth if \(f\in\mathcal{C}^{k,\lambda}(\Omega)\) with \(k=\left\lceil\beta\right\rceil-1\) and \(\lambda=\beta-\left\lceil\beta\right\rceil+1\), where \(\left\lceil\beta\right\rceil\) denotes the smallest integer larger than or equal to \(\beta\). For any \(\beta\in(0,\infty)\) and any \(r\in(0,\infty)\), let
\[\mathcal{B}_{r}^{\beta}\left(\Omega\right):=\left\{f:\Omega\to\mathbb{R}\,\middle|\,\begin{aligned}&f\in\mathcal{C}^{k,\lambda}(\Omega)\text{ and }\left\|f\right\|_{\mathcal{C}^{k,\lambda}(\Omega)}\leq r\\&\text{for }k=\left\lceil\beta\right\rceil-1\text{ and }\lambda=\beta-\left\lceil\beta\right\rceil+1\end{aligned}\right\} \tag{2.13}\]
denote the closed ball of radius \(r\) centered at the origin in the Holder space of order \(\beta\) on \(\Omega\). Recall that the space \(\mathcal{F}_{d}^{\mathbf{FNN}}(G,N,S,B,F)\) generated by fully connected neural networks is given in (1.15), which is parameterized by the depth and width of neural networks (bounded by \(G\) and \(N\)), the number of nonzero entries in weight matrices and bias vectors (bounded by \(S\)), and the upper bounds of neural network parameters and associated functions of form (1.14) (denoted by \(B\) and \(F\)). In the following theorem, we show that to ensure the rate of convergence as the sample size \(n\) becomes large, all these parameters should be taken within certain ranges scaling with \(n\). For two positive sequences \(\{\lambda_{n}\}_{n\geq 1}\) and \(\{\nu_{n}\}_{n\geq 1}\), we say \(\lambda_{n}\lesssim\nu_{n}\) holds if there exist \(n_{0}\in\mathbb{N}\) and a positive constant \(c\) independent of \(n\) such that \(\lambda_{n}\leq c\nu_{n},\forall\ n\geq n_{0}\). In addition, we write \(\lambda_{n}\asymp\nu_{n}\) if and only if \(\lambda_{n}\lesssim\nu_{n}\) and \(\nu_{n}\lesssim\lambda_{n}\). Recall that the excess misclassification error of \(f:\mathbb{R}^{d}\to\mathbb{R}\) with respect to some distribution \(P\) on \([0,1]^{d}\times\{-1,1\}\) is defined as
\[\mathcal{E}_{P}(f)=\mathcal{R}_{P}(f)-\inf\left\{\mathcal{R}_{P}(g)\left|g:[0,1]^{d}\to\mathbb{R}\text{ is Borel measurable}\right.\right\},\]
where \(\mathcal{R}_{P}(f)\) denotes the misclassification error of \(f\) given by
\[\mathcal{R}_{P}(f)=P\left(\left\{\left.(x,y)\in[0,1]^{d}\times\{-1,1\}\right|y\neq\operatorname{sgn}(f(x))\right\}\right).\]
**Theorem 2.2**.: _Let \(d\in\mathbb{N}\), \((\beta,r)\in(0,\infty)^{2}\), \(n\in\mathbb{N}\), \(\nu\in[0,\infty)\), \(\{(X_{i},Y_{i})\}_{i=1}^{n}\) be an i.i.d.
sample in \([0,1]^{d}\times\{-1,1\}\) and \(\hat{f}_{n}^{\mathbf{FNN}}\) be an ERM with respect to the logistic loss \(\phi(t)=\log\left(1+\mathrm{e}^{-t}\right)\) over \(\mathcal{F}_{d}^{\mathbf{FNN}}(G,N,S,B,F)\), i.e.,_
\[\hat{f}_{n}^{\mathbf{FNN}}\in\operatorname*{arg\,min}_{f\in\mathcal{F}_{d}^{\mathbf{FNN}}(G,N,S,B,F)}\frac{1}{n}\sum_{i=1}^{n}\phi\left(Y_{i}f(X_{i})\right). \tag{2.14}\]
_Define_
\[\mathcal{H}_{1}^{d,\beta,r}:=\left\{P\in\mathcal{H}_{0}^{d}\,\middle|\,\begin{aligned}&P_{X}\left(\left\{z\in[0,1]^{d}\,\middle|\,P(\{1\}\,|z)=\hat{\eta}(z)\right\}\right)=1\text{ for}\\&\text{some }\hat{\eta}\in\mathcal{B}_{r}^{\beta}\left([0,1]^{d}\right)\end{aligned}\right\}. \tag{2.15}\]
_Then there exists a constant \(\mathrm{c}\in(0,\infty)\) only depending on \((d,\beta,r)\), such that the estimator \(\hat{f}_{n}^{\mathbf{FNN}}\) defined by (2.14) with_
\[\begin{split}\mathrm{c}\log n&\leq G\lesssim\log n,\ N\asymp\left(\frac{(\log n)^{5}}{n}\right)^{\frac{-d}{d+\beta}},\ S\asymp\left(\frac{(\log n)^{5}}{n}\right)^{\frac{-d}{d+\beta}}\cdot\log n,\\&1\leq B\lesssim n^{\nu},\text{ and }\ \frac{\beta}{d+\beta}\cdot\log n\leq F\lesssim\log n\end{split} \tag{2.16}\]
_satisfies_
\[\sup_{P\in\mathcal{H}_{1}^{d,\beta,r}}\boldsymbol{E}_{P^{\otimes n}}\left[\mathcal{E}_{P}^{\phi}\left(\hat{f}_{n}^{\mathbf{FNN}}\right)\right]\lesssim\left(\frac{(\log n)^{5}}{n}\right)^{\frac{\beta}{\beta+d}} \tag{2.17}\]
_and_
\[\sup_{P\in\mathcal{H}_{1}^{d,\beta,r}}\boldsymbol{E}_{P^{\otimes n}}\left[\mathcal{E}_{P}\left(\hat{f}_{n}^{\mathbf{FNN}}\right)\right]\lesssim\left(\frac{(\log n)^{5}}{n}\right)^{\frac{\beta}{2\beta+2d}}. \tag{2.18}\]
Theorem 2.2 will be proved in Appendix A.3.4. As far as we know, for classification with neural networks and the logistic loss \(\phi\), the generalization bounds presented in (2.17) and (2.18) establish the fastest rates of convergence in the existing literature under the Holder smoothness condition on the conditional probability function \(\eta\) of the data distribution \(P\). Note that to obtain the generalization bounds in (2.17) and (2.18) we do not require any assumption on the marginal distribution \(P_{X}\) of the distribution \(P\). For example, we do not require that \(P_{X}\) is absolutely continuous with respect to the Lebesgue measure. The rate \(\mathcal{O}(\left(\frac{(\log n)^{5}}{n}\right)^{\frac{\beta}{\beta+d}})\) in (2.17) for the convergence of the excess \(\phi\)-risk is indeed optimal (up to some logarithmic factor) in the minimax sense (see Corollary 2.1 and comments therein). However, the rate \(\mathcal{O}(\left(\frac{(\log n)^{5}}{n}\right)^{\frac{\beta}{2\beta+2d}})\) in (2.18) for the convergence of the excess misclassification error is not optimal. According to Theorem 4.1, Theorem 4.2, Theorem 4.3 and their proofs in [3], there holds
\[\inf_{\hat{f}_{n}}\sup_{P\in\mathcal{H}_{1}^{d,\beta,r}}\boldsymbol{E}_{P^{\otimes n}}\left[\mathcal{E}_{P}(\hat{f}_{n})\right]\asymp n^{-\frac{\beta}{2\beta+d}}, \tag{2.19}\]
where the infimum is taken over all \(\mathcal{F}_{d}\)-valued statistics from the sample \(\left\{(X_{i},Y_{i})\right\}_{i=1}^{n}\). Therefore, the rate \(\mathcal{O}(\left(\frac{(\log n)^{5}}{n}\right)^{\frac{\beta}{2\beta+2d}})\) in (2.18) does not match the minimax optimal rate \(\mathcal{O}(\left(\frac{1}{n}\right)^{\frac{\beta}{2\beta+d}})\).
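To get a quick feel for the size of this gap, the following snippet (the particular values of \(\beta\) and \(d\) are illustrative choices of ours) compares the three exponents, ignoring logarithmic factors:

```python
# Exponents governing the rates in (2.17)-(2.19), ignoring log factors:
#   excess phi-risk (2.17):            beta / (beta + d)
#   excess misclassification (2.18):   beta / (2*beta + 2*d)
#   minimax optimum (2.19):            beta / (2*beta + d)
for beta, d in [(1.0, 8.0), (4.0, 8.0), (32.0, 8.0)]:
    print(beta, d,
          beta / (beta + d),          # (2.17)
          beta / (2 * beta + 2 * d),  # (2.18), obtained via calibration
          beta / (2 * beta + d))      # (2.19), optimal
```

The printed values confirm that the exponent in (2.18) approaches the optimal one in (2.19) as \(\beta\) grows relative to \(d\), as discussed next.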
Despite suboptimality, the rate \(\mathcal{O}(\left(\frac{(\log n)^{5}}{n}\right)^{\frac{\beta}{2\beta+2d}})\) in (2.18) is fairly close to the optimal rate \(\mathcal{O}(\left(\frac{1}{n}\right)^{\frac{\beta}{2\beta+d}})\), especially when \(\beta>>d\) because the exponents satisfy \[\lim_{\beta\rightarrow+\infty}\frac{\beta}{2\beta+2d}=\frac{1}{2}=\lim_{\beta \rightarrow+\infty}\frac{\beta}{2\beta+d}.\] In our proof of Theorem 2.2, the rate \(\mathcal{O}(\left(\frac{(\log n)^{5}}{n}\right)^{\frac{\beta}{2\beta+2d}})\) in (2.18) is derived directly from the rate \(\left(\frac{(\log n)^{5}}{n}\right)^{\frac{\beta}{\beta+d}}\) in (2.17) via the so-called calibration inequality which takes the form \[\mathcal{E}_{P}(f)\leq c\cdot\left(\mathcal{E}_{P}^{\phi}(f)\right)^{\frac{1}{ 2}}\text{ for any }f\in\mathcal{F}_{d}\text{ and any }P\in\mathcal{H}_{0}^{d} \tag{2.20}\] with \(c\) being a constant independent of \(P\) and \(f\) (see (A.102)). Indeed, it follows from Theorem 8.29 of [40] that \[\mathcal{E}_{P}\left(f\right)\leq 2\sqrt{2}\cdot\left(\mathcal{E}_{P}^{\phi} \left(f\right)\right)^{\frac{1}{2}}\text{ for any }f\in\mathcal{F}_{d}\text{ and any }P\in\mathcal{H}_{0}^{d}. \tag{2.21}\] In other words, (2.20) holds when \(c=2\sqrt{2}\). Interestingly, we can use Theorem 2.2 to obtain that the inequality (2.20) is optimal in the sense that the exponent \(\frac{1}{2}\) cannot be replaced by a larger one. Specifically, by using (2.17) of our Theorem 2.2 together with (2.19), we can prove that \(\frac{1}{2}\) is the largest number \(s\) such that there holds \[\mathcal{E}_{P}(f)\leq c\cdot\left(\mathcal{E}_{P}^{\phi}(f)\right)^{s}\text{ for any }f\in\mathcal{F}_{d}\text{ and any }P\in\mathcal{H}_{0}^{d} \tag{2.22}\] for some constant \(c\) independent of \(P\) or \(f\). We now demonstrate this by contradiction. Fix \(d\in\mathbb{N}\). Suppose there exists an \(s\in(1/2,\infty)\) and a \(c\in(0,\infty)\) such that (2.22) holds. Since \[\lim_{\beta\rightarrow+\infty}\frac{(\frac{2}{3}\wedge s)\cdot\beta}{d+\beta} =\frac{2}{3}\wedge s>1/2=\lim_{\beta\rightarrow+\infty}\frac{\beta}{2\beta+d},\] we can choose \(\beta\) large enough such that \(\frac{(\frac{2}{3}\wedge s)\cdot\beta}{d+\beta}>\frac{\beta}{2\beta+d}\). Besides, it follows from \(\mathcal{E}_{P}(f)\leq 1\) and (2.22) that \[\mathcal{E}_{P}(f)\leq|\mathcal{E}_{P}(f)|^{\frac{2}{3}\wedge s}\leq\left|c \cdot\left(\mathcal{E}_{P}^{\phi}(f)\right)^{s}\right|^{\frac{2}{3}\wedge s }\leq(1+c)\cdot\left(\mathcal{E}_{P}^{\phi}(f)\right)^{(\frac{2}{3}\wedge s)} \tag{2.23}\] for any \(f\in\mathcal{F}_{d}\) and any \(P\in\mathcal{H}_{0}^{d}\). Let \(r=3\) and \(\hat{f}_{n}^{\mathbf{FNN}}\) be the estimator in Theorem 2.2. 
Then it follows from (2.17), (2.19), (2.23) and Holder's inequality that
\[n^{-\frac{\beta}{2\beta+d}}\leq\inf_{\hat{f}_{n}}\sup_{P\in\mathcal{H}_{1}^{d,\beta,r}}\boldsymbol{E}_{P^{\otimes n}}\left[\mathcal{E}_{P}(\hat{f}_{n})\right]\leq\sup_{P\in\mathcal{H}_{1}^{d,\beta,r}}\boldsymbol{E}_{P^{\otimes n}}\left[\mathcal{E}_{P}\left(\hat{f}_{n}^{\mathbf{FNN}}\right)\right]\]
\[\leq(1+c)\cdot\sup_{P\in\mathcal{H}_{1}^{d,\beta,r}}\left(\boldsymbol{E}_{P^{\otimes n}}\left[\mathcal{E}_{P}^{\phi}(\hat{f}_{n}^{\mathbf{FNN}})\right]\right)^{(\frac{2}{3}\wedge s)}\leq(1+c)\cdot\left(\sup_{P\in\mathcal{H}_{1}^{d,\beta,r}}\boldsymbol{E}_{P^{\otimes n}}\left[\mathcal{E}_{P}^{\phi}(\hat{f}_{n}^{\mathbf{FNN}})\right]\right)^{(\frac{2}{3}\wedge s)}\]
\[\lesssim\left(\left(\frac{(\log n)^{5}}{n}\right)^{\frac{\beta}{\beta+d}}\right)^{(\frac{2}{3}\wedge s)}=\left(\frac{(\log n)^{5}}{n}\right)^{\frac{(\frac{2}{3}\wedge s)\cdot\beta}{\beta+d}}.\]
Hence \(n^{-\frac{\beta}{2\beta+d}}\lesssim\left(\frac{\left(\log n\right)^{5}}{n}\right)^{\frac{(\frac{2}{3}\wedge s)\cdot\beta}{\beta+d}}\), which contradicts the fact that \(\frac{(\frac{2}{3}\wedge s)\cdot\beta}{d+\beta}>\frac{\beta}{2\beta+d}\). This proves the desired result. Due to the optimality of (2.20) and the minimax lower bound \(\mathcal{O}(n^{-\frac{\beta}{d+\beta}})\) for rates of convergence of the excess \(\phi\)-risk stated in Corollary 2.1, we deduce that rates of convergence of the excess misclassification error obtained directly from those of the excess \(\phi\)-risk and the calibration inequality which takes the form of (2.22) can never be faster than \(\mathcal{O}(n^{-\frac{\beta}{2\beta+2d}})\). Therefore, the convergence rate \(\mathcal{O}(\left(\frac{\left(\log n\right)^{5}}{n}\right)^{\frac{\beta}{2\beta+2d}})\) of the excess misclassification error in (2.18) is the fastest one (up to the logarithmic term \(\left(\log n\right)^{\frac{5\beta}{2\beta+2d}}\)) among all those that are derived directly from the convergence rates of the excess \(\phi\)-risk and the calibration inequality of the form (2.22), which justifies the tightness of (2.18). It should be pointed out that the rate \(\mathcal{O}(\left(\frac{\left(\log n\right)^{5}}{n}\right)^{\frac{\beta}{2\beta+2d}})\) in (2.18) can be further improved if we assume the following noise condition (cf. [43]) on \(P\): there exist \(c_{1}>0\), \(t_{1}>0\) and \(s_{1}\in[0,\infty]\) such that
\[P_{X}\left(\left\{\left.x\in[0,1]^{d}\right|\left|2\cdot P(\left\{1\right\}|x)-1\right|\leq t\right\}\right)\leq c_{1}t^{s_{1}},\quad\forall\;0<t\leq t_{1}. \tag{2.24}\]
This condition measures the size of high-noisy points and reflects the noise level through the exponent \(s_{1}\in[0,\infty]\). Obviously, every distribution satisfies condition (2.24) with \(s_{1}=0\) and \(c_{1}=1\), whereas \(s_{1}=\infty\) implies that we have a low amount of noise in labeling \(x\), i.e., the conditional probability function \(P(\left\{1\right\}|x)\) is bounded away from \(1/2\) for \(P_{X}\)-almost all \(x\in[0,1]^{d}\). Under the noise condition (2.24), the calibration inequality for the logistic loss \(\phi\) can be refined as
\[\mathcal{E}_{P}\left(f\right)\leq\bar{c}\cdot\left(\mathcal{E}_{P}^{\phi}\left(f\right)\right)^{\frac{s_{1}+1}{s_{1}+2}}\text{ for all }f\in\mathcal{F}_{d}, \tag{2.25}\]
where \(\bar{c}\in(0,\infty)\) is a constant only depending on \((s_{1},c_{1},t_{1})\), and \(\frac{s_{1}+1}{s_{1}+2}:=1\) if \(s_{1}=\infty\) (cf. Theorem 8.29 in [40] and Theorem 1.1 in [46]).
Combining this calibration inequality (2.25) and (2.17), we can obtain an improved generalization bound given by
\[\sup_{P\in\mathcal{H}_{2,s_{1},c_{1},t_{1}}^{d,\beta,r}}\boldsymbol{E}_{P^{\otimes n}}\left[\mathcal{E}_{P}\left(\hat{f}_{n}^{\mathbf{FNN}}\right)\right]\lesssim\left(\frac{\left(\log n\right)^{5}}{n}\right)^{\frac{(s_{1}+1)\beta}{(s_{1}+2)(\beta+d)}},\]
where
\[\mathcal{H}_{2,s_{1},c_{1},t_{1}}^{d,\beta,r}:=\left\{\left.P\in\mathcal{H}_{1}^{d,\beta,r}\right|P\text{ satisfies \eqref{eq:2.24}}\right\}. \tag{2.26}\]
One can refer to Section 3 for more discussions about comparisons between Theorem 2.2 and other related results. In our Theorem 2.2, the rates \(\left(\frac{\left(\log n\right)^{5}}{n}\right)^{\frac{\beta}{\beta+d}}\) and \(\left(\frac{\left(\log n\right)^{5}}{n}\right)^{\frac{\beta}{2\beta+2d}}\) become slow when the dimension \(d\) is large. This phenomenon, known as the curse of dimensionality, arises in our Theorem 2.2 because our assumption on the data distribution \(P\) is very mild and general. Except for the Holder smoothness condition on the conditional probability function \(\eta\) of \(P\), we do not require any other assumptions in our Theorem 2.2. The curse of dimensionality cannot be circumvented under such a general assumption on \(P\), as shown in Corollary 2.1 and (2.19). Therefore, to overcome the curse of dimensionality, we need other assumptions. In our next theorem, we assume that \(\eta\) is the composition of several multivariate vector-valued functions \(h_{q}\circ\cdots\circ h_{1}\circ h_{0}\) such that each component function of \(h_{i}\) is either a Holder smooth function whose output values only depend on a small number of its input variables, or the function computing the maximum value of some of its input variables (see (2.32) and (2.34)). Under this assumption, the curse of dimensionality is circumvented because each component function of \(h_{i}\) is either essentially defined on a low-dimensional space or a very simple maximum value function. Our hierarchical composition assumption on the conditional probability function is convincing and likely to be met in practice because many phenomena in natural sciences can be "described well by processes that take place at a sequence of increasing scales and are local at each scale, in the sense that they can be described well by neighbor-to-neighbor interactions" (Appendix 2 of [35]). Similar compositional assumptions have been adopted in many works such as [37, 25, 24]. One may refer to [34, 35, 36, 24] for more discussions about the reasonableness of such compositional assumptions. In our compositional assumption mentioned above, we allow the component function of \(h_{i}\) to be the maximum value function, which is not Holder-\(\beta\) smooth when \(\beta>1\). The maximum value function is incorporated because taking the maximum value is an important operation to pass key information from lower scale levels to higher ones, which appears naturally in the compositional structure of the conditional probability function \(\eta\) in practical classification problems. To see this, let us consider the following example. Suppose the classification problem is to determine whether an input image contains a cat. We assume the data is perfectly classified, in the sense that the conditional probability function \(\eta\) is equal to zero or one almost surely.
It should be noted that the assumption "\(\eta=0\) or \(1\) almost surely" does not conflict with the continuity of \(\eta\) because the support of the distribution of the input data may be disconnected. This classification task can be done by human beings through considering each subpart of the input image and determining whether each subpart contains a cat. Mathematically, let \(\mathcal{V}\) be a family of subsets of \(\{1,2,\ldots,d\}\) which consists of all the index sets of those (considered) subparts of the input image \(x\in[0,1]^{d}\). \(\mathcal{V}\) should satisfy \[\bigcup_{J\in\mathcal{V}}J=\{1,2,\ldots,d\}\] because the union of all the subparts should cover the input image itself. For each \(J\in\mathcal{V}\), let \[\eta_{J}((x)_{J})=\begin{cases}1,&\text{ if the subpart $(x)_{J}$ of the input image $x$ contains a cat},\\ 0,&\text{ if the subpart $(x)_{J}$ of the input image $x$ does not contain a cat}.\end{cases}\] Then we will have \(\eta(x)=\max_{J\in\mathcal{V}}\left\{\eta_{J}((x)_{J})\right\}\) a.s. because \[\eta(x)=1\overset{\text{a.s.}}{\Longleftrightarrow}x\text{ contains a cat}\Leftrightarrow\text{ at least one of the subparts $(x)_{J}$ contains a cat}\] \[\Leftrightarrow\eta_{J}((x)_{J})=1\text{ for at least one }J\in\mathcal{V}\Leftrightarrow\max_{J\in\mathcal{V}}\left\{\eta_{J}((x)_{J})\right\}=1.\] Hence the maximum value function emerges naturally in the expression of \(\eta\). We now give the specific mathematical definition of our compositional model. For any \((d,d_{*},d_{\star},\beta,r)\in\mathbb{N}\times\mathbb{N}\times\mathbb{N}\times(0,\infty)\times(0,\infty)\), define \[\mathcal{G}_{d}^{\mathbf{M}}(d_{*}):=\left\{f:[0,1]^{d}\to\mathbb{R}\,\middle|\,\begin{array}{l}\exists\;I\subset\{1,2,\ldots,d\}\text{ such that }1\leq\#(I)\leq d_{*}\text{ and}\\ f(x)=\max\left\{(x)_{i}\,\middle|\,i\in I\right\},\,\forall\,x\in[0,1]^{d}\end{array}\right\}, \tag{2.27}\] and \[\mathcal{G}_{d}^{\mathbf{H}}(d_{\star},\beta,r):=\left\{f:[0,1]^{d}\to\mathbb{R}\,\middle|\,\begin{array}{l}\exists\;I\subset\{1,2,\ldots,d\}\text{ and }g\in\mathcal{B}_{r}^{\beta}\left([0,1]^{d_{\star}}\right)\text{ such that}\\ \#(I)=d_{\star}\text{ and }f(x)=g\left((x)_{I}\right)\text{ for all }x\in[0,1]^{d}\end{array}\right\}. \tag{2.28}\] Thus \(\mathcal{G}_{d}^{\mathbf{M}}(d_{*})\) consists of all functions from \([0,1]^{d}\) to \(\mathbb{R}\) which compute the maximum value of at most \(d_{*}\) components of their input vectors, and \(\mathcal{G}_{d}^{\mathbf{H}}(d_{\star},\beta,r)\) consists of all functions from \([0,1]^{d}\) to \(\mathbb{R}\) which only depend on \(d_{\star}\) components of the input vector and are Holder-\(\beta\) smooth with corresponding Holder-\(\beta\) norm less than or equal to \(r\). Obviously, \[\mathcal{G}_{d}^{\mathbf{H}}(d_{\star},\beta,r)=\varnothing,\;\forall\;(d,d_{\star},\beta,r)\in\mathbb{N}\times\mathbb{N}\times(0,\infty)\times(0,\infty)\text{ with }d<d_{\star}. \tag{2.29}\] Next, for any \((d_{*},d_{\star},\beta,r)\in\mathbb{N}\times\mathbb{N}\times(0,\infty)\times(0,\infty)\), define \(\mathcal{G}_{\infty}^{\mathbf{H}}(d_{\star},\beta,r):=\bigcup_{d=1}^{\infty}\mathcal{G}_{d}^{\mathbf{H}}(d_{\star},\beta,r)\) and \(\mathcal{G}_{\infty}^{\mathbf{M}}(d_{*}):=\bigcup_{d=1}^{\infty}\mathcal{G}_{d}^{\mathbf{M}}(d_{*})\).
Finally, for any \(q\in\mathbb{N}\cup\{0\}\), any \((\beta,r)\in(0,\infty)^{2}\) and any \((d,d_{*},d_{\star},K)\in\mathbb{N}^{4}\) with \[d_{\star}\leq\min\left\{d,K+\mathbb{1}_{\{0\}}(q)\cdot(d-K)\right\}, \tag{2.30}\] define \[\mathcal{G}_{d}^{\mathbf{CH}}(q,K,d_{\star},\beta,r):=\left\{h_{q}\circ\cdots\circ h_{1}\circ h_{0}\,\middle|\,\begin{array}{l}h_{i}:[0,1]^{u_{i}}\to[0,1]^{u_{i+1}}\ (0\leq i<q)\text{ and }h_{q}:[0,1]^{u_{q}}\to\mathbb{R},\\ \text{where }u_{0}=d\text{ and }u_{i}=K\ (1\leq i\leq q),\text{ and each component}\\ \text{function of }h_{i}\text{ belongs to }\mathcal{G}_{u_{i}}^{\mathbf{H}}(d_{\star},\beta,r)\end{array}\right\} \tag{2.31}\] and \[\mathcal{G}_{d}^{\mathbf{CHOM}}(q,K,d_{*},d_{\star},\beta,r):=\left\{h_{q}\circ\cdots\circ h_{1}\circ h_{0}\,\middle|\,\begin{array}{l}h_{i}:[0,1]^{u_{i}}\to[0,1]^{u_{i+1}}\ (0\leq i<q)\text{ and }h_{q}:[0,1]^{u_{q}}\to\mathbb{R},\\ \text{where }u_{0}=d\text{ and }u_{i}=K\ (1\leq i\leq q),\text{ and each component}\\ \text{function of }h_{i}\text{ belongs to }\mathcal{G}_{u_{i}}^{\mathbf{H}}(d_{\star},\beta,r)\cup\mathcal{G}_{u_{i}}^{\mathbf{M}}(d_{*})\end{array}\right\}. \tag{2.32}\] Obviously, we always have that \(\mathcal{G}_{d}^{\mathbf{CH}}(q,K,d_{\star},\beta,r)\subset\mathcal{G}_{d}^{\mathbf{CHOM}}(q,K,d_{*},d_{\star},\beta,r)\). The condition (2.30), which is equivalent to \[d_{\star}\leq\begin{cases}d,&\text{if }q=0,\\ d\wedge K,&\text{if }q>0,\end{cases}\] is required in the above definitions because it follows from (2.29) that \[\mathcal{G}_{d}^{\mathbf{CH}}(q,K,d_{\star},\beta,r)=\varnothing\text{ if }d_{\star}>\min\left\{d,K+\mathbb{1}_{\{0\}}(q)\cdot(d-K)\right\}.\] Thus we impose the condition (2.30) simply to avoid the trivial empty set. The space \(\mathcal{G}_{d}^{\mathbf{CH}}(q,K,d_{\star},\beta,r)\) consists of composite functions \(h_{q}\circ\cdots\circ h_{1}\circ h_{0}\) satisfying that each component function of \(h_{i}\) only depends on \(d_{\star}\) components of its input vector and is Holder-\(\beta\) smooth with corresponding Holder-\(\beta\) norm less than or equal to \(r\). For example, the function \([0,1]^{4}\ni x\mapsto\sum\limits_{1\leq i<j\leq 4}(x)_{i}\cdot(x)_{j}\in\mathbb{R}\) belongs to \(\mathcal{G}_{4}^{\mathbf{CH}}(2,4,2,2,8)\) (cf. Figure 2.1). The definition of \(\mathcal{G}_{d}^{\mathbf{CHOM}}(q,K,d_{*},d_{\star},\beta,r)\) is similar to that of \(\mathcal{G}_{d}^{\mathbf{CH}}(q,K,d_{\star},\beta,r)\). The only difference is that, in comparison to \(\mathcal{G}_{d}^{\mathbf{CH}}(q,K,d_{\star},\beta,r)\), we in the definition of \(\mathcal{G}_{d}^{\mathbf{CHOM}}(q,K,d_{*},d_{\star},\beta,r)\) additionally allow the component function of \(h_{i}\) to be the function which computes the maximum value of at most \(d_{*}\) components of its input vector. For example, the function \([0,1]^{4}\ni x\mapsto\max\limits_{1\leq i<j\leq 4}(x)_{i}\cdot(x)_{j}\in\mathbb{R}\) belongs to \(\mathcal{G}_{4}^{\mathbf{CHOM}}(2,6,3,2,2,2)\) (cf. Figure 2.2; see also the numerical sketch following Theorem 2.3 below). From the above description of the spaces \(\mathcal{G}_{d}^{\mathbf{CH}}(q,K,d_{\star},\beta,r)\) and \(\mathcal{G}_{d}^{\mathbf{CHOM}}(q,K,d_{*},d_{\star},\beta,r)\), we see that the condition (2.30) is very natural because it merely requires the essential input dimension \(d_{\star}\) of the Holder-\(\beta\) smooth component function of \(h_{i}\) to be less than or equal to its actual input dimension, which is \(d\) (if \(i=0\)) or \(K\) (if \(i>0\)). At last, we point out that the space \(\mathcal{G}_{d}^{\mathbf{CH}}(q,K,d_{\star},\beta,r)\) reduces to the Holder ball \(\mathcal{B}_{r}^{\beta}([0,1]^{d})\) when \(q=0\) and \(d_{\star}=d\).
Indeed, we have that \[\begin{split}&\mathcal{B}_{r}^{\beta}([0,1]^{d})=\mathcal{G}_{d}^{\mathbf{H}}(d,\beta,r)=\mathcal{G}_{d}^{\mathbf{CH}}(0,K,d,\beta,r)\\ &\subset\mathcal{G}_{d}^{\mathbf{CHOM}}(0,K,d_{*},d,\beta,r),\;\forall\;K\in\mathbb{N},\;d\in\mathbb{N},\;d_{*}\in\mathbb{N},\;\beta\in(0,\infty),r\in(0,\infty).\end{split} \tag{2.33}\] Now we are in a position to state our Theorem 2.3, where we establish sharp convergence rates, which are free from the input dimension \(d\), for fully connected DNN classifiers trained with the logistic loss under the assumption that the conditional probability function \(\eta\) of the data distribution belongs to \(\mathcal{G}_{d}^{\mathbf{CHOM}}(q,K,d_{*},d_{\star},\beta,r)\). In particular, it can be shown that the convergence rate of the excess logistic risk stated in (2.36) in Theorem 2.3 is optimal (up to some logarithmic term). Since \(\mathcal{G}_{d}^{\mathbf{CH}}(q,K,d_{\star},\beta,r)\subset\mathcal{G}_{d}^{\mathbf{CHOM}}(q,K,d_{*},d_{\star},\beta,r)\), the same convergence rates as in Theorem 2.3 can also be achieved under the slightly narrower assumption that \(\eta\) belongs to \(\mathcal{G}_{d}^{\mathbf{CH}}(q,K,d_{\star},\beta,r)\). The results of Theorem 2.3 break the curse of dimensionality and help explain why deep neural networks perform well, especially in high-dimensional problems. **Theorem 2.3**.: _Let \(q\in\mathbb{N}\cup\{0\}\), \((d,d_{*},d_{\star},K)\in\mathbb{N}^{4}\) with \(d_{\star}\leq\min\left\{d,K+\mathbb{1}_{\{0\}}(q)\cdot(d-K)\right\}\), \((\beta,r)\in(0,\infty)^{2}\), \(n\in\mathbb{N}\), \(\nu\in[0,\infty)\), \(\{(X_{i},Y_{i})\}_{i=1}^{n}\) be an i.i.d. sample in \([0,1]^{d}\times\{-1,1\}\) and \(\hat{f}_{n}^{\mathbf{FNN}}\) be an ERM with respect to the logistic loss \(\phi(t)=\log\left(1+\mathrm{e}^{-t}\right)\) over the space \(\mathcal{F}_{d}^{\mathbf{FNN}}(G,N,S,B,F)\), which is given by (2.14). Define_ \[\mathcal{H}_{4,q,K,d_{*},d_{\star}}^{d,\beta,r}:=\left\{P\in\mathcal{H}_{0}^{d}\,\middle|\,\begin{array}{l}P_{X}\left(\left\{z\in[0,1]^{d}\,\middle|\,P(\{1\}|z)=\hat{\eta}(z)\right\}\right)=1\\ \text{for some }\hat{\eta}\in\mathcal{G}_{d}^{\mathbf{CHOM}}(q,K,d_{*},d_{\star},\beta,r)\end{array}\right\}. \tag{2.34}\] _Then there exists a constant \(\mathrm{c}\in(0,\infty)\) only depending on \((d_{*},d_{\star},\beta,r,q)\), such that the estimator \(\hat{f}_{n}^{\mathbf{FNN}}\) defined by (2.14) with_ \[\begin{split}&\mathrm{c}\log n\leq G\lesssim\log n,\ N\asymp\left(\frac{(\log n)^{5}}{n}\right)^{\frac{-d_{\star}}{d_{\star}+\beta\cdot(1\wedge\beta)^{q}}},\ S\asymp\left(\frac{(\log n)^{5}}{n}\right)^{\frac{-d_{\star}}{d_{\star}+\beta\cdot(1\wedge\beta)^{q}}}\cdot\log n,\\ & 1\leq B\lesssim n^{\nu},\text{ and }\ \frac{\beta\cdot(1\wedge\beta)^{q}}{d_{\star}+\beta\cdot(1\wedge\beta)^{q}}\cdot\log n\leq F\lesssim\log n\end{split} \tag{2.35}\] _satisfies_ \[\sup_{P\in\mathcal{H}_{4,q,K,d_{*},d_{\star}}^{d,\beta,r}}\boldsymbol{E}_{P^{\otimes n}}\left[\mathcal{E}_{P}^{\phi}\left(\hat{f}_{n}^{\mathbf{FNN}}\right)\right]\lesssim\left(\frac{(\log n)^{5}}{n}\right)^{\frac{\beta\cdot(1\wedge\beta)^{q}}{d_{\star}+\beta\cdot(1\wedge\beta)^{q}}} \tag{2.36}\] _and_ \[\sup_{P\in\mathcal{H}_{4,q,K,d_{*},d_{\star}}^{d,\beta,r}}\boldsymbol{E}_{P^{\otimes n}}\left[\mathcal{E}_{P}\left(\hat{f}_{n}^{\mathbf{FNN}}\right)\right]\lesssim\left(\frac{(\log n)^{5}}{n}\right)^{\frac{\beta\cdot(1\wedge\beta)^{q}}{2d_{\star}+2\beta\cdot(1\wedge\beta)^{q}}}. \tag{2.37}\] The proof of Theorem 2.3 is given in Appendix A.3.4.
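To make the compositional structure concrete, the following short script (our own illustration; the maps \(h_{0},h_{1},h_{2}\) mirror the notation above) numerically verifies that the function \(x\mapsto\max_{1\leq i<j\leq 4}(x)_{i}\cdot(x)_{j}\) from Figure 2.2 admits a factorization compatible with \(\mathcal{G}_{4}^{\mathbf{CHOM}}(2,6,3,2,2,2)\): a first layer of six Holder-smooth bivariate products, followed by two layers of maxima over at most three components.

```python
import itertools
import random

# h0 : [0,1]^4 -> [0,1]^6, the six pairwise products (x)_i (x)_j.
# Each component depends on only 2 coordinates and is Holder-2 smooth.
def h0(x):
    return [x[i] * x[j] for i, j in itertools.combinations(range(4), 2)]

# h1 : [0,1]^6 -> [0,1]^2, two maxima over 3 components each (max arity <= 3).
def h1(u):
    return [max(u[0:3]), max(u[3:6])]

# h2 : [0,1]^2 -> R, a final maximum over 2 (<= 3) components.
def h2(v):
    return max(v)

for _ in range(10_000):
    x = [random.random() for _ in range(4)]
    direct = max(x[i] * x[j] for i, j in itertools.combinations(range(4), 2))
    assert abs(h2(h1(h0(x))) - direct) < 1e-12
print("h2(h1(h0(x))) reproduces max_{i<j} x_i x_j on 10,000 random points")
```

Here the intermediate width never exceeds \(K=6\), the maxima have arity at most \(d_{*}=3\), and every smooth component depends on \(d_{\star}=2\) coordinates, matching the parameters \((q,K,d_{*},d_{\star},\beta,r)=(2,6,3,2,2,2)\).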
Note that Theorem 2.3 directly leads to Theorem 2.2 because it follows from (2.33) that \[\mathcal{H}_{1}^{d,\beta,r}\subset\mathcal{H}_{4,q,K,d_{*},d_{\star}}^{d,\beta,r}\ \text{if }q=0,\,d_{\star}=d\text{ and }d_{*}=K=1.\] Consequently, Theorem 2.3 can be regarded as a generalization of Theorem 2.2. Note that both the rates \(\mathcal{O}(\left(\frac{(\log n)^{5}}{n}\right)^{\frac{\beta\cdot(1\wedge\beta)^{q}}{d_{\star}+\beta\cdot(1\wedge\beta)^{q}}})\) and \(\mathcal{O}(\left(\frac{(\log n)^{5}}{n}\right)^{\frac{\beta\cdot(1\wedge\beta)^{q}}{2d_{\star}+2\beta\cdot(1\wedge\beta)^{q}}})\) in (2.36) and (2.37) are independent of the input dimension \(d\), thereby overcoming the curse of dimensionality. Moreover, according to Theorem 2.6 and the comments therein, the rate \(\left(\frac{(\log n)^{5}}{n}\right)^{\frac{\beta\cdot(1\wedge\beta)^{q}}{d_{\star}+\beta\cdot(1\wedge\beta)^{q}}}\) in (2.36) for the convergence of the excess logistic risk is even optimal (up to some logarithmic factor). This justifies the sharpness of Theorem 2.3. Next, we would like to demonstrate the main idea of the proof of Theorem 2.3. The strategy we adopted is to apply Theorem 2.1 with a suitable \(\psi\) satisfying (2.7). Let \(P\) be an arbitrary probability in \(\mathcal{H}_{4,q,K,d_{*},d_{\star}}^{d,\beta,r}\) and denote by \(\eta\) the conditional probability function \(P(\{1\}\,|\cdot)\) of \(P\). According to the previous discussions, we cannot simply take \(\psi(x,y)=\phi(yf_{\phi,P}^{*}(x))\) as the target function \(f_{\phi,P}^{*}=\log\frac{\eta}{1-\eta}\) is unbounded. Instead, we define \(\psi\) by (2.9) for some carefully selected \(\delta_{1}\in(0,1/2]\). For such \(\psi\), we prove \[\int_{[0,1]^{d}\times\{-1,1\}}\psi\left(x,y\right)\mathrm{d}P(x,y)=\inf\left\{\mathcal{R}_{P}^{\phi}(f)\,\middle|\,f:[0,1]^{d}\to\mathbb{R}\text{ is measurable}\right\} \tag{2.38}\] in Lemma A.3, and establish a tight inequality of form (2.5) with \(\Gamma=\mathcal{O}((\log\frac{1}{\delta_{1}})^{2})\) in Lemma A.10. We then calculate the covering numbers of \(\mathcal{F}:=\mathcal{F}_{d}^{\mathbf{FNN}}(G,N,S,B,F)\) by Corollary A.1 and use Lemma A.15 to estimate the approximation error \[\inf_{f\in\mathcal{F}}\left(\mathcal{R}_{P}^{\phi}(f)-\int_{[0,1]^{d}\times\{-1,1\}}\psi(x,y)\mathrm{d}P(x,y)\right),\] which is essentially \(\inf_{f\in\mathcal{F}}\mathcal{E}_{P}^{\phi}(f)\). Substituting the above estimations into the right hand side of (2.6) and taking supremum over \(P\in\mathcal{H}_{4,q,K,d_{*},d_{\star}}^{d,\beta,r}\), we obtain (2.36). We then derive (2.37) from (2.36) through the calibration inequality (2.21). We would like to point out that the above scheme for obtaining generalization bounds, which is built on our novel oracle-type inequality in Theorem 2.1 with a carefully constructed \(\psi\), is very general. This scheme can be used to establish generalization bounds for classification in other settings, provided that the estimation for the corresponding approximation error is given. For example, one can expect to establish generalization bounds for convolutional neural network (CNN) classification with the logistic loss by using Theorem 2.1 together with recent results about CNN approximation. CNNs perform convolutions instead of matrix multiplications in at least one of their layers (cf. Chapter 9 of [11]). Approximation properties of various CNN architectures have been intensively studied recently.
For instance, 1D CNN approximation is studied in [51, 50, 30, 8], and 2D CNN approximation is investigated in [24, 16]. With the help of these CNN approximation results and classical concentration techniques, generalization bounds for CNN classification have been established in many works such as [25, 24, 38, 10]. In our coming work [48], we will derive generalization bounds for CNN classification with logistic loss on spheres under the Sobolev smooth conditional probability assumption through the novel framework developed in our paper. In our proof of Theorem 2.2 and Theorem 2.3, a tight error bound for neural network approximation of the logarithm function \(\log(\cdot)\) arises as a by-product. Indeed, for a given data distribution \(P\) on \([0,1]^{d}\times\{-1,1\}\), to estimate the approximation error of \(\mathcal{F}_{d}^{\mathbf{FNN}}\), we need to construct neural networks to approximate the target function \(f_{\phi,P}^{*}=\log\frac{\eta}{1-\eta}\), where \(\eta\) denotes the conditional probability function of \(P\). Due to the unboundedness of \(f_{\phi,P}^{*}\), one cannot approximate \(f_{\phi,P}^{*}\) directly. To overcome this difficulty, we consider truncating \(f_{\phi,P}^{*}\) to obtain an efficient approximation. We design neural networks \(\tilde{\eta}\) and \(\tilde{l}\) to approximate \(\eta\) on \([0,1]^{d}\) and \(\log(\cdot)\) on \([\delta_{n},1-\delta_{n}]\) respectively, where \(\delta_{n}\in(0,1/4]\) is a carefully selected number which depends on the sample size \(n\) and tends to zero as \(n\to\infty\). Let \(\Pi_{\delta_{n}}\) denote the clipping function given by \(\Pi_{\delta_{n}}:\mathbb{R}\to[\delta_{n},1-\delta_{n}],t\mapsto\arg\min_{t^{ \prime}\in[\delta_{n},1-\delta_{n}]}|t^{\prime}-t|\). Then \(\tilde{L}:t\mapsto\tilde{l}(\Pi_{\delta_{n}}(t))-\tilde{l}(1-\Pi_{\delta_{n}} (t))\) is a neural network which approximates the function \[\overline{L}_{\delta_{n}}:t\mapsto\begin{cases}\log\frac{t}{1-t},&\text{if }t \in[\delta_{n},1-\delta_{n}],\\ \log\frac{1-\delta_{n}}{\delta_{n}},&\text{if }t>1-\delta_{n},\\ \log\frac{\delta_{n}}{1-\delta_{n}},&\text{if }t<\delta_{n},\end{cases} \tag{2.39}\] meaning that the function \(\tilde{L}(\tilde{\eta}(x))=\tilde{l}\left(\Pi_{\delta_{n}}\left(\tilde{\eta} (x)\right)\right)-\tilde{l}\left(\Pi_{\delta_{n}}\left(1-\tilde{\eta}(x) \right)\right)\) is a neural network which approximates the truncated \(f_{\phi,P}^{*}\) given by \[\overline{L}_{\delta_{n}}\circ\eta:x\mapsto\overline{L}_{\delta_{n}}(\eta(x)) =\begin{cases}f_{\phi,P}^{*}(x),&\text{if }\left|f_{\phi,P}^{*}(x)\right|\leq\log\frac{1- \delta_{n}}{\delta_{n}},\\ \operatorname{sgn}(f_{\phi,P}^{*}(x))\log\frac{1-\delta_{n}}{\delta_{n}},& \text{otherwise}.\end{cases}\] One can build \(\tilde{\eta}\) by applying some existing results on approximation theory of neural networks (see Appendix A.2). However, the construction of \(\tilde{l}\) requires more effort. Since the logarithm function \(\log(\cdot)\) is unbounded near \(0\), which leads to the blow-up of its Holder norm on \([\delta_{n},1-\delta_{n}]\) when \(\delta_{n}\) is becoming small, existing conclusions, e.g., the results in Appendix A.2, cannot yield satisfactory error bounds for neural network approximation of \(\log(\cdot)\) on \([\delta_{n},1-\delta_{n}]\). To see this, let us consider using Theorem A.2 to estimate the approximation error directly. 
Note that approximating \(\log(\cdot)\) on \([\delta_{n},1-\delta_{n}]\) is equivalent to approximating \(l_{\delta_{n}}(t):=\log((1-2\delta_{n})t+\delta_{n})\) on \([0,1]\). For \(\beta_{1}>0\) with \(k=\lceil\beta_{1}-1\rceil\) and \(\lambda=\beta_{1}-\lceil\beta_{1}-1\rceil\), denote by \(l_{\delta_{n}}^{(k)}\) the \(k\)-th derivative of \(l_{\delta_{n}}\). Then there holds \[\|l_{\delta_{n}}\|_{\mathcal{C}^{k,\lambda}([0,1])}\geq\sup_{0 \leq t<t^{\prime}\leq 1}\frac{\left|l_{\delta_{n}}^{(k)}(t)-l_{\delta_{n}}^{(k)}(t^{\prime}) \right|}{\left|t-t^{\prime}\right|^{\lambda}}\] \[\geq\frac{\left|l_{\delta_{n}}^{(k)}(0)-l_{\delta_{n}}^{(k)} \left(\frac{\delta_{n}}{1-2\delta_{n}}\right)\right|}{\left|0-\frac{\delta_{n} }{1-2\delta_{n}}\right|^{\lambda}}\geq\inf_{t\in\left[0,\frac{\delta_{n}}{1-2 \delta_{n}}\right]}\frac{\left|l_{\delta_{n}}^{(k+1)}(t)\right|\cdot\left|0- \frac{\delta_{n}}{1-2\delta_{n}}\right|}{\left|0-\frac{\delta_{n}}{1-2\delta_{ n}}\right|^{\lambda}}\] \[=\inf_{t\in[0,\frac{\delta_{n}}{1-2\delta_{n}}]}\frac{\left|\frac {k!}{((1-2\delta_{n})t+\delta_{n})^{k+1}}\right|\cdot\left(1-2\delta_{n}\right) ^{k+1}\cdot\left|0-\frac{\delta_{n}}{1-2\delta_{n}}\right|}{\left|0-\frac{ \delta_{n}}{1-2\delta_{n}}\right|^{\lambda}}=\frac{k!}{2^{k+1}}\cdot(1-2 \delta_{n})^{k+\lambda}\cdot\frac{1}{\delta_{n}^{k+\lambda}}.\] Hence it follows from \(\delta_{n}\in(0,1/4]\) that \[\|l_{\delta_{n}}\|_{\mathcal{C}^{k,\lambda}([0,1])}\geq\frac{\lceil\beta_{1}-1 \rceil!}{2^{\lceil\beta_{1}\rceil}}\cdot(1-2\delta_{n})^{\beta_{1}}\cdot \frac{1}{\delta_{n}^{\beta_{1}}}\geq\frac{\lceil\beta_{1}-1\rceil!}{4^{\lceil \beta_{1}\rceil}}\cdot\frac{1}{\delta_{n}^{\beta_{1}}}\geq\frac{3}{128}\cdot \frac{1}{\delta_{n}^{\beta_{1}}}.\] By Theorem A.2, for any positive integers \(m\) and \(M^{\prime}\) with \[M^{\prime}\geq\max\left\{(\beta_{1}+1),\left(\|l_{\delta_{n}}\|_{\mathcal{C}^{k, \lambda}([0,1])}\,\lceil\beta_{1}\rceil+1\right)\cdot\mathrm{e}\right\}\geq\|l _{\delta_{n}}\|_{\mathcal{C}^{k,\lambda}([0,1])}\geq\frac{3}{128}\cdot\frac{1}{ \delta_{n}^{\beta_{1}}}, \tag{2.40}\] there exists a neural network \[\tilde{f}\in\mathcal{F}_{1}^{\mathbf{FNN}}\left(14m(2+\log_{2}\left(1\lor \beta_{1}\right)),6\left(1+\lceil\beta_{1}\rceil\right)M^{\prime},987(2+\beta _{1})^{4}M^{\prime}m,1,\infty\right) \tag{2.41}\] such that \[\sup_{x\in[0,1]}\left|l_{\delta_{n}}(x)-\tilde{f}(x)\right| \leq\|l_{\delta_{n}}\|_{\mathcal{C}^{k,\lambda}([0,1])}\cdot\lceil \beta_{1}\rceil\cdot 3^{\beta_{1}}M^{\prime-\beta_{1}}\] \[\qquad+\left(1+2\,\|l_{\delta_{n}}\|_{\mathcal{C}^{k,\lambda}([0,1])}\cdot\lceil\beta_{1}\rceil\right)\cdot 6\cdot(2+\beta_{1}^{2})\cdot M^{ \prime}\cdot 2^{-m}.\] To make this error less than or equal to a given error threshold \(\varepsilon_{n}\) (depending on \(n\)), there must hold \[\varepsilon_{n}\geq\|l_{\delta_{n}}\|_{\mathcal{C}^{k,\lambda}([0,1])}\cdot \lceil\beta_{1}\rceil\cdot 3^{\beta_{1}}M^{\prime-\beta_{1}}\geq\|l_{\delta_{n}}\|_{ \mathcal{C}^{k,\lambda}([0,1])}\cdot M^{\prime-\beta_{1}}\geq M^{\prime-\beta _{1}}\cdot\frac{3}{128}\cdot\frac{1}{\delta_{n}^{\beta_{1}}}.\] This together with (2.40) gives \[M^{\prime}\geq\max\left\{\frac{3}{128}\cdot\frac{1}{\delta_{n}^{\beta_{1}}}, \varepsilon_{n}^{-1/\beta_{1}}\cdot\left|\frac{3}{128}\right|^{1/\beta_{1}} \cdot\frac{1}{\delta_{n}}\right\}. 
\tag{2.42}\] Consequently, the width and the number of nonzero parameters of \(\tilde{f}\) are greater than or equal to the right hand side of (2.42), which may be too large when \(\delta_{n}\) is small (recall that \(\delta_{n}\to 0\) as \(n\to\infty\)). In this paper, we establish a new sharp error bound for approximating the natural logarithm function \(\log(\cdot)\) on \([\delta_{n},1-\delta_{n}]\), which indicates that one can achieve the same approximation error by using a much smaller network. This refined error bound is given in Theorem 2.4 which is critical in our proof of Theorem 2.2 and also deserves special attention in its own right. **Theorem 2.4**.: _Given \(a\in(0,1/2]\), \(b\in(a,1]\), \(\alpha\in(0,\infty)\) and \(\varepsilon\in(0,1/2]\), there exists_ \[\tilde{f}\in\mathcal{F}_{1}^{\mathbf{FNN}}\left(A_{1}\log\frac{1} {\varepsilon}+139\log\frac{1}{a}\,,\;A_{2}\left|\frac{1}{\varepsilon}\right|^{ \frac{1}{\alpha}}\cdot\log\frac{1}{a}\,,\right.\] \[\left.A_{3}\left|\frac{1}{\varepsilon}\right|^{\frac{1}{\alpha}} \cdot\left|\log\frac{1}{\varepsilon}\right|\cdot\log\frac{1}{a}+65440\left| \log\frac{1}{a}\right|^{2},1,\infty\right)\] _such that_ \[\sup_{z\in[a,b]}\left|\log z-\tilde{f}(z)\right|\leq\varepsilon\text{ and }\log a\leq\tilde{f}(t)\leq\log b,\;\forall\;t\in\mathbb{R},\] _where \((A_{1},A_{2},A_{3})\in(0,\infty)^{3}\) are constants depending only on \(\alpha\)._ In Theorem 2.4, we show that for each fixed \(\alpha\in(0,\infty)\) one can construct a neural network to approximate the natural logarithm function \(\log(\cdot)\) on \([a,b]\) with error \(\varepsilon\), where the depth, width and number of nonzero parameters of this neural network are in the same order of magnitude as \(\log\frac{1}{\varepsilon}+\log\frac{1}{a}\), \(\left(\frac{1}{\varepsilon}\right)^{\frac{1}{\alpha}}\left(\log\frac{1}{a}\right)\) and \(\left(\frac{1}{\varepsilon}\right)^{\frac{1}{\alpha}}\left(\log\frac{1}{ \varepsilon}\right)\left(\log\frac{1}{a}\right)+\left(\log\frac{1}{a}\right)^{2}\) respectively. Recall that in our generalization analysis we need to approximate \(\log\) on \([\delta_{n},1-\delta_{n}]\), which is equivalent to approximating \(l_{\delta_{n}}(t)=\log((1-2\delta_{n})t+\delta_{n})\) on \([0,1]\). Let \(\varepsilon_{n}\in(0,1/2]\) denote the desired accuracy of the approximation of \(l_{\delta_{n}}\) on \([0,1]\), which depends on the sample size \(n\) and converges to zero as \(n\to\infty\). Using Theorem 2.4 with \(\alpha=2\beta_{1}\), we deduce that for any \(\beta_{1}>0\) one can approximate \(l_{\delta_{n}}\) on \([0,1]\) with error \(\varepsilon_{n}\) by a network of which the width and the number of nonzero parameters are less than \(C_{\beta_{1}}\varepsilon_{n}^{-\frac{1}{2\beta_{1}}}\left|\log\varepsilon_{n} \right|\cdot\left|\log\delta_{n}\right|^{2}\) with some constant \(C_{\beta_{1}}>0\) (depending only on \(\beta_{1}\)). The complexity of this neural network is much smaller than that of \(\tilde{f}\) defined in (2.41) with (2.42) as \(n\to\infty\) since \(\left|\log\delta_{n}\right|^{2}=\mathrm{o}\left(1/\delta_{n}\right)\) and \(\varepsilon_{n}^{-\frac{1}{2\beta_{1}}}\left|\log\varepsilon_{n}\right|= \mathrm{o}\left(\varepsilon_{n}^{-1/\beta_{1}}\right)\) as \(n\to\infty\). 
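The mechanism behind Theorem 2.4 can be previewed numerically: its proof reduces the approximation of \(\log\) near zero to the approximation of \(\log\) on \([1/2,1]\), where all Holder norms are uniformly bounded (this reduction is made precise in (2.45) below). The following sketch is our own illustration, with an arbitrarily chosen degree-\(12\) Taylor polynomial on \([1/2,1]\) standing in for the neural network approximant; the accuracy does not deteriorate as the left endpoint \(a\) tends to zero — only the number of doubling steps grows, like \(\log\frac{1}{a}\).

```python
import math

# Fixed-degree Taylor approximation of log on [1/2, 1], centered at 3/4.
# The Holder norms of log are uniformly bounded on [1/2, 1], so a modest
# degree suffices no matter how close the left endpoint 'a' is to zero.
C = 0.75
DEG = 12

def log_core(z):           # approximates log(z) for z in [1/2, 1]
    t = (z - C) / C        # |t| <= 1/3, so the Taylor series converges fast
    return math.log(C) + sum((-1) ** (k + 1) * t ** k / k
                             for k in range(1, DEG + 1))

def log_range_reduced(x):  # approximates log(x) for x in (0, 1]
    k = 0
    while x < 0.5:         # range reduction: log x = log(2^k x) - k log 2
        x *= 2.0
        k += 1
    return log_core(x) - k * math.log(2.0)

a = 1e-8                   # accuracy does not degrade as a -> 0
worst = max(abs(log_range_reduced(x) - math.log(x))
            for x in [a * 10 ** i for i in range(9)])
print(f"max error on sample points down to {a:g}: {worst:.2e}")
```

Each doubling step corresponds, in the network construction, to a bounded amount of additional depth and parameters, which is consistent with the \(\log\frac{1}{a}\) factors in the size bounds of Theorem 2.4.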
In particular, when \[\frac{1}{n^{\theta_{2}}}\lesssim\varepsilon_{n}\wedge\delta_{n}\leq \varepsilon_{n}\vee\delta_{n}\lesssim\frac{1}{n^{\theta_{1}}}\text{ for some }\theta_{2}\geq\theta_{1}>0\text{ independent of }n\text{ or }\beta_{1}, \tag{2.43}\] which occurs in our generalization analysis (e.g., in our proof of Theorem 2.3, we essentially take \(\varepsilon_{n}=\delta_{n}\asymp\left(\frac{(\log n)^{5}}{n}\right)^{\frac{\beta\cdot(1\wedge\beta)^{q}}{d_{\star}+\beta\cdot(1\wedge\beta)^{q}}}\), meaning that \(n^{-\frac{\beta\cdot(1\wedge\beta)^{q}}{d_{\star}+\beta\cdot(1\wedge\beta)^{q}}}\lesssim\varepsilon_{n}=\delta_{n}\lesssim n^{-\frac{\beta\cdot(1\wedge\beta)^{q}}{2(d_{\star}+\beta\cdot(1\wedge\beta)^{q})}}\); cf. (A.78), (A.82), (A.91) and (A.96)), we will have that the right hand side of (2.42) grows no slower than \(n^{\theta_{1}+\theta_{1}/\beta_{1}}\). Hence, in this case, no matter what \(\beta_{1}\) is, the width and the number of nonzero parameters of the network \(\tilde{f}\), which approximates \(l_{\delta_{n}}\) on \([0,1]\) with error \(\varepsilon_{n}\) and is obtained by using Theorem A.2 directly (cf. (2.41)), will grow faster than \(n^{\theta_{1}}\) as \(n\to\infty\). However, it follows from Theorem 2.4 that there exists a network \(\overline{f}\) of which the width and the number of nonzero parameters are less than \(C_{\beta_{1}}\varepsilon_{n}^{-\frac{1}{2\beta_{1}}}\left|\log\varepsilon_{n}\right|\cdot\left|\log\delta_{n}\right|^{2}\lesssim n^{\frac{\theta_{2}}{2\beta_{1}}}\left|\log n\right|^{3}\) such that it achieves the same approximation error as that of \(\tilde{f}\). By taking \(\beta_{1}\) large enough we can make the growth (as \(n\to\infty\)) of the width and the number of nonzero parameters of \(\overline{f}\) slower than \(n^{\theta}\) for arbitrary \(\theta\in(0,\theta_{1}]\). Therefore, in the usual case when the complexity of \(\tilde{\eta}\) is not too small, in the sense that the width and the number of nonzero parameters of \(\tilde{\eta}\) grow faster than \(n^{\theta_{3}}\) as \(n\to\infty\) for some \(\theta_{3}\in(0,\infty)\) independent of \(n\) or \(\beta_{1}\), we can use Theorem 2.4 with a large enough \(\alpha=2\beta_{1}\) to construct the desired network \(\tilde{l}\) of which the complexity is insignificant in comparison to that of \(\tilde{L}\circ\tilde{\eta}\). In other words, the neural network approximation of the logarithmic function based on Theorem 2.4 brings little complexity in approximating the target function \(f_{\phi,P}^{*}\). The above discussion demonstrates the tightness of the inequality in Theorem 2.4 and the advantage of Theorem 2.4 over those general results on approximation theory of neural networks such as Theorem A.2. It is worth mentioning that an alternative way to approximate the function \(\overline{L}_{\delta_{n}}\) defined in (2.39) is by simply using its piecewise linear interpolation. For example, in [25], the authors express the piecewise linear interpolation of \(\overline{L}_{\delta_{n}}\) at equidistant points by a neural network \(\tilde{L}\), and construct a CNN \(\tilde{\eta}\) to approximate \(\eta\), leading to an approximation \(\tilde{L}\circ\tilde{\eta}\) of the truncated target function of the logistic risk. It follows from Proposition 3.2.4 of [2] that \[h_{n}^{2}\lesssim\left\|\tilde{L}-\overline{L}_{\delta_{n}}\right\|_{[\delta_{n},1-\delta_{n}]}\lesssim\frac{h_{n}^{2}}{\delta_{n}^{2}}, \tag{2.44}\] where \(h_{n}\) denotes the step size of the interpolation.
Therefore, to ensure the error bound \(\varepsilon_{n}\) for the approximation of \(\overline{L}_{\delta_{n}}\) by \(\tilde{L}\), we must have \(h_{n}\lesssim\sqrt{\varepsilon_{n}}\), implying that the number of nonzero parameters of \(\tilde{L}\) will grow no slower than \(\frac{1}{h_{n}}\gtrsim\frac{1}{\sqrt{\varepsilon_{n}}}\) as \(n\to\infty\). Consequently, in the case (2.43), we will have that the number of nonzero parameters of \(\tilde{L}\) will grow no slower than \(n^{\theta_{1}/2}\). Therefore, in contrast to using Theorem 2.4, we cannot make the number of nonzero parameters of the network \(\tilde{L}\) obtained from piecewise linear interpolation grow slower than \(n^{\theta}\) for arbitrarily small \(\theta>0\). As a result, using piecewise linear interpolation to approximate \(\overline{L}_{\delta_{n}}\) may bring extra complexity in establishing the approximation of the target function. However, the advantage of using piecewise linear interpolation is that one can make the depth or width of the network \(\tilde{L}\) which expresses the desired interpolation bounded as \(n\to\infty\) (cf. Lemma 7 in [25] and its proof therein). The proof of Theorem 2.4 is in Appendix A.3.3. The key observation in our proof is the fact that for all \(k\in\mathbb{N}\), the following holds true: \[\log x=\log(2^{k}\cdot x)-k\log 2,\ \ \ \ \forall\ x\in(0,\infty). \tag{2.45}\] Then we can use the values of \(\log(\cdot)\) which are taken far away from zero (i.e., \(\log(2^{k}\cdot x)\) in the right hand side of (2.45)) to determine its values taken near zero, while approximating the former is more efficient as the Holder norm of the natural logarithm function on domains far away from zero can be well controlled. In the next theorem, we show that if the data distribution has a piecewise smooth decision boundary, then DNN classifiers trained by empirical logistic risk minimization can also achieve dimension-free rates of convergence under the noise condition (2.24) and a margin condition (see (2.51) below). Before stating this result, we need to introduce this margin condition and relevant concepts. We first define the set of (binary) classifiers which have a piecewise Holder smooth decision boundary. We will adopt similar notations from [22] to describe this set. Specifically, let \(\beta,r\in(0,\infty)\) and \(I,\Theta\in\mathbb{N}\). For \(g\in\mathcal{B}_{r}^{\beta}\left([0,1]^{d-1}\right)\) and \(j=1,2,\cdots,d\), we define the horizon function \(\Psi_{g,j}:[0,1]^{d}\to\{0,1\}\) as \(\Psi_{g,j}(x):=\mathbb{1}_{\{(x)_{j}\geq g(x_{-j})\}}\), where \(x_{-j}:=((x)_{1},\cdots,(x)_{j-1},(x)_{j+1},\cdots,(x)_{d})\in[0,1]^{d-1}\). For each horizon function, the corresponding basis piece \(\Lambda_{g,j}\) is defined as \(\Lambda_{g,j}:=\left\{x\in[0,1]^{d}\,\middle|\,\Psi_{g,j}(x)=1\right\}\). Note that \(\Lambda_{g,j}=\left\{x\in[0,1]^{d}\,\middle|\,(x)_{j}\geq\max\left\{0,g(x_{-j})\right\}\right\}\). Thus \(\Lambda_{g,j}\) is enclosed by the hypersurface \(\mathcal{S}_{g,j}:=\left\{x\in[0,1]^{d}\,\middle|\,(x)_{j}=\max\left\{0,g(x_{-j})\right\}\right\}\) and (part of) the boundary of \([0,1]^{d}\).
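As a quick illustration of these objects (a toy example of ours, not taken from [22]): for \(d=2\), take a single smooth horizon \(g\) and \(j=2\). The script below evaluates the horizon function \(\Psi_{g,2}\) and empirically checks that, for \(P_{X}\) uniform on \([0,1]^{2}\), the mass within distance \(t\) of the induced hypersurface scales roughly linearly in \(t\) — the behavior quantified by the margin condition (2.51) below with exponent \(s_{2}=1\).

```python
import numpy as np

rng = np.random.default_rng(0)
g = lambda u: 0.5 + 0.2 * np.sin(2 * np.pi * u)   # a smooth horizon g on [0,1]

# Horizon function Psi_{g,2}(x) = 1{ (x)_2 >= g((x)_1) }.
def psi(x):
    return float(x[1] >= g(x[0]))

x0 = np.array([0.3, 0.9])
print("Psi_{g,2}(x0) =", psi(x0))                 # 1.0: x0 lies in Lambda_{g,2}

# Distance to the hypersurface {x : (x)_2 = g((x)_1)}, via a fine boundary grid.
u = np.linspace(0.0, 1.0, 4001)
boundary = np.stack([u, g(u)], axis=1)

def dist_to_boundary(x):
    return np.min(np.linalg.norm(boundary - x, axis=1))

xs = rng.random((5000, 2))
dists = np.array([dist_to_boundary(x) for x in xs])
for t in (0.02, 0.04, 0.08):
    # For a smooth curve, the uniform mass of a t-tube is ~ c * t.
    print(f"t = {t:.2f}:  fraction within distance t = {(dists <= t).mean():.3f}")
```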
We then define the set of pieces which are the intersection of \(I\) basis pieces as \[\mathcal{A}^{d,\beta,r,I}:=\left\{A\left|A=\bigcap_{k=1}^{I}\Lambda_{g_{k},j_ {k}}\text{ for some }j_{k}\in\{1,2,\cdots,d\}\text{ and }g_{k}\in\mathcal{B}_{r}^{\beta}\left([0,1]^{d-1}\right)\ \right\},\] and define \(\mathcal{C}^{d,\beta,r,I,\Theta}\) to be a set of binary classifiers as \[\mathcal{C}^{d,\beta,r,I,\Theta} \tag{2.46}\] \[:=\left\{\begin{array}{l}\mathtt{C}(x)=2\sum_{i=1}^{\Theta} \mathbb{1}_{A_{i}}(x)-1:[0,1]^{d}\to\{-1,1\}\Bigg{|}\begin{array}{l}A_{1},A _{2},A_{3},\cdots,A_{\Theta}\text{ are }\\ \text{disjoint sets in }\mathcal{A}^{d,\beta,r,I}\end{array}\right\}.\] Thus \(\mathcal{C}^{d,\beta,r,I,\Theta}\) consists of all binary classifiers which are equal to \(+1\) on some disjoint sets \(A_{1},\ldots,A_{\Theta}\) in \(\mathcal{A}^{d,\beta,r,I}\) and \(-1\) otherwise. Let \(A_{t}=\cap_{k=1}^{I}\Lambda_{g_{t,k},j_{t,k}}\) (\(t=1,2,\ldots,\Theta\)) be arbitrary disjoint sets in \(\mathcal{A}^{d,\beta,r,I}\), where \(j_{t,k}\in\{1,2,\ldots,d\}\) and \(g_{t,k}\in\mathcal{B}_{r}^{\beta}\left([0,1]^{d-1}\right)\). Then \(\mathtt{C}:[0,1]^{d}\to\{-1,1\},x\mapsto 2\sum_{i=1}^{\Theta}\mathbb{1}_{A_{i}}(x)-1\) is a classifier in \(\mathcal{C}^{d,\beta,r,I,\Theta}\). Recall that \(\Lambda_{g_{t,k},j_{t,k}}\) is enclosed by \(\mathcal{S}_{g_{t,k},j_{t,k}}\) and (part of) the boundary of \([0,1]^{d}\) for each \(t,k\). Hence for each \(t\), the region \(A_{t}\) is enclosed by hypersurfaces \(\mathcal{S}_{g_{t,k},j_{t,k}}\) (\(k=1,\ldots,I\)) and (part of) the boundary of \([0,1]^{d}\). We say the piecewise Holder smooth hypersurface \[D_{\mathtt{C}}^{*}:=\bigcup_{t=1}^{\Theta}\bigcup_{k=1}^{I}\left(\mathcal{S}_ {g_{t,k},j_{t,k}}\cap A_{t}\right) \tag{2.47}\] is the _decision boundary_ of the classifier \(\mathtt{C}\) because intuitively, points on different sides of \(D_{\mathtt{C}}^{*}\) are classified into different categories (i.e. \(+1\) and \(-1\)) by \(\mathtt{C}\) (cf. Figure 2.3). Denote by \(\Delta_{\mathtt{C}}(x)\) the distance from \(x\in[0,1]^{d}\) to the decision boundary \(D_{\mathtt{C}}^{*}\), i.e., \[\Delta_{\mathtt{C}}(x):=\inf\left\{\left\|x-x^{\prime}\right\|_{2}\left|x^{ \prime}\in D_{\mathtt{C}}^{*}\right\}. \tag{2.48}\] We then describe the margin condition mentioned above. Let \(P\) be a probability measure on \([0,1]^{d}\times\{-1,1\}\), which we regard as the joint distribution of the input and output data, and \(\eta(\cdot)=P(\{1\}\mid\cdot)\) is the conditional probability function of \(P\). The corresponding Bayes classifier is the sign of \(2\eta-1\) which minimizes the misclassification error over all measurable functions, i.e., \[\mathcal{R}_{P}(\operatorname{sgn}(2\eta-1))=\mathcal{R}_{P}(2\eta-1)=\inf \left\{\mathcal{R}_{P}(f)\left|f:[0,1]^{d}\to\mathbb{R}\text{ is measurable}\right.\right\}. \tag{2.49}\] We say the distribution \(P\) has a piecewise smooth decision boundary if \[\exists\mathtt{C}\in\mathcal{C}^{d,\beta,r,I,\Theta}\text{ s.t. }\text{ sgn}(2\eta-1)\xrightarrow{P_{\mathtt{X}\text{-a.s.}}}\mathtt{C},\] that is, \[P_{X}\left(\left\{x\in[0,1]^{d}\right|\operatorname{sgn}(2\cdot P(\{1\}\mid x )-1)=\mathtt{C}(x)\right\}\right)=1 \tag{2.50}\] for some \(\mathtt{C}\in\mathcal{C}^{d,\beta,r,I,\Theta}\). Suppose \(\mathtt{C}\in\mathcal{C}^{d,\beta,r,I,\Theta}\) and (2.50) holds. 
We call \(D_{\mathtt{C}}^{*}\) the _decision boundary_ of \(P\), and for \(c_{2}\in(0,\infty)\), \(t_{2}\in(0,\infty)\), \(s_{2}\in[0,\infty]\), we use the following condition \[P_{X}\left(\left\{x\in[0,1]^{d}\,\middle|\,\Delta_{\mathtt{C}}(x)\leq t\right\}\right)\leq c_{2}t^{s_{2}},\quad\forall\;0<t\leq t_{2}, \tag{2.51}\] which we call the _margin condition_, to measure the concentration of the input distribution \(P_{X}\) near the decision boundary \(D_{\mathtt{C}}^{*}\) of \(P\). In particular, when the input data are bounded away from the decision boundary \(D_{\mathtt{C}}^{*}\) of \(P\) (\(P_{X}\)-a.s.), (2.51) will hold for \(s_{2}=\infty\). Now we are ready to give our next main theorem. **Theorem 2.5**.: _Let \(d\in\mathbb{N}\cap[2,\infty)\), \((n,I,\Theta)\in\mathbb{N}^{3}\), \((\beta,r,t_{1},t_{2},c_{1},c_{2})\in(0,\infty)^{6}\), \((s_{1},s_{2})\in[0,\infty]^{2}\), \(\{(X_{i},Y_{i})\}_{i=1}^{n}\) be an i.i.d. sample in \([0,1]^{d}\times\{-1,1\}\) and \(\hat{f}_{n}^{\mathbf{FNN}}\) be an ERM with respect to the logistic loss \(\phi(t)=\log\left(1+\mathrm{e}^{-t}\right)\) over \(\mathcal{F}_{d}^{\mathbf{FNN}}(G,N,S,B,F)\) which is given by (2.14). Define_ \[\mathcal{H}_{6,t_{1},c_{1},t_{2},c_{2}}^{d,\beta,r,I,\Theta,s_{1},s_{2}}:=\left\{P\in\mathcal{H}_{0}^{d}\,\middle|\,\begin{subarray}{c}\text{(2.24), (2.50) and (2.51) hold}\\ \text{for some }\mathtt{C}\in\mathcal{C}^{d,\beta,r,I,\Theta}\end{subarray}\right\}. \tag{2.52}\] _Then the following statements hold true:_ 1. _For_ \(s_{1}\in[0,\infty]\) _and_ \(s_{2}=\infty\)_, the_ \(\phi\)_-ERM_ \(\hat{f}_{n}^{\mathbf{FNN}}\) _with_ \[G=G_{0}\log\frac{1}{t_{2}\wedge\frac{1}{2}},\ N=N_{0}\left(\frac{1}{t_{2}\wedge\frac{1}{2}}\right)^{\frac{d-1}{\beta}},\ S=S_{0}\left(\frac{1}{t_{2}\wedge\frac{1}{2}}\right)^{\frac{d-1}{\beta}}\log\left(\frac{1}{t_{2}\wedge\frac{1}{2}}\right),\] \[B=B_{0}\left(\frac{1}{t_{2}\wedge\frac{1}{2}}\right),\text{ and }\ F\asymp\left(\frac{\log n}{n}\right)^{\frac{1}{s_{1}+2}}\] _satisfies_ \[\sup_{P\in\mathcal{H}_{6,t_{1},c_{1},t_{2},c_{2}}^{d,\beta,r,I,\Theta,s_{1},s_{2}}}\boldsymbol{E}_{P^{\otimes n}}\left[\mathcal{E}_{P}\left(\hat{f}_{n}^{\mathbf{FNN}}\right)\right]\lesssim\left(\frac{\log n}{n}\right)^{\frac{s_{1}}{s_{1}+2}}, \tag{2.53}\] _where_ \(G_{0},N_{0},S_{0},B_{0}\) _are positive constants only depending on_ \(d,\beta,r,I,\Theta\)_;_ 2. _For_ \(s_{1}=\infty\) _and_ \(s_{2}\in[0,\infty)\)_, the_ \(\phi\)_-ERM_ \(\hat{f}_{n}^{\mathbf{FNN}}\) _with_ \[G\asymp\log n,\ N\asymp\left(\frac{n}{(\log n)^{3}}\right)^{\frac{d-1}{s_{2}\beta+d-1}},\ S\asymp\left(\frac{n}{(\log n)^{3}}\right)^{\frac{d-1}{s_{2}\beta+d-1}}\log n,\] \[B\asymp\left(\frac{n}{(\log n)^{3}}\right)^{\frac{1}{s_{2}+\frac{d-1}{\beta}}},\text{ and }\ F=t_{1}\wedge\frac{1}{2}\] _satisfies_ \[\sup_{P\in\mathcal{H}_{6,t_{1},c_{1},t_{2},c_{2}}^{d,\beta,r,I,\Theta,s_{1},s_{2}}}\boldsymbol{E}_{P^{\otimes n}}\left[\mathcal{E}_{P}\left(\hat{f}_{n}^{\mathbf{FNN}}\right)\right]\lesssim\left(\frac{(\log n)^{3}}{n}\right)^{\frac{1}{1+\frac{d-1}{s_{2}\beta}}}; \tag{2.54}\] 3.
_For_ \(s_{1}\in[0,\infty)\) _and_ \(s_{2}\in[0,\infty)\)_, the_ \(\phi\)_-ERM_ \(\hat{f}_{n}^{\mathbf{FNN}}\) _with_ \[G\asymp\log n,\ N\asymp\left(\frac{n}{(\log n)^{3}}\right)^{\frac{(d-1)(s_{1}+1)}{s_{2}\beta+(s_{1}+1)(s_{2}\beta+d-1)}},\ S\asymp\left(\frac{n}{(\log n)^{3}}\right)^{\frac{(d-1)(s_{1}+1)}{s_{2}\beta+(s_{1}+1)(s_{2}\beta+d-1)}}\log n,\] \[B\asymp\left(\frac{n}{(\log n)^{3}}\right)^{\frac{s_{1}+1}{s_{2}+(s_{1}+1)\left(s_{2}+\frac{d-1}{\beta}\right)}},\text{ and }\ F\asymp\left(\frac{(\log n)^{3}}{n}\right)^{\frac{s_{2}}{s_{2}+(s_{1}+1)\left(s_{2}+\frac{d-1}{\beta}\right)}}\] _satisfies_ \[\sup_{P\in\mathcal{H}_{6,t_{1},c_{1},t_{2},c_{2}}^{d,\beta,r,I,\Theta,s_{1},s_{2}}}\boldsymbol{E}_{P^{\otimes n}}\left[\mathcal{E}_{P}\left(\hat{f}_{n}^{\mathbf{FNN}}\right)\right]\lesssim\left(\frac{(\log n)^{3}}{n}\right)^{\frac{s_{1}}{1+(s_{1}+1)\left(1+\frac{d-1}{\beta s_{2}}\right)}}. \tag{2.55}\] It is worth noting that the rate \(\mathcal{O}(\left(\frac{\log n}{n}\right)^{\frac{s_{1}}{s_{1}+2}})\) established in (2.53) does not depend on the dimension \(d\), and the dependency of the rates in (2.54) and (2.55) on the dimension \(d\) diminishes as \(s_{2}\) increases, which demonstrates that the condition (2.51) with \(s_{2}=\infty\) helps circumvent the curse of dimensionality. In particular, (2.53) will give a fast dimension-free rate of convergence \(\mathcal{O}(\frac{\log n}{n})\) if \(s_{1}=s_{2}=\infty\). One may refer to Section 3 for more discussions about the result of Theorem 2.5. The proof of Theorem 2.5 is in Appendix A.3.5. Our proof relies on Theorem 2.1 and the fact that the ReLU networks are good at approximating indicator functions of bounded regions with piecewise smooth boundary [18, 33]. Let \(P\) be an arbitrary probability in \(\mathcal{H}_{6,t_{1},c_{1},t_{2},c_{2}}^{d,\beta,r,I,\Theta,s_{1},s_{2}}\) and denote by \(\eta\) the conditional probability function \(P(\{1\}\,|\cdot)\) of \(P\). To apply Theorem 2.1 and make good use of the noise condition (2.24) and the margin condition (2.51), we define another \(\psi\) (which is different from that in (2.9)) as \[\psi:[0,1]^{d}\times\{-1,1\}\to\mathbb{R},\quad(x,y)\mapsto\begin{cases}\phi\left(yF_{0}\mathrm{sgn}(2\eta(x)-1)\right),&\text{if }\left|2\eta(x)-1\right|>\eta_{0},\\ \phi\left(y\log\frac{\eta(x)}{1-\eta(x)}\right),&\text{if }\left|2\eta(x)-1\right|\leq\eta_{0}\end{cases}\] for some suitable \(\eta_{0}\in(0,1)\) and \(F_{0}\in\left(0,\log\frac{1+\eta_{0}}{1-\eta_{0}}\right)\). For such \(\psi\), Lemma A.17 guarantees that inequality (2.3) holds as \[\int_{[0,1]^{d}\times\{-1,1\}}\psi\left(x,y\right)\mathrm{d}P(x,y)\leq\inf\left\{\mathcal{R}_{P}^{\phi}(f)\,\middle|\,f:[0,1]^{d}\to\mathbb{R}\text{ is measurable}\right\},\] and (2.4), (2.5) of Theorem 2.1 are satisfied with \(M=\frac{2}{1-\eta_{0}}\) and \(\Gamma=\frac{8}{1-\eta_{0}^{2}}\). Moreover, we utilize the noise condition (2.24) and the margin condition (2.51) to bound the approximation error \[\inf_{f\in\mathcal{F}_{d}^{\mathbf{FNN}}(G,N,S,B,F)}\left(\mathcal{R}_{P}^{\phi}(f)-\int_{[0,1]^{d}\times\{-1,1\}}\psi(x,y)\mathrm{d}P(x,y)\right) \tag{2.56}\] (see (A.115), (A.116), (A.117)).
Then, as in the proof of Theorem 2.2, we combine Theorem 2.1 with estimates for the covering number of \(\mathcal{F}_{d}^{\mathbf{FNN}}(G,N,S,B,F)\) and the approximation error (2.56) to obtain an upper bound for \(\boldsymbol{E}_{P^{\otimes n}}\left[\mathcal{R}_{P}^{\phi}\left(\hat{f}_{n}^{\mathbf{FNN}}\right)-\int\psi\mathrm{d}P\right]\), which, together with the noise condition (2.24), yields an upper bound for \(\boldsymbol{E}_{P^{\otimes n}}\left[\mathcal{E}_{P}(\hat{f}_{n}^{\mathbf{FNN}})\right]\) (see (A.113)). Finally taking the supremum over all \(P\in\mathcal{H}_{6,t_{1},c_{1},t_{2},c_{2}}^{d,\beta,r,I,\Theta,s_{1},s_{2}}\) gives the desired result. The proof of Theorem 2.5 along with that of Theorem 2.2 and Theorem 2.3 indicates that Theorem 2.1 is very flexible in the sense that it can be used in various settings with different choices of \(\psi\). ### Main Lower Bounds In this subsection, we will give our main results on lower bounds for convergence rates of the logistic risk, which will justify the optimality of our upper bounds established in the last subsection. To state these results, we need some notations. Recall that for any \(a\in[0,1]\), \(\mathscr{M}_{a}\) denotes the probability measure on \(\{-1,1\}\) with \(\mathscr{M}_{a}(\{1\})=a\) and \(\mathscr{M}_{a}(\{-1\})=1-a\). For any measurable \(\eta:[0,1]^{d}\to[0,1]\) and any Borel probability measure \(\mathscr{Q}\) on \([0,1]^{d}\), we denote \[\begin{split}P_{\eta,\mathscr{Q}}:&\{\text{Borel subsets of }[0,1]^{d}\times\{-1,1\}\}\to[0,1],\\ S&\mapsto\int_{[0,1]^{d}}\int_{\{-1,1\}}\mathbb{1}_{S}(x,y)\mathrm{d}\mathscr{M}_{\eta(x)}(y)\mathrm{d}\mathscr{Q}(x).\end{split} \tag{2.57}\] Therefore, \(P_{\eta,\mathscr{Q}}\) is the (unique) probability measure on \([0,1]^{d}\times\{-1,1\}\) of which the marginal distribution on \([0,1]^{d}\) is \(\mathscr{Q}\) and the conditional probability function is \(\eta\). If \(\mathscr{Q}\) is the Lebesgue measure on \([0,1]^{d}\), we will write \(P_{\eta}\) for \(P_{\eta,\mathscr{Q}}\). For any \(\beta\in(0,\infty)\), \(r\in(0,\infty)\), \(A\in[0,1)\), \(q\in\mathbb{N}\cup\{0\}\), and \((d,d_{\star},K)\in\mathbb{N}^{3}\) with \(d_{\star}\leq\min\left\{d,K+\mathbb{1}_{\{0\}}(q)\cdot(d-K)\right\}\), define \[\begin{split}&\mathcal{H}_{3,A}^{d,\beta,r}:=\left\{P_{\eta}\,\middle|\,\begin{array}{l}\eta\in\mathcal{B}_{r}^{\beta}([0,1]^{d}),\ \mathbf{ran}(\eta)\subset[0,1],\text{ and}\\ \int_{[0,1]^{d}}\mathbb{1}_{[0,A]}(|2\eta(x)-1|)\mathrm{d}x=0\end{array}\right\},\\ &\mathcal{H}_{5,A,q,K,d_{\star}}^{d,\beta,r}:=\left\{P_{\eta}\,\middle|\,\begin{array}{l}\eta\in\mathcal{G}_{d}^{\mathbf{CH}}(q,K,d_{\star},\beta,r),\ \mathbf{ran}(\eta)\subset[0,1],\text{ and}\\ \int_{[0,1]^{d}}\mathbb{1}_{[0,A]}(|2\eta(x)-1|)\mathrm{d}x=0\end{array}\right\}.\end{split} \tag{2.58}\] Now we can state our Theorem 2.6. Recall that \(\mathcal{F}_{d}\) is the set of all measurable real-valued functions defined on \([0,1]^{d}\). **Theorem 2.6**.: _Let \(\phi\) be the logistic loss, \(n\in\mathbb{N}\), \(\beta\in(0,\infty)\), \(r\in(0,\infty)\), \(A\in[0,1)\), \(q\in\mathbb{N}\cup\{0\}\), and \((d,d_{\star},K)\in\mathbb{N}^{3}\) with \(d_{\star}\leq\min\left\{d,K+\mathbb{1}_{\{0\}}(q)\cdot(d-K)\right\}\). Suppose \(\{(X_{i},Y_{i})\}_{i=1}^{n}\) is a sample in \([0,1]^{d}\times\{-1,1\}\) of size \(n\)._
_Then there exists a constant \(\mathrm{c}_{0}\in(0,\infty)\) only depending on \((d_{\star},\beta,r,q)\), such that_ \[\inf_{\hat{f}_{n}}\sup_{P\in\mathcal{H}_{5,A,q,K,d_{\star}}^{d,\beta,r}}\boldsymbol{E}_{P^{\otimes n}}\left[\mathcal{E}_{P}^{\phi}(\hat{f}_{n})\right]\geq\mathrm{c}_{0}n^{-\frac{\beta\cdot(1\wedge\beta)^{q}}{d_{\star}+\beta\cdot(1\wedge\beta)^{q}}}\text{ provided that }n>\left|\frac{7}{1-A}\right|^{\frac{d_{\star}+\beta\cdot(1\wedge\beta)^{q}}{\beta\cdot(1\wedge\beta)^{q}}},\] _where the infimum is taken over all \(\mathcal{F}_{d}\)-valued statistics on \(([0,1]^{d}\times\{-1,1\})^{n}\) from the sample \(\{(X_{i},Y_{i})\}_{i=1}^{n}\)._ Taking \(q=0\), \(K=1\), and \(d_{\star}=d\) in Theorem 2.6, we immediately obtain the following corollary: **Corollary 2.1**.: _Let \(\phi\) be the logistic loss, \(d\in\mathbb{N}\), \(\beta\in(0,\infty)\), \(r\in(0,\infty)\), \(A\in[0,1)\), and \(n\in\mathbb{N}\). Suppose \(\{(X_{i},Y_{i})\}_{i=1}^{n}\) is a sample in \([0,1]^{d}\times\{-1,1\}\) of size \(n\). Then there exists a constant \(\mathrm{c}_{0}\in(0,\infty)\) only depending on \((d,\beta,r)\), such that_ \[\inf_{\hat{f}_{n}}\sup_{P\in\mathcal{H}_{3,A}^{d,\beta,r}}\boldsymbol{E}_{P^{\otimes n}}\left[\mathcal{E}_{P}^{\phi}(\hat{f}_{n})\right]\geq\mathrm{c}_{0}n^{-\frac{\beta}{d+\beta}}\text{ provided that }n>\left|\frac{7}{1-A}\right|^{\frac{d+\beta}{\beta}},\] _where the infimum is taken over all \(\mathcal{F}_{d}\)-valued statistics on \(([0,1]^{d}\times\{-1,1\})^{n}\) from the sample \(\{(X_{i},Y_{i})\}_{i=1}^{n}\)._ Theorem 2.6, together with Corollary 2.1, is proved in Appendix A.3.6. Obviously, \(\mathcal{H}_{5,A,q,K,d_{\star}}^{d,\beta,r}\subset\mathcal{H}_{4,q,K,d_{*},d_{\star}}^{d,\beta,r}\). Therefore, it follows from Theorem 2.6 that \[\inf_{\hat{f}_{n}}\sup_{P\in\mathcal{H}_{4,q,K,d_{*},d_{\star}}^{d,\beta,r}}\boldsymbol{E}_{P^{\otimes n}}\left[\mathcal{E}_{P}^{\phi}(\hat{f}_{n})\right]\geq\inf_{\hat{f}_{n}}\sup_{P\in\mathcal{H}_{5,A,q,K,d_{\star}}^{d,\beta,r}}\boldsymbol{E}_{P^{\otimes n}}\left[\mathcal{E}_{P}^{\phi}(\hat{f}_{n})\right]\gtrsim n^{-\frac{\beta\cdot(1\wedge\beta)^{q}}{d_{\star}+\beta\cdot(1\wedge\beta)^{q}}}.\] This justifies that the rate \(\mathcal{O}(\left(\frac{(\log n)^{5}}{n}\right)^{\frac{\beta\cdot(1\wedge\beta)^{q}}{d_{\star}+\beta\cdot(1\wedge\beta)^{q}}})\) in (2.36) is optimal (up to the logarithmic factor \((\log n)^{\frac{5\beta\cdot(1\wedge\beta)^{q}}{d_{\star}+\beta\cdot(1\wedge\beta)^{q}}}\)). Similarly, it follows from \(\mathcal{H}_{3,A}^{d,\beta,r}\subset\mathcal{H}_{1}^{d,\beta,r}\) and Corollary 2.1 that \[\inf_{\hat{f}_{n}}\sup_{P\in\mathcal{H}_{1}^{d,\beta,r}}\boldsymbol{E}_{P^{\otimes n}}\left[\mathcal{E}_{P}^{\phi}(\hat{f}_{n})\right]\geq\inf_{\hat{f}_{n}}\sup_{P\in\mathcal{H}_{3,A}^{d,\beta,r}}\boldsymbol{E}_{P^{\otimes n}}\left[\mathcal{E}_{P}^{\phi}(\hat{f}_{n})\right]\gtrsim n^{-\frac{\beta}{d+\beta}},\] which justifies that the rate \(\mathcal{O}(\left(\frac{(\log n)^{5}}{n}\right)^{\frac{\beta}{\beta+d}})\) in (2.17) is optimal (up to the logarithmic factor \((\log n)^{\frac{5\beta}{\beta+d}}\)). Moreover, note that any probability \(P\) in \(\mathcal{H}^{d,\beta,r}_{3,A}\) must satisfy the noise condition (2.24) provided that \(s_{1}\in[0,\infty]\), \(t_{1}\in(0,A]\), and \(c_{1}\in(0,\infty)\).
In other words, for any \(s_{1}\in[0,\infty]\), \(t_{1}\in(0,A]\), and \(c_{1}\in(0,\infty)\), there holds \(\mathcal{H}^{d,\beta,r}_{3,A}\subset\mathcal{H}^{d,\beta,r}_{2,s_{1},c_{1},t_{1}}\), meaning that \[n^{-\frac{\beta}{d+\beta}}\lesssim\inf_{\hat{f}_{n}}\sup_{P\in\mathcal{H}^{d,\beta,r}_{3,A}}\mathbf{E}_{P^{\otimes n}}\left[\mathcal{E}^{\phi}_{P}(\hat{f}_{n})\right]\leq\inf_{\hat{f}_{n}}\sup_{P\in\mathcal{H}^{d,\beta,r}_{2,s_{1},c_{1},t_{1}}}\mathbf{E}_{P^{\otimes n}}\left[\mathcal{E}^{\phi}_{P}(\hat{f}_{n})\right]\] \[\leq\inf_{\hat{f}_{n}}\sup_{P\in\mathcal{H}^{d,\beta,r}_{1}}\mathbf{E}_{P^{\otimes n}}\left[\mathcal{E}^{\phi}_{P}(\hat{f}_{n})\right]\leq\sup_{P\in\mathcal{H}^{d,\beta,r}_{1}}\mathbf{E}_{P^{\otimes n}}\left[\mathcal{E}^{\phi}_{P}\left(\hat{f}_{n}^{\mathbf{FNN}}\right)\right]\lesssim\left(\frac{(\log n)^{5}}{n}\right)^{\frac{\beta}{\beta+d}},\] where \(\hat{f}_{n}^{\mathbf{FNN}}\) is the estimator defined in Theorem 2.2. From the above inequalities we see that the noise condition (2.24) does little to help improve the convergence rate of the excess \(\phi\)-risk in classification. The proof of Theorem 2.6 and Corollary 2.1 is based on a general scheme for obtaining lower bounds, which is given in Section 2 of [44]. However, the scheme in [44] is stated for a class of probabilities \(\mathcal{H}\) that takes the form \(\mathcal{H}=\{Q_{\theta}|\theta\in\Theta\}\) with \(\Theta\) being some pseudometric space. In our setting, we do not have such a pseudometric space. Instead, we introduce another quantity \[\inf_{f\in\mathcal{F}_{d}}\left|\mathcal{E}_{P}^{\phi}(f)+\mathcal{E}_{Q}^{\phi}(f)\right| \tag{2.59}\] to characterize the difference between any two probability measures \(P\) and \(Q\) (see (A.130)). Estimating lower bounds for the quantity defined in (2.59) plays a key role in our proof of Theorem 2.6 and Corollary 2.1. ## 3 Discussions on Related Work In this section, we compare our results with some existing ones in the literature. We first compare Theorem 2.2 and Theorem 2.5 with related results about binary classification using fully connected DNNs and logistic loss in [22] and [9] respectively. Then we compare our work with [20], in which the authors carry out generalization analysis for estimators obtained from gradient descent algorithms. Throughout this section, we will use \(\phi\) to denote the logistic loss (i.e., \(\phi(t)=\log(1+\mathrm{e}^{-t})\)) and \(\{(X_{i},Y_{i})\}_{i=1}^{n}\) to denote an i.i.d. sample in \([0,1]^{d}\times\{-1,1\}\). The symbols \(d\), \(\beta\), \(r\), \(I\), \(\Theta\), \(t_{1}\), \(c_{1}\), \(t_{2}\), \(c_{2}\) and \(c\) will denote arbitrary numbers in \(\mathbb{N}\), \((0,\infty)\), \((0,\infty)\), \(\mathbb{N}\), \(\mathbb{N}\), \((0,\infty)\), \((0,\infty)\), \((0,\infty)\), \((0,\infty)\) and \([0,\infty)\), respectively. The symbol \(P\) will always denote some probability measure on \([0,1]^{d}\times\{-1,1\}\), regarded as the data distribution, and \(\eta\) will denote the corresponding conditional probability function \(P(\{1\}\left|\cdot\right)\) of \(P\). Recall that \(\mathcal{C}^{d,\beta,r,I,\Theta}\), defined in (2.46), is the space consisting of classifiers which are equal to \(+1\) on the union of some disjoint regions with piecewise Holder smooth boundary and \(-1\) otherwise.
In Theorem 4.1 of [22], the authors conduct generalization analysis when the data distribution \(P\) satisfies the piecewise smooth decision boundary condition (2.50), the noise condition (2.24), and the margin condition (2.51) with \(s_{1}=s_{2}=\infty\) for some \(\mathtt{C}\in\mathcal{C}^{d,\beta,r,I,\Theta}\). They show that there exist constants \(G_{0},N_{0},S_{0},B_{0},F_{0}\) not depending on the sample size \(n\) such that the \(\phi\)-ERM \[\hat{f}_{n}^{\mathbf{FNN}}\in\operatorname*{arg\,min}_{f\in\mathcal{F}^{\mathbf{FNN}}_{d}(G_{0},N_{0},S_{0},B_{0},F_{0})}\frac{1}{n}\sum_{i=1}^{n}\phi\left(Y_{i}f(X_{i})\right)\] satisfies \[\sup_{P\in\mathcal{H}^{d,\beta,r,I,\Theta,\infty,\infty}_{6,t_{1},c_{1},t_{2},c_{2}}}\boldsymbol{E}_{P^{\otimes n}}\left[\mathcal{E}_{P}\left(\hat{f}_{n}^{\mathbf{FNN}}\right)\right]\lesssim\frac{(\log n)^{1+\epsilon}}{n} \tag{3.1}\] for any \(\epsilon>0\). Indeed, the noise condition (2.24) and the margin condition (2.51) with \(s_{1}=s_{2}=\infty\) are equivalent to the following two conditions: there exist \(\eta_{0}\in(0,1)\) and \(\overline{\Delta}>0\) such that \[P_{X}\left(\left\{\left.x\in[0,1]^{d}\right|\left|2\eta(x)-1\right|\leq\eta_{0}\right\}\right)=0\] and \[P_{X}\left(\left\{x\in[0,1]^{d}\mid\Delta_{\mathtt{C}}(x)\leq\overline{\Delta}\right\}\right)=0\] (cf. conditions (N\({}^{\prime}\)) and (M\({}^{\prime}\)) in [22]). Under the two conditions above, combined with the assumption \(\mathrm{sgn}(2\eta-1)\xrightarrow{P_{X}\text{-a.s.}}\mathtt{C}\in\mathcal{C}^{d,\beta,r,I,\Theta}\), Lemma A.7 of [22] asserts that there exists \(f_{0}^{*}\in\mathcal{F}_{d}^{\mathbf{FNN}}(G_{0},N_{0},S_{0},B_{0},F_{0})\) such that \[f_{0}^{*}\in\operatorname*{arg\,min}_{f\in\mathcal{F}_{d}^{\mathbf{FNN}}(G_{0},N_{0},S_{0},B_{0},F_{0})}\mathcal{R}_{P}^{\phi}(f)\] and \[\mathcal{R}_{P}(f_{0}^{*})=\mathcal{R}_{P}(2\eta-1)=\inf\left\{\mathcal{R}_{P}(f)\left|f:[0,1]^{d}\to\mathbb{R}\text{ is measurable}\right.\right\}.\] The excess misclassification error of \(f:[0,1]^{d}\to\mathbb{R}\) is then given by \(\mathcal{E}_{P}(f)=\mathcal{R}_{P}(f)-\mathcal{R}_{P}(f_{0}^{*})\). Since \(f_{0}^{*}\) is bounded by \(F_{0}\), the authors in [22] can apply classical concentration techniques developed for bounded random variables (cf. Appendix A.2 of [22]) to deal with \(f_{0}^{*}\) (instead of the target function \(f_{\phi,P}^{*}\)), leading to the generalization bound (3.1). In this paper, employing Theorem 2.1, we extend Theorem 4.1 of [22] to much less restrictive cases in which the noise exponent \(s_{1}\) and the margin exponent \(s_{2}\) are allowed to be taken from \([0,\infty]\). The derived generalization bounds are presented in Theorem 2.5. In particular, when \(s_{1}=s_{2}=\infty\) (i.e., let \(s_{1}=\infty\) in statement (1) of Theorem 2.5), we obtain a refined generalization bound under the same conditions as those of Theorem 4.1 in [22], which asserts that the \(\phi\)-ERM \(\hat{f}_{n}^{\mathbf{FNN}}\) over \(\mathcal{F}_{d}^{\mathbf{FNN}}(G_{0},N_{0},S_{0},B_{0},F_{0})\) satisfies \[\sup_{P\in\mathcal{H}_{6,t_{1},c_{1},t_{2},c_{2}}^{d,\beta,r,I,\Theta,\infty,\infty}}\boldsymbol{E}_{P^{\otimes n}}\left[\mathcal{E}_{P}\left(\hat{f}_{n}^{\mathbf{FNN}}\right)\right]\lesssim\frac{\log n}{n}, \tag{3.2}\] removing the \(\epsilon\) in their bound (3.1). The above discussion indicates that Theorem 2.1 can lead to sharper estimates in comparison with classical concentration techniques, and can be applied in very general settings.
The recent work [9] considers estimation and inference using fully connected DNNs and the logistic loss, in a setting that covers both regression and classification. For any probability measure \(P\) on \([0,1]^{d}\times\{-1,1\}\) and any measurable function \(f:[0,1]^{d}\to[-\infty,\infty]\), define \(\left\|f\right\|_{\mathcal{L}^{2}_{P_{X}}}:=\left(\int_{[0,1]^{d}}\left|f(x)\right|^{2}\mathrm{d}P_{X}(x)\right)^{\frac{1}{2}}\). Recall that \(\mathcal{B}_{r}^{\beta}\left(\Omega\right)\) is defined in (2.13). Let \(\mathcal{H}_{7}^{d,\beta}\) be the set of all probability measures \(P\) on \([0,1]^{d}\times\{-1,1\}\) such that the target function \(f_{\phi,P}^{*}\) belongs to \(\mathcal{B}_{1}^{\beta}\left([0,1]^{d}\right)\). In Corollary 1 of [9], the authors claimed that if \(P\in\mathcal{H}_{7}^{d,\beta}\) and \(\beta\in\mathbb{N}\), then with probability at least \(1-\mathrm{e}^{-v}\) there holds \[\left\|\hat{f}_{n}^{\mathbf{FNN}}-f_{\phi,P}^{*}\right\|_{\mathcal{L}^{2}_{P_{X}}}^{2}\lesssim n^{-\frac{2\beta}{2\beta+d}}\log^{4}n+\frac{\log\log n+v}{n}, \tag{3.3}\] where the estimator \(\hat{f}_{n}^{\mathbf{FNN}}\in\mathcal{F}_{d}^{\mathbf{FNN}}(G,N,S,\infty,F)\) is defined by (2.14) with \[G\asymp\log n,\ N\asymp n^{\frac{d}{d+2\beta}},\ S\asymp n^{\frac{d}{d+2\beta}}\log n,\ \text{and}\ \ F=2. \tag{3.4}\] Note that \(f_{\phi,P}^{*}\in\mathcal{B}_{1}^{\beta}\left([0,1]^{d}\right)\) implies \(\|f_{\phi,P}^{*}\|_{\infty}\leq 1\). From Lemma 8 of [9], bounding the quantity \(\left\|\hat{f}_{n}^{\mathbf{FNN}}-f_{\phi,P}^{*}\right\|_{\mathcal{L}^{2}_{P_{X}}}^{2}\) on the left hand side of (3.3) is equivalent to bounding \(\mathcal{E}_{P}^{\phi}(\hat{f}_{n}^{\mathbf{FNN}})\), since \[\frac{1}{2(\mathrm{e}+\mathrm{e}^{-1}+2)}\left\|\hat{f}_{n}^{\mathbf{FNN}}-f_{\phi,P}^{*}\right\|_{\mathcal{L}^{2}_{P_{X}}}^{2}\leq\mathcal{E}_{P}^{\phi}(\hat{f}_{n}^{\mathbf{FNN}})\leq\frac{1}{4}\left\|\hat{f}_{n}^{\mathbf{FNN}}-f_{\phi,P}^{*}\right\|_{\mathcal{L}^{2}_{P_{X}}}^{2}. \tag{3.5}\] Hence (3.3) actually establishes the same upper bound (up to a constant independent of \(n\) and \(P\)) for the excess \(\phi\)-risk of \(\hat{f}_{n}^{\mathbf{FNN}}\), leading to upper bounds for the excess misclassification error \(\mathcal{E}_{P}(\hat{f}_{n}^{\mathbf{FNN}})\) through the calibration inequality. The authors in [9] apply concentration techniques based on (empirical) _Rademacher complexity_ (cf. Section A.2 of [9] or [4, 26]) to derive the bound (3.3), which allows for removing the restriction of uniform boundedness on the weights and biases in the neural network models, i.e., the hypothesis space generated by neural networks in their analysis can be of the form \(\mathcal{F}_{d}^{\mathbf{FNN}}\left(G,N,S,\infty,F\right)\). In our paper, we employ the covering number to measure the complexity of the hypothesis space. Due to the lack of compactness, the covering numbers of \(\mathcal{F}_{d}^{\mathbf{FNN}}\left(G,N,S,\infty,F\right)\) are in general infinite. Consequently, in our convergence analysis, we require the neural networks to possess bounded weights and biases. The assumption of bounded parameters may lead to additional optimization constraints in the training process. However, it has been found that the weights and biases of a trained neural network are typically around their initial values (cf. [11]). Thus the boundedness assumption matches what is observed in practice and has been adopted by most of the literature (see, e.g., [22, 37]).
In particular, the work [37] considers nonparametric regression using neural networks with all parameters bounded by one (i.e., \(B=1\)). This assumption can be realized by projecting the parameters of the neural network onto \([-1,1]\) after each update. Though the framework developed in this paper would not deliver generalization bounds without the restriction of uniformly bounded parameters, we weaken this constraint in Theorem 2.2 by allowing the upper bound \(B\) to grow polynomially with the sample size \(n\), which simply requires \(1\leq B\lesssim n^{\nu}\) for any \(\nu>0\). It is worth mentioning that in our coming work [48], we actually establish oracle-type inequalities analogous to Theorem 2.1, with the covering number \(\mathcal{N}\left(\mathcal{F},\gamma\right)\) replaced by the supremum of some empirical \(L_{1}\)-covering numbers. These enable us to derive generalization bounds for the empirical \(\phi\)-risk minimizer \(\hat{f}_{n}^{\mathbf{FNN}}\) over \(\mathcal{F}_{d}^{\mathbf{FNN}}\left(G,N,S,\infty,F\right)\) because empirical \(L_{1}\)-covering numbers of \(\mathcal{F}_{d}^{\mathbf{FNN}}\left(G,N,S,\infty,F\right)\) can be well-controlled, as indicated by Lemma 4 and Lemma 6 of [9] (see also Theorem 9.4 of [14] and Theorem 7 of [5]). In addition, note that (3.3) can lead to probability bounds (i.e., confidence bounds) for the excess \(\phi\)-risk and misclassification error of \(\hat{f}_{n}^{\mathbf{FNN}}\), while the generalization bounds presented in this paper are only in expectation. Nonetheless, in [48], we obtain both probability bounds and expectation bounds for the empirical \(\phi\)-risk minimizer. As discussed in Section 1, the boundedness assumptions on the target function \(f_{\phi,P}^{*}\) and its derivatives, i.e., \(f_{\phi,P}^{*}\in\mathcal{B}_{1}^{\beta}\left([0,1]^{d}\right)\), are too restrictive. This assumption actually requires that there exists some \(\delta\in(0,1/2)\) such that the conditional class probability \(\eta(x)=P(\{1\}|x)\) satisfies \(\delta<\eta(x)<1-\delta\) for \(P_{X}\)-almost all \(x\in[0,1]^{d}\), which rules out the case when \(\eta\) takes values in \(0\) or \(1\) with positive probabilities. However, it is believed that the conditional class probability should be determined by the patterns that make the two classes mutually exclusive, implying that \(\eta(x)\) should be close to either \(0\) or \(1\). This is also observed in many benchmark datasets for image recognition. For example, as reported in [22], the conditional class probabilities of the CIFAR10 data set estimated by neural networks with the logistic loss almost solely concentrate on \(0\) or \(1\) and very few are around \(0.5\) (see Fig. 2 in [22]). Overall, the boundedness restriction on \(f_{\phi,P}^{*}\) is not expected to hold in binary classification as it would exclude well-classified data. We further point out that the techniques used in [9] cannot deal with the case when \(f_{\phi,P}^{*}\) is unbounded, or equivalently, when \(\eta\) can take values close to \(0\) or \(1\). Indeed, the authors apply approximation theory of neural networks developed in [47] to construct uniform approximations of \(f_{\phi,P}^{*}\), which requires \(f_{\phi,P}^{*}\in\mathcal{B}_{1}^{\beta}\left([0,1]^{d}\right)\) with \(\beta\in\mathbb{N}\). However, if \(f_{\phi,P}^{*}\) is unbounded, uniformly approximating \(f_{\phi,P}^{*}\) by neural networks on \([0,1]^{d}\) is impossible, which brings the essential difficulty in estimating the approximation error.
Besides, the authors use Bernstein's inequality to bound the quantity \(\frac{1}{n}\sum_{i=1}^{n}\left(\phi(Y_{i}f_{1}^{*}(X_{i}))-\phi(Y_{i}f_{\phi,P}^{*}(X_{i}))\right)\) appearing in the error decomposition for \(\left\|\hat{f}_{n}^{\mathbf{FNN}}-f_{\phi,P}^{*}\right\|_{\mathcal{L}_{P_{X}}^{2}}^{2}\) (see (A.1) in [9]), where \(f_{1}^{*}\in\arg\min_{f\in\mathcal{F}_{d}^{\mathbf{FNN}}\left(G,N,S,\infty,2\right)}\|f-f_{\phi,P}^{*}\|_{[0,1]^{d}}\). We can see that the unboundedness of \(f_{\phi,P}^{*}\) will lead to the unboundedness of the random variable \(\left(\phi(Yf_{1}^{*}(X))-\phi(Yf_{\phi,P}^{*}(X))\right)\), which renders Bernstein's inequality inapplicable for bounding its empirical mean by its expectation. In addition, the boundedness assumption on \(f_{\phi,P}^{*}\) ensures the inequality (3.5) on which the entire framework of convergence estimates in [9] is built (cf. Appendix A.1 and A.2 of [9]). Without this assumption, most of the theoretical arguments in [9] are not feasible. In contrast, in Theorem 2.2 we only require \(\eta=\hat{\eta}\), \(P_{X}\)-a.s., for some \(\hat{\eta}\in\mathcal{B}_{r}^{\beta}\left([0,1]^{d}\right)\) and \(r\in(0,\infty)\). This Holder smoothness condition on \(\eta\) is widely adopted in the study of binary classifiers (see [3] and references therein). Note that \(f_{\phi,P}^{*}\in\mathcal{B}_{1}^{\beta}\left([0,1]^{d}\right)\) indeed implies \(\eta=\hat{\eta}\), \(P_{X}\)-a.s., for some \(\hat{\eta}\in\mathcal{B}_{r}^{\beta}\left([0,1]^{d}\right)\) and some \(r\in(0,\infty)\) which only depends on \((d,\beta)\). Therefore, the setting considered in Theorem 2.2 is more general than that of [9]. Moreover, the condition that \(\eta=\hat{\eta}\in\mathcal{B}_{r}^{\beta}\left([0,1]^{d}\right)\), \(P_{X}\)-a.s., is more natural, allowing \(\eta\) to take values close to \(0\) and \(1\) with positive probability. We finally point out that, under the same assumption (i.e., \(P\in\mathcal{H}_{7}^{d,\beta}\)), one can use Theorem 2.1 to establish a convergence rate which slightly improves on (3.3). Actually, we can show that there exists a constant \(\mathrm{c}\in(0,\infty)\) only depending on \((d,\beta)\), such that for any \(\mu\in[1,\infty)\) and \(\nu\in[0,\infty)\), there holds \[\sup_{P\in\mathcal{H}_{7}^{d,\beta}}\boldsymbol{E}_{P^{\otimes n}}\left[\left\|\hat{f}_{n}^{\mathbf{FNN}}-f_{\phi,P}^{*}\right\|_{\mathcal{L}_{P_{X}}^{2}}^{2}\right]\lesssim\left(\frac{(\log n)^{3}}{n}\right)^{\frac{2\beta}{2\beta+d}}, \tag{3.6}\] where the estimator \(\hat{f}_{n}^{\mathbf{FNN}}\in\mathcal{F}_{d}^{\mathbf{FNN}}(G,N,S,B,F)\) is defined by (2.14) with \[\begin{split}&\mathrm{c}\log n\leq G\asymp\log n,\ N\asymp\left(\frac{n}{\log^{3}n}\right)^{\frac{d}{d+2\beta}},\ S\asymp\left(\frac{n}{\log^{3}n}\right)^{\frac{d}{d+2\beta}}\cdot\log n,\\ & 1\leq B\lesssim n^{\nu},\ \mathrm{and}\ 1\leq F\leq\mu.\end{split} \tag{3.7}\] Though we restrict the weights and biases to be bounded by \(B\), both the convergence rate and the network complexities in the result above refine the previous estimates established in (3.3) and (3.4). In particular, since \(\frac{6\beta}{2\beta+d}<3<4\), the convergence rate in (3.6) is indeed faster than that in (3.3) due to a smaller power exponent of the term \(\log n\). The proof of this claim is in Appendix A.3.7.
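As a purely numerical illustration of how the architecture prescription (3.7) scales with the sample size, the following sketch evaluates the prescribed depth, width, and sparsity for a few values of \(n\); all absolute constants hidden in \(\asymp\) are set to \(1\) here (an assumption made only for illustration), so the printed numbers indicate orders of magnitude rather than exact choices.

```python
import math

def fnn_hyperparams(n: int, d: int, beta: float):
    """Evaluate the architecture prescription (3.7) with all hidden
    absolute constants set to 1 (illustration only)."""
    m = n / (math.log(n) ** 3)          # effective sample size n / log^3 n
    exponent = d / (d + 2 * beta)
    G = math.log(n)                      # depth    ~ log n
    N = m ** exponent                    # width    ~ (n / log^3 n)^{d/(d+2*beta)}
    S = N * math.log(n)                  # nonzeros ~ width * log n
    return G, N, S

for n in (10**4, 10**6, 10**8):
    G, N, S = fnn_hyperparams(n, d=8, beta=2.0)
    print(f"n={n:>9}: depth~{G:6.1f}, width~{N:10.1f}, sparsity~{S:12.1f}")
```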
We also remark that the convergence rate in (3.6) achieves the minimax optimal rate established in [41] up to log factors (so does the rate in (3.3)), which confirms that the generalization analysis developed in this paper is also rate-optimal for bounded \(f_{\phi,P}^{*}\). In our work, we have established generalization bounds for ERMs over hypothesis spaces consisting of neural networks. However, such ERMs cannot be obtained in practice because the corresponding optimization problems (e.g., (2.2)) cannot be solved explicitly. Instead, practical neural network estimators are obtained from algorithms which numerically solve the empirical risk minimization problem. Therefore, it is better to conduct generalization analysis for estimators obtained from such algorithms. One typical work in this direction is [20]. In [20], for classification tasks, the authors establish excess \(\phi\)-risk bounds to show that classifiers obtained from solving empirical risk minimization with respect to the logistic loss over shallow neural networks using gradient descent with (or without) early stopping are consistent. Note that the setting of [20] is quite different from ours: we consider deep neural network models in our work, while [20] considers shallow ones. Besides, we utilize the smoothness of the conditional probability function \(\eta(\cdot)=P(\{1\}\,|\cdot)\) to characterize the regularity (or complexity) of the data distribution \(P\). Instead, in [20], for each \(\overline{U}_{\infty}:\mathbb{R}^{d}\to\mathbb{R}^{d}\), the authors construct a function \[f(\ \cdot\ ;\overline{U}_{\infty}):\mathbb{R}^{d}\to\mathbb{R},x\mapsto\int_{\mathbb{R}^{d}}x^{\top}\overline{U}_{\infty}(v)\cdot\mathbb{1}_{[0,\infty)}(v^{\top}x)\cdot\frac{1}{(2\pi)^{d/2}}\cdot\exp(-\frac{\|v\|_{2}^{2}}{2})\mathrm{d}v\] called an infinite-width random feature model. Then they use the norm of an \(\overline{U}_{\infty}\) which makes \(\mathcal{E}_{P}^{\phi}(f(\ \cdot\ ;\overline{U}_{\infty}))\) small to characterize the regularity of the data: the data distribution is regarded as simple if there is an \(\overline{U}_{\infty}\) with \(\mathcal{E}_{P}^{\phi}(f(\ \cdot\ ;\overline{U}_{\infty}))\approx 0\) which moreover has a low norm. More rigorously, the slower the quantity \[\inf\left\{\left\|\overline{U}_{\infty}\right\|_{\mathbb{R}^{d}}\left|\mathcal{E}_{P}^{\phi}(f(\ \cdot\ ;\overline{U}_{\infty}))\leq\varepsilon\right.\right\} \tag{3.8}\] grows as \(\varepsilon\to 0\), the more regular (simpler) the data distribution \(P\) is. In [20], the established excess \(\phi\)-risk bounds depend on the quantity \(\mathcal{E}_{P}^{\phi}(f(\ \cdot\ ;\overline{U}_{\infty}))\) and the norm \(\left\|\overline{U}_{\infty}\right\|_{\mathbb{R}^{d}}\). Hence by assuming certain growth rates of the quantity in (3.8) as \(\varepsilon\to 0\), we can obtain specific rates of convergence from the excess \(\phi\)-risk bounds in [20]. It is natural to ask whether there is any relation between these two characterizations of data regularity, that is, between the smoothness of the conditional probability function and the growth rate of the quantity in (3.8) as \(\varepsilon\to 0\). For example, does Holder smoothness of the conditional probability function imply certain growth rates of the quantity in (3.8) as \(\varepsilon\to 0\)?
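Since \(f(\,\cdot\,;\overline{U}_{\infty})\) is an expectation with respect to \(v\sim N(0,I_{d})\), it can be approximated by plain Monte Carlo. The sketch below does this for the hypothetical choice \(\overline{U}_{\infty}(v)=v\), which we pick only to make the example concrete; it is not a choice made in [20].

```python
import numpy as np

def infinite_width_feature(x, U, num_samples=100_000, seed=0):
    """Monte Carlo estimate of
    f(x; U) = E_{v ~ N(0, I_d)}[ x^T U(v) * 1{v^T x >= 0} ],
    the infinite-width random feature model recalled above."""
    rng = np.random.default_rng(seed)
    d = x.shape[0]
    v = rng.standard_normal((num_samples, d))   # v_i ~ N(0, I_d)
    active = (v @ x >= 0.0)                     # indicator 1{v^T x >= 0}
    u = np.apply_along_axis(U, 1, v)            # U(v_i), shape (num_samples, d)
    return float(((u @ x) * active).mean())

# Hypothetical direction field U(v) = v, for illustration only.
x = np.array([0.6, -0.2, 0.3])
print(infinite_width_feature(x, U=lambda v: v))
```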
This question is worth considering because once the equivalence of these two characterizations is established, the generalization analysis in [20] can be applied in other settings that require smoothness of the conditional probability function, and vice versa. In addition, it is also interesting to study how the new techniques developed in this paper can be used to establish generalization bounds for deep neural network estimators obtained from learning algorithms (e.g., gradient descent) within the settings of this paper.

## 4 Conclusion

In this paper, we develop a novel generalization analysis for binary classification with DNNs and logistic loss. The unboundedness of the target function in logistic classification poses challenges for the estimates of sample error and approximation error when deriving generalization bounds. To overcome these difficulties, we introduce a bivariate function \(\psi:[0,1]^{d}\times\{-1,1\}\rightarrow\mathbb{R}\) to establish an elegant oracle-type inequality, aiming to bound the excess risk with respect to the logistic loss. This inequality incorporates the estimation of the sample error and enables us to propose a framework for generalization analysis which avoids using the explicit form of the target function. By properly choosing \(\psi\) under this framework, we can eliminate the boundedness restriction on the target function and establish sharp rates of convergence.

In particular, for fully connected DNN classifiers trained by minimizing the empirical logistic risk, we obtain an optimal (up to some logarithmic factor) rate of convergence of the excess logistic risk (which further yields a rate of convergence of the excess misclassification error via the calibration inequality) merely under the Holder smoothness assumption on the conditional probability function. If we instead assume that the conditional probability function is the composition of several vector-valued multivariate functions of which each component function is either a maximum value function of some of its input variables or a Holder smooth function only depending on a small number of its input variables, we can even establish dimension-free optimal (up to some logarithmic factor) convergence rates for the excess logistic risk of fully connected DNN classifiers, further leading to dimension-free rates of convergence of their excess misclassification error through the calibration inequality. This result serves to elucidate the remarkable achievements of DNNs in high-dimensional real-world classification tasks. In other circumstances, such as when the data distribution has a piecewise smooth decision boundary and the input data are bounded away from it (i.e., \(s_{2}=\infty\) in (2.51)), dimension-free rates of convergence can also be derived.

Besides the novel oracle-type inequality, the sharp estimates presented in our paper also rest on a tight error bound for approximating the natural logarithm function (which is unbounded near zero) by fully connected DNNs. All the claims for the optimality of rates in our paper are justified by corresponding minimax lower bounds. As far as we know, all these results are new to the literature, and they further enrich the theoretical understanding of classification using deep neural networks.
Finally, we would like to emphasize that our framework of generalization analysis is very general and can be extended to many other settings (e.g., when the loss function, the hypothesis space, or the assumption on the data distribution is different from that in the current paper). In particular, in our forthcoming work [48], we investigate generalization analysis for CNN classifiers trained with the logistic loss, exponential loss, or LUM loss on spheres under the Sobolev smooth conditional probability assumption. Motivated by recent work [12, 13, 28, 49], we will also study more efficient implementations of deep logistic classification for dealing with big data.

## Appendix

In this appendix, we first bound the covering numbers of spaces of fully connected neural networks. Then we provide some results on the approximation theory of fully connected neural networks. Finally, we give proofs of the results in the main body of this paper.

### Covering Numbers of Spaces of Fully Connected Neural Networks

Recall that if \(\mathcal{F}\) consists of bounded real-valued functions defined on a domain containing \([0,1]^{d}\), the covering number of \(\mathcal{F}\) with respect to the radius \(\gamma\) and the metric given by \(\rho(f,g)=\sup_{x\in[0,1]^{d}}\left|f(x)-g(x)\right|,\forall f,g\in\mathcal{F}\) is denoted by \(\mathcal{N}(\mathcal{F},\gamma)\). For the space \(\mathcal{F}_{d}^{\mathbf{FNN}}\left(G,N,S,B,F\right)\) defined by (1.15), the covering number \(\mathcal{N}\left(\mathcal{F}_{d}^{\mathbf{FNN}}\left(G,N,S,B,F\right),\gamma\right)\) can be bounded from above in terms of \(G,N,S,B\), and the radius of covering \(\gamma\). The related results are stated below.

**Theorem A.1**.: _For \(G\in[1,\infty)\), \((N,S,B)\in[0,\infty)^{3}\), and \(\gamma\in(0,1)\), there holds_

\[\log\left(\mathcal{N}\left(\mathcal{F}_{d}^{\mathbf{FNN}}\left(G,N,S,B,\infty\right),\gamma\right)\right)\leq(S+Gd+1)(2G+5)\cdot\log\frac{(\max\left\{N,d\right\}+1)(B\lor 1)(G+1)}{\gamma}.\]

Theorem A.1 can be proved in the same manner as Lemma 5 in [37]. Therefore, we omit the proof here. Similar results are also presented in Proposition A.1 of [22] and Lemma 3 of [42]. Corollary A.1 follows immediately from Theorem A.1 and Lemma 10.6 of [1].

**Corollary A.1**.: _For \(G\in[1,\infty)\), \((N,S,B)\in[0,\infty)^{3}\), \(F\in[0,\infty]\) and \(\gamma\in(0,1)\), there holds_

\[\log\left(\mathcal{N}\left(\mathcal{F}_{d}^{\mathbf{FNN}}\left(G,N,S,B,F\right),\gamma\right)\right)\leq(S+Gd+1)(2G+5)\cdot\log\frac{(\max\left\{N,d\right\}+1)(B\lor 1)(2G+2)}{\gamma}.\]

### Approximation Theory of Fully Connected Neural Networks

Theorem A.2 below gives error bounds for approximating Holder continuous functions by fully connected neural networks. Since it can be derived straightforwardly from Theorem 5 of [37], we omit its proof.

**Theorem A.2**.: _Suppose that \(f\in\mathcal{B}_{r}^{\beta}\left([0,1]^{d}\right)\) with some \((\beta,r)\in(0,\infty)^{2}\).
Then for any positive integers \(m\) and \(M^{\prime}\) with \(M^{\prime}\geq\max\left\{(\beta+1)^{d},\left(r\sqrt{d}\left\lceil\beta\right \rceil^{d}+1\right)\mathrm{e}^{d}\right\}\), there exists_ \[\tilde{f}\in\mathcal{F}_{d}^{\mathbf{FNN}}\left(14m(2+\log_{2}\left(d\lor \beta\right)),6\left(d+\left\lceil\beta\right\rceil\right)M^{\prime},987(2d+ \beta)^{4d}M^{\prime}m,1,\infty\right)\] _such that_ \[\sup_{x\in[0,1]^{d}}\left|f(x)-\tilde{f}(x)\right|\] \[\leq r\sqrt{d}\left\lceil\beta\right\rceil^{d}\cdot 3^{\beta}M^{ \prime-\beta/d}+\left(1+2r\sqrt{d}\left\lceil\beta\right\rceil^{d}\right) \cdot 6^{d}\cdot(1+d^{2}+\beta^{2})\cdot M^{\prime}\cdot 2^{-m}.\] Corollary A.2 follows directly from Theorem A.2. **Corollary A.2**.: _Suppose that \(f\in\mathcal{B}_{r}^{\beta}\left([0,1]^{d}\right)\) with some \((\beta,r)\in(0,\infty)^{2}\). Then for any \(\varepsilon\in(0,1/2]\), there exists_ \[\tilde{f}\in\mathcal{F}_{d}^{\mathbf{FNN}}\left(D_{1}\log\frac{1}{\varepsilon },D_{2}\varepsilon^{-\frac{d}{\beta}},D_{3}\varepsilon^{-\frac{d}{\beta}}\log \frac{1}{\varepsilon},1,\infty\right)\] _such that_ \[\sup_{x\in[0,1]^{d}}\left|f(x)-\tilde{f}(x)\right|\leq\varepsilon,\] _where \((D_{1},D_{2},D_{3})\in(0,\infty)^{3}\) are constants depending only on \(d\), \(\beta\) and \(r\)._ Proof.: Let \[E_{1} =\max\left\{(\beta+1)^{d},\left(r\sqrt{d}\left\lceil\beta\right \rceil^{d}+1\right)\mathrm{e}^{d},\left(\frac{1}{2r}\cdot 3^{-\beta}\cdot\frac{1}{ \sqrt{d}\left\lceil\beta\right\rceil^{d}}\right)^{-d/\beta}\right\},\] \[E_{2} =3\max\left\{1+\frac{d}{\beta},\frac{\log\left(4E_{1}\cdot \left(1+2r\sqrt{d}\left\lceil\beta\right\rceil^{d}\right)\left(1+d^{2}+\beta^{2 }\right)\cdot 6^{d}\right)}{\log 2}\right\},\] \[D_{1} =14\cdot(2+\log_{2}{(d\lor\beta)})\cdot(E_{2}+2),\] \[D_{2} =6\cdot(d+\lceil\beta\rceil)\cdot(E_{1}+1),\] \[D_{3} =987\cdot(2d+\beta)^{4d}\cdot(E_{1}+1)\cdot(E_{2}+2).\] Then \(D_{1},D_{2},D_{3}\) are constants only depending on \(d,\beta,r\). For \(f\in\mathcal{B}_{r}^{\beta}\left(\left[0,1\right]^{d}\right)\) and \(\varepsilon\in(0,1/2]\), choose \(M^{\prime}=\left\lceil E_{1}\cdot\varepsilon^{-d/\beta}\right\rceil\) and \(m=\lceil E_{2}\log(1/\varepsilon)\rceil\). 
Then \(m\) and \(M^{\prime}\) are positive integers satisfying that \[1 \leq\max\left\{(\beta+1)^{d},\left(r\sqrt{d}\left\lceil\beta \right\rceil^{d}+1\right)\mathrm{e}^{d}\right\}\leq E_{1}\leq E_{1}\cdot \varepsilon^{-d/\beta}\] (A.1) \[\leq M^{\prime}\leq 1+E_{1}\cdot\varepsilon^{-d/\beta}\leq(E_{1}+ 1)\cdot\varepsilon^{-d/\beta},\] \[M^{\prime-\beta/d}\leq\left(E_{1}\cdot\varepsilon^{-d/\beta} \right)^{-\beta/d}\leq\varepsilon\cdot\frac{1}{2r}\cdot 3^{-\beta}\cdot \frac{1}{\sqrt{d}\left\lceil\beta\right\rceil^{d}},\] (A.2) and \[m\leq E_{2}\log(1/\varepsilon)+2\log 2\leq E_{2}\log(1/\varepsilon)+2\log(1/ \varepsilon)=(2+E_{2})\cdot\log(1/\varepsilon).\] (A.3) Moreover, we have that \[2 \cdot\left(1+2r\sqrt{d}\left\lceil\beta\right\rceil^{d}\right) \cdot 6^{d}\cdot(1+d^{2}+\beta^{2})\cdot M^{\prime}\cdot\frac{1}{\varepsilon}\] (A.4) \[\leq 2\cdot\left(1+2r\sqrt{d}\left\lceil\beta\right\rceil^{d} \right)\cdot 6^{d}\cdot(1+d^{2}+\beta^{2})\cdot(E_{1}+1)\cdot\varepsilon^{-1-d/\beta}\] \[\leq 2\cdot\left(1+2r\sqrt{d}\left\lceil\beta\right\rceil^{d} \right)\cdot 6^{d}\cdot(1+d^{2}+\beta^{2})\cdot 2E_{1}\cdot\varepsilon^{-1-d/\beta}\] \[\leq 2^{\frac{1}{3}E_{2}}\cdot\varepsilon^{-1-d/\beta}\leq 2^{ \frac{1}{3}E_{2}}\cdot\varepsilon^{-\frac{1}{3}E_{2}}\leq\varepsilon^{-\frac{1 }{3}E_{2}}\cdot\varepsilon^{-\frac{1}{3}E_{2}}\] \[\leq\varepsilon^{-E_{2}\cdot\log 2}=2^{E_{2}\log(1/ \varepsilon)}\leq 2^{m}.\] Therefore, from (A.1), (A.2), (A.3), (A.4), and Theorem A.2, we conclude that there exists \[\tilde{f} \in\mathcal{F}_{d}^{\mathbf{FNN}}(14m(2+\log_{2}{(d\lor\beta)}),6 \left(d+\lceil\beta\rceil\right)M^{\prime},987(2d+\beta)^{4d}M^{\prime}m,1,\infty)\] \[=\mathcal{F}_{d}^{\mathbf{FNN}}\left(\frac{D_{1}}{E_{2}+2}\cdot m,\frac{D_{2}}{E_{1}+1}\cdot M^{\prime},\frac{D_{3}}{(E_{1}+1)\cdot(E_{2}+2)} \cdot M^{\prime}m,1,\infty\right)\] \[\subset\mathcal{F}_{d}^{\mathbf{FNN}}\left(D_{1}\log\frac{1}{ \varepsilon},D_{2}\varepsilon^{-\frac{d}{\beta}},D_{3}\varepsilon^{-\frac{d}{ \beta}}\log\frac{1}{\varepsilon},1,\infty\right)\] such that \[\sup_{x\in[0,1]^{d}}\left|f(x)-\tilde{f}(x)\right|\] \[\leq r\sqrt{d}\left\lceil\beta\right\rceil^{d}\cdot 3^{\beta}M^{ \prime-\beta/d}+\left(1+2r\sqrt{d}\left\lceil\beta\right\rceil^{d}\right) \cdot 6^{d}\cdot(1+d^{2}+\beta^{2})\cdot M^{\prime}\cdot 2^{-m}\leq\frac{ \varepsilon}{2}+\frac{\varepsilon}{2}=\varepsilon.\] Thus we complete the proof. ### Proofs of Results in the Main Body The proofs in this section will be organized in logical order in the sense that each result in this section is proved without relying on results that are presented after it. Throughout this section, we use \[C_{\mathtt{Parameter}_{1},\mathtt{Parameter}_{2},\cdots,\mathtt{Parameter}_{m}}\] to denote a positive constant only depending on \(\mathtt{Parameter}_{1}\), \(\mathtt{Parameter}_{2}\), \(\cdots\), \(\mathtt{Parameter}_{m}\). For example, we may use \(C_{d,\beta}\) to denote a positive constant only depending on \((d,\beta)\). The values of such constants appearing in the proofs may be different from line to line or even in the same line. Besides, we may use the same symbol with different meanings in different proofs. For example, the symbol \(I\) may denote a number in one proof, and denote a set in another proof. To avoid confusion, we will explicitly redefine these symbols in each proof. #### a.3.1 Proofs of some properties of the target function The following lemma justifies our claim in (1.7). 
**Lemma A.1**.: _Let \(d\in\mathbb{N}\), \(P\) be a probability measure on \([0,1]^{d}\times\{-1,1\}\), and \(\phi:\mathbb{R}\to[0,\infty)\) be a measurable function. Define_ \[\overline{\phi}:[-\infty,\infty]\to[0,\infty],\;z\mapsto\begin{cases} \varlimsup_{t\to+\infty}\phi(t),&\text{ if }z=\infty,\\ \phi(z),&\text{ if }z\in\mathbb{R},\\ \varlimsup_{t\to-\infty}\phi(t),&\text{ if }z=-\infty,\end{cases}\] _which is an extension of \(\phi\) to \([-\infty,\infty]\). Suppose \(f^{*}:[0,1]^{d}\to[-\infty,\infty]\) is a measurable function satisfying that_ \[f^{*}(x)\in\operatorname*{arg\,min}_{z\in[-\infty,\infty]}\int_{\{-1,1\}} \overline{\phi}(yz)\mathrm{d}P(y|x)\text{ for }P_{X}\text{-almost all }x\in[0,1]^{d}.\] (A.5) _Then there holds_ \[\int_{[0,1]^{d}\times\{-1,1\}}\overline{\phi}(yf^{*}(x))\mathrm{d}P(x,y)=\inf \left\{\left.\mathcal{R}^{\phi}_{P}(g)\right|g:[0,1]^{d}\to\mathbb{R}\text{ is measurable}\right\}.\] Proof.: Let \(\Omega_{0}:=\left\{\left.x\in[0,1]^{d}\right|f^{*}(x)\in\mathbb{R}\right\} \times\{-1,1\}\). Then for any \(m\in\mathbb{N}\) and any \((i,j)\in\{-1,1\}^{2}\), define \[f_{m}:[0,1]^{d}\to\mathbb{R},\;x\mapsto\begin{cases}m,&\text{ if }f^{*}(x)= \infty,\\ f^{*}(x),&\text{ if }f^{*}(x)\in\mathbb{R},\\ -m,&\text{ if }f^{*}(x)=-\infty,\end{cases}\] and \(\Omega_{i,j}=\left\{\left.x\in[0,1]^{d}\right|f^{*}(x)=i\cdot\infty\right\} \times\{j\}\). Obviously, \(yf^{*}(x)=ij\cdot\infty\) and \(yf_{m}(x)=ijm\) for any \((i,j)\in\{-1,1\}^{2}\), any \(m\in\mathbb{N}\), and any \((x,y)\in\Omega_{i,j}\). Therefore, \[\begin{split}&\varlimsup_{m\to+\infty}\int_{\Omega_{i,j}}\phi( yf_{m}(x))\mathrm{d}P(x,y)\\ &=\varlimsup_{m\to+\infty}\int_{\Omega_{i,j}}\phi(ijm)\mathrm{d}P (x,y)=P(\Omega_{i,j})\cdot\varlimsup_{m\to+\infty}\phi(ijm)\\ &\leq P(\Omega_{i,j})\cdot\varlimsup_{t\to ij\cdot\infty}\phi(t)= P(\Omega_{i,j})\cdot\overline{\phi}(ij\cdot\infty)=\int_{\Omega_{i,j}} \overline{\phi}(ij\cdot\infty)\mathrm{d}P(x,y)\\ &=\int_{\Omega_{i,j}}\overline{\phi}(yf^{*}(x))\mathrm{d}P(x,y), \;\forall\;(i,j)\in\{-1,1\}^{2}\,.\end{split}\] (A.6) Besides, it is easy to verify that \(yf_{m}(x)=yf^{*}(x)\in\mathbb{R}\) for any \((x,y)\in\Omega_{0}\) and any \(m\in\mathbb{N}\), which means that \[\int_{\Omega_{0}}\phi(yf_{m}(x))\mathrm{d}P(x,y)=\int_{\Omega_{0}}\overline{ \phi}(yf^{*}(x))\mathrm{d}P(x,y),\;\forall\;m\in\mathbb{N}.\] (A.7) Combining (A.6) and (A.7), we obtain \[\begin{split}&\inf\left\{\mathcal{R}^{\phi}_{P}(g)\Big{|}\,g:[0,1]^ {d}\to\mathbb{R}\text{ is measurable}\right\}\\ &\leq\varlimsup_{m\to+\infty}\mathcal{R}^{\phi}_{P}(f_{m})= \varlimsup_{m\to+\infty}\int_{[0,1]^{d}\times\{-1,1\}}\phi(yf_{m}(x))\mathrm{d }P(x,y)\\ &=\varlimsup_{m\to+\infty}\left(\int_{\Omega_{0}}\phi(yf_{m}(x)) \mathrm{d}P(x,y)+\sum_{i\in\{-1,1\}}\sum_{j\in\{-1,1\}}\int_{\Omega_{i,j}}\phi( yf_{m}(x))\mathrm{d}P(x,y)\right)\\ &\leq\varliminf_{m\to+\infty}\int_{\Omega_{0}}\phi(yf_{m}(x)) \mathrm{d}P(x,y)+\sum_{i\in\{-1,1\}}\sum_{j\in\{-1,1\}}\varlimsup_{m\to+ \infty}\int_{\Omega_{i,j}}\phi(yf_{m}(x))\mathrm{d}P(x,y)\\ &\leq\int_{\Omega_{0}}\overline{\phi}(yf^{*}(x))\mathrm{d}P(x,y )+\sum_{i\in\{-1,1\}}\sum_{j\in\{-1,1\}}\int_{\Omega_{i,j}}\overline{\phi}(yf ^{*}(x))\mathrm{d}P(x,y)\\ &=\int_{[0,1]^{d}\times\{-1,1\}}\overline{\phi}(yf^{*}(x)) \mathrm{d}P(x,y).\end{split}\] (A.8) On the other hand, for any measurable \(g:[0,1]^{d}\to\mathbb{R}\), it follows from (A.5) that \[\begin{split}&\int_{\{-1,1\}}\overline{\phi}(yf^{*}(x))\mathrm{d 
}P(y|x)=\inf_{z\in[-\infty,\infty]}\int_{\{-1,1\}}\overline{\phi}(yz)\mathrm{d}P(y|x)\leq\int_{\{-1,1\}}\overline{\phi}(yg(x))\mathrm{d}P(y|x)\\ &=\int_{\{-1,1\}}\phi(yg(x))\mathrm{d}P(y|x)\text{ for $P_{X}$-almost all $x\in[0,1]^{d}$}.\end{split}\] Integrating both sides, we obtain \[\begin{split}&\int_{[0,1]^{d}\times\{-1,1\}}\overline{\phi}(yf^{*}(x))\mathrm{d}P(x,y)=\int_{[0,1]^{d}}\int_{\{-1,1\}}\overline{\phi}(yf^{*}(x))\mathrm{d}P(y|x)\mathrm{d}P_{X}(x)\\ &\leq\int_{[0,1]^{d}}\int_{\{-1,1\}}\phi(yg(x))\mathrm{d}P(y|x)\mathrm{d}P_{X}(x)=\int_{[0,1]^{d}\times\{-1,1\}}\phi(yg(x))\mathrm{d}P(x,y)=\mathcal{R}^{\phi}_{P}(g).\end{split}\] Since \(g\) is arbitrary, we deduce that \[\int_{[0,1]^{d}\times\{-1,1\}}\overline{\phi}(yf^{*}(x))\mathrm{d}P(x,y)\leq\inf\left\{\,\mathcal{R}^{\phi}_{P}(g)\,\middle|\,g:[0,1]^{d}\to\mathbb{R}\text{ is measurable}\right\},\] which, together with (A.8), proves the desired result. The next lemma gives the explicit form of the target function of the logistic risk. **Lemma A.2**.: _Let \(\phi(t)=\log(1+\mathrm{e}^{-t})\) be the logistic loss, \(d\in\mathbb{N}\), \(P\) be a probability measure on \([0,1]^{d}\times\{-1,1\}\), and \(\eta\) be the conditional probability function \(P(\{1\}\,|\cdot)\) of \(P\). Define_ \[f^{*}:[0,1]^{d}\to[-\infty,\infty],\;x\mapsto\begin{cases}\infty,&\text{if $\eta(x)=1$},\\ \log\frac{\eta(x)}{1-\eta(x)},&\text{if $\eta(x)\in(0,1)$},\\ -\infty,&\text{if $\eta(x)=0$},\end{cases}\] (A.9) _which is a natural extension of the map_ \[\left\{\,z\in[0,1]^{d}\,\middle|\,\eta(z)\in(0,1)\right\}\ni x\mapsto\log\frac{\eta(x)}{1-\eta(x)}\in\mathbb{R}\] _to all of \([0,1]^{d}\). Then \(f^{*}\) is a target function of the \(\phi\)-risk under \(P\), i.e., (1.6) holds. In addition, the target function of the \(\phi\)-risk under \(P\) is unique up to a \(P_{X}\)-null set. In other words, for any target function \(\tilde{f}^{*}\) of the \(\phi\)-risk under \(P\), we must have_ \[P_{X}\left(\left\{\,x\in[0,1]^{d}\,\middle|\,\tilde{f}^{*}(x)\neq f^{*}(x)\right\}\right)=0.\] Proof.: Define \[\overline{\phi}:[-\infty,\infty]\to[0,\infty],\ z\mapsto\begin{cases}0,&\text{if }z=\infty,\\ \phi(z),&\text{if }z\in\mathbb{R},\\ \infty,&\text{if }z=-\infty,\end{cases}\] (A.10) which is a natural extension of the logistic loss \(\phi\) to \([-\infty,\infty]\), and define \[V_{a}:[-\infty,\infty]\to[0,\infty],z\mapsto a\overline{\phi}(z)+(1-a)\overline{\phi}(-z)\] for any \(a\in[0,1]\).
Then we have that \[\begin{split}&\int_{\{-1,1\}}\overline{\phi}(yz)\mathrm{d}P(y|x)=\eta(x)\overline{\phi}(z)+(1-\eta(x))\overline{\phi}(-z)\\ &=V_{\eta(x)}(z),\ \forall\ x\in[0,1]^{d},\ z\in[-\infty,\infty].\end{split}\] (A.11) For any \(a\in[0,1]\), we have that \(V_{a}\) is smooth on \(\mathbb{R}\), and an elementary calculation gives \[V_{a}^{\prime\prime}(t)=\frac{1}{2+\mathrm{e}^{t}+\mathrm{e}^{-t}}>0,\ \forall\ t\in\mathbb{R}.\] Therefore, \(V_{a}\) is strictly convex on \(\mathbb{R}\) and \[\begin{split}&\operatorname*{arg\,min}_{z\in\mathbb{R}}V_{a}(z)=\left\{z\in\mathbb{R}\,\middle|\,V_{a}^{\prime}(z)=0\right\}=\left\{z\in\mathbb{R}\,\middle|\,a\phi^{\prime}(z)-(1-a)\phi^{\prime}(-z)=0\right\}\\ &=\left\{z\in\mathbb{R}\,\middle|\,-a+\frac{\mathrm{e}^{z}}{1+\mathrm{e}^{z}}=0\right\}=\begin{cases}\left\{\log\frac{a}{1-a}\right\},&\text{if }a\in(0,1),\\ \varnothing,&\text{if }a\in\left\{0,1\right\}.\end{cases}\end{split}\] (A.12) Besides, it is easy to verify that \[V_{a}(z)=\infty,\ \forall\ a\in(0,1),\ \forall\ z\in\left\{\infty,-\infty\right\},\] which, together with (A.12), yields \[\operatorname*{arg\,min}_{z\in[-\infty,\infty]}V_{a}(z)=\operatorname*{arg\,min}_{z\in\mathbb{R}}V_{a}(z)=\left\{\log\frac{a}{1-a}\right\},\ \forall\ a\in(0,1).\] (A.13) In addition, it follows from \[\overline{\phi}(z)>0=\overline{\phi}(\infty),\ \forall\ z\in[-\infty,\infty)\] that \[\operatorname*{arg\,min}_{z\in[-\infty,\infty]}V_{1}(z)=\operatorname*{arg\,min}_{z\in[-\infty,\infty]}\overline{\phi}(z)=\left\{\infty\right\}\] (A.14) and \[\operatorname*{arg\,min}_{z\in[-\infty,\infty]}V_{0}(z)=\operatorname*{arg\,min}_{z\in[-\infty,\infty]}\overline{\phi}(-z)=\left\{-\infty\right\}.\] (A.15) Combining (A.11), (A.13), (A.14) and (A.15), we obtain \[\begin{split}&\operatorname*{arg\,min}_{z\in[-\infty,\infty]}\int_{\{-1,1\}}\overline{\phi}(yz)\mathrm{d}P(y|x)=\operatorname*{arg\,min}_{z\in[-\infty,\infty]}V_{\eta(x)}(z)=\begin{cases}\left\{+\infty\right\},&\text{if }\eta(x)=1,\\ \left\{\log\frac{\eta(x)}{1-\eta(x)}\right\},&\text{if }\eta(x)\in(0,1),\\ \left\{-\infty\right\},&\text{if }\eta(x)=0\end{cases}\\ &=\left\{f^{*}(x)\right\},\ \forall\ x\in[0,1]^{d},\end{split}\] which implies (1.6). Therefore, \(f^{*}\) is a target function of the \(\phi\)-risk under the distribution \(P\). Moreover, the uniqueness of the target function of the \(\phi\)-risk under \(P\) follows immediately from the fact that for all \(x\in[0,1]^{d}\) the set \[\operatorname*{arg\,min}_{z\in[-\infty,\infty]}\int_{\{-1,1\}}\overline{\phi}(yz)\mathrm{d}P(y|x)=\left\{f^{*}(x)\right\}\] contains exactly one point, together with the uniqueness (up to some \(P_{X}\)-null set) of the conditional distribution \(P(\cdot|\cdot)\) of \(P\). This completes the proof. Lemma A.3 below provides a formula for computing the infimum of the logistic risk over all real-valued measurable functions.
**Lemma A.3**.: _Let \(\phi(t)=\log(1+\mathrm{e}^{-t})\) be the logistic loss, \(\delta\in(0,1/2]\), \(d\in\mathbb{N}\), \(P\) be a probability measure on \([0,1]^{d}\times\{-1,1\}\), \(\eta\) be the conditional probability function \(P(\{1\}\,|\cdot)\) of \(P\), \(f^{*}\) be defined by (A.9), \(\overline{\phi}\) be defined by (A.10), \(H\) be defined by_ \[H:[0,1]\to[0,\infty),t\mapsto\begin{cases}t\log\left(\frac{1}{t}\right)+(1-t) \log\left(\frac{1}{1-t}\right),&\text{ if }t\in(0,1),\\ 0,&\text{ if }t\in\{0,1\},\end{cases}\] _and \(\psi\) be defined by_ \[\psi:[0,1]^{d}\times\{-1,1\}\to[0,\infty),\] \[(x,y)\mapsto\begin{cases}\phi\left(y\log\frac{\eta(x)}{1-\eta(x) }\right),&\text{ if }\eta(x)\in[\delta,1-\delta],\\ 0,&\text{ if }\eta(x)\in\{0,1\},\\ \eta(x)\log\frac{1}{\eta(x)}+(1-\eta(x))\log\frac{1}{1-\eta(x)},&\text{ if }\eta(x)\in(0,\delta)\cup(1-\delta,1).\end{cases}\] _Then there holds_ \[\inf\Big{\{}\,\mathcal{R}_{P}^{\phi}(g)\Big{|}\,g:[0,1]^{d}\to \mathbb{R}\text{ is measurable}\Big{\}}=\int_{[0,1]^{d}\times\{-1,1\}}\overline{\phi}(yf^{*}(x ))\mathrm{d}P(x,y)\] \[=\int_{[0,1]^{d}}H(\eta(x))\mathrm{d}P_{X}(x)=\int_{[0,1]^{d} \times\{-1,1\}}\psi(x,y)\mathrm{d}P(x,y).\] Proof.: According to Lemma A.2, \(f^{*}\) is a target function of the \(\phi\)-risk under the distribution \(P\), meaning that \[f^{*}(x)\in\operatorname*{arg\,min}_{z\in[-\infty,\infty]}\int_{\{-1,1\}} \overline{\phi}(yz)\mathrm{d}P(y|x)\text{ for }P_{X}\text{-almost all }x\in[0,1]^{d}.\] Then it follows from Lemma A.1 that \[\inf\Big{\{}\,\mathcal{R}_{P}^{\phi}(g)\Big{|}\,g:[0,1]^{d}\to \mathbb{R}\text{ is measurable}\Big{\}}=\int_{[0,1]^{d}\times\{-1,1\}}\overline{\phi}(yf^{*}(x ))\mathrm{d}P(x,y)\] (A.16) \[=\int_{[0,1]^{d}}\int_{\{-1,1\}}\overline{\phi}(yf^{*}(x)) \mathrm{d}P(y|x)\mathrm{d}P_{X}(x)\] \[=\int_{[0,1]^{d}}\Big{(}\eta(x)\overline{\phi}(f^{*}(x))+(1-\eta (x))\overline{\phi}(-f^{*}(x))\Big{)}\mathrm{d}P_{X}(x).\] For any \(x\in[0,1]^{d}\), if \(\eta(x)=1\), then we have \[\eta(x)\overline{\phi}(f^{*}(x))+(1-\eta(x))\overline{\phi}(-f^{ *}(x))=\overline{\phi}(f^{*}(x))=\overline{\phi}(+\infty)=0=H(\eta(x))=0\] \[=1\cdot 0+(1-1)\cdot 0=\eta(x)\psi(x,1)+(1-\eta(x))\psi(x,-1)= \int_{\{-1,1\}}\psi(x,y)\mathrm{d}P(y|x);\] If \(\eta(x)=0\), then we have \[\eta(x)\overline{\phi}(f^{*}(x))+(1-\eta(x))\overline{\phi}(-f^{ *}(x))=\overline{\phi}(-f^{*}(x))=\overline{\phi}(+\infty)=0=H(\eta(x))=0\] \[=0\cdot 0+(1-0)\cdot 0=\eta(x)\psi(x,1)+(1-\eta(x))\psi(x,-1)= \int_{\{-1,1\}}\psi(x,y)\mathrm{d}P(y|x);\] If \(\eta(x)\in(0,\delta)\cup(1-\delta,1)\), then we have \[\eta(x)\overline{\phi}(f^{*}(x))+(1-\eta(x))\overline{\phi}(-f^{*}(x))\] \[=\eta(x)\phi\left(\log\frac{\eta(x)}{1-\eta(x)}\right)+(1-\eta(x)) \phi\left(-\log\frac{\eta(x)}{1-\eta(x)}\right)\] \[=\eta(x)\log\left(1+\frac{1-\eta(x)}{\eta(x)}\right)+(1-\eta(x)) \log\left(1+\frac{\eta(x)}{1-\eta(x)}\right)\] \[=\eta(x)\log\frac{1}{\eta(x)}+(1-\eta(x))\log\frac{1}{1-\eta(x)}\] \[=H(\eta(x))=\int_{\{-1,1\}}\left(\eta(x)\log\frac{1}{\eta(x)}+(1- \eta(x))\log\frac{1}{1-\eta(x)}\right)\mathrm{d}P(y|x)\] \[=\int_{\{-1,1\}}\psi(x,y)\mathrm{d}P(y|x);\] If \(\eta(x)\in[\delta,1-\delta]\), then we have that \[\eta(x)\overline{\phi}\left(f^{*}(x)\right)+(1-\eta(x))\overline {\phi}(-f^{*}(x))\] \[=\eta(x)\phi\left(\log\frac{\eta(x)}{1-\eta(x)}\right)+(1-\eta(x ))\phi\left(-\log\frac{\eta(x)}{1-\eta(x)}\right)\] \[=\eta(x)\log\left(1+\frac{1-\eta(x)}{\eta(x)}\right)+(1-\eta(x) )\log\left(1+\frac{\eta(x)}{1-\eta(x)}\right)\] 
\[=\eta(x)\log\frac{1}{\eta(x)}+(1-\eta(x))\log\frac{1}{1-\eta(x)}\] \[=H(\eta(x))=\eta(x)\phi\left(\log\frac{\eta(x)}{1-\eta(x)}\right) +(1-\eta(x))\phi\left(-\log\frac{\eta(x)}{1-\eta(x)}\right)\] \[=\eta(x)\psi(x,1)+(1-\eta(x))\psi(x,-1)=\int_{\{-1,1\}}\psi(x,y) \mathrm{d}P(y|x).\] In conclusion, we always have that \[\eta(x)\overline{\phi}(f^{*}(x))+(1-\eta(x))\overline{\phi}(-f^{*}(x))=H(\eta (x))=\int_{\{-1,1\}}\psi(x,y)\mathrm{d}P(y|x).\] Since \(x\) is arbitrary, we deduce that \[\int_{[0,1]^{d}}\left(\eta(x)\overline{\phi}(f^{*}(x))+(1-\eta(x ))\overline{\phi}(-f^{*}(x))\right)\mathrm{d}P_{X}(x)=\int_{[0,1]^{d}}H(\eta (x))\mathrm{d}P_{X}(x)\] \[=\int_{[0,1]^{d}}\int_{\{-1,1\}}\psi(x,y)\mathrm{d}P(y|x)\mathrm{ d}P_{X}(x)=\int_{[0,1]^{d}\times\{-1,1\}}\psi(x,y)\mathrm{d}P(x,y),\] which, together with (A.16), proves the desired result. #### a.3.2 Proof of Theorem 2.1 This subsection is devoted to the proof of Theorem 2.1. Proof of Theorem 2.1.: Throughout this proof, we denote \[\Psi:=\int_{[0,1]^{d}\times\{-1,1\}}\psi(x,y)\mathrm{d}P(x,y).\] Then it follows from (2.3) and (2.4) that \(0\leq\mathcal{R}_{P}^{\phi}\left(\hat{f}_{n}\right)-\Psi\leq 2M<\infty\). Let \(\{(X_{k}^{\prime},Y_{k}^{\prime})\}_{k=1}^{n}\) be an i.i.d. sample from distribution \(P\) which is independent of \(\left\{(X_{k},Y_{k})\right\}_{k=1}^{n}\). By independence, we have \[\mathbb{E}\left[\mathcal{R}_{P}^{\phi}\left(\hat{f}_{n}\right)-\Psi\right]= \frac{1}{n}\sum_{i=1}^{n}\mathbb{E}\left[\phi\left(Y_{i}^{\prime}\hat{f}_{n}(X_ {i}^{\prime})\right)-\psi\left(X_{i}^{\prime},Y_{i}^{\prime}\right)\right]\] with its empirical counterpart given by \[\hat{R}:=\frac{1}{n}\sum_{i=1}^{n}\mathbb{E}\left[\phi\left(Y_{i}\hat{f}_{n}(X_{i}) \right)-\psi(X_{i},Y_{i})\right].\] Then we have \[\hat{R}-\left(\mathcal{R}_{P}^{\phi}(g)-\Psi\right) =\frac{1}{n}\sum_{i=1}^{n}\mathbb{E}\left[\phi\left(Y_{i}\hat{f}_ {n}(X_{i})\right)-\phi(Y_{i}g(X_{i}))\right]\] \[=\mathbb{E}\left[\frac{1}{n}\sum_{i=1}^{n}\phi\left(Y_{i}\hat{f}_ {n}(X_{i})\right)-\frac{1}{n}\sum_{i=1}^{n}\phi\left(Y_{i}g(X_{i})\right) \right]\leq 0,\ \forall\ g\in\mathcal{F},\] where the last inequality follows from the fact that \(\hat{f}_{n}\) is an empirical \(\phi\)-risk minimizer which minimizes \(\frac{1}{n}\sum_{i=1}^{n}\phi\left(Y_{i}g(X_{i})\right)\) over all \(g\in\mathcal{F}\). Hence \(\hat{R}\leq\inf_{g\in\mathcal{F}}\left(\mathcal{R}_{P}^{\phi}(g)-\Psi\right)\), which means that \[\mathbb{E}\left[\mathcal{R}_{P}^{\phi}\left(\hat{f}_{n}\right)- \Psi\right]=\left(\mathbb{E}\left[\mathcal{R}_{P}^{\phi}\left(\hat{f}_{n} \right)-\Psi\right]-(1+\varepsilon)\cdot\hat{R}\right)+(1+\varepsilon)\cdot \hat{R}\] (A.17) \[\leq\left(\mathbb{E}\left[\mathcal{R}_{P}^{\phi}\left(\hat{f}_{n }\right)-\Psi\right]-(1+\varepsilon)\cdot\hat{R}\right)+(1+\varepsilon)\cdot \inf_{g\in\mathcal{F}}\left(\mathcal{R}_{P}^{\phi}(g)-\Psi\right),\ \forall\ \varepsilon\in[0,1).\] We then establish an upper bound for \(\mathbb{E}\left[\mathcal{R}_{P}^{\phi}\left(\hat{f}_{n}\right)-\Psi\right]-(1+ \varepsilon)\cdot\hat{R}\) by using a similar argument to that in the proof of Lemma 4 of [37]. The desired inequality (2.6) will follow from this bound and (A.17). Recall that \(W=\max\left\{3,\ \mathcal{N}\left(\mathcal{F},\gamma\right)\right\}\). From the definition of \(W\), there exist \(f_{1},\cdots,f_{W}\in\mathcal{F}\) such that for any \(f\in\mathcal{F}\), there exists some \(j\in\{1,\cdots,W\}\), such that \(\|f-f_{j}\|_{\infty}\leq\gamma\). 
Therefore, there holds \(\left\|\hat{f}_{n}-f_{j^{*}}\right\|_{[0,1]^{d}}\leq\gamma\), where \(j^{*}\) is a \(\{1,\cdots,W\}\)-valued statistic determined by the sample \(\{(X_{i},Y_{i})\}_{i=1}^{n}\). Denote \[A:=M\cdot\sqrt{\frac{\log W}{\Gamma n}}.\] (A.18) And for \(j=1,2,\cdots,W\), let \[\begin{split}h_{j,1}&:=\mathcal{R}_{P}^{\phi}(f_{j})-\Psi,\\ h_{j,2}&:=\int_{[0,1]^{d}\times\{-1,1\}}\left(\phi(yf_{j}(x))-\psi(x,y)\right)^{2}\mathrm{d}P(x,y),\\ V_{j}&:=\left|\sum_{i=1}^{n}\left(\phi\left(Y_{i}f_{j}(X_{i})\right)-\psi\left(X_{i},Y_{i}\right)-\phi\left(Y_{i}^{\prime}f_{j}(X_{i}^{\prime})\right)+\psi\left(X_{i}^{\prime},Y_{i}^{\prime}\right)\right)\right|,\\ r_{j}&:=A\vee\sqrt{h_{j,1}}.\end{split}\] (A.19) Define \(T:=\max_{j=1,\cdots,W}\frac{V_{j}}{r_{j}}\). Denote by \(\mathbb{E}\left[\,\cdot\,\middle|\,(X_{i},Y_{i})_{i=1}^{n}\right]\) the conditional expectation with respect to \(\{(X_{i},Y_{i})\}_{i=1}^{n}\). Then \[\begin{split}r_{j^{*}}&\leq A+\sqrt{h_{j^{*},1}}=A+\sqrt{\mathbb{E}\left[\phi\left(Y^{\prime}f_{j^{*}}(X^{\prime})\right)-\psi(X^{\prime},Y^{\prime})\,\middle|\,(X_{i},Y_{i})_{i=1}^{n}\right]}\\ &\leq A+\sqrt{\gamma+\mathbb{E}\left[\phi\left(Y^{\prime}\hat{f}_{n}(X^{\prime})\right)-\psi(X^{\prime},Y^{\prime})\,\middle|\,(X_{i},Y_{i})_{i=1}^{n}\right]}\\ &=A+\sqrt{\gamma+\mathcal{R}_{P}^{\phi}\left(\hat{f}_{n}\right)-\Psi}\leq A+\sqrt{\gamma}+\sqrt{\mathcal{R}_{P}^{\phi}\left(\hat{f}_{n}\right)-\Psi},\end{split}\] where \((X^{\prime},Y^{\prime})\) is an i.i.d. copy of \((X_{i},Y_{i})\ (1\leq i\leq n)\) which is independent of the sample, and the second inequality follows from \[|\phi(t_{1})-\phi(t_{2})|\leq|t_{1}-t_{2}|\,,\ \forall\ t_{1},t_{2}\in\mathbb{R}\] (A.20) and \(\left\|f_{j^{*}}-\hat{f}_{n}\right\|_{[0,1]^{d}}\leq\gamma\). Consequently, \[\begin{split}&\mathbb{E}\left[\mathcal{R}_{P}^{\phi}\left(\hat{f}_{n}\right)-\Psi\right]-\hat{R}\leq\left|\hat{R}-\mathbb{E}\left[\mathcal{R}_{P}^{\phi}\left(\hat{f}_{n}\right)-\Psi\right]\right|\\ &=\frac{1}{n}\left|\mathbb{E}\left[\sum_{i=1}^{n}\left(\phi\left(Y_{i}\hat{f}_{n}(X_{i})\right)-\psi(X_{i},Y_{i})-\phi\left(Y_{i}^{\prime}\hat{f}_{n}(X_{i}^{\prime})\right)+\psi(X_{i}^{\prime},Y_{i}^{\prime})\right)\right]\right|\\ &\leq\frac{1}{n}\mathbb{E}\left[\left|\sum_{i=1}^{n}\left(\phi\left(Y_{i}f_{j^{*}}(X_{i})\right)-\psi(X_{i},Y_{i})-\phi\left(Y_{i}^{\prime}f_{j^{*}}(X_{i}^{\prime})\right)+\psi(X_{i}^{\prime},Y_{i}^{\prime})\right)\right|\right]+2\gamma\\ &=\frac{1}{n}\mathbb{E}\left[V_{j^{*}}\right]+2\gamma\leq\frac{1}{n}\mathbb{E}\left[T\cdot r_{j^{*}}\right]+2\gamma\\ &\leq\frac{1}{n}\mathbb{E}\left[T\cdot\sqrt{\mathcal{R}_{P}^{\phi}\left(\hat{f}_{n}\right)-\Psi}\right]+\frac{A+\sqrt{\gamma}}{n}\cdot\mathbb{E}\left[T\right]+2\gamma\\ &\leq\frac{\varepsilon\mathbb{E}\left[\mathcal{R}_{P}^{\phi}\left(\hat{f}_{n}\right)-\Psi\right]}{2+2\varepsilon}+\frac{(1+\varepsilon)\mathbb{E}\left[T^{2}\right]}{2\varepsilon\cdot n^{2}}+\frac{A+\sqrt{\gamma}}{n}\mathbb{E}\left[T\right]+2\gamma,\;\forall\;\varepsilon\in(0,1),\end{split}\] (A.21) where the last inequality follows from \(2\sqrt{ab}\leq\frac{\varepsilon}{1+\varepsilon}\cdot a+\frac{1+\varepsilon}{\varepsilon}\cdot b,\ \forall\ a>0,\ b>0\). We then bound \(\mathbb{E}\left[T\right]\) and \(\mathbb{E}\left[T^{2}\right]\) by Bernstein's inequality (see e.g., Chapter 3.1 of [6] and Chapter 6.2 of [40]). Indeed, it follows from (2.5) and (A.19) that \[h_{j,2}\leq\Gamma\cdot h_{j,1}\leq\Gamma\cdot\left(r_{j}\right)^{2},\;\forall\;j\in\{1,\cdots,W\}.\] For any \(j\in\{1,\cdots,W\}\) and \(t\geq 0\), we apply Bernstein's inequality to the zero mean i.i.d.
random variables \[\left\{\phi\left(Y_{i}f_{j}(X_{i})\right)-\psi(X_{i},Y_{i})-\phi\left(Y_{i}^{ \prime}f_{j}(X_{i}^{\prime})\right)+\psi(X_{i}^{\prime},Y_{i}^{\prime})\right\} _{i=1}^{n}\] and obtain \[\mathbb{P}(V_{j}\geq t)\] \[=\mathbb{P}\left(\left|\sum_{i=1}^{n}\left(\phi\left(Y_{i}f_{j}(X _{i})\right)-\psi(X_{i},Y_{i})-\phi\left(Y_{i}^{\prime}f_{j}(X_{i}^{\prime}) \right)+\psi(X_{i}^{\prime},Y_{i}^{\prime})\right)\right|\geq t\right)\] \[\leq 2\exp\left(\frac{-t^{2}/2}{Mt+\sum_{i=1}^{n}\mathbb{E}\left[ \left(\phi\left(Y_{i}f_{j}(X_{i})\right)-\psi(X_{i},Y_{i})-\phi\left(Y_{i}^{ \prime}f_{j}(X_{i}^{\prime})\right)+\psi(X_{i}^{\prime},Y_{i}^{\prime})\right) ^{2}\right]}\right)\] \[\leq 2\exp\left(\frac{-t^{2}/2}{Mt+2\sum_{i=1}^{n}\mathbb{E} \left[\left(\phi\left(Y_{i}f_{j}(X_{i})\right)-\psi(X_{i},Y_{i})\right)^{2}+ \left(\phi\left(Y_{i}^{\prime}f_{j}(X_{i}^{\prime})\right)-\psi(X_{i}^{\prime},Y_{i}^{\prime})\right)^{2}\right]}\right)\] \[=2\exp\left(-\frac{t^{2}/2}{Mt+4\sum_{i=1}^{n}h_{j,2}}\right)=2 \exp\left(-\frac{t^{2}}{2Mt+8nh_{j,2}}\right)\] \[\leq 2\exp\left(-\frac{t^{2}}{2Mt+8n\Gamma\cdot\left(r_{j}\right)^ {2}}\right).\] Hence \[\mathbb{P}(T\geq t) \leq\sum_{j=1}^{W}\mathbb{P}(V_{j}/r_{j}\geq t)=\sum_{j=1}^{W} \mathbb{P}(V_{j}\geq tr_{j})\] \[\leq 2\sum_{j=1}^{W}\exp\left(-\frac{(tr_{j})^{2}}{2Mtr_{j}+8n \Gamma\cdot r_{j}^{2}}\right)=2\sum_{j=1}^{W}\exp\left(-\frac{t^{2}}{2Mt/r_{j}+ 8n\Gamma}\right)\] \[\leq 2\sum_{j=1}^{W}\exp\left(-\frac{t^{2}}{2Mt/A+8n\Gamma}\right)=2W\exp \left(-\frac{t^{2}}{2Mt/A+8n\Gamma}\right),\;\forall\;t\in[0,\infty).\] Therefore, for any \(\theta\in\{1,2\}\), by taking \[B:=\left(\frac{M}{A}\cdot\log W+\sqrt{\left(\frac{M}{A}\cdot\log W\right)^{2} +8n\Gamma\log W}\right)^{\theta}=4^{\theta}\cdot\left(n\Gamma\log W\right)^{ \theta/2},\] we derive \[\mathbb{E}\left[T^{\theta}\right] =\int_{0}^{\infty}\mathbb{P}\left(T\geq t^{1/\theta}\right) \mathrm{d}t\leq B+\int_{B}^{\infty}\mathbb{P}\left(T\geq t^{1/\theta}\right) \mathrm{d}t\] \[\leq B+\int_{B}^{\infty}\left(2W\exp\left(-\frac{t^{2/\theta}}{2 Mt^{1/\theta}/A+8n\Gamma}\right)\right)\mathrm{d}t\] \[\leq B+\int_{B}^{\infty}\left(2W\exp\left(-\frac{B^{1/\theta} \cdot t^{1/\theta}}{2MB^{1/\theta}/A+8n\Gamma}\right)\right)\mathrm{d}t\] \[=B+2WB\theta\cdot\left(\log W\right)^{-\theta}\int_{\log W}^{ \infty}\mathrm{e}^{-u}u^{\theta-1}\mathrm{d}u\] \[\leq B+2WB\theta\cdot\left(\log W\right)^{-\theta}\cdot\theta \cdot\mathrm{e}^{-\log W}\left(\log W\right)^{\theta-1}\] \[\leq 5\theta B\leq 5\theta\cdot 4^{\theta}\cdot\left(n\Gamma \log W\right)^{\theta/2}.\] Plugging the inequality above and (A.18) into (A.21), we obtain \[\mathbb{E}\left[\mathcal{R}_{P}^{\phi}\left(\hat{f}_{n}\right)- \Psi\right]-\hat{R}\leq\left|\hat{R}-\mathbb{E}\left[\mathcal{R}_{P}^{\phi} \left(\hat{f}_{n}\right)-\Psi\right]\right|\] \[\leq\frac{\varepsilon\mathbb{E}\left[\mathcal{R}_{P}^{\phi} \left(\hat{f}_{n}\right)-\Psi\right]}{2+2\varepsilon}+\frac{(1+\varepsilon) \mathbb{E}\left[T^{2}\right]}{2\varepsilon\cdot n^{2}}+\frac{A+\sqrt{\gamma}} {n}\mathbb{E}\left[T\right]+2\gamma\] \[\leq\frac{\varepsilon}{1+\varepsilon}\mathbb{E}\left[\mathcal{R} _{P}^{\phi}\left(\hat{f}_{n}\right)-\Psi\right]+20\cdot\sqrt{\gamma}\cdot\sqrt {\frac{\Gamma\log W}{n}}\] \[\quad+20M\cdot\frac{\log W}{n}+80\cdot\frac{\Gamma\log W}{n} \cdot\frac{1+\varepsilon}{\varepsilon}+2\gamma,\;\forall\;\varepsilon\in(0,1).\] Multiplying the above inequality by \((1+\varepsilon)\) and then rearranging, we obtain that 
\[\mathbb{E}\left[\mathcal{R}_{P}^{\phi}\left(\hat{f}_{n}\right)-\Psi\right]-(1+\varepsilon)\cdot\hat{R}\leq 20\cdot(1+\varepsilon)\cdot\sqrt{\gamma}\cdot\sqrt{\frac{\Gamma\log W}{n}}+20\cdot(1+\varepsilon)\cdot M\cdot\frac{\log W}{n}+80\cdot\frac{\Gamma\log W}{n}\cdot\frac{(1+\varepsilon)^{2}}{\varepsilon}+(2+2\varepsilon)\cdot\gamma,\;\forall\;\varepsilon\in(0,1).\] (A.22) Combining (A.22) and (A.17), we deduce that \[\mathbb{E}\left[\mathcal{R}_{P}^{\phi}\left(\hat{f}_{n}\right)-\Psi\right]\leq(1+\varepsilon)\cdot\inf_{g\in\mathcal{F}}\left(\mathcal{R}_{P}^{\phi}(g)-\Psi\right)+20\cdot(1+\varepsilon)\cdot\sqrt{\gamma}\cdot\sqrt{\frac{\Gamma\log W}{n}}+20\cdot(1+\varepsilon)\cdot M\cdot\frac{\log W}{n}+80\cdot\frac{\Gamma\log W}{n}\cdot\frac{(1+\varepsilon)^{2}}{\varepsilon}+(2+2\varepsilon)\cdot\gamma,\;\forall\;\varepsilon\in(0,1).\] This proves the desired inequality (2.6) and completes the proof of Theorem 2.1.

#### a.3.3 Proof of Theorem 2.4

To prove Theorem 2.4, we need the following Lemma A.4 and Lemma A.5. Lemma A.4, which describes neural networks that approximate the multiplication operator, can be derived directly from Lemma A.2 of [37]; thus we omit its proof. One can also find a similar result to Lemma A.4 in the earlier paper [47] (cf. Proposition 3 therein).

**Lemma A.4**.: _For any \(\varepsilon\in(0,1/2]\), there exists a neural network_ \[\mathrm{M}\in\mathcal{F}_{2}^{\mathbf{FNN}}\left(15\log\frac{1}{\varepsilon},6,900\log\frac{1}{\varepsilon},1,1\right)\] _such that for any \(t,t^{\prime}\in[0,1]\), there hold \(\mathrm{M}(t,t^{\prime})\in[0,1]\), \(\mathrm{M}(t,0)=\mathrm{M}(0,t^{\prime})=0\) and_ \[\left|\mathrm{M}(t,t^{\prime})-t\cdot t^{\prime}\right|\leq\varepsilon.\]

In Lemma A.5, we construct a neural network which performs the operation of multiplying the input by \(2^{k}\).

**Lemma A.5**.: _Let \(k\) be a positive integer and \(f\) be the univariate function given by \(f(x)=2^{k}\cdot\max\left\{x,0\right\}\). Then_ \[f\in\mathcal{F}_{1}^{\mathbf{FNN}}\left(k,2,4k,1,\infty\right).\]

Proof.: For any \(1\leq i\leq k-1\), let \(v_{i}=(0,0)^{\top}\) and \[\mathbf{W}_{i}=\begin{pmatrix}1&1\\ 1&1\end{pmatrix}.\] In addition, take \[\mathbf{W}_{0}=(1,1)^{\top},\mathbf{W}_{k}=(1,1),\text{ and }v_{k}=(0,0)^{\top}.\] Then we have \[f=x\mapsto\mathbf{W}_{k}\sigma_{v_{k}}\mathbf{W}_{k-1}\sigma_{v_{k-1}}\cdots\mathbf{W}_{1}\sigma_{v_{1}}\mathbf{W}_{0}x\in\mathcal{F}_{1}^{\mathbf{FNN}}\left(k,2,4k,1,\infty\right),\] which proves this lemma.

Now we are in a position to prove Theorem 2.4.

Proof of Theorem 2.4.: Given \(a\in(0,1/2]\), let \(I:=\left\lceil-\log_{2}a\right\rceil\) and \(J_{k}:=\left[\frac{1}{3\cdot 2^{k}},\frac{1}{2^{k}}\right]\) for \(k=0,1,2,\cdots\). Then \(1\leq I\leq 1-\log_{2}a\leq 4\log\frac{1}{a}\). The idea of the proof is to construct neural networks \(\left\{\tilde{h}_{k}\right\}_{k}\) which satisfy \(0\leq\tilde{h}_{k}\left(t\right)\leq 1\) and for which \((8\log a)\cdot\tilde{h}_{k}\) approximates the natural logarithm function on \(J_{k}\). Then the function \[x\mapsto(8\log a)\cdot\sum_{k}\mathrm{M}\left(\tilde{h}_{k}(x),\tilde{f}_{k}(x)\right)\] is the desired neural network in Theorem 2.4, where \(\mathrm{M}\) is the neural network approximating the multiplication operator given in Lemma A.4 and \(\{\tilde{f}_{k}\}_{k}\) are neural networks representing piecewise linear functions supported on \(J_{k}\) which constitute a partition of unity.
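Before carrying this plan out, here is a quick numerical sanity check (a sketch, not part of the proof) of the Lemma A.5 construction, which the argument below uses repeatedly to multiply by powers of two; the weights are exactly those from the proof of Lemma A.5, and NumPy is used only for convenience:

```python
import numpy as np

def doubling_network(x: float, k: int) -> float:
    """Network from Lemma A.5: W_0 = (1,1)^T, then k-1 hidden 2x2 all-ones
    layers, output weights W_k = (1,1); all biases are zero.
    Computes 2^k * max(x, 0)."""
    relu = lambda z: np.maximum(z, 0.0)
    h = relu(np.array([1.0, 1.0]) * x)        # W_0 x, then ReLU
    W = np.ones((2, 2))
    for _ in range(k - 1):                    # W_i for i = 1, ..., k-1
        h = relu(W @ h)                       # each layer doubles h
    return float(np.array([1.0, 1.0]) @ h)    # output layer W_k

for x in (-1.5, 0.3, 2.0):
    assert abs(doubling_network(x, 5) - 2**5 * max(x, 0.0)) < 1e-9
print("Lemma A.5 construction verified for k = 5.")
```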
Specifically, given \(\alpha\in(0,\infty)\), there exists some \(r_{\alpha}>0\) only depending on \(\alpha\) such that \[x\mapsto\log\left(\frac{2x}{3}+\frac{1}{3}\right)\in\mathcal{B}_{r_{\alpha}}^{\alpha}\left([0,1]\right).\] Hence it follows from Corollary A.2 that there exists \[\tilde{g}_{1}\in\mathcal{F}_{1}^{\mathbf{FNN}}\left(C_{\alpha}\log\frac{2}{\varepsilon},C_{\alpha}\left(\frac{2}{\varepsilon}\right)^{1/\alpha},C_{\alpha}\left(\frac{2}{\varepsilon}\right)^{1/\alpha}\log\frac{2}{\varepsilon},1,\infty\right)\subset\mathcal{F}_{1}^{\mathbf{FNN}}\left(C_{\alpha}\log\frac{1}{\varepsilon},C_{\alpha}\left(\frac{1}{\varepsilon}\right)^{1/\alpha},C_{\alpha}\left(\frac{1}{\varepsilon}\right)^{1/\alpha}\log\frac{1}{\varepsilon},1,\infty\right)\] such that \[\sup_{x\in[0,1]}\left|\tilde{g}_{1}(x)-\log\left(\frac{2x}{3}+\frac{1}{3}\right)\right|\leq\varepsilon/2.\] Recall that the ReLU function is given by \(\sigma(t)=\max\left\{t,0\right\}\). Let \[\tilde{g}_{2}:\mathbb{R}\rightarrow\mathbb{R},\quad x\mapsto-\sigma\left(-\sigma\left(\tilde{g}_{1}(x)+\log 3\right)+\log 3\right).\] Then \[\tilde{g}_{2}\in\mathcal{F}_{1}^{\mathbf{FNN}}\left(C_{\alpha}\log\frac{1}{\varepsilon},C_{\alpha}\left(\frac{1}{\varepsilon}\right)^{1/\alpha},C_{\alpha}\left(\frac{1}{\varepsilon}\right)^{1/\alpha}\log\frac{1}{\varepsilon},1,\infty\right),\] (A.23) and for \(x\in\mathbb{R}\), there holds \[-\log 3\leq\tilde{g}_{2}(x)=\begin{cases}-\log 3,&\text{ if }\tilde{g}_{1}(x)<-\log 3,\\ \tilde{g}_{1}(x),&\text{ if }-\log 3\leq\tilde{g}_{1}(x)\leq 0,\\ 0,&\text{ if }\tilde{g}_{1}(x)>0.\end{cases}\] Moreover, since \(-\log 3\leq\log\left(\frac{2x}{3}+\frac{1}{3}\right)\leq 0\) whenever \(x\in[0,1]\), we have \[\sup_{x\in[0,1]}\left|\tilde{g}_{2}(x)-\log\left(\frac{2x}{3}+\frac{1}{3}\right)\right|\leq\sup_{x\in[0,1]}\left|\tilde{g}_{1}(x)-\log\left(\frac{2x}{3}+\frac{1}{3}\right)\right|\leq\varepsilon/2.\] Setting \(x=\frac{3\cdot 2^{k}\cdot t-1}{2}\) in the above inequality, we obtain \[\sup_{t\in J_{k}}\left|\tilde{g}_{2}\left(\frac{3\cdot 2^{k}\cdot t-1}{2}\right)-k\log 2-\log t\right|\leq\varepsilon/2,\quad\forall\ k=0,1,2,\cdots.\] (A.24) For any \(0\leq k\leq I\), define \[\tilde{h}_{k}:\mathbb{R}\rightarrow\mathbb{R},\quad t\mapsto\sigma\left(\frac{\sigma\left(-\tilde{g}_{2}\left(\sigma\left(\frac{3}{4\cdot 2^{I-k}}\cdot 2^{I+1}\cdot\sigma(t)-\frac{1}{2}\right)\right)\right)}{8\log\frac{1}{a}}+\frac{k\log 2}{8\log\frac{1}{a}}\right).\] Then we have \[0\leq\tilde{h}_{k}(t)\leq\left|\frac{\sigma\left(-\tilde{g}_{2}\left(\sigma\left(\frac{3}{4\cdot 2^{I-k}}\cdot 2^{I+1}\cdot\sigma(t)-\frac{1}{2}\right)\right)\right)}{8\log\frac{1}{a}}+\frac{k\log 2}{8\log\frac{1}{a}}\right|\leq\frac{\sup_{x\in\mathbb{R}}\left|\tilde{g}_{2}(x)\right|}{8\log\frac{1}{a}}+\frac{I}{8\log\frac{1}{a}}\leq\frac{\log 3+4\log\frac{1}{a}}{8\log\frac{1}{a}}\leq 1,\ \forall\ t\in\mathbb{R}.\] (A.25) Therefore, it follows from (A.23), the definition of \(\tilde{h}_{k}\), and Lemma A.5 that (cf.
Figure A.1) \[\tilde{h}_{k}\in\mathcal{F}_{1}^{\mathbf{FNN}}\left(C_{\alpha}\log\frac{1}{\varepsilon}+I,C_{\alpha}\left(\frac{1}{\varepsilon}\right)^{\frac{1}{\alpha}},C_{\alpha}\left(\frac{1}{\varepsilon}\right)^{\frac{1}{\alpha}}\log\frac{1}{\varepsilon}+4I,1,1\right)\subset\mathcal{F}_{1}^{\mathbf{FNN}}\left(C_{\alpha}\log\frac{1}{\varepsilon}+4\log\frac{1}{a},C_{\alpha}\left(\frac{1}{\varepsilon}\right)^{\frac{1}{\alpha}},C_{\alpha}\left(\frac{1}{\varepsilon}\right)^{\frac{1}{\alpha}}\log\frac{1}{\varepsilon}+16\log\frac{1}{a},1,1\right)\] (A.26) for all \(0\leq k\leq I\). Besides, according to (A.24), it is easy to verify that for \(0\leq k\leq I\), there holds \[\left|(8\log a)\cdot\tilde{h}_{k}(t)-\log t\right|=\left|\tilde{g}_{2}\left(\frac{3}{2}\cdot 2^{k}\cdot t-1/2\right)-k\log 2-\log t\right|\leq\varepsilon/2,\quad\forall\ t\in J_{k}.\] Define \[\tilde{f}_{0}:\mathbb{R}\rightarrow[0,1],\quad x\mapsto\begin{cases}0,&\text{ if }x\in(-\infty,1/3),\\ 6\cdot\left(x-\frac{1}{3}\right),&\text{ if }x\in[1/3,1/2],\\ 1,&\text{ if }x\in(1/2,\infty),\end{cases}\] and for \(k\in\mathbb{N}\), \[\tilde{f}_{k}:\mathbb{R}\to[0,1],\quad x\mapsto\begin{cases}0,&\text{ if }x\in\mathbb{R}\setminus J_{k},\\ 6\cdot 2^{k}\cdot\left(x-\frac{1}{3\cdot 2^{k}}\right),&\text{ if }x\in\left[\frac{1}{3\cdot 2^{k}},\frac{1}{2^{k+1}}\right),\\ 1,&\text{ if }x\in\left[\frac{1}{2^{k+1}},\frac{1}{3\cdot 2^{k-1}}\right],\\ -3\cdot 2^{k}\cdot\left(x-\frac{1}{2^{k}}\right),&\text{ if }x\in\left(\frac{1}{3\cdot 2^{k-1}},\frac{1}{2^{k}}\right].\end{cases}\]

Figure A.1: Networks representing functions \(\tilde{h}_{k}\).

Figure A.2: Graphs of functions \(\tilde{f}_{k}\).

Then it is easy to show that for any \(x\in\mathbb{R}\) and \(k\in\mathbb{N}\), there hold \[\begin{split}\tilde{f}_{k}(x)&=\frac{6}{2^{I-k+3}}\cdot 2^{I+3}\cdot\sigma\left(x-\frac{1}{3\cdot 2^{k}}\right)-\frac{6}{2^{I-k+3}}\cdot 2^{I+3}\cdot\sigma\left(x-\frac{1}{2^{k+1}}\right)\\ &\quad+\frac{6}{2^{I-k+4}}\cdot 2^{I+3}\cdot\sigma\left(x-\frac{1}{2^{k}}\right)-\frac{6}{2^{I-k+4}}\cdot 2^{I+3}\cdot\sigma\left(x-\frac{1}{3\cdot 2^{k-1}}\right),\end{split}\] and \[\tilde{f}_{0}(x)=\frac{6}{2^{I+3}}\cdot 2^{I+3}\cdot\sigma(x-1/3)-\frac{6}{2^{I+3}}\cdot 2^{I+3}\cdot\sigma(x-1/2).\] Hence it follows from Lemma A.5 that (cf. Figure A.3) \[\tilde{f}_{k}\in\mathcal{F}_{1}^{\mathbf{FNN}}\left(I+5,8,16I+60,1,\infty\right)\subset\mathcal{F}_{1}^{\mathbf{FNN}}\left(12\log\frac{1}{a},8,152\log\frac{1}{a},1,\infty\right),\quad\forall\ 0\leq k\leq I.\] (A.27) Next, we show that \[\sup_{t\in[a,1]}\left|\log(t)+8\log\left(\frac{1}{a}\right)\sum_{k=0}^{I}\tilde{h}_{k}(t)\tilde{f}_{k}(t)\right|\leq\varepsilon/2.\] (A.28) Indeed, we have the following inequalities: \[\begin{split}\left|\log(t)+8\log\left(\frac{1}{a}\right)\sum_{k=0}^{I}\tilde{h}_{k}(t)\tilde{f}_{k}(t)\right|&=\left|\log t+8\log\left(\frac{1}{a}\right)\tilde{h}_{0}(t)\tilde{f}_{0}(t)\right|\\ &=\left|\log t+8\log\left(\frac{1}{a}\right)\tilde{h}_{0}(t)\right|\leq\varepsilon/2,\ \forall\ t\in[1/2,1];\end{split}\] (A.29)

Figure A.3: Networks representing functions \(\tilde{f}_{k}\).
\[\begin{split}\left|\log(t)+8\log\left(\frac{1}{a}\right)\sum_{k=0}^{I }\tilde{h}_{k}(t)\tilde{f}_{k}(t)\right|&=\left|\log(t)+8\log \left(\frac{1}{a}\right)\tilde{h}_{m-1}(t)\right|\leq\varepsilon/2,\\ &\forall t\in\left[\frac{1}{2^{m}},\frac{1}{3\cdot 2^{m-2}} \right]\cap[a,1]\text{ with }2\leq m\leq I;\end{split}\] (A.30) and \[\begin{split}&\left|\log(t)+8\log\left(\frac{1}{a}\right)\sum_{k=0} ^{I}\tilde{h}_{k}(t)\tilde{f}_{k}(t)\right|\\ &=\left|\log\left(t\right)(\tilde{f}_{m}(t)+\tilde{f}_{m-1}(t))-8 \log\left(a\right)\left(\tilde{h}_{m}(t)\tilde{f}_{m}(t)+\tilde{h}_{m-1}(t) \tilde{f}_{m-1}(t)\right)\right|\\ &\leq\tilde{f}_{m}(t)\left|\log(t)-8\log\left(a\right)\tilde{h}_{ m}(t)\right|+\tilde{f}_{m-1}(t)\left|\log(t)-8\log\left(a\right)\tilde{h}_{m-1}(t) \right|\\ &\leq\tilde{f}_{m}(t)\cdot\frac{\varepsilon}{2}+\tilde{f}_{m-1}( t)\cdot\frac{\varepsilon}{2}=\frac{\varepsilon}{2},\quad\forall\;t\in\left[ \frac{1}{3\cdot 2^{m-1}},\frac{1}{2^{m}}\right]\cap[a,1]\text{ with }1\leq m\leq I.\end{split}\] (A.31) Note that \[[a,1]\subset[1/2,1]\cup\left(\bigcup_{m=1}^{I}\left[\frac{1}{3\cdot 2^{m-1}},\frac{1}{2^{m}}\right]\right)\cup\left(\bigcup_{m=2}^{I}\left[\frac{1}{2^{m}},\frac{1}{3\cdot 2^{m-2}}\right]\right).\] Consequently, (A.28) follows immediately from (A.29), (A.30) and (A.31). From Lemma A.4 we know that there exists \[\mathrm{M}\in\mathcal{F}_{2}^{\mathbf{FNN}}\left(15\log\frac{96\left(\log a \right)^{2}}{\varepsilon},6,900\log\frac{96\left(\log a\right)^{2}}{ \varepsilon},1,1\right)\] (A.32) such that for any \(t,t^{\prime}\in[0,1]\), there hold \(\mathrm{M}(t,t^{\prime})\in[0,1]\), \(\mathrm{M}(t,0)=\mathrm{M}(0,t^{\prime})=0\) and \[\left|\mathrm{M}(t,t^{\prime})-t\cdot t^{\prime}\right|\leq\frac{\varepsilon }{96\left(\log a\right)^{2}}.\] (A.33) Define \[\tilde{g}_{3}:\mathbb{R}\rightarrow\mathbb{R},\quad x\mapsto\sum_{k=0}^{I} \mathrm{M}\left(\tilde{h}_{k}(x),\tilde{f}_{k}(x)\right),\] and \[\begin{split}\tilde{f}:\mathbb{R}&\rightarrow\mathbb{ R},\\ x&\mapsto\sum_{k=1}^{8I}\left[\frac{\log(a)}{I}\cdot \sigma\left(\frac{\log b}{8\log a}+\sigma\left(\sigma\left(\tilde{g}_{3}(x) \right)-\frac{\log b}{8\log a}\right)-\sigma\left(\sigma\left(\tilde{g}_{3}(x) \right)-\frac{1}{8}\right)\right)\right].\end{split}\] Then it follows from (A.25),(A.33), (A.28), the definitions of \(\tilde{f}_{k}\) and \(\tilde{g}_{3}\) that \[\begin{split}&|\log t-8\log(a)\cdot\tilde{g}_{3}(t)|\\ &\leq 8\log\left(\frac{1}{a}\right)\cdot\left|\tilde{g}_{3}(t)-\sum_{k= 0}^{I}\tilde{h}_{k}(t)\tilde{f}_{k}(t)\right|+\left|\log t+8\log\left(\frac{1} {a}\right)\sum_{k=0}^{I}\tilde{h}_{k}(t)\tilde{f}_{k}(t)\right|\\ &\leq 8\log\left(\frac{1}{a}\right)\cdot\left|\tilde{g}_{3}(t)-\sum_{ k=0}^{I}\tilde{h}_{k}(t)\tilde{f}_{k}(t)\right|+\varepsilon/2\\ &\leq\varepsilon/2+|8\log a|\cdot\sum_{k=0}^{I}\left|\mathrm{M} \left(\tilde{h}_{k}(t),\tilde{f}_{k}(t)\right)-\tilde{h}_{k}(t)\tilde{f}_{k}( t)\right|\\ &\leq\varepsilon/2+|8\log a|\cdot(I+1)\cdot\frac{\varepsilon}{96 \left(\log a\right)^{2}}\leq\varepsilon,\ \forall\ t\in[a,1].\end{split}\] (A.34) However, for any \(t\in\mathbb{R}\), by the definition of \(\tilde{f}\), we have \[\begin{split}\tilde{f}(t)=\begin{cases}8\log(a)\cdot\tilde{g}_{3 }(t),&\text{ if }8\log(a)\cdot\tilde{g}_{3}(t)\in[\log a,\log b],\\ \log a,&\text{ if }8\log(a)\cdot\tilde{g}_{3}(t)<\log a,\\ \log b,&\text{ if }8\log(a)\cdot\tilde{g}_{3}(t)>\log b, \end{cases}\\ \text{satisfying }\ \log a\leq\tilde{f}(t)\leq\log b\leq 0.\end{split}\] (A.35) Then by (A.34), (A.35) and the fact that 
\(\log t\in[\log a,\log b],\ \forall\ t\in[a,b]\), we obtain \[\left|\log t-\tilde{f}(t)\right|\leq|\log t-8\log(a)\cdot\tilde{g}_{3}(t)|\leq\varepsilon,\ \forall\ t\in[a,b].\] That is, \[\sup_{t\in[a,b]}\left|\log t-\tilde{f}(t)\right|\leq\varepsilon.\] (A.36)

Figure A.4: The network representing the function \(\tilde{g}_{3}\).

On the other hand, it follows from (A.26), (A.27), (A.32), the definition of \(\tilde{g}_{3}\), and \(1\leq I\leq 4\log\frac{1}{a}\) that \[\tilde{g}_{3}\in\mathcal{F}_{1}^{\mathbf{FNN}}\left(C_{\alpha}\log\frac{1}{\varepsilon}+I+15\log\left(96\left(\log a\right)^{2}\right),C_{\alpha}\left(\frac{1}{\varepsilon}\right)^{\frac{1}{\alpha}}I,\right.\] \[\left.\left(I+1\right)\cdot\left(20I+C_{\alpha}\left(\frac{1}{\varepsilon}\right)^{\frac{1}{\alpha}}\cdot\log\frac{1}{\varepsilon}+900\log\left(96\left(\log a\right)^{2}\right)\right),1,\infty\right)\] \[\subset\mathcal{F}_{1}^{\mathbf{FNN}}\left(C_{\alpha}\log\frac{1}{\varepsilon}+139\log\frac{1}{a},C_{\alpha}\left(\frac{1}{\varepsilon}\right)^{\frac{1}{\alpha}}\log\frac{1}{a},\right.\] \[\left.C_{\alpha}\left(\frac{1}{\varepsilon}\right)^{\frac{1}{\alpha}}\cdot\left(\log\frac{1}{\varepsilon}\right)\cdot\left(\log\frac{1}{a}\right)+65440\left(\log a\right)^{2},1,\infty\right).\] Then by the definition of \(\tilde{f}\) we obtain (cf. Figure A.5) \[\tilde{f}\in\mathcal{F}_{1}^{\mathbf{FNN}}\left(C_{\alpha}\log\frac{1}{\varepsilon}+139\log\frac{1}{a},C_{\alpha}\left(\frac{1}{\varepsilon}\right)^{\frac{1}{\alpha}}\log\frac{1}{a},\right.\] \[\left.C_{\alpha}\left(\frac{1}{\varepsilon}\right)^{\frac{1}{\alpha}}\cdot\left(\log\frac{1}{\varepsilon}\right)\cdot\left(\log\frac{1}{a}\right)+65440\left(\log a\right)^{2},1,\infty\right).\] This, together with (A.35) and (A.36), completes the proof of Theorem 2.4.

#### a.3.4 Proof of Theorem 2.2 and Theorem 2.3

This subsection is devoted to the proof of Theorem 2.2 and Theorem 2.3. We will first establish several lemmas. We then use these lemmas to prove Theorem 2.3. Finally, we derive Theorem 2.2 by applying Theorem 2.3 with \(q=0\), \(d_{\star}=d\) and \(d_{*}=K=1\).

Figure A.5: The network representing the function \(\tilde{f}\).

**Lemma A.6**.: _Let \(\phi(t)=\log(1+\mathrm{e}^{-t})\) be the logistic loss. Suppose real numbers \(a,f,A,B\) satisfy that \(0<a<1\) and \(A\leq\min\left\{f,\log\frac{a}{1-a}\right\}\leq\max\left\{f,\log\frac{a}{1-a}\right\}\leq B\). Then there holds_ \[\begin{split}&\min\left\{\frac{1}{4+2\mathrm{e}^{A}+2\mathrm{e}^{-A}},\frac{1}{4+2\mathrm{e}^{B}+2\mathrm{e}^{-B}}\right\}\cdot\left|f-\log\frac{a}{1-a}\right|^{2}\\ &\leq a\phi(f)+(1-a)\phi(-f)-a\log\frac{1}{a}-(1-a)\log\frac{1}{1-a}\\ &\leq\sup\left\{\frac{1}{4+2\mathrm{e}^{z}+2\mathrm{e}^{-z}}\,\middle|\,z\in[A,B]\right\}\cdot\left|f-\log\frac{a}{1-a}\right|^{2}\leq\frac{1}{8}\cdot\left|f-\log\frac{a}{1-a}\right|^{2}.\end{split}\]

Proof.: Consider the map \(G:\mathbb{R}\to[0,\infty),z\mapsto a\phi(z)+(1-a)\phi(-z)\). Obviously \(G\) is twice continuously differentiable on \(\mathbb{R}\) with \(G^{\prime}\left(\log\frac{a}{1-a}\right)=0\) and \(G^{\prime\prime}(z)=\frac{1}{2+\mathrm{e}^{z}+\mathrm{e}^{-z}}\) for any real number \(z\).
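For completeness, here is the computation behind the last two identities, verified directly from the definition of \(G\) and \(\phi^{\prime}(t)=-1/(1+\mathrm{e}^{t})\):
\[G^{\prime}(z)=a\phi^{\prime}(z)-(1-a)\phi^{\prime}(-z)=-\frac{a}{1+\mathrm{e}^{z}}+\frac{1-a}{1+\mathrm{e}^{-z}}=\frac{1}{1+\mathrm{e}^{-z}}-a,\]
which vanishes exactly at \(z=\log\frac{a}{1-a}\), and differentiating once more gives \(G^{\prime\prime}(z)=\frac{\mathrm{e}^{-z}}{(1+\mathrm{e}^{-z})^{2}}=\frac{1}{2+\mathrm{e}^{z}+\mathrm{e}^{-z}}\).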
Then it follows from Taylor's theorem that there exists a real number \(\xi\) between \(\log\frac{a}{1-a}\) and \(f\), such that \[\begin{split}&a\phi(f)+(1-a)\phi(-f)-a\log\frac{1}{a}-(1-a)\log\frac{1}{1-a}=G(f)-G\left(\log\frac{a}{1-a}\right)\\ &=\left(f-\log\frac{a}{1-a}\right)\cdot G^{\prime}\left(\log\frac{a}{1-a}\right)+\frac{G^{\prime\prime}(\xi)}{2}\cdot\left|f-\log\frac{a}{1-a}\right|^{2}\\ &=\frac{G^{\prime\prime}(\xi)}{2}\cdot\left|f-\log\frac{a}{1-a}\right|^{2}=\frac{\left|f-\log\frac{a}{1-a}\right|^{2}}{4+2\mathrm{e}^{\xi}+2\mathrm{e}^{-\xi}}.\end{split}\] (A.37) Since \(A\leq\min\left\{f,\log\frac{a}{1-a}\right\}\leq\max\left\{f,\log\frac{a}{1-a}\right\}\leq B\), we must have \(\xi\in[A,B]\), which, together with (A.37), yields \[\begin{split}&\min\left\{\frac{1}{4+2\mathrm{e}^{A}+2\mathrm{e}^{-A}},\frac{1}{4+2\mathrm{e}^{B}+2\mathrm{e}^{-B}}\right\}\cdot\left|f-\log\frac{a}{1-a}\right|^{2}\\ &=\left(\inf_{t\in[A,B]}\frac{1}{4+2\mathrm{e}^{t}+2\mathrm{e}^{-t}}\right)\cdot\left|f-\log\frac{a}{1-a}\right|^{2}\leq\frac{\left|f-\log\frac{a}{1-a}\right|^{2}}{4+2\mathrm{e}^{\xi}+2\mathrm{e}^{-\xi}}\\ &=a\phi(f)+(1-a)\phi(-f)-a\log\frac{1}{a}-(1-a)\log\frac{1}{1-a}=\frac{\left|f-\log\frac{a}{1-a}\right|^{2}}{4+2\mathrm{e}^{\xi}+2\mathrm{e}^{-\xi}}\\ &\leq\sup\left\{\frac{1}{4+2\mathrm{e}^{z}+2\mathrm{e}^{-z}}\middle|z\in[A,B]\right\}\cdot\left|f-\log\frac{a}{1-a}\right|^{2}\leq\frac{1}{8}\cdot\left|f-\log\frac{a}{1-a}\right|^{2}.\end{split}\] (A.38) This completes the proof.

**Lemma A.7**.: _Let \(\phi(t)=\log\left(1+\mathrm{e}^{-t}\right)\) be the logistic loss, \(f\) be a real number, \(d\in\mathbb{N}\), and \(P\) be a Borel probability measure on \([0,1]^{d}\times\{-1,1\}\) of which the conditional probability function \([0,1]^{d}\ni z\mapsto P(\{1\}\,|z)\in[0,1]\) is denoted by \(\eta\). Then for \(x\in[0,1]^{d}\) such that \(\eta(x)\notin\{0,1\}\), there holds_ \[\left|\inf_{t\in\left[f\wedge\log\frac{\eta(x)}{1-\eta(x)},f\vee\log\frac{\eta(x)}{1-\eta(x)}\right]}\frac{1}{2(2+\mathrm{e}^{t}+\mathrm{e}^{-t})}\right|\cdot\left|f-\log\frac{\eta(x)}{1-\eta(x)}\right|^{2}\] \[\leq\int_{\{-1,1\}}\left(\phi\left(yf\right)-\phi\left(y\log\frac{\eta(x)}{1-\eta(x)}\right)\right)\mathrm{d}P(y|x)\] \[\leq\left|\sup_{t\in\left[f\wedge\log\frac{\eta(x)}{1-\eta(x)},f\vee\log\frac{\eta(x)}{1-\eta(x)}\right]}\frac{1}{2(2+\mathrm{e}^{t}+\mathrm{e}^{-t})}\right|\cdot\left|f-\log\frac{\eta(x)}{1-\eta(x)}\right|^{2}\leq\frac{1}{4}\left|f-\log\frac{\eta(x)}{1-\eta(x)}\right|^{2}.\]

Proof.: Given \(x\in[0,1]^{d}\) such that \(\eta(x)\notin\{0,1\}\), define \[V_{x}:\mathbb{R}\to(0,\infty),\quad t\mapsto\eta(x)\phi(t)+(1-\eta(x))\phi(-t).\] Then it is easy to verify that \[\int_{\{-1,1\}}\phi\left(yt\right)\mathrm{d}P(y|x)=\phi(t)P(Y=1|X=x)+\phi(-t)P(Y=-1|X=x)=V_{x}(t)\] for all \(t\in\mathbb{R}\). Consequently, \[\begin{split}&\int_{\{-1,1\}}\left(\phi\left(yf\right)-\phi\left(y\log\frac{\eta(x)}{1-\eta(x)}\right)\right)\mathrm{d}P(y|x)=V_{x}(f)-V_{x}\left(\log\frac{\eta(x)}{1-\eta(x)}\right)\\ &=\eta(x)\phi(f)+(1-\eta(x))\phi(-f)-\eta(x)\log\frac{1}{\eta(x)}-(1-\eta(x))\log\frac{1}{1-\eta(x)}.\end{split}\] The desired inequalities then follow immediately by applying Lemma A.6.
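Both Lemma A.6 and Lemma A.7 reduce to a one-dimensional inequality that is easy to probe numerically. The snippet below is a minimal sanity check of the sandwich bound in Lemma A.6 (our own illustration, not part of the argument), assuming NumPy; all names in it are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def phi(t):
    return np.log1p(np.exp(-t))  # logistic loss

def w(z):
    # the curvature weight 1 / (4 + 2e^z + 2e^{-z}) appearing in Lemma A.6
    return 1.0 / (4.0 + 2.0 * np.exp(z) + 2.0 * np.exp(-z))

for _ in range(10_000):
    a = rng.uniform(0.02, 0.98)
    f = rng.uniform(-5.0, 5.0)
    f_star = np.log(a / (1.0 - a))        # minimizer of G, i.e. log(a/(1-a))
    A, B = min(f, f_star), max(f, f_star)
    gap2 = (f - f_star) ** 2
    # G(f) - G(f_star), the middle term of the sandwich
    G = a * phi(f) + (1 - a) * phi(-f) + a * np.log(a) + (1 - a) * np.log(1 - a)
    # w is unimodal with maximum 1/8 at z = 0, so its sup/inf on [A, B]
    # sit at 0 (if interior) or at the endpoints
    sup_w = 0.125 if A <= 0.0 <= B else max(w(A), w(B))
    assert min(w(A), w(B)) * gap2 <= G + 1e-12
    assert G <= sup_w * gap2 + 1e-12 <= gap2 / 8.0 + 2e-12
```

The `sup_w` line encodes the observation that the sup of the weight over \([A,B]\) is attained at \(0\) whenever \(0\in[A,B]\) and at an endpoint otherwise; the global bound \(1/8\) is the value at \(z=0\).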
**Lemma A.8**.: _Let \(\phi(t)=\log\left(1+\mathrm{e}^{-t}\right)\) be the logistic loss, \(d\in\mathbb{N}\), \(f:[0,1]^{d}\to\mathbb{R}\) be a measurable function, and \(P\) be a Borel probability measure on \([0,1]^{d}\times\{-1,1\}\) of which the conditional probability function \([0,1]^{d}\ni z\mapsto P(\{1\}\,|z)\in[0,1]\) is denoted by \(\eta\). Assume that there exist constants \((a,b)\in\mathbb{R}^{2}\), \(\delta\in(0,1/2)\), and a measurable function \(\hat{\eta}:[0,1]^{d}\to\mathbb{R}\), such that \(\hat{\eta}=\eta\), \(P_{X}\)-a.s.,_ \[\log\frac{\delta}{1-\delta}\leq f(x)\leq-a,\;\forall\;x\in[0,1]^{d}\text{ satisfying }0\leq\hat{\eta}(x)=\eta(x)<\delta,\] _and_ \[b\leq f(x)\leq\log\frac{1-\delta}{\delta},\;\forall\;x\in[0,1]^{d}\text{ satisfying }1-\delta<\hat{\eta}(x)=\eta(x)\leq 1.\] _Then_ \[\mathcal{E}_{P}^{\phi}\left(f\right)-\phi(a)P_{X}(\Omega_{2})-\phi(b)P_{X}(\Omega_{3})\] \[\leq\int_{\Omega_{1}}\sup\left\{\frac{\left|f(x)-\log\frac{\eta(x)}{1-\eta(x)}\right|^{2}}{2(2+\mathrm{e}^{t}+\mathrm{e}^{-t})}\Bigg{|}\,t\in\left[f(x)\wedge\log\frac{\eta(x)}{1-\eta(x)},f(x)\vee\log\frac{\eta(x)}{1-\eta(x)}\right]\right\}\mathrm{d}P_{X}(x)\] \[\leq\int_{\Omega_{1}}\left|f(x)-\log\frac{\eta(x)}{1-\eta(x)}\right|^{2}\mathrm{d}P_{X}(x),\] _where_ \[\begin{split}\Omega_{1}&:=\left\{x\in[0,1]^{d}\,\middle|\,\delta\leq\hat{\eta}(x)=\eta(x)\leq 1-\delta\right\},\\ \Omega_{2}&:=\left\{x\in[0,1]^{d}\,\middle|\,0\leq\hat{\eta}(x)=\eta(x)<\delta\right\},\\ \Omega_{3}&:=\left\{x\in[0,1]^{d}\,\middle|\,1-\delta<\hat{\eta}(x)=\eta(x)\leq 1\right\}.\end{split}\] (A.39)

Proof.: Define \[\psi:[0,1]^{d}\times\{-1,1\}\to[0,\infty),\] \[(x,y)\mapsto\begin{cases}\phi\left(y\log\frac{\eta(x)}{1-\eta(x)}\right),&\text{if }\eta(x)\in[\delta,1-\delta],\\ 0,&\text{if }\eta(x)\in\{0,1\},\\ \eta(x)\log\frac{1}{\eta(x)}+(1-\eta(x))\log\frac{1}{1-\eta(x)},&\text{if }\eta(x)\in(0,\delta)\cup(1-\delta,1).\end{cases}\] Since \(\hat{\eta}=\eta\in[0,1]\), \(P_{X}\)-a.s., we have that \(P_{X}([0,1]^{d}\setminus(\Omega_{1}\cup\Omega_{2}\cup\Omega_{3}))=0\). Then it follows from Lemma A.3 that \[\begin{split}&\mathcal{E}_{P}^{\phi}(f)=\mathcal{R}_{P}^{\phi}(f)-\inf\left\{\mathcal{R}_{P}^{\phi}(g)\,\middle|\,g:[0,1]^{d}\to\mathbb{R}\text{ is measurable}\right\}\\ &=\int_{[0,1]^{d}\times\{-1,1\}}\phi(yf(x))\mathrm{d}P(x,y)-\int_{[0,1]^{d}\times\{-1,1\}}\psi(x,y)\mathrm{d}P(x,y)=I_{1}+I_{2}+I_{3},\end{split}\] (A.40) where \[I_{i}:=\int_{\Omega_{i}\times\{-1,1\}}\left(\phi\left(yf(x)\right)-\psi(x,y)\right)\mathrm{d}P(x,y),\;i=1,2,3.\] According to Lemma A.7, we have \[\begin{split}I_{1}&=\int_{\Omega_{1}}\int_{\{-1,1\}}\left(\phi\left(yf(x)\right)-\phi\left(y\log\frac{\eta(x)}{1-\eta(x)}\right)\right)\mathrm{d}P(y|x)\mathrm{d}P_{X}(x)\\ &\leq\int_{\Omega_{1}}\sup\left\{\frac{\left|f(x)-\log\frac{\eta(x)}{1-\eta(x)}\right|^{2}}{2(2+\mathrm{e}^{t}+\mathrm{e}^{-t})}\Bigg{|}\,t\in\left[f(x)\wedge\log\frac{\eta(x)}{1-\eta(x)},f(x)\vee\log\frac{\eta(x)}{1-\eta(x)}\right]\right\}\mathrm{d}P_{X}(x).\end{split}\] (A.41) Then it remains to bound \(I_{2}\) and \(I_{3}\).
Indeed, for any \(x\in\Omega_{2}\), if \(\eta(x)=0\), then \[\int_{\{-1,1\}}\left(\phi(yf(x))-\psi(x,y)\right)\mathrm{d}P(y|x)=\phi(-f(x)) \leq\phi(a).\] Otherwise, we have \[\begin{split}&\int_{\{-1,1\}}\left(\phi(yf(x))-\psi(x,y)\right) \mathrm{d}P(y|x)\\ &=\left(\phi(f(x))-\log\frac{1}{\eta(x)}\right)\eta(x)+\left( \phi(-f(x))-\log\frac{1}{1-\eta(x)}\right)(1-\eta(x))\\ &=\left(\phi\left(f(x)\right)-\phi\left(\log\frac{\eta(x)}{1- \eta(x)}\right)\right)\eta(x)+\left(\phi\left(-f(x)\right)-\phi\left(-\log \frac{\eta(x)}{1-\eta(x)}\right)\right)(1-\eta(x))\\ &\leq\left(\phi\left(\log\frac{\delta}{1-\delta}\right)-\phi \left(\log\frac{\eta(x)}{1-\eta(x)}\right)\right)\eta(x)+\phi(-f(x))(1-\eta(x) )\\ &\leq\phi(-f(x))(1-\eta(x))\leq\phi(-f(x))\leq\phi(a).\end{split}\] Therefore, no matter whether \(\eta(x)=0\) or \(\eta(x)\neq 0\), there always holds \[\int_{\{-1,1\}}\left(\phi(yf(x))-\psi(x,y)\right)\mathrm{d}P(y|x)\leq\phi(a),\] which means that \[\begin{split} I_{2}&=\int_{\Omega_{2}}\int_{\{-1,1 \}}\left(\phi(yf(x))-\psi(x,y)\right)\mathrm{d}P(y|x)\mathrm{d}P_{X}(x)\\ &\leq\int_{\Omega_{2}}\phi(a)\mathrm{d}P_{X}(x)=\phi(a)P_{X}( \Omega_{2}).\end{split}\] (A.42) Similarly, for any \(x\in\Omega_{3}\), if \(\eta(x)=1\), then \[\int_{\{-1,1\}}\left(\phi(yf(x))-\psi(x,y)\right)\mathrm{d}P(y|x)=\phi(f(x)) \leq\phi(b).\] Otherwise, we have \[\begin{split}&\int_{\{-1,1\}}\left(\phi(yf(x))-\psi(x,y)\right) \mathrm{d}P(y|x)\\ &=\left(\phi(f(x))-\log\frac{1}{\eta(x)}\right)\eta(x)+\left( \phi(-f(x))-\log\frac{1}{1-\eta(x)}\right)(1-\eta(x))\\ &=\left(\phi\left(f(x)\right)-\phi\left(\log\frac{\eta(x)}{1- \eta(x)}\right)\right)\eta(x)+\left(\phi\left(-f(x)\right)-\phi\left(-\log \frac{\eta(x)}{1-\eta(x)}\right)\right)(1-\eta(x))\end{split}\] \[\leq\phi(f(x))\eta(x)+\left(\phi\left(\log\frac{\delta}{1-\delta} \right)-\phi\left(\log\frac{1-\eta(x)}{\eta(x)}\right)\right)(1-\eta(x))\] \[\leq\phi(f(x))\eta(x)\leq\phi(f(x))\leq\phi(b).\] Therefore, no matter whether \(\eta(x)=1\) or \(\eta(x)\neq 1\), we have \[\int_{\{-1,1\}}\left(\phi(yf(x))-\psi(x,y)\right)\mathrm{d}P(y|x)\leq\phi(b),\] which means that \[\begin{split} I_{3}&=\int_{\Omega_{3}}\int_{\{-1, 1\}}\left(\phi(yf(x))-\psi(x,y)\right)\mathrm{d}P(y|x)\mathrm{d}P_{X}(x)\\ &\leq\int_{\Omega_{3}}\phi(b)\mathrm{d}P_{X}(x)=\phi(b)P_{X}( \Omega_{3}).\end{split}\] (A.43) The desired inequality then follows immediately from (A.41), (A.42), (A.43) and (A.40). Thus we complete the proof. **Lemma A.9**.: _Let \(\delta\in(0,1/2)\), \(a\in[\delta,1-\delta]\), \(f\in\left[-\log\frac{1-\delta}{\delta},\log\frac{1-\delta}{\delta}\right]\), and \(\phi(t)=\log(1+\mathrm{e}^{-t})\) be the logistic loss. Then there hold_ \[H(a,f)\leq\Gamma\cdot G(a,f)\] _with \(\Gamma=5000\left|\log\delta\right|^{2}\),_ \[H(a,f):=a\cdot\left|\phi(f)-\phi\left(\log\frac{a}{1-a}\right)\right|^{2}+(1-a )\cdot\left|\phi(-f)-\phi\left(-\log\frac{a}{1-a}\right)\right|^{2},\] _and_ \[G(a,f) :=a\phi(f)+(1-a)\phi(-f)-a\phi\left(\log\frac{a}{1-a}\right)-(1-a) \phi\left(-\log\frac{a}{1-a}\right)\] \[=a\phi(f)+(1-a)\phi(-f)-a\log\frac{1}{a}-(1-a)\log\frac{1}{1-a}.\] Proof.: In this proof, we will frequently use elementary inequalities \[x\log\frac{1}{x}\leq\min\left\{1-x,(1-x)\cdot\log\frac{1}{1-x}\right\},\;\forall \;x\in[1/2,1),\] (A.44) and \[\begin{split}&-\log\frac{1}{1-x}-2<-\log 7\leq-\log\left(\exp \left(\frac{3-3x}{x}\log\frac{1}{1-x}\right)-1\right)\\ &<\log\frac{x}{1-x}<2+\log\frac{1}{1-x},\;\forall\;x\in[1/2,1). 
\end{split}\] (A.45) We first show that \[\begin{split} G(a,f)&\geq\frac{a\phi(f)}{3}\\ &\text{provided }\frac{1}{2}\leq a\leq 1-\delta\text{ and }f\leq-\log\left(\exp\left(\frac{3-3a}{a}\log\frac{1}{1-a}\right)-1 \right).\end{split}\] (A.46) Indeed, if \(1/2\leq a\leq 1-\delta\) and \(f\leq-\log\left(\exp\left(\frac{3-3a}{a}\log\frac{1}{1-a}\right)-1\right)\), then \[\begin{split}&\frac{2}{3}\cdot a\phi(f)\geq\frac{2}{3}\cdot a \phi\left(-\log\left(\exp\left(\frac{3-3a}{a}\log\frac{1}{1-a}\right)-1 \right)\right)=(2-2a)\cdot\log\frac{1}{1-a}\\ &\geq a\log\frac{1}{a}+(1-a)\log\frac{1}{1-a},\end{split}\] which means that \[G(a,f)\geq a\phi(f)-a\log\frac{1}{a}-(1-a)\log\frac{1}{1-a}\geq\frac{a\phi(f)}{3}.\] This proves (A.46). We next show that \[\begin{split}& G(a,f)\geq\frac{1-a}{18}\left|f-\log\frac{a}{1-a} \right|^{2}\\ &\text{provided }\frac{1}{2}\leq a\leq 1-\delta\text{ and }-2-\log\frac{1}{1-a}\leq f\leq 2+\log\frac{1}{1-a}.\end{split}\] (A.47) Indeed, if \(1/2\leq a\leq 1-\delta\) and \(-2-\log\frac{1}{1-a}\leq f\leq 2+\log\frac{1}{1-a}\), then it follows from Lemma A.6 that \[\begin{split}& G(a,f)\geq\frac{\left|f-\log\frac{a}{1-a}\right|^{2 }}{4+2\exp\left(2+\log\frac{1}{1-a}\right)+2\exp\left(-2-\log\frac{1}{1-a} \right)}\\ &\geq\frac{\left|f-\log\frac{a}{1-a}\right|^{2}}{5+15\cdot\frac{1 }{1-a}}\geq\frac{(1-a)\cdot\left|f-\log\frac{a}{1-a}\right|^{2}}{5-5a+15}\geq \frac{(1-a)\cdot\left|f-\log\frac{a}{1-a}\right|^{2}}{18},\end{split}\] which proves (A.47). We then show \[H(a,f)\leq\Gamma\cdot G(a,f)\text{ provided }1/2\leq a\leq 1-\delta\text{ and }- \log\frac{1-\delta}{\delta}\leq f\leq\log\frac{1-\delta}{\delta}\] (A.48) by considering the following four cases. **Case I.**\(1/2\leq a\leq 1-\delta\) and \(2+\log\frac{1}{1-a}\leq f\leq\log\frac{1-\delta}{\delta}\). In this case we have \[\begin{split}&\log\frac{1}{\delta}=\phi\left(\log\frac{\delta}{1- \delta}\right)\geq\phi(-f)=\log(1+\mathrm{e}^{f})\geq f\geq 2+\log\frac{1}{1-a} \\ &>\phi\left(-\log\frac{a}{1-a}\right)=\log\frac{1}{1-a}\geq\log \frac{1}{a}>0,\end{split}\] (A.49) which, together with (A.44), yields \[a\log\frac{1}{a}+(1-a)\log\frac{1}{1-a}\leq(1-a)\cdot\left(1+\log\frac{1}{1- a}\right)\leq(1-a)\cdot\frac{1+\log\frac{1}{1-a}}{2+\log\frac{1}{1-a}}\cdot \phi(-f).\] Consequently, \[\begin{split}& G(a,f)\geq(1-a)\cdot\phi(-f)-a\log\frac{1}{a}-(1-a )\log\frac{1}{1-a}\\ &\geq(1-a)\cdot\phi(-f)-(1-a)\cdot\frac{1+\log\frac{1}{1-a}}{2+ \log\frac{1}{1-a}}\cdot\phi(-f)\\ &=\frac{(1-a)\cdot\phi(-f)}{2+\log\frac{1}{1-a}}\geq\frac{(1-a) \cdot\phi(-f)}{4\log\frac{1}{\delta}}.\end{split}\] (A.50) On the other hand, it follows from \(f\geq 2+\log\frac{1}{1-a}>\log\frac{a}{1-a}\) that \[0\leq\phi\left(\log\frac{a}{1-a}\right)-\phi(f)<\phi\left(\log\frac{a}{1-a} \right),\] which, together with (A.44) and (A.49), yields \[\begin{split}& a\cdot\left|\phi(f)-\phi\left(\log\frac{a}{1-a} \right)\right|^{2}\leq a\cdot\left|\phi\left(\log\frac{a}{1-a}\right)\right|^ {2}\\ &=a\cdot\left|\log\frac{1}{a}\right|^{2}\leq(1-a)\cdot\log\frac{1 }{a}\leq(1-a)\cdot\phi(-f).\end{split}\] (A.51) Besides, it follows from (A.50) that \(0\leq\phi(-f)-\phi\left(-\log\frac{a}{1-a}\right)\leq\phi(-f)\). 
Consequently, \[(1-a)\cdot\left|\phi(-f)-\phi\left(-\log\frac{a}{1-a}\right)\right|^{2}\leq(1- a)\cdot\phi(-f)^{2}\leq(1-a)\cdot\phi(-f)\cdot\log\frac{1}{\delta}.\] (A.52) Combining (A.50), (A.51) and (A.52), we deduce that \[H(a,f)\leq(1-a)\cdot\phi(-f)\cdot\left|1+\log\frac{1}{\delta}\right|\leq(1-a) \cdot\phi(-f)\cdot\frac{\Gamma}{4\log\frac{1}{\delta}}\leq\Gamma\cdot G(a,f),\] which proves the desired inequality. **Case II.**\(1/2\leq a\leq 1-\delta\) and \(-\log\left(\exp\left(\frac{3-3a}{a}\log\frac{1}{1-a}\right)-1\right)\leq f<2+ \log\frac{1}{1-a}\). In this case, we have \(-2-\log\frac{1}{1-a}\leq f\leq 2+\log\frac{1}{1-a}\), where we have used (A.45). Therefore, it follows from (A.47) that \(G(a,f)\geq\frac{1-a}{18}\left|f-\log\frac{a}{1-a}\right|^{2}\). On the other hand, it follow from (A.45) and Taylor's Theorem that there exists \[-\log 7\leq-\log\left(\exp\left(\frac{3-3a}{a}\log\frac{1}{1-a} \right)-1\right)\] \[\leq f\wedge\log\frac{a}{1-a}\leq\xi\leq f\vee\log\frac{a}{1-a} \leq 2+\log\frac{1}{1-a},\] such that \[a\cdot\left|\phi(f)-\phi\left(\log\frac{a}{1-a}\right)\right|^{2}\] (A.53) \[=a\cdot\left|\phi^{\prime}(\xi)\right|^{2}\cdot\left|f-\log\frac{ a}{1-a}\right|^{2}\leq a\cdot\mathrm{e}^{-2\xi}\cdot\left|f-\log\frac{a}{1-a} \right|^{2}\] \[\leq a\cdot\exp(\log 7)\cdot\exp\left(\log\left(\exp\left(\frac{3-3 a}{a}\log\frac{1}{1-a}\right)-1\right)\right)\cdot\left|f-\log\frac{a}{1-a} \right|^{2}\] \[=7a\cdot\int_{0}^{\frac{3-3a}{a}\log\frac{1}{1-a}}\mathrm{e}^{t} \mathrm{d}t\cdot\left|f-\log\frac{a}{1-a}\right|^{2}\] \[\leq 7a\cdot\left|\frac{3-3a}{a}\log\frac{1}{1-a}\right|\cdot \exp\left(\frac{3-3a}{a}\log\frac{1}{1-a}\right)\cdot\left|f-\log\frac{a}{1-a }\right|^{2}\] \[\leq 7a\cdot\left|\frac{3-3a}{a}\log\frac{1}{1-a}\right|\cdot (1+\exp\left(\log 7\right))\cdot\left|f-\log\frac{a}{1-a}\right|^{2}\] \[\leq 168\cdot\left|(1-a)\cdot\log\frac{1}{1-a}\right|\cdot \left|f-\log\frac{a}{1-a}\right|^{2}\] \[\leq 168\cdot\left|(1-a)\cdot\log\frac{1}{\delta}\right|\cdot \left|f-\log\frac{a}{1-a}\right|^{2}.\] Besides, we have \[(1-a)\cdot\left|\phi(-f)-\phi\left(-\log\frac{a}{1-a}\right) \right|^{2}\] (A.54) \[\leq\left|1-a\right|\cdot\left|\phi^{\prime}\right|_{\mathbb{R}} \cdot\left|f-\log\frac{a}{1-a}\right|^{2}\leq\left|1-a\right|\cdot\left|f- \log\frac{a}{1-a}\right|^{2}.\] Combining (A.53), (A.54) and the fact that \(G(a,f)\geq\frac{1-a}{18}\left|f-\log\frac{a}{1-a}\right|^{2}\), we deduce that \[H(a,f)\leq 168\cdot\left|(1-a)\cdot\log\frac{1}{\delta}\right| \cdot\left|f-\log\frac{a}{1-a}\right|^{2}+\left|1-a\right|\cdot\left|f-\log \frac{a}{1-a}\right|^{2}\] \[\leq 170\cdot\left|(1-a)\cdot\log\frac{1}{\delta}\right|\cdot \left|f-\log\frac{a}{1-a}\right|^{2}\leq\Gamma\cdot\frac{1-a}{18}\cdot\left|f- \log\frac{a}{1-a}\right|^{2}\leq\Gamma\cdot G(a,f),\] which proves the desired inequality. **Case III.**\(1/2\leq a\leq 1-\delta\) and \(-\log\frac{a}{1-a}\leq f<-\log\left(\exp\left(\frac{3-3a}{a}\log\frac{1}{1-a} \right)-1\right)\). In this case, we still have (A.54). Besides, it follows from (A.46) that \(G(a,f)\geq\frac{a\phi(f)}{3}\). Moreover, by (A.45) we obtain \(-2-\log\frac{1}{1-a}<f<2+\log\frac{1}{1-a}\), which, together with (A.47), yields \(G(a,f)\geq\frac{1-a}{18}\left|f-\log\frac{a}{1-a}\right|^{2}\). 
In addition, since \(f<-\log\left(\exp\left(\frac{3-3a}{a}\log\frac{1}{1-a}\right)-1\right)\leq\log \frac{a}{1-a}\), we have that \(0<\phi(f)-\phi\left(\log\frac{a}{1-a}\right)<\phi(f)\), which means that \[\begin{split}& a\cdot\left|\phi(f)-\phi\left(\log\frac{a}{1-a} \right)\right|^{2}\leq a\cdot\left|\phi(f)\right|^{2}\\ &\leq a\phi(f)\phi\left(-\log\frac{a}{1-a}\right)=a\phi(f)\log \frac{1}{1-a}\leq a\phi(f)\log\frac{1}{\delta}.\end{split}\] (A.55) Combining all these inequalities, we obtain \[\begin{split}& H(a,f)\leq a\phi(f)\cdot\left|\log\frac{1}{ \delta}\right|+\left|1-a\right|\cdot\left|f-\log\frac{a}{1-a}\right|^{2}\\ &\leq\frac{\Gamma a\phi(f)}{6}+\Gamma\cdot\frac{1-a}{36}\cdot \left|f-\log\frac{a}{1-a}\right|^{2}\\ &\leq\frac{\Gamma\cdot G(a,f)}{2}+\frac{\Gamma\cdot G(a,f)}{2}= \Gamma\cdot G(a,f),\end{split}\] which proves the desired inequality. **Case IV.**\(-\log\frac{1-\delta}{\delta}\leq f<\min\left\{-\log\frac{a}{1-a},-\log \left(\exp\left(\frac{3-3a}{a}\log\frac{1}{1-a}\right)-1\right)\right\}\) and \(1/2\leq a\leq 1-\delta\). In this case, we still have \(G(a,f)\geq\frac{a\phi(f)}{3}\) according to (A.46). Besides, it follows from \[f<\min\left\{-\log\frac{a}{1-a},-\log\left(\exp\left(\frac{3-3a}{a}\log\frac{ 1}{1-a}\right)-1\right)\right\}\leq-\log\frac{a}{1-a}\leq\log\frac{a}{1-a}\] that \[\begin{split} 0&\leq\min\left\{\phi\left(-\log\frac{a}{1-a} \right)-\phi(-f),\phi(f)-\phi\left(\log\frac{a}{1-a}\right)\right\}\\ &\leq\max\left\{\phi\left(-\log\frac{a}{1-a}\right)-\phi(-f),\phi (f)-\phi\left(\log\frac{a}{1-a}\right)\right\}\\ &\leq\max\left\{\phi\left(-\log\frac{a}{1-a}\right),\phi(f) \right\}=\phi(f).\end{split}\] (A.56) Combining (A.56) and the fact that \(G(a,f)\geq\frac{a\phi(f)}{3}\), we deduce that \[\begin{split}& H(a,f)\leq a\cdot\left|\phi(f)\right|^{2}+(1-a) \cdot\left|\phi(f)\right|^{2}\leq\phi(f)\phi\left(-\log\frac{1-\delta}{\delta }\right)\\ &=\phi(f)\log\frac{1}{\delta}\leq\frac{\Gamma a\phi(f)}{3}\leq \Gamma\cdot G(a,f),\end{split}\] which proves the desired inequality. Combining all these four cases, we conclude that (A.48) has been proved. Furthermore, (A.48) yields that \[H(a,f)=H(1-a,-f)\leq\Gamma\cdot G(1-a,-f)=\Gamma\cdot G(a,f)\] provided \(\delta\leq a\leq 1/2\) and \(-\log\frac{1-\delta}{\delta}\leq f\leq\log\frac{1-\delta}{\delta}\), which, together with (A.48), proves this lemma. **Lemma A.10**.: _Let \(\phi(t)=\log\left(1+\mathrm{e}^{-t}\right)\) be the logistic loss, \(\delta_{0}\in(0,1/3),\)\(d\in\mathbb{N}\) and \(P\) be a Borel probability measure on \([0,1]^{d}\times\{-1,1\}\) of which the conditional probability function \([0,1]^{d}\ni z\mapsto P(\{1\}\left|z\right)\in[0,1]\) is denoted by \(\eta\). 
Then there exists a measurable function_ \[\psi:[0,1]^{d}\times\{-1,1\}\rightarrow\left[0,\log\frac{10\log(1/\delta_{0})}{\delta_{0}}\right]\] _such that_ \[\int_{[0,1]^{d}\times\{-1,1\}}\psi\left(x,y\right)\mathrm{d}P(x,y)=\inf\Big{\{}\,\mathcal{R}_{P}^{\phi}(g)\Big{|}\,g:[0,1]^{d}\to\mathbb{R}\text{ is measurable}\Big{\}}\] (A.57) _and_ \[\int_{[0,1]^{d}\times\{-1,1\}}\left(\phi\left(yf(x)\right)-\psi(x,y)\right)^{2}\mathrm{d}P(x,y)\] (A.58) \[\leq 125000\left|\log\delta_{0}\right|^{2}\cdot\int_{[0,1]^{d}\times\{-1,1\}}\left(\phi\left(yf(x)\right)-\psi(x,y)\right)\mathrm{d}P(x,y)\] _for any measurable \(f:[0,1]^{d}\to\left[\log\frac{\delta_{0}}{1-\delta_{0}},\log\frac{1-\delta_{0}}{\delta_{0}}\right]\)._

Proof.: Let \[H:[0,1]\to[0,\infty),\quad t\mapsto\begin{cases}t\log\left(\frac{1}{t}\right)+(1-t)\log\left(\frac{1}{1-t}\right),&\text{ if }t\in(0,1),\\ 0,&\text{ if }t\in\{0,1\}.\end{cases}\] Then it is easy to show that \(H\left(\frac{\delta_{0}}{10\log(1/\delta_{0})}\right)\leq\frac{4}{5}\log\left(\frac{1}{1-\delta_{0}}\right)\leq H\left(\frac{\delta_{0}}{\log(1/\delta_{0})}\right)\). Thus there exists \(\delta_{1}\in\left(0,\frac{1}{3}\right)\) such that \[H(\delta_{1})\leq\frac{4}{5}\log\left(\frac{1}{1-\delta_{0}}\right)\] and \[0<\frac{\delta_{0}}{10\log\left(1/\delta_{0}\right)}\leq\delta_{1}\leq\frac{\delta_{0}}{\log(1/\delta_{0})}\leq\delta_{0}<1/3.\] Take \[\psi:[0,1]^{d}\times\{-1,1\}\to\mathbb{R},\quad(x,y)\mapsto\begin{cases}\phi\left(y\log\frac{\eta(x)}{1-\eta(x)}\right),&\text{if }\eta(x)\in[\delta_{1},1-\delta_{1}],\\ H(\eta(x)),&\text{if }\eta(x)\notin[\delta_{1},1-\delta_{1}],\end{cases}\] which can be further expressed as \[\psi:[0,1]^{d}\times\{-1,1\}\to\mathbb{R},\] \[(x,y)\mapsto\begin{cases}\phi\left(y\log\frac{\eta(x)}{1-\eta(x)}\right),&\text{if }\eta(x)\in[\delta_{1},1-\delta_{1}],\\ 0,&\text{if }\eta(x)\in\{0,1\},\\ \eta(x)\log\frac{1}{\eta(x)}+(1-\eta(x))\log\frac{1}{1-\eta(x)},&\text{if }\eta(x)\in(0,\delta_{1})\cup(1-\delta_{1},1).\end{cases}\] Obviously, \(\psi\) is a measurable function such that \[0\leq\psi(x,y)\leq\log\frac{1}{\delta_{1}}\leq\log\frac{10\log(1/\delta_{0})}{\delta_{0}},\quad\forall\;(x,y)\in[0,1]^{d}\times\{-1,1\},\] and it follows immediately from Lemma A.3 that (A.57) holds. We next show (A.58).
For any measurable function \(f:[0,1]^{d}\to\left[\log\frac{\delta_{0}}{1-\delta_{0}},\log\frac{1-\delta_{0 }}{\delta_{0}}\right]\) and any \(x\in[0,1]^{d}\), if \(\eta(x)\notin[\delta_{1},1-\delta_{1}]\), then we have \[0 \leq\psi(x,y)=H(\eta(x))\leq H(\delta_{1})\leq\frac{4}{5}\log \frac{1}{1-\delta_{0}}\] \[=\frac{4}{5}\phi\left(\log\frac{1-\delta_{0}}{\delta_{0}}\right) \leq\frac{4}{5}\phi(yf(x))\leq\phi(yf(x)),\quad\forall\;y\in\{-1,1\}.\] Hence \(0\leq\frac{1}{5}\phi(yf(x))\leq\phi(yf(x))-\psi(x,y)\leq\phi(yf(x)),\ \forall\ y\in \{-1,1\}\), which means that \[(\phi(yf(x))-\psi(x,y))^{2}\leq\phi(yf(x))^{2}\leq\phi(yf(x))\phi \left(-\log\frac{1-\delta_{0}}{\delta_{0}}\right)\] \[=\frac{1}{5}\phi(yf(x))\cdot 5\log\frac{1}{\delta_{0}}\leq(\phi(yf(x ))-\psi(x,y))\cdot 5000\left|\log\delta_{1}\right|^{2},\ \ \ \ \forall\ y\in\{-1,1\}.\] Integrating both sides with respect to \(y\), we obtain \[\begin{split}&\int_{\{-1,1\}}\left(\phi(yf(x))-\psi(x,y)\right)^{2} \mathrm{d}P(y|x)\\ &\leq 5000\left|\log\delta_{1}\right|^{2}\cdot\int_{\{-1,1\}}\left( \phi(yf(x))-\psi(x,y)\right)\mathrm{d}P(y|x).\end{split}\] (A.59) If \(\eta(x)\in[\delta_{1},1-\delta_{1}]\), then it follows from Lemma A.9 that \[\begin{split}&\int_{\{-1,1\}}\left(\phi(yf(x))-\psi(x,y)\right)^{ 2}\mathrm{d}P(y|x)\\ &=\eta(x)\left|\phi(f(x))-\phi\left(\log\frac{\eta(x)}{1-\eta(x) }\right)\right|^{2}+(1-\eta(x))\left|\phi(-f(x))-\phi\left(-\log\frac{\eta(x) }{1-\eta(x)}\right)\right|^{2}\\ &\leq 5000\left|\log\delta_{1}\right|^{2}\cdot\left(\eta(x)\phi(f(x)) +(1-\eta(x))\phi(-f(x))\right.\\ &\hskip 113.811024pt-\eta(x)\phi\Big{(}\log\frac{\eta(x)}{1-\eta( x)}\Big{)}-(1-\eta(x))\phi\Big{(}-\log\frac{\eta(x)}{1-\eta(x)}\Big{)}\Bigg{)}\\ &=5000\left|\log\delta_{1}\right|^{2}\int_{\{-1,1\}}\left(\phi(yf (x))-\psi(x,y)\right)\mathrm{d}P(y|x),\end{split}\] which means that (A.59) still holds. Therefore, (A.59) holds for all \(x\in[0,1]^{d}\). We then integrate both sides of (A.59) with respect to \(x\) and obtain \[\begin{split}&\int_{[0,1]^{d}\times\{-1,1\}}\left(\phi(yf(x))- \psi(x,y)\right)^{2}\mathrm{d}P(x,y)\\ &\leq 5000\left|\log\delta_{1}\right|^{2}\int_{[0,1]^{d}\times\{-1,1\} }\left(\phi(yf(x))-\psi(x,y)\right)\mathrm{d}P(x,y)\\ &\leq 125000\left|\log\delta_{0}\right|^{2}\int_{[0,1]^{d}\times\{-1,1 \}}\left(\phi(yf(x))-\psi(x,y)\right)\mathrm{d}P(x,y),\end{split}\] which yields (A.58). In conclusion, the function \(\psi\) defined above has all the desired properties. Thus we complete the proof. The following Lemma A.11 is similar to Lemma 3 of [37]. **Lemma A.11**.: _Let \((d,d_{\star},d_{\star},K)\in\mathbb{N}^{4}\), \(\beta\in(0,\infty)\), \(r\in[1,\infty)\), and \(q\in\mathbb{N}\cup\{0\}\). 
Suppose \(h_{0},h_{1},\ldots,h_{q},\tilde{h}_{0},\tilde{h}_{1},\ldots,\tilde{h}_{q}\) are functions satisfying that_ * \(\mathbf{dom}(h_{i})=\mathbf{dom}(\tilde{h}_{i})=[0,1]^{K}\) _for_ \(0<i\leq q\) _and_ \(\mathbf{dom}(h_{0})=\)__ \(\mathbf{dom}(\tilde{h}_{0})=[0,1]^{d}\)_;_ * \(\mathbf{ran}(h_{i})\cup\mathbf{ran}(\tilde{h}_{i})\subset[0,1]^{K}\) _for_ \(0\leq i<q\) _and_ \(\mathbf{ran}(h_{q})\cup\mathbf{ran}(\tilde{h}_{q})\subset\mathbb{R}\)_;_ * \(h_{q}\in\mathcal{G}_{\infty}^{\mathbf{H}}(d_{\star},\beta,r)\cup\mathcal{G}_ {\infty}^{\mathbf{M}}(d_{\star})\)_;_ * _For_ \(0\leq i<q\) _and_ \(1\leq j\leq K\)_, the_ \(j\)_-th coordinate function of_ \(h_{i}\) _given by_ \(\mathbf{dom}(h_{i})\ni x\mapsto(h_{i}(x))_{j}\in\mathbb{R}\) _belongs to_ \(\mathcal{G}_{\infty}^{\mathbf{H}}(d_{\star},\beta,r)\cup\mathcal{G}_{\infty}^{ \mathbf{M}}(d_{\star})\)_._ _Then there holds_ \[\begin{split}&\left\|h_{q}\circ h_{q-1}\circ\cdots\circ h_{1}\circ h _{0}-\tilde{h}_{q}\circ\tilde{h}_{q-1}\circ\cdots\circ\tilde{h}_{1}\circ\tilde {h}_{0}\right\|_{[0,1]^{d}}\\ &\leq\left|r\cdot d_{*}^{1\wedge\beta}\right|^{\sum_{k=0}^{q-1}(1 \wedge\beta)^{k}}\cdot\sum_{k=0}^{q}\left\|\tilde{h}_{k}-h_{k}\right\|_{ \mathbf{dom}(h_{k})}^{(1\wedge\beta)^{q-k}}.\end{split}\] (A.60) Proof.: We will prove this lemma by induction on \(q\). The case \(q=0\) is trivial. Now assume that \(q>0\) and that the desired result holds for \(q-1\). Consider the case \(q\). For each \(0\leq i<q\) and \(1\leq j\leq K\), denote \[\tilde{h}_{i,j}:\mathbf{dom}(\tilde{h}_{i})\rightarrow\mathbb{R},\;\;x\mapsto \big{(}\tilde{h}_{i}(x)\big{)}_{j},\] and \[h_{i,j}:\mathbf{dom}(h_{i})\rightarrow\mathbb{R},\;\;x\mapsto\big{(}h_{i}(x) \big{)}_{j}.\] Obviously, \(\mathbf{ran}(\tilde{h}_{i,j})\cup\mathbf{ran}(h_{i,j})\subset[0,1]\). By induction hypothesis (that is, the case \(q-1\) of this lemma), we have that \[\begin{split}&\left\|h_{q-1,j}\circ h_{q-2}\circ h_{q-3}\circ \cdots\circ h_{0}-\tilde{h}_{q-1,j}\circ\tilde{h}_{q-2}\circ\tilde{h}_{q-3} \circ\cdots\circ\tilde{h}_{0}\right\|_{[0,1]^{d}}\\ &\leq\left|r\cdot d_{*}^{1\wedge\beta}\right|^{\sum_{k=0}^{q-2}(1 \wedge\beta)^{k}}\cdot\left(\left\|\tilde{h}_{q-1,j}-h_{q-1,j}\right\|_{ \mathbf{dom}(h_{q-1,j})}+\sum_{k=0}^{q-2}\left\|\tilde{h}_{k}-h_{k}\right\|_{ \mathbf{dom}(h_{k})}^{(1\wedge\beta)^{q-1-k}}\right)\\ &\leq\left|r\cdot d_{*}^{1\wedge\beta}\right|^{\sum_{k=0}^{q-2}(1 \wedge\beta)^{k}}\cdot\sum_{k=0}^{q-1}\left\|\tilde{h}_{k}-h_{k}\right\|_{ \mathbf{dom}(h_{k})}^{(1\wedge\beta)^{q-1-k}},\;\forall\;j\in\mathbb{Z}\cap(0, K].\end{split}\] Therefore, \[\begin{split}&\left\|h_{q-1}\circ h_{q-2}\circ h_{q-3}\circ \cdots\circ h_{0}-\tilde{h}_{q-1}\circ\tilde{h}_{q-2}\circ\tilde{h}_{q-3} \circ\cdots\circ\tilde{h}_{0}\right\|_{[0,1]^{d}}\\ &=\sup_{j\in\mathbb{Z}\cap(0,K]}\left\|h_{q-1,j}\circ h_{q-2} \circ h_{q-3}\circ\cdots\circ h_{0}-\tilde{h}_{q-1,j}\circ\tilde{h}_{q-2} \circ\tilde{h}_{q-3}\circ\cdots\circ\tilde{h}_{0}\right\|_{[0,1]^{d}}\\ &\leq\left|r\cdot d_{*}^{1\wedge\beta}\right|^{\sum_{k=0}^{q-2}(1 \wedge\beta)^{k}}\cdot\sum_{k=0}^{q-1}\left\|\tilde{h}_{k}-h_{k}\right\|_{ \mathbf{dom}(h_{k})}^{(1\wedge\beta)^{q-1-k}}.\end{split}\] (A.61) We next show that \[\left|h_{q}(x)-h_{q}(x^{\prime})\right|\leq r\cdot d_{*}^{1\wedge\beta}\cdot \left\|x-x^{\prime}\right\|_{\infty}^{1\wedge\beta},\;\forall\;x,x^{\prime} \in[0,1]^{K}\] (A.62) by considering three cases. **Case I:**\(h_{q}\in\mathcal{G}_{\infty}^{\mathbf{H}}(d_{*},\beta,r)\) and \(\beta>1\). 
In this case, we must have that \(h_{q}\in\mathcal{G}_{K}^{\mathbf{H}}(d_{*},\beta,r)\) since \(\mathbf{dom}(h_{q})=[0,1]^{K}\). Therefore, there exist \(I\subset\{1,2,\ldots,K\}\) and \(g\in\mathcal{B}_{r}^{\beta}\left([0,1]^{d_{*}}\right)\) such that \(\#(I)=d_{*}\) and \(h_{q}(x)=g((x)_{I})\) for all \(x\in[0,1]^{K}\). Denote \(\lambda:=\beta+1-\lceil\beta\rceil\). We then use Taylor's formula to deduce that \[\begin{split}&\left|h_{q}(x)-h_{q}(x^{\prime})\right|=\left|g((x)_{I})-g((x^{\prime})_{I})\right|\stackrel{\exists\,\xi\in[0,1]^{d_{*}}}{=}\left|\nabla g(\xi)\cdot((x)_{I}-(x^{\prime})_{I})\right|\\ &\leq\left\|\nabla g(\xi)\right\|_{\infty}\cdot\left\|(x)_{I}-(x^{\prime})_{I}\right\|_{1}\leq\left\|\nabla g\right\|_{[0,1]^{d_{*}}}\cdot d_{*}\cdot\left\|(x)_{I}-(x^{\prime})_{I}\right\|_{\infty}\\ &\leq\left\|g\right\|_{\mathcal{C}^{\beta-\lambda,\lambda}([0,1]^{d_{*}})}\cdot d_{*}\cdot\left\|(x)_{I}-(x^{\prime})_{I}\right\|_{\infty}\leq r\cdot d_{*}\cdot\left\|(x)_{I}-(x^{\prime})_{I}\right\|_{\infty}\\ &\leq r\cdot d_{*}^{1\wedge\beta}\cdot\left\|x-x^{\prime}\right\|_{\infty}^{1\wedge\beta},\;\forall\;x,x^{\prime}\in[0,1]^{K},\end{split}\] which yields (A.62).

**Case II:**\(h_{q}\in\mathcal{G}_{\infty}^{\mathbf{H}}(d_{*},\beta,r)\) and \(\beta\leq 1\). In this case, we still have that \(h_{q}\in\mathcal{G}_{K}^{\mathbf{H}}(d_{*},\beta,r)\). Therefore, there exist \(I\subset\{1,2,\ldots,K\}\) and \(g\in\mathcal{B}_{r}^{\beta}\left([0,1]^{d_{*}}\right)\) such that \(\#(I)=d_{*}\) and \(h_{q}(x)=g((x)_{I})\) for all \(x\in[0,1]^{K}\). Consequently, \[\begin{split}&\left|h_{q}(x)-h_{q}(x^{\prime})\right|=\left|g((x)_{I})-g((x^{\prime})_{I})\right|\leq\left\|(x)_{I}-(x^{\prime})_{I}\right\|_{2}^{\beta}\cdot\sup_{[0,1]^{d_{*}}\ni z\neq z^{\prime}\in[0,1]^{d_{*}}}\frac{\left|g(z)-g(z^{\prime})\right|}{\left\|z-z^{\prime}\right\|_{2}^{\beta}}\\ &\leq r\cdot\left\|(x)_{I}-(x^{\prime})_{I}\right\|_{2}^{\beta}\leq r\cdot\left|d_{*}^{1/2}\cdot\left\|(x)_{I}-(x^{\prime})_{I}\right\|_{\infty}\right|^{\beta}\leq r\cdot d_{*}^{1\wedge\beta}\cdot\left\|x-x^{\prime}\right\|_{\infty}^{1\wedge\beta},\;\forall\;x,x^{\prime}\in[0,1]^{K},\end{split}\] which yields (A.62) again.

**Case III:**\(h_{q}\in\mathcal{G}_{\infty}^{\mathbf{M}}(d_{*})\). In this case, we must have that \(h_{q}\in\mathcal{G}_{K}^{\mathbf{M}}(d_{*})\), since \(\mathbf{dom}(h_{q})=[0,1]^{K}\). Therefore, there exists \(I\subset\{1,2,\ldots,K\}\) with \(1\leq\#(I)\leq d_{*}\) such that \(h_{q}(x)=\max\left\{(x)_{i}\,\middle|\,i\in I\right\}\) for all \(x\in[0,1]^{K}\). Consequently, \[\left|h_{q}(x)-h_{q}(x^{\prime})\right|\leq\max_{i\in I}\left|(x)_{i}-(x^{\prime})_{i}\right|\leq\left\|x-x^{\prime}\right\|_{\infty}\leq\left\|x-x^{\prime}\right\|_{\infty}^{1\wedge\beta}\leq r\cdot d_{*}^{1\wedge\beta}\cdot\left\|x-x^{\prime}\right\|_{\infty}^{1\wedge\beta},\;\forall\;x,x^{\prime}\in[0,1]^{K},\] where we have used \(r\geq 1\), \(1\wedge\beta\leq 1\) and \(\left\|x-x^{\prime}\right\|_{\infty}\leq 1\). This yields (A.62) once more.

Combining the above three cases, we obtain (A.62). Now, for any \(x\in[0,1]^{d}\), it follows from (A.61), (A.62) and the elementary inequality \((s+t)^{1\wedge\beta}\leq s^{1\wedge\beta}+t^{1\wedge\beta}\) (\(s,t\geq 0\)) that \[\begin{split}&\left|h_{q}\circ h_{q-1}\circ\cdots\circ h_{0}(x)-\tilde{h}_{q}\circ\tilde{h}_{q-1}\circ\cdots\circ\tilde{h}_{0}(x)\right|\\ &\leq\left|h_{q}\left(h_{q-1}\circ\cdots\circ h_{0}(x)\right)-h_{q}\left(\tilde{h}_{q-1}\circ\cdots\circ\tilde{h}_{0}(x)\right)\right|+\left|h_{q}\left(\tilde{h}_{q-1}\circ\cdots\circ\tilde{h}_{0}(x)\right)-\tilde{h}_{q}\left(\tilde{h}_{q-1}\circ\cdots\circ\tilde{h}_{0}(x)\right)\right|\\ &\leq r\cdot d_{*}^{1\wedge\beta}\cdot\left\|h_{q-1}\circ\cdots\circ h_{0}-\tilde{h}_{q-1}\circ\cdots\circ\tilde{h}_{0}\right\|_{[0,1]^{d}}^{1\wedge\beta}+\left\|h_{q}-\tilde{h}_{q}\right\|_{\mathbf{dom}(h_{q})}\\ &\leq r\cdot d_{*}^{1\wedge\beta}\cdot\left|r\cdot d_{*}^{1\wedge\beta}\right|^{\sum_{k=0}^{q-2}(1\wedge\beta)^{k+1}}\cdot\sum_{k=0}^{q-1}\left\|\tilde{h}_{k}-h_{k}\right\|_{\mathbf{dom}(h_{k})}^{(1\wedge\beta)^{q-k}}+\left\|h_{q}-\tilde{h}_{q}\right\|_{\mathbf{dom}(h_{q})}\\ &=\left|r\cdot d_{*}^{1\wedge\beta}\right|^{\sum_{k=0}^{q-1}(1\wedge\beta)^{k}}\cdot\sum_{k=0}^{q-1}\left\|\tilde{h}_{k}-h_{k}\right\|_{\mathbf{dom}(h_{k})}^{(1\wedge\beta)^{q-k}}+\left\|h_{q}-\tilde{h}_{q}\right\|_{\mathbf{dom}(h_{q})}\\ &\leq\left|r\cdot d_{*}^{1\wedge\beta}\right|^{\sum_{k=0}^{q-1}(1\wedge\beta)^{k}}\cdot\sum_{k=0}^{q}\left\|\tilde{h}_{k}-h_{k}\right\|_{\mathbf{dom}(h_{k})}^{(1\wedge\beta)^{q-k}}.\end{split}\] Therefore, \[\left\|h_{q}\circ h_{q-1}\circ\cdots\circ h_{1}\circ h_{0}-\tilde{h}_{q}\circ\tilde{h}_{q-1}\circ\cdots\circ\tilde{h}_{1}\circ\tilde{h}_{0}\right\|_{[0,1]^{d}}\leq\left|r\cdot d_{*}^{1\wedge\beta}\right|^{\sum_{k=0}^{q-1}(1\wedge\beta)^{k}}\cdot\sum_{k=0}^{q}\left\|\tilde{h}_{k}-h_{k}\right\|_{\mathbf{dom}(h_{k})}^{(1\wedge\beta)^{q-k}},\] meaning that the desired result holds for \(q\). In conclusion, according to mathematical induction, we have that the desired result holds for all \(q\in\mathbb{N}\cup\{0\}\). This completes the proof.
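The inductive bound (A.60) can be exercised on a concrete compositional model. Below is a minimal numerical sketch (ours, not the paper's), assuming NumPy, with \(q=2\), \(d=K=d_{*}=2\), \(\beta=r=1\), max-type layers as in the class \(\mathcal{G}^{\mathbf{M}}\), and clipping used to keep the perturbed layers inside \([0,1]^{2}\).

```python
import numpy as np

q, d_star, beta, r = 2, 2, 1.0, 1.0
eps = [0.01, 0.02, 0.005]       # sup-norm perturbation of each layer (our choice)

def h(x):                       # a true layer [0,1]^2 -> [0,1]^2; max-type coordinates
    return np.stack([np.maximum(x[0], x[1]), x[0]])

def h_tilde(x, e):              # its perturbation, still mapping into [0,1]^2
    return np.clip(h(x) + e, 0.0, 1.0)

s = np.linspace(0.0, 1.0, 301)
x = np.stack([g.ravel() for g in np.meshgrid(s, s)])   # grid on [0,1]^2

u, v = x, x
for k in range(q + 1):
    u, v = h(u), h_tilde(v, eps[k])
lhs = np.abs(u[0] - v[0]).max()          # first coordinate plays the scalar h_q
kappa = min(1.0, beta)
rhs = (r * d_star ** kappa) ** sum(kappa ** j for j in range(q)) \
    * sum(e ** (kappa ** (q - k)) for k, e in enumerate(eps))
assert lhs <= rhs + 1e-12
print(f"sup-norm gap {lhs:.4f} <= bound {rhs:.4f}")
```

With \(\beta=1\) the layers are \(1\)-Lipschitz in the sup norm, so the accumulated error is at most \(\sum_{k}\varepsilon_{k}\), comfortably below the right-hand side of (A.60).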
**Lemma A.12**.: _Let \(k\) be a positive integer. Then there exists a neural network_ \[\tilde{f}\in\mathcal{F}_{k}^{\mathbf{FNN}}\left(1+2\cdot\left\lceil\frac{\log k}{\log 2}\right\rceil,2k,26\cdot 2^{\left\lceil\frac{\log k}{\log 2}\right\rceil}-20-2\cdot\left\lceil\frac{\log k}{\log 2}\right\rceil,1,1\right)\] _such that_ \[\tilde{f}(x)=\left\|x\right\|_{\infty},\;\forall\;x\in\mathbb{R}^{k}.\]

Proof.: We argue by induction. Firstly, consider the case \(k=1\). Define \[\tilde{f}_{1}:\mathbb{R}\rightarrow\mathbb{R},\quad x\mapsto\sigma(x)+\sigma(-x).\] Obviously, \[\tilde{f}_{1}\in\mathcal{F}_{1}^{\mathbf{FNN}}(1,2,6,1,1)\subset\mathcal{F}_{1}^{\mathbf{FNN}}\left(1+2\cdot\left\lceil\frac{\log 1}{\log 2}\right\rceil,2\cdot 1,26\cdot 2^{\left\lceil\frac{\log 1}{\log 2}\right\rceil}-20-2\cdot\left\lceil\frac{\log 1}{\log 2}\right\rceil,1,1\right)\] and \(\tilde{f}_{1}(x)=\sigma(x)+\sigma(-x)=\left|x\right|=\left\|x\right\|_{\infty}\) for all \(x\in\mathbb{R}=\mathbb{R}^{1}\). This proves the \(k=1\) case. Now assume that the desired result holds for \(k=1,2,3,\ldots,m-1\) (\(m\geq 2\)), and consider the case \(k=m\). Define \[\tilde{g}_{1}:\mathbb{R}^{m}\rightarrow\mathbb{R}^{\left\lfloor\frac{m}{2}\right\rfloor},\quad x\mapsto\left((x)_{1},(x)_{2},\cdots,(x)_{\left\lfloor\frac{m}{2}\right\rfloor-1},(x)_{\left\lfloor\frac{m}{2}\right\rfloor}\right),\] \[\tilde{g}_{2}:\mathbb{R}^{m}\rightarrow\mathbb{R}^{\left\lceil\frac{m}{2}\right\rceil},\quad x\mapsto\left((x)_{\left\lfloor\frac{m}{2}\right\rfloor+1},(x)_{\left\lfloor\frac{m}{2}\right\rfloor+2},\cdots,(x)_{m-1},(x)_{m}\right),\] and \[\begin{split}\tilde{f}_{m}:\mathbb{R}^{m}&\rightarrow\mathbb{R},\\ x&\mapsto\sigma\left(\frac{1}{2}\cdot\sigma\Big(\tilde{f}_{\left\lfloor\frac{m}{2}\right\rfloor}(\tilde{g}_{1}(x))\Big)-\frac{1}{2}\cdot\sigma\Big(\tilde{f}_{\left\lceil\frac{m}{2}\right\rceil}(\tilde{g}_{2}(x))\Big)\right)+\sigma\left(\frac{1}{2}\cdot\sigma\Big(\tilde{f}_{\left\lceil\frac{m}{2}\right\rceil}(\tilde{g}_{2}(x))\Big)-\frac{1}{2}\cdot\sigma\Big(\tilde{f}_{\left\lfloor\frac{m}{2}\right\rfloor}(\tilde{g}_{1}(x))\Big)\right)\\ &\qquad+\frac{1}{2}\cdot\sigma\Big(\tilde{f}_{\left\lfloor\frac{m}{2}\right\rfloor}(\tilde{g}_{1}(x))\Big)+\frac{1}{2}\cdot\sigma\Big(\tilde{f}_{\left\lceil\frac{m}{2}\right\rceil}(\tilde{g}_{2}(x))\Big).\end{split}\] (A.63) It follows from the induction hypothesis that \[\tilde{f}_{\left\lceil\frac{m}{2}\right\rceil}\circ\tilde{g}_{2}\in\mathcal{F}_{m}^{\mathbf{FNN}}\left(1+2\left\lceil\frac{\log\left\lceil\frac{m}{2}\right\rceil}{\log 2}\right\rceil,2\left\lceil\frac{m}{2}\right\rceil,26\cdot 2^{\left\lceil\frac{\log\left\lceil\frac{m}{2}\right\rceil}{\log 2}\right\rceil}-20-2\left\lceil\frac{\log\left\lceil\frac{m}{2}\right\rceil}{\log 2}\right\rceil,1,1\right)\] \[=\mathcal{F}_{m}^{\mathbf{FNN}}\left(-1+2\left\lceil\frac{\log m}{\log 2}\right\rceil,2\left\lceil\frac{m}{2}\right\rceil,13\cdot 2^{\left\lceil\frac{\log m}{\log 2}\right\rceil}-18-2\left\lceil\frac{\log m}{\log 2}\right\rceil,1,1\right)\] and \[\tilde{f}_{\left\lfloor\frac{m}{2}\right\rfloor}\circ\tilde{g}_{1}\in\mathcal{F}_{m}^{\mathbf{FNN}}\left(1+2\left\lceil\frac{\log\left\lfloor\frac{m}{2}\right\rfloor}{\log 2}\right\rceil,2\left\lfloor\frac{m}{2}\right\rfloor,26\cdot 2^{\left\lceil\frac{\log\left\lfloor\frac{m}{2}\right\rfloor}{\log 2}\right\rceil}-20-2\left\lceil\frac{\log\left\lfloor\frac{m}{2}\right\rfloor}{\log 2}\right\rceil,1,1\right)\] \[\subset\mathcal{F}_{m}^{\mathbf{FNN}}\left(-1+2\left\lceil\frac{\log m}{\log 2}\right\rceil,2\left\lfloor\frac{m}{2}\right\rfloor,13\cdot 2^{\left\lceil\frac{\log m}{\log 2}\right\rceil}-18-2\left\lceil\frac{\log m}{\log 2}\right\rceil,1,1\right),\] which, together with (A.63), yield \[\begin{split}\tilde{f}_{m}&\in\mathcal{F}_{m}^{\mathbf{FNN}}\left(2-1+2\left\lceil\frac{\log m}{\log 2}\right\rceil,2\left\lceil\frac{m}{2}\right\rceil+2\left\lfloor\frac{m}{2}\right\rfloor,2\cdot\left|13\cdot 2^{\left\lceil\frac{\log m}{\log 2}\right\rceil}-18-2\left\lceil\frac{\log m}{\log 2}\right\rceil\right|+2\left\lceil\frac{\log m}{\log 2}\right\rceil+16,1,\infty\right)\\ &=\mathcal{F}_{m}^{\mathbf{FNN}}\left(1+2\left\lceil\frac{\log m}{\log 2}\right\rceil,2m,26\cdot 2^{\left\lceil\frac{\log m}{\log 2}\right\rceil}-20-2\left\lceil\frac{\log m}{\log 2}\right\rceil,1,\infty\right)\end{split}\] (A.64) (cf. Figure A.6).

Figure A.6: The network \(\tilde{f}_{m}\).

Besides, it is easy to verify that \[\begin{split}&\tilde{f}_{m}(x)=\max\left\{\sigma\Big(\tilde{f}_{\left\lfloor\frac{m}{2}\right\rfloor}(\tilde{g}_{1}(x))\Big),\sigma\Big(\tilde{f}_{\left\lceil\frac{m}{2}\right\rceil}(\tilde{g}_{2}(x))\Big)\right\}\\ &=\max\left\{\sigma\Bigg(\left\|\Big((x)_{1},\ldots,(x)_{\left\lfloor\frac{m}{2}\right\rfloor}\Big)\right\|_{\infty}\Bigg),\sigma\Bigg(\left\|\Big((x)_{\left\lfloor\frac{m}{2}\right\rfloor+1},\ldots,(x)_{m}\Big)\right\|_{\infty}\Bigg)\right\}\\ &=\max\left\{\left\|\Big((x)_{1},\ldots,(x)_{\left\lfloor\frac{m}{2}\right\rfloor}\Big)\right\|_{\infty},\left\|\Big((x)_{\left\lfloor\frac{m}{2}\right\rfloor+1},\ldots,(x)_{m}\Big)\right\|_{\infty}\right\}\\ &=\max\left\{\max_{1\leq i\leq\lfloor\frac{m}{2}\rfloor}|(x)_{i}|\,,\max_{\lfloor\frac{m}{2}\rfloor+1\leq i\leq m}|(x)_{i}|\right\}=\max_{1\leq i\leq m}|(x)_{i}|=\|x\|_{\infty}\,,\;\forall\;x\in\mathbb{R}^{m}.\end{split}\] (A.65) Combining (A.64) and (A.65), we deduce that the desired result holds for \(k=m\). Therefore, according to mathematical induction, the desired result holds for every positive integer \(k\). This completes the proof.
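The recursion in the proof above is concrete enough to run. The sketch below (ours; it follows our reading of (A.63), with the symmetric \(\sigma\)-term written out) evaluates the construction and checks that it reproduces \(\|x\|_{\infty}\) exactly; each recursion level contributes two ReLU layers, matching the depth bound \(1+2\lceil\log k/\log 2\rceil\).

```python
import numpy as np

# max(u, v) = (u + v)/2 + sigma(u/2 - v/2) + sigma(v/2 - u/2) for real u, v;
# applied to u = sigma(f_left) >= 0 and v = sigma(f_right) >= 0 as in (A.63).
def relu(t):
    return max(t, 0.0)

def sup_norm_net(x):
    m = len(x)
    if m == 1:
        return relu(x[0]) + relu(-x[0])      # base case: |x| = sigma(x) + sigma(-x)
    u = relu(sup_norm_net(x[: m // 2]))      # left half, dimension floor(m/2)
    v = relu(sup_norm_net(x[m // 2:]))       # right half, dimension ceil(m/2)
    return relu(u / 2 - v / 2) + relu(v / 2 - u / 2) + u / 2 + v / 2

rng = np.random.default_rng(1)
for _ in range(1000):
    x = rng.uniform(-5.0, 5.0, size=rng.integers(1, 9)).tolist()
    assert abs(sup_norm_net(x) - max(map(abs, x))) < 1e-12
```

The inner \(\sigma\)'s are harmless because \(\tilde{f}_{\lfloor m/2\rfloor}\) and \(\tilde{f}_{\lceil m/2\rceil}\) already output the nonnegative value \(\|\cdot\|_{\infty}\).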
**Lemma A.13**.: _Let \((\varepsilon,d,d_{\star},d_{*},\beta,r)\in(0,1/2]\times\mathbb{N}\times\mathbb{N}\times\mathbb{N}\times(0,\infty)\times(0,\infty)\) and \(f\) be a function from \([0,1]^{d}\) to \(\mathbb{R}\). Suppose \(f\in\mathcal{G}_{\infty}^{\mathbf{H}}(d_{*},\beta,r\lor 1)\cup\mathcal{G}_{\infty}^{\mathbf{M}}(d_{\star})\)._
Then there exist constants \(E_{1},E_{2},E_{3}\in(0,\infty)\) only depending on \((d_{\star},\beta,r)\) and a neural network_ \[\tilde{f}\in\mathcal{F}_{d}^{\mathbf{FNN}}\left(3\log d_{\star}+E_{1}\log \frac{1}{\varepsilon},2d_{\star}+E_{2}\varepsilon^{-\frac{d_{\star}}{\beta}},52d_{\star}+E_{3}\varepsilon^{-\frac{d_{\star}}{\beta}}\log\frac{1}{ \varepsilon},1,\infty\right)\] _such that_ \[\sup_{x\in[0,1]^{d}}\Big{|}\tilde{f}(x)-f(x)\Big{|}<2\varepsilon.\] Proof.: According to Corollary A.2, there exist constants \(E_{1},E_{2},E_{3}\in(6,\infty)\) only depending on \((d_{\star},\beta,r)\), such that \[\begin{split}&\inf\left\{\sup_{x\in[0,1]^{d_{\star}}}|g(x)- \tilde{g}(x)|\,\left|\tilde{g}\in\mathcal{F}_{d_{\star}}^{\mathbf{FNN}}\left(E _{1}\log\frac{1}{t},E_{2}t^{-\frac{d_{\star}}{\beta}},E_{3}t^{-\frac{d_{\star }}{\beta}}\log\frac{1}{t},1,\infty\right)\right.\right\}\\ &\leq t,\;\forall\;g\in\mathcal{G}_{r\lor 1}^{\beta} \left([0,1]^{d_{\star}}\right),\;\forall\;t\in(0,1/2].\end{split}\] (A.66) We next consider two cases. **Case I:**\(f\in\mathcal{G}_{\infty}^{\mathbf{M}}(d_{\star})\). In this case, we must have \(f\in\mathcal{G}_{d}^{\mathbf{M}}(d_{\star})\), since \(\mathbf{dom}(f)=[0,1]^{d}\). Therefore, there exists \(I\subset\{1,2,\ldots,d\}\), such that \(1\leq\#(I)\leq d_{\star}\) and \[f(x)=\max\left\{(x)_{i}\big{|}i\in I\right\},\;\forall\;x\in[0,1]^{d}.\] According to Lemma A.12, there exists \[\begin{split}\tilde{g}&\in\mathcal{F}_{\#(I)}^{ \mathbf{FNN}}\left(1+2\cdot\left\lceil\frac{\log\#(I)}{\log 2}\right\rceil,2\cdot\#(I),26\cdot 2^{\left\lceil\frac{ \log\#(I)}{\log 2}\right\rceil}-20-2\cdot\left\lceil\frac{\log\#(I)}{\log 2} \right\rceil,1,1\right)\\ &\subset\mathcal{F}_{\#(I)}^{\mathbf{FNN}}\left(1+2\cdot\left\lceil \frac{\log d_{\star}}{\log 2}\right\rceil,2d_{\star},26\cdot 2^{\left\lceil\frac{ \log d_{\star}}{\log 2}\right\rceil},1,1\right)\\ &\subset\mathcal{F}_{\#(I)}^{\mathbf{FNN}}\left(3+3\log d_{\star},2 d_{\star},52d_{\star},1,1\right)\end{split}\] (A.67) such that \[\tilde{g}(x)=\left\|x\right\|_{\infty},\;\forall\;x\in\mathbb{R}^{\#(I)}.\] Define \(\tilde{f}:\mathbb{R}^{d}\to\mathbb{R},\;x\mapsto\tilde{g}((x)_{I})\). Then it follows from (A.67) that \[\begin{split}\tilde{f}&\in\mathcal{F}_{d}^{\mathbf{FNN }}\left(3+3\log d_{\star},2d_{\star},52d_{\star},1,1\right)\\ &\subset\mathcal{F}_{d}^{\mathbf{FNN}}\left(3\log d_{\star}+E_{1} \log\frac{1}{\varepsilon},2d_{\star}+E_{2}\varepsilon^{-\frac{d_{\star}}{ \beta}},52d_{\star}+E_{3}\varepsilon^{-\frac{d_{\star}}{\beta}}\log\frac{1}{ \varepsilon},1,\infty\right)\end{split}\] and \[\sup_{x\in[0,1]^{d}}\left|f(x)-\tilde{f}(x)\right|=\sup_{x\in[0,1]^{d} }\left|\max\big{\{}(x)_{i}\big{|}i\in I\big{\}}-\tilde{g}((x)_{I})\right|\] \[=\sup_{x\in[0,1]^{d}}\left|\max\big{\{}|(x)_{i}\big{|}\left|i\in I \right\}-\left\|(x)_{I}\right\|_{\infty}\right|=0<2\varepsilon,\] which yield the desired result. **Case II:**\(f\in\mathcal{G}_{\infty}^{\mathbf{H}}(d_{*},\beta,r\lor 1)\). In this case, we must have \(f\in\mathcal{G}_{d}^{\mathbf{H}}(d_{*},\beta,r\lor 1)\), since \(\mathbf{dom}(f)=[0,1]^{d}\). By definition, there exist \(I\subset\{1,2,\ldots,d\}\) and \(g\in\mathcal{B}_{r\lor 1}^{\beta}\left([0,1]^{d_{*}}\right)\) such that \(\#(I)=d_{*}\) and \(f(x)=g\left((x)_{I}\right)\) for all \(x\in[0,1]^{d}\). 
Then it follows from (A.66) that there exists \(\tilde{g}\in\mathcal{F}_{d_{*}}^{\mathbf{FNN}}\left(E_{1}\log\frac{1}{\varepsilon},E_{2}\varepsilon^{-\frac{d_{*}}{\beta}},E_{3}\varepsilon^{-\frac{d_{*}}{\beta}}\log\frac{1}{\varepsilon},1,\infty\right)\) such that \[\sup_{x\in[0,1]^{d_{*}}}|g(x)-\tilde{g}(x)|<2\varepsilon.\] Define \(\tilde{f}:\mathbb{R}^{d}\to\mathbb{R},\ x\mapsto\tilde{g}((x)_{I})\). Then we have that \[\begin{split}\tilde{f}&\in\mathcal{F}_{d}^{\mathbf{FNN}}\left(E_{1}\log\frac{1}{\varepsilon},E_{2}\varepsilon^{-\frac{d_{*}}{\beta}},E_{3}\varepsilon^{-\frac{d_{*}}{\beta}}\log\frac{1}{\varepsilon},1,\infty\right)\\ &\subset\mathcal{F}_{d}^{\mathbf{FNN}}\left(3\log d_{*}+E_{1}\log\frac{1}{\varepsilon},2d_{*}+E_{2}\varepsilon^{-\frac{d_{*}}{\beta}},52d_{*}+E_{3}\varepsilon^{-\frac{d_{*}}{\beta}}\log\frac{1}{\varepsilon},1,\infty\right)\end{split}\] and \[\sup_{x\in[0,1]^{d}}\left|f(x)-\tilde{f}(x)\right|=\sup_{x\in[0,1]^{d}}|g((x)_{I})-\tilde{g}((x)_{I})|=\sup_{x\in[0,1]^{d_{*}}}|g(x)-\tilde{g}(x)|<2\varepsilon.\] These yield the desired result again. In conclusion, the desired result always holds. Thus we complete the proof of this lemma.
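Case I of the proof is fully constructive and can be replayed in a few lines. The following sketch (ours; the helper `max_net` is hypothetical and only mimics the coordinate-selection-plus-max-reduction structure) shows why the approximation error in that case is exactly \(0<2\varepsilon\).

```python
import numpy as np

# For f(x) = max{(x)_i : i in I} on [0,1]^d, composing the linear coordinate
# selection x -> (x)_I with a pairwise ReLU max-reduction reproduces f exactly.
relu = lambda t: np.maximum(t, 0.0)

def max_net(x, I):
    z = np.asarray(x, dtype=float)[list(I)]      # selection layer (x)_I
    while z.size > 1:
        n = z.size
        a, b = z[: n // 2], z[(n + 1) // 2:]     # two halves of equal length
        mid = z[n // 2:(n + 1) // 2]             # leftover entry when n is odd
        pair_max = (a + b) / 2 + relu(a / 2 - b / 2) + relu(b / 2 - a / 2)
        z = np.concatenate([pair_max, mid])
    return float(z[0])

rng = np.random.default_rng(2)
x = rng.uniform(0.0, 1.0, size=10)
I = (1, 4, 7)
assert abs(max_net(x, I) - max(x[i] for i in I)) < 1e-12
```

On \([0,1]^{d}\) all coordinates are nonnegative, so the maximum over \(I\) coincides with \(\|(x)_{I}\|_{\infty}\), which is exactly what the network of Lemma A.12 computes.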
**Lemma A.14**.: _Let \(\beta\in(0,\infty)\), \(r\in(0,\infty)\), \(q\in\mathbb{N}\cup\{0\}\), and \((d,d_{*},d_{*},K)\in\mathbb{N}^{4}\) with \(d_{*}\leq\min\big{\{}d,K+\mathbb{1}_{\{0\}}(q)\cdot(d-K)\big{\}}\). Suppose \(f\in\mathcal{G}_{d}^{\mathbf{CHOM}}(q,K,d_{*},d_{*},\beta,r)\) and \(\varepsilon\in(0,1/2]\). Then there exists \(E_{7}\in(0,\infty)\) only depending on \((d_{*},\beta,r,q)\) and_ \[\begin{split}\tilde{f}\in\mathcal{F}_{d}^{\mathbf{FNN}}&\left((q+1)\cdot\left|3\log d_{\star}+E_{7}\log\frac{1}{\varepsilon}\right|,2Kd_{\star}+KE_{7}\varepsilon^{-\frac{d_{*}}{\beta\cdot(1\wedge\beta)^{q}}},\right.\\ &\left.\;(Kq+1)\cdot\left|63d_{\star}+E_{7}\varepsilon^{-\frac{d_{*}}{\beta\cdot(1\wedge\beta)^{q}}}\log\frac{1}{\varepsilon}\right|,1,\infty\right)\end{split}\] (A.68) _such that_ \[\sup_{x\in[0,1]^{d}}\left|f(x)-\tilde{f}(x)\right|\leq\frac{\varepsilon}{8}.\] (A.69)

Proof.: By the definition of \(\mathcal{G}_{d}^{\mathbf{CHOM}}(q,K,d_{*},d_{*},\beta,r)\), there exist functions \(h_{0},h_{1},\ldots,h_{q}\) such that

* \(\mathbf{dom}(h_{i})=[0,1]^{K}\) for \(0<i\leq q\) and \(\mathbf{dom}(h_{0})=[0,1]^{d}\);
* \(\mathbf{ran}(h_{i})\subset[0,1]^{K}\) for \(0\leq i<q\) and \(\mathbf{ran}(h_{q})\subset\mathbb{R}\);
* \(h_{q}\in\mathcal{G}_{\infty}^{\mathbf{H}}(d_{*},\beta,r\lor 1)\cup\mathcal{G}_{\infty}^{\mathbf{M}}(d_{*})\);
* For \(0\leq i<q\) and \(1\leq j\leq K\), the \(j\)-th coordinate function of \(h_{i}\) given by \(\mathbf{dom}(h_{i})\ni x\mapsto(h_{i}(x))_{j}\in\mathbb{R}\) belongs to \(\mathcal{G}_{\infty}^{\mathbf{H}}(d_{*},\beta,r\lor 1)\cup\mathcal{G}_{\infty}^{\mathbf{M}}(d_{*})\);
* \(f=h_{q}\circ h_{q-1}\circ\cdots\circ h_{2}\circ h_{1}\circ h_{0}\).

Define \(\Omega:=\big{\{}(i,j)\in\mathbb{Z}^{2}\big{|}0\leq i\leq q,1\leq j\leq K,\mathbb{1}_{\{q\}}(i)\leq\mathbb{1}_{\{1\}}(j)\big{\}}\). For each \((i,j)\in\Omega\), denote \(d_{i,j}:=K+\mathbb{1}_{\{0\}}(i)\cdot(d-K)\) and \[h_{i,j}:\mathbf{dom}(h_{i})\to\mathbb{R},\ \ x\mapsto\big{(}h_{i}(x)\big{)}_{j}.\] Then it is easy to verify that \[\mathbf{dom}(h_{i,j})=[0,1]^{d_{i,j}}\text{ and }h_{i,j}\in\mathcal{G}_{\infty}^{\mathbf{H}}(d_{*},\beta,r\lor 1)\cup\mathcal{G}_{\infty}^{\mathbf{M}}(d_{*}),\;\forall\;(i,j)\in\Omega,\] (A.70) and \[\mathbf{ran}\left(h_{i,j}\right)\subset[0,1],\;\forall\;(i,j)\in\Omega\setminus\left\{(q,1)\right\}.\] (A.71) Fix \(\varepsilon\in(0,1/2]\). Take \[\delta:=\frac{1}{2}\cdot\left|\frac{\varepsilon}{8\cdot\left|(1\lor r)\cdot d_{*}\right|^{q}\cdot(q+1)}\right|^{\frac{1}{(1\wedge\beta)^{q}}}\leq\frac{\varepsilon/2}{8\cdot\left|(1\lor r)\cdot d_{*}\right|^{q}\cdot(q+1)}\leq\frac{\varepsilon}{8}\leq\frac{1}{16}.\] According to (A.70) and Lemma A.13, there exists a constant \(E_{1}\in(6,\infty)\) only depending on \((d_{*},\beta,r)\) and a set of functions \(\left\{\tilde{g}_{i,j}:\mathbb{R}^{d_{i,j}}\rightarrow\mathbb{R}\right\}_{(i,j)\in\Omega}\), such that \[\begin{split}\tilde{g}_{i,j}\in\mathcal{F}_{d_{i,j}}^{\mathbf{FNN}}\left(3\log d_{\star}+E_{1}\log\frac{1}{\delta},2d_{\star}+E_{1}\delta^{-\frac{d_{*}}{\beta}},\right.\\ \left.52d_{\star}+E_{1}\delta^{-\frac{d_{*}}{\beta}}\log\frac{1}{\delta},1,\infty\right),\forall(i,j)\in\Omega\end{split}\] (A.72) and \[\sup\left\{\left|\tilde{g}_{i,j}(x)-h_{i,j}(x)\right|\left|x\in[0,1]^{d_{i,j}}\right.\right\}\leq 2\delta,\;\forall\;(i,j)\in\Omega.\] (A.73) Define \[E_{4}:=8\cdot|(1\lor r)\cdot d_{*}|^{q}\cdot(q+1),\qquad E_{5}:=2^{\frac{d_{*}}{\beta}}\cdot E_{4}^{\frac{d_{*}}{\beta\cdot(1\wedge\beta)^{q}}},\] \[E_{6}:=\frac{1}{(1\wedge\beta)^{q}}+\frac{2\log E_{4}}{(1\wedge\beta)^{q}}+2\log 2,\qquad E_{7}:=E_{1}E_{6}+E_{1}E_{5}+2E_{1}E_{5}E_{6}+6.\] Obviously, \(E_{4},E_{5},E_{6},E_{7}\) are constants only depending on \((d_{*},\beta,r,q)\). Next, define \[\tilde{h}_{i,j}:\mathbb{R}^{d_{i,j}}\rightarrow\mathbb{R},\;x\mapsto\sigma\big{(}\sigma\left(\tilde{g}_{i,j}(x)\right)\big{)}-\sigma\big{(}\sigma\left(\tilde{g}_{i,j}(x)\right)-1\big{)}\] for each \((i,j)\in\Omega\setminus\left\{(q,1)\right\}\), and define \(\tilde{h}_{q,1}:=\tilde{g}_{q,1}\). It follows from the fact \[\sigma\big{(}\sigma\left(z\right)\big{)}-\sigma\big{(}\sigma\left(z\right)-1\big{)}\in[0,1],\;\forall\;z\in\mathbb{R}\] and (A.72) that \[\mathbf{ran}(\tilde{h}_{i,j})\subset[0,1],\;\forall\;(i,j)\in\Omega\setminus\left\{(q,1)\right\}\] (A.74) and \[\begin{split}\tilde{h}_{i,j}\in\mathcal{F}_{d_{i,j}}^{\mathbf{FNN}}\left(2+3\log d_{\star}+E_{1}\log\frac{1}{\delta},2d_{\star}+E_{1}\delta^{-\frac{d_{*}}{\beta}},\right.\\ \left.58d_{\star}+E_{1}\delta^{-\frac{d_{*}}{\beta}}\log\frac{1}{\delta},1,\infty\right),\forall(i,j)\in\Omega.\end{split}\] (A.75) Besides, it follows from the fact that \[\left|\sigma\big{(}\sigma\left(z\right)\big{)}-\sigma\big{(}\sigma\left(z\right)-1\big{)}-w\right|\leq|w-z|\,,\;\forall\;z\in\mathbb{R},\;\forall\;w\in[0,1]\] and (A.73) that \[\begin{split}&\sup\left\{\left|\tilde{h}_{i,j}(x)-h_{i,j}(x)\right|\left|x\in[0,1]^{d_{i,j}}\right.\right\}\\ &\leq\sup\left\{\left|\tilde{g}_{i,j}(x)-h_{i,j}(x)\right|\left|x\in[0,1]^{d_{i,j}}\right.\right\}\leq 2\delta.\end{split}\] (A.76) We then define \[\tilde{h}_{i}:\mathbb{R}^{d_{i,1}}\to\mathbb{R}^{K},x\mapsto\left(\tilde{h}_{i,1}(x),\tilde{h}_{i,2}(x),\ldots,\tilde{h}_{i,K}(x)\right)^{\top}\] for each \(i\in\{0,1,\ldots,q-1\}\), and \(\tilde{h}_{q}:=\tilde{h}_{q,1}\). From (A.74) we obtain \[\mathbf{ran}(\tilde{h}_{i})\subset[0,1]^{K}\subset\mathbf{dom}(\tilde{h}_{i+1}),\ \forall\ i\in\{0,1,\ldots,q-1\}\,.\] (A.77) Thus we can well define the function \(\tilde{f}:=\tilde{h}_{q}\circ\tilde{h}_{q-1}\circ\cdots\circ\tilde{h}_{1}\circ\tilde{h}_{0}\), which is from \(\mathbb{R}^{d}\) to \(\mathbb{R}\). Since all the functions \(\tilde{h}_{i,j}\) (\((i,j)\in\Omega\)) are neural networks satisfying (A.75), we deduce that \(\tilde{f}\) is also a neural network, which is comprised of all those networks \(\tilde{h}_{i,j}\) through series and parallel connection.
Obviously, the depth of \(\tilde{f}\) is less than or equal to \[\sum_{i}\left(1+\max_{j}\Big{(}\text{the depth of }\tilde{h}_{i,j}\Big{)} \right),\] the width of \(\tilde{f}\) is less than or equal to \[\max_{i}\sum_{j}\Big{(}\text{the width of }\tilde{h}_{i,j}\Big{)},\] the number of nonzero parameters of \(\tilde{f}\) is less than or equal to \[\sum_{i,j}\left(\Big{(}\text{the number of nonzero parameters }\tilde{h}_{i,j} \Big{)}+\max_{k}\Big{(}\text{the depth of }\tilde{h}_{i,k}\Big{)}\right),\] and the parameters of \(\tilde{f}\) is bounded by \(1\) in absolute value. Thus we have that \[\tilde{f} \in\mathcal{F}_{d}^{\mathbf{FNN}}\left((q+1)\cdot\left|3+3\log d_{ \star}+E_{1}\log\frac{1}{\delta}\right|,2Kd_{\star}+KE_{1}\delta^{-\frac{d_{ \star}}{\beta}},\right.\] \[\qquad\qquad\qquad\left.(Kq+1)\cdot\left|63d_{\star}+2E_{1}\delta ^{-\frac{d_{\star}}{\beta}}\log\frac{1}{\delta}\right|,1,\infty\right)\] \[=\mathcal{F}_{d}^{\mathbf{FNN}}\left((q+1)\cdot\left|3+3\log d_{ \star}+E_{1}\cdot\Big{(}\log 2+\frac{\log\frac{E_{4}}{\varepsilon}}{(1\wedge \beta)^{q}}\Big{)}\right|,2Kd_{\star}+KE_{1}E_{5}\varepsilon^{-\frac{d_{ \star}}{\beta(1\wedge\beta)^{q}}},\right.\] \[\qquad\qquad\qquad\left.(Kq+1)\cdot\left|63d_{\star}+2E_{1}E_{5} \varepsilon^{-\frac{d_{\star}}{\beta(1\wedge\beta)^{q}}}\cdot\Big{(}\log 2+ \frac{\log\frac{E_{4}}{\varepsilon}}{(1\wedge\beta)^{q}}\Big{)}\right|,1,\infty\right)\] \[\subset\mathcal{F}_{d}^{\mathbf{FNN}}\left((q+1)\cdot\left|3+3 \log d_{\star}+E_{1}E_{6}\log\frac{1}{\varepsilon}\right|,2Kd_{\star}+KE_{1}E_ {5}\varepsilon^{-\frac{d_{\star}}{\beta(1\wedge\beta)^{q}}},\right.\] \[\qquad\qquad\qquad\left.(Kq+1)\cdot\left|63d_{\star}+2E_{1}E_{5} \varepsilon^{-\frac{d_{\star}}{\beta(1\wedge\beta)^{q}}}E_{6}\log\frac{1}{ \varepsilon}\right|,1,\infty\right)\] \[\subset\mathcal{F}_{d}^{\mathbf{FNN}}\left((q+1)\cdot\left|3\log d _{\star}+E_{7}\log\frac{1}{\varepsilon}\right|,2Kd_{\star}+KE_{7}\varepsilon^ {-\frac{d_{\star}}{\beta(1\wedge\beta)^{q}}},\right.\] \[\qquad\qquad\qquad\left.(Kq+1)\cdot\left|63d_{\star}+E_{7} \varepsilon^{-\frac{d_{\star}}{\beta(1\wedge\beta)^{q}}}\log\frac{1}{ \varepsilon}\right|,1,\infty\right),\] leading to (A.68). Moreover, it follows from (A.76) and Lemma A.11 that \[\sup_{x\in[0,1]^{d}}\left|\tilde{f}(x)-f(x)\right|=\sup_{x\in[0,1 ]^{d}}\left|\tilde{h}_{q}\circ\cdots\circ\tilde{h}_{0}(x)-h_{q}\circ\cdots \circ h_{0}(x)\right|\] \[\leq\left|(1\lor r)\cdot d_{\star}^{1\wedge\beta}\right|^{\sum_{i= 0}^{q-1}(1\wedge\beta)^{i}}\cdot\sum_{i=0}^{q}\left|\sup_{x\in[0,1]^{d_{i,1}}} \left\|\tilde{h}_{i}(x)-h_{i}(x)\right\|_{\infty}\right|^{(1\wedge\beta)^{q-i}}\] \[\leq|(1\lor r)\cdot d_{*}|^{q}\cdot\sum_{i=0}^{q}|2\delta|^{(1\wedge\beta)^{q-i}} \leq|(1\lor r)\cdot d_{*}|^{q}\cdot\sum_{i=0}^{q}|2\delta|^{(1\wedge\beta)^{q}}= \frac{\varepsilon}{8},\] which yields (A.69). In conclusion, the constant \(E_{7}\) and the neural network \(\tilde{f}\) have all the desired properties. The proof of this lemma is then completed. The next lemma aims to estimate the approximation error. **Lemma A.15**.: _Let \(\phi(t)=\log\left(1+\mathrm{e}^{-t}\right)\) be the logistic loss, \(q\in\mathbb{N}\cup\{0\}\), \((\beta,r)\in(0,\infty)^{2}\), \((d,d_{*},d_{*},K)\in\mathbb{N}^{4}\) with \(d_{*}\leq\min\left\{d,K+1_{\{0\}}(q)\cdot(d-K)\right\}\), and \(P\) be a Borel probability measure on \([0,1]^{d}\times\{-1,1\}\). 
Suppose that there exists an \(\hat{\eta}\in\mathcal{G}_{d}^{\mathbf{CHOM}}(q,K,d_{*},d_{*},\beta,r)\) such that \(P_{X}\left(\left\{x\in[0,1]^{d}\,\middle|\,\hat{\eta}(x)=P(\{1\}|x)\right\}\right)=1\). Then there exist constants \(D_{1},D_{2},D_{3}\) only depending on \((d_{*},d_{*},\beta,r,q)\) such that for any \(\delta\in(0,1/3)\),_ \[\inf\left\{\mathcal{E}_{P}^{\phi}\left(f\right)\,\middle|\,f\in\mathcal{F}_{d}^{\mathbf{FNN}}\left(D_{1}\log\frac{1}{\delta},KD_{2}\delta^{\frac{-d_{*}/\beta}{(1\wedge\beta)^{q}}},KD_{3}\delta^{\frac{-d_{*}/\beta}{(1\wedge\beta)^{q}}}\cdot\log\frac{1}{\delta},1,\log\frac{1-\delta}{\delta}\right)\right\}\leq 8\delta.\] (A.78)

Proof.: Denote by \(\eta\) the conditional probability function \([0,1]^{d}\ni x\mapsto P(\left\{1\right\}|x)\in[0,1]\). Fix \(\delta\in(0,1/3)\). Then it follows from Lemma A.14 that there exists \[\tilde{\eta}\in\mathcal{F}_{d}^{\mathbf{FNN}}\left(C_{d_{*},d_{*},\beta,r,q}\log\frac{1}{\delta},KC_{d_{*},d_{*},\beta,r,q}\delta^{-\frac{d_{*}}{\beta\cdot(1\wedge\beta)^{q}}},KC_{d_{*},d_{*},\beta,r,q}\delta^{-\frac{d_{*}}{\beta\cdot(1\wedge\beta)^{q}}}\log\frac{1}{\delta},1,\infty\right)\] (A.79) such that \[\sup_{x\in[0,1]^{d}}|\tilde{\eta}(x)-\hat{\eta}(x)|\leq\delta/8.\] (A.80) Also, by Theorem 2.4 with \(a=\varepsilon=\delta\), \(b=1-\delta\) and \(\alpha=\frac{2\beta}{d_{*}}\), there exists \[\begin{split}\tilde{l}&\in\mathcal{F}_{1}^{\mathbf{FNN}}\left(C_{d_{*},\beta}\log\frac{1}{\delta}+139\log\frac{1}{\delta},C_{d_{*},\beta}\cdot\left(\frac{1}{\delta}\right)^{\frac{1}{2\beta/d_{*}}}\log\frac{1}{\delta},\right.\\ &\left.\quad C_{d_{*},\beta}\cdot\left(\frac{1}{\delta}\right)^{\frac{1}{2\beta/d_{*}}}\cdot\left(\log\frac{1}{\delta}\right)\cdot\left(\log\frac{1}{\delta}\right)+65440\left(\log\delta\right)^{2},1,\infty\right)\\ &\subset\mathcal{F}_{1}^{\mathbf{FNN}}\left(C_{d_{*},\beta}\log\frac{1}{\delta},C_{d_{*},\beta}\delta^{-\frac{d_{*}}{\beta\cdot(1\wedge\beta)^{q}}},C_{d_{*},\beta}\delta^{-\frac{d_{*}}{\beta\cdot(1\wedge\beta)^{q}}}\log\frac{1}{\delta},1,\infty\right)\end{split}\] (A.81) such that \[\sup_{t\in[\delta,1-\delta]}\left|\tilde{l}(t)-\log t\right|\leq\delta\] (A.82) and \[\log\delta\leq\tilde{l}(t)\leq\log\left(1-\delta\right)<0,\;\forall\;t\in\mathbb{R}.\] (A.83) Recall that the clipping function \(\Pi_{\delta}\) is given by \[\Pi_{\delta}:\mathbb{R}\rightarrow[\delta,1-\delta],\quad t\mapsto\begin{cases}1-\delta,&\text{ if }t>1-\delta,\\ \delta,&\text{ if }t<\delta,\\ t,&\text{ otherwise.}\end{cases}\] Define \(\tilde{f}:\mathbb{R}^{d}\rightarrow\mathbb{R},\;x\mapsto\tilde{l}\left(\Pi_{\delta}\left(\tilde{\eta}(x)\right)\right)-\tilde{l}\left(1-\Pi_{\delta}\left(\tilde{\eta}(x)\right)\right)\). Consequently, we know from (A.79), (A.81) and (A.83) that (cf. Figure A.7) \[\tilde{f}\in\mathcal{F}_{d}^{\mathbf{FNN}}\left(C_{d_{*},d_{*},\beta,r,q}\log\frac{1}{\delta},KC_{d_{*},d_{*},\beta,r,q}\delta^{-\frac{d_{*}}{\beta\cdot(1\wedge\beta)^{q}}},KC_{d_{*},d_{*},\beta,r,q}\delta^{-\frac{d_{*}}{\beta\cdot(1\wedge\beta)^{q}}}\log\frac{1}{\delta},1,\log\frac{1-\delta}{\delta}\right).\] Let \(\Omega_{1},\Omega_{2},\Omega_{3}\) be defined in (A.39).
Let \(\Omega_{1},\Omega_{2},\Omega_{3}\) be defined in (A.39). Then it follows from (A.80) that
\[|\Pi_{\delta}(\tilde{\eta}(x))-\eta(x)|=|\Pi_{\delta}(\tilde{\eta}(x))-\Pi_{\delta}\left(\hat{\eta}(x)\right)|\leq|\tilde{\eta}(x)-\hat{\eta}(x)|\leq\frac{\delta}{8}\leq\frac{\min\left\{\eta(x),1-\eta(x)\right\}}{8},\;\forall\;x\in\Omega_{1},\]
which means that
\[\min\left\{\frac{\Pi_{\delta}\left(\tilde{\eta}(x)\right)}{\eta(x)},\frac{1-\Pi_{\delta}\left(\tilde{\eta}(x)\right)}{1-\eta(x)}\right\}\geq 7/8,\;\forall\;x\in\Omega_{1}.\] (A.84)
Combining (A.82) and (A.84), we obtain that
\[\left|\tilde{f}(x)-\log\frac{\eta(x)}{1-\eta(x)}\right|\leq\left|\tilde{l}\left(\Pi_{\delta}\left(\tilde{\eta}(x)\right)\right)-\log\left(\eta(x)\right)\right|+\left|\tilde{l}\left(1-\Pi_{\delta}\left(\tilde{\eta}(x)\right)\right)-\log\left(1-\eta(x)\right)\right|\]
\[\leq 2\delta+\sup_{t\in[\min\left\{\Pi_{\delta}(\tilde{\eta}(x)),\,\eta(x)\right\},\infty)}\left|\log^{\prime}(t)\right|\cdot\left|\Pi_{\delta}\left(\tilde{\eta}(x)\right)-\eta(x)\right|+\sup_{t\in[\min\left\{1-\Pi_{\delta}(\tilde{\eta}(x)),\,1-\eta(x)\right\},\infty)}\left|\log^{\prime}(t)\right|\cdot\left|\Pi_{\delta}\left(\tilde{\eta}(x)\right)-\eta(x)\right|\]
\[\leq 2\delta+\frac{8}{7\eta(x)}\cdot\frac{\delta}{8}+\frac{8}{7(1-\eta(x))}\cdot\frac{\delta}{8}=2\delta+\frac{\delta}{7\eta(x)(1-\eta(x))},\;\forall\;x\in\Omega_{1},\] (A.85)
where the second inequality uses (A.82) and the mean value theorem, and the third uses (A.84). Besides, note that
\[x\in\Omega_{2}\Rightarrow\tilde{\eta}(x)\in[-\xi_{1},\delta+\xi_{1}]\Rightarrow\Pi_{\delta}\left(\tilde{\eta}(x)\right)\in[\delta,\delta+\xi_{1}]\]
\[\Rightarrow\tilde{l}\left(\Pi_{\delta}\left(\tilde{\eta}(x)\right)\right)\in[\log\delta,\delta+\log\left(\delta+\xi_{1}\right)]\;\text{ as well as }\;\tilde{l}\left(1-\Pi_{\delta}\left(\tilde{\eta}(x)\right)\right)\in[-\delta+\log(1-\delta-\xi_{1}),\log(1-\delta)]\]
\[\Rightarrow\tilde{f}(x)\leq 2\delta+\log\frac{\xi_{1}+\delta}{1-\xi_{1}-\delta}\leq\log 2+\log\frac{2\delta}{1-2\delta}=\log\frac{4\delta}{1-2\delta}.\]
Therefore, by (A.83) and the definition of \(\tilde{f}\), we have
\[\log\frac{\delta}{1-\delta}\leq\tilde{f}(x)\leq\log\frac{4\delta}{1-2\delta}=-\log\frac{1-2\delta}{4\delta},\;\forall\;x\in\Omega_{2}.\] (A.86)
Similarly, we can show that
\[\log\frac{1-2\delta}{4\delta}\leq\tilde{f}(x)\leq\log\frac{1-\delta}{\delta},\;\forall\;x\in\Omega_{3}.\] (A.87)
Then it follows from (A.85), (A.86), (A.87) and Lemma A.8 that
\[\inf\left\{\mathcal{E}_{P}^{\phi}\left(f\right)\,\middle|\,f\in\mathcal{F}_{d}^{\mathbf{FNN}}\left(C_{d_{\star},d_{*},\beta,r,q}\log\frac{1}{\delta},\;KC_{d_{\star},d_{*},\beta,r,q}\delta^{-\frac{d_{\star}}{\beta\cdot(1\wedge\beta)^{q}}},\;KC_{d_{\star},d_{*},\beta,r,q}\delta^{-\frac{d_{\star}}{\beta\cdot(1\wedge\beta)^{q}}}\log\frac{1}{\delta},\;1,\;\log\frac{1-\delta}{\delta}\right)\right\}\leq\mathcal{E}_{P}^{\phi}\left(\tilde{f}\right)\]
\[\leq\int_{\Omega_{1}}\sup\left\{\frac{\left|\tilde{f}(x)-\log\frac{\eta(x)}{1-\eta(x)}\right|^{2}}{2(2+\mathrm{e}^{t}+\mathrm{e}^{-t})}\,\middle|\,t\in\left[-1+\log\frac{\eta(x)}{1-\eta(x)},\,1+\log\frac{\eta(x)}{1-\eta(x)}\right]\right\}\mathrm{d}P_{X}(x)+P_{X}(\Omega_{2}\cup\Omega_{3})\cdot\log\frac{1+2\delta}{1-2\delta}\]
\[\leq\int_{\Omega_{1}}\left|\tilde{f}\left(x\right)-\log\frac{\eta(x)}{1-\eta(x)}\right|^{2}\cdot 2\cdot(1-\eta(x))\eta(x)\,\mathrm{d}P_{X}(x)+6\delta\]
\[\leq\int_{\Omega_{1}}\left|2\delta+\frac{\delta}{7\eta(x)(1-\eta(x))}\right|^{2}\cdot 2\cdot(1-\eta(x))\eta(x)\,\mathrm{d}P_{X}(x)+6\delta\]
\[\leq\int_{\Omega_{1}}\frac{\delta^{2}}{(1-\eta(x))\eta(x)}\,\mathrm{d}P_{X}(x)+6\delta\leq\frac{\delta^{2}}{\delta(1-\delta)}+6\delta<8\delta,\]
which proves this lemma. 

Now we are in the position to prove Theorem 2.2 and Theorem 2.3.

Proof of Theorem 2.2 and Theorem 2.3.: We first prove Theorem 2.3. According to Lemma A.15, there exist \((D_{1},D_{2},D_{3})\in(0,\infty)^{3}\) only depending on \((d_{\star},d_{*},\beta,r,q)\) such that (A.78) holds for any \(\delta\in(0,1/3)\) and any \(P\in\mathcal{H}_{4,q,K,d_{\star},d_{*}}^{d,\beta,r}\). Take \(E_{1}=1+D_{1}\); then \(E_{1}>0\) only depends on \((d_{\star},d_{*},\beta,r,q)\). 
We next show that for any constants \(\mathbf{a}:=(a_{2},a_{3})\in(0,\infty)^{2}\) and \(\mathbf{b}:=(b_{1},b_{2},b_{3},b_{4},b_{5})\in(0,\infty)^{5}\), there exist constants \(E_{2}\in(3,\infty)\) only depending on \((\mathbf{a},d_{\star},d_{*},\beta,r,q,K)\) and \(E_{3}\in(0,\infty)\) only depending on \((\mathbf{a},\mathbf{b},\nu,d,d_{\star},d_{*},\beta,r,q,K)\) such that when \(n\geq E_{2}\), the \(\phi\)-ERM \(\hat{f}_{n}^{\mathbf{FNN}}\) defined by (2.14) with
\[E_{1}\cdot\log n\leq G\leq b_{1}\cdot\log n,\quad a_{2}\cdot\left(\frac{(\log n)^{5}}{n}\right)^{\frac{-d_{\star}}{d_{\star}+\beta\cdot(1\wedge\beta)^{q}}}\leq N\leq b_{2}\cdot\left(\frac{(\log n)^{5}}{n}\right)^{\frac{-d_{\star}}{d_{\star}+\beta\cdot(1\wedge\beta)^{q}}},\]
\[a_{3}\cdot\left(\frac{(\log n)^{5}}{n}\right)^{\frac{-d_{\star}}{d_{\star}+\beta\cdot(1\wedge\beta)^{q}}}\cdot\log n\leq S\leq b_{3}\cdot\left(\frac{(\log n)^{5}}{n}\right)^{\frac{-d_{\star}}{d_{\star}+\beta\cdot(1\wedge\beta)^{q}}}\cdot\log n,\]
\[\frac{\beta\cdot(1\wedge\beta)^{q}}{d_{\star}+\beta\cdot(1\wedge\beta)^{q}}\cdot\log n\leq F\leq b_{4}\log n,\quad\text{and}\quad 1\leq B\leq b_{5}\cdot n^{\nu}\] (A.88)
must satisfy
\[\sup_{P\in\mathcal{H}_{4,q,K,d_{\star},d_{*}}^{d,\beta,r}}\mathbf{E}_{P^{\otimes n}}\left[\mathcal{E}_{P}^{\phi}\left(\hat{f}_{n}^{\mathbf{FNN}}\right)\right]\leq E_{3}\cdot\left(\frac{(\log n)^{5}}{n}\right)^{\frac{\beta\cdot(1\wedge\beta)^{q}}{d_{\star}+\beta\cdot(1\wedge\beta)^{q}}}\]
\[\text{and}\quad\sup_{P\in\mathcal{H}_{4,q,K,d_{\star},d_{*}}^{d,\beta,r}}\mathbf{E}_{P^{\otimes n}}\left[\mathcal{E}_{P}\left(\hat{f}_{n}^{\mathbf{FNN}}\right)\right]\leq E_{3}\cdot\left(\frac{(\log n)^{5}}{n}\right)^{\frac{\beta\cdot(1\wedge\beta)^{q}}{2d_{\star}+2\beta\cdot(1\wedge\beta)^{q}}},\] (A.89)
which will lead to the results of Theorem 2.3. Let \(\mathbf{a}:=(a_{2},a_{3})\in(0,\infty)^{2}\) and \(\mathbf{b}:=(b_{1},b_{2},b_{3},b_{4},b_{5})\in(0,\infty)^{5}\) be arbitrary and fixed. Take
\[D_{4}=1\vee\left(\frac{D_{2}K}{a_{2}}\right)^{\frac{\beta\cdot(1\wedge\beta)^{q}}{d_{\star}}}\vee\left(\frac{D_{3}E_{1}K}{D_{1}a_{3}}\right)^{\frac{\beta\cdot(1\wedge\beta)^{q}}{d_{\star}}};\]
then \(D_{4}>0\) only depends on \((\mathbf{a},d_{\star},d_{*},\beta,r,q,K)\). Hence there exists \(E_{2}\in(3,\infty)\) only depending on \((\mathbf{a},d_{\star},d_{*},\beta,r,q,K)\) such that
\[0<\frac{(\log t)^{5}}{t}<D_{4}\cdot\left(\frac{(\log t)^{5}}{t}\right)^{\frac{\beta\cdot(1\wedge\beta)^{q}}{d_{\star}+\beta\cdot(1\wedge\beta)^{q}}}<1/4<1<\log t,\;\forall\;t\in[E_{2},\infty).\] (A.90)
From now on we assume that \(n\geq E_{2}\) and that (A.88) holds. We have to show that there exists \(E_{3}\in(0,\infty)\) only depending on \((\mathbf{a},\mathbf{b},\nu,d,d_{\star},d_{*},\beta,r,q,K)\) such that (A.89) holds. Let \(P\) be an arbitrary probability in \(\mathcal{H}_{4,q,K,d_{\star},d_{*}}^{d,\beta,r}\). Denote by \(\eta\) the conditional probability function \(x\mapsto P(\left\{1\right\}|x)\) of \(P\). Then there exists an \(\hat{\eta}\in\mathcal{G}_{d}^{\mathbf{CHOM}}(q,K,d_{\star},d_{*},\beta,r)\) such that \(\hat{\eta}=\eta\), \(P_{X}\)-a.s.. 
Define
\[\zeta:=D_{4}\cdot\left(\frac{(\log n)^{5}}{n}\right)^{\frac{\beta\cdot(1\wedge\beta)^{q}}{d_{\star}+\beta\cdot(1\wedge\beta)^{q}}}.\] (A.91)
By (A.90), \(0<n^{\frac{-\beta\cdot(1\wedge\beta)^{q}}{d_{\star}+\beta\cdot(1\wedge\beta)^{q}}}\leq\zeta<\frac{1}{4}\) and there hold inequalities
\[\log 2<\log\frac{1-\zeta}{\zeta}\leq\log\frac{1}{\zeta}\leq\log\left(n^{\frac{\beta\cdot(1\wedge\beta)^{q}}{d_{\star}+\beta\cdot(1\wedge\beta)^{q}}}\right)\leq F,\] (A.92)
\[D_{1}\log\frac{1}{\zeta}\leq D_{1}\log\left(n^{\frac{\beta\cdot(1\wedge\beta)^{q}}{d_{\star}+\beta\cdot(1\wedge\beta)^{q}}}\right)\leq D_{1}\log n\leq\max\left\{1,D_{1}\log n\right\}\leq E_{1}\log n\leq G,\] (A.93)
and
\[KD_{2}\zeta^{\frac{-d_{\star}/\beta}{(1\wedge\beta)^{q}}}=KD_{2}\cdot D_{4}^{\frac{-d_{\star}/\beta}{(1\wedge\beta)^{q}}}\cdot\left(\frac{(\log n)^{5}}{n}\right)^{\frac{-d_{\star}}{d_{\star}+\beta\cdot(1\wedge\beta)^{q}}}\leq KD_{2}\cdot\left|\left(\frac{D_{2}K}{a_{2}}\right)^{\frac{\beta\cdot(1\wedge\beta)^{q}}{d_{\star}}}\right|^{\frac{-d_{\star}/\beta}{(1\wedge\beta)^{q}}}\cdot\left(\frac{(\log n)^{5}}{n}\right)^{\frac{-d_{\star}}{d_{\star}+\beta\cdot(1\wedge\beta)^{q}}}=a_{2}\cdot\left(\frac{(\log n)^{5}}{n}\right)^{\frac{-d_{\star}}{d_{\star}+\beta\cdot(1\wedge\beta)^{q}}}\leq N.\] (A.94)
Consequently,
\[KD_{3}\zeta^{\frac{-d_{\star}/\beta}{(1\wedge\beta)^{q}}}\cdot\log\frac{1}{\zeta}=KD_{3}\cdot D_{4}^{\frac{-d_{\star}/\beta}{(1\wedge\beta)^{q}}}\cdot\left(\frac{(\log n)^{5}}{n}\right)^{\frac{-d_{\star}}{d_{\star}+\beta\cdot(1\wedge\beta)^{q}}}\cdot\log\frac{1}{\zeta}\leq KD_{3}\cdot\left|\left(\frac{D_{3}E_{1}K}{D_{1}a_{3}}\right)^{\frac{\beta\cdot(1\wedge\beta)^{q}}{d_{\star}}}\right|^{\frac{-d_{\star}/\beta}{(1\wedge\beta)^{q}}}\cdot\left(\frac{(\log n)^{5}}{n}\right)^{\frac{-d_{\star}}{d_{\star}+\beta\cdot(1\wedge\beta)^{q}}}\cdot\log\frac{1}{\zeta}\]
\[=a_{3}\cdot\left(\frac{(\log n)^{5}}{n}\right)^{\frac{-d_{\star}}{d_{\star}+\beta\cdot(1\wedge\beta)^{q}}}\cdot\frac{D_{1}\cdot\log\frac{1}{\zeta}}{E_{1}}\leq a_{3}\cdot\left(\frac{(\log n)^{5}}{n}\right)^{\frac{-d_{\star}}{d_{\star}+\beta\cdot(1\wedge\beta)^{q}}}\cdot\log n\leq S.\] (A.95)
Then it follows from (A.78), (A.91), (A.93), (A.92), (A.94), and (A.95) that
\[\inf\left\{\mathcal{E}_{P}^{\phi}\left(f\right)\,\middle|\,f\in\mathcal{F}_{d}^{\mathbf{FNN}}\left(G,N,S,B,F\right)\right\}\leq 8\zeta=8D_{4}\cdot\left(\frac{(\log n)^{5}}{n}\right)^{\frac{\beta\cdot(1\wedge\beta)^{q}}{d_{\star}+\beta\cdot(1\wedge\beta)^{q}}}.\] (A.96)
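The exponent bookkeeping behind (A.94) and (A.95) can be verified symbolically; the following sketch (not part of the proof; the symbol \(\kappa\) stands for the fixed positive number \((1\wedge\beta)^{q}\)) checks that raising \(\zeta\) from (A.91) to the power \(\frac{-d_{\star}/\beta}{(1\wedge\beta)^{q}}\) produces exactly the exponent \(\frac{-d_{\star}}{d_{\star}+\beta\cdot(1\wedge\beta)^{q}}\) required by the width bound \(N\).

```python
import sympy as sp

# Symbolic check of the exponent algebra in (A.94): if zeta is proportional to
# A^{beta*kappa/(d_star + beta*kappa)} with A = (log n)^5 / n and kappa = (1 /\ beta)^q,
# then zeta^{-(d_star/beta)/kappa} carries the power A^{-d_star/(d_star + beta*kappa)}.

d_star, beta, kappa = sp.symbols('d_star beta kappa', positive=True)

zeta_exponent = (beta * kappa) / (d_star + beta * kappa)   # power of A in (A.91)
power = -(d_star / beta) / kappa                           # power applied to zeta in (A.94)
print(sp.simplify(zeta_exponent * power + d_star / (d_star + beta * kappa)))  # 0
```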
Besides, from (A.92) we know \(\mathrm{e}^{F}>2\). Hence by taking \(\delta_{0}=\frac{1}{\mathrm{e}^{F}+1}\) in Lemma A.10, we obtain immediately that there exists
\[\psi:[0,1]^{d}\times\left\{-1,1\right\}\rightarrow\left[0,\log\left((10\mathrm{e}^{F}+10)\cdot\log\left(\mathrm{e}^{F}+1\right)\right)\right],\] (A.97)
such that
\[\int_{[0,1]^{d}\times\left\{-1,1\right\}}\psi\left(x,y\right)\mathrm{d}P(x,y)=\inf\left\{\mathcal{R}_{P}^{\phi}(f)\,\middle|\,f:[0,1]^{d}\rightarrow\mathbb{R}\text{ is measurable}\right\},\] (A.98)
and for any measurable \(f:[0,1]^{d}\to[-F,F]\),
\[\int_{[0,1]^{d}\times\{-1,1\}}(\phi\left(yf(x)\right)-\psi(x,y))^{2}\mathrm{d}P(x,y)\leq 125000\left|\log\left(1+\mathrm{e}^{F}\right)\right|^{2}\cdot\int_{[0,1]^{d}\times\{-1,1\}}\left(\phi\left(yf(x)\right)-\psi(x,y)\right)\mathrm{d}P(x,y)\]
\[\leq 500000F^{2}\cdot\int_{[0,1]^{d}\times\{-1,1\}}(\phi\left(yf(x)\right)-\psi(x,y))\,\mathrm{d}P(x,y).\] (A.99)
Moreover, it follows from Corollary A.1 with \(\gamma=\frac{1}{n}\) that
\[\log W\leq(S+Gd+1)(2G+5)\log\left((\max\left\{N,d\right\}+1)(2nG+2n)B\right)\]
\[\leq C_{\mathbf{b},d}\cdot(\log n)^{2}\cdot\left(\frac{(\log n)^{5}}{n}\right)^{\frac{-d_{\star}}{d_{\star}+\beta\cdot(1\wedge\beta)^{q}}}\cdot\log\left((\max\left\{N,d\right\}+1)(2nG+2n)b_{5}n^{\nu}\right)\]
\[\leq C_{\mathbf{b},d,\nu}\cdot(\log n)^{3}\cdot\left(\frac{(\log n)^{5}}{n}\right)^{\frac{-d_{\star}}{d_{\star}+\beta\cdot(1\wedge\beta)^{q}}}=E_{4}\cdot(\log n)^{3}\cdot\left(\frac{(\log n)^{5}}{n}\right)^{\frac{-d_{\star}}{d_{\star}+\beta\cdot(1\wedge\beta)^{q}}}\] (A.100)
for some constant \(E_{4}\in(0,\infty)\) only depending on \((\mathbf{b},d,\nu)\), where
\[W=3\vee\mathcal{N}\left(\left\{f|_{[0,1]^{d}}\,\middle|\,f\in\mathcal{F}_{d}^{\mathbf{FNN}}\left(G,N,S,B,F\right)\right\},\frac{1}{n}\right).\]
Also, note that
\[\sup_{t\in[-F,F]}\phi(t)=\log\left(1+\mathrm{e}^{F}\right)\leq\log\left((10\mathrm{e}^{F}+10)\cdot\log\left(\mathrm{e}^{F}+1\right)\right)\leq 7F.\] (A.101)
Therefore, by taking \(\epsilon=\frac{1}{2}\), \(\gamma=\frac{1}{n}\), \(\Gamma=500000F^{2}\), \(M=7F\), and
\[\mathcal{F}=\left\{f|_{[0,1]^{d}}\,\middle|\,f\in\mathcal{F}_{d}^{\mathbf{FNN}}\left(G,N,S,B,F\right)\right\}\]
in Theorem 2.1 and combining (A.97), (A.98), (A.99), (A.100), (A.96), we obtain
\[\mathbf{E}_{P^{\otimes n}}\left[\mathcal{E}_{P}^{\phi}\left(\hat{f}_{n}^{\mathbf{FNN}}\right)\right]=\mathbf{E}_{P^{\otimes n}}\left[\mathcal{R}_{P}^{\phi}\left(\hat{f}_{n}^{\mathbf{FNN}}\right)-\int_{[0,1]^{d}\times\{-1,1\}}\psi(x,y)\mathrm{d}P(x,y)\right]\]
\[\leq 360\cdot\frac{\Gamma\log W}{n}+\frac{4}{n}+\frac{30M\log W}{n}+30\cdot\sqrt{\frac{\Gamma\log W}{n^{2}}}+2\inf_{f\in\mathcal{F}}\left(\mathcal{R}_{P}^{\phi}(f)-\int\psi\mathrm{d}P\right)\]
\[\leq\frac{360\Gamma\log W}{n}+\frac{\Gamma\log W}{n}+\frac{\Gamma\log W}{n}+\frac{\Gamma\log W}{n}+2\inf_{f\in\mathcal{F}}\mathcal{E}_{P}^{\phi}(f)\]
\[\leq\frac{2\cdot 10^{8}\cdot F^{2}\cdot\log W}{n}+16D_{4}\cdot\left(\frac{(\log n)^{5}}{n}\right)^{\frac{\beta\cdot(1\wedge\beta)^{q}}{d_{\star}+\beta\cdot(1\wedge\beta)^{q}}}\]
\[\leq\frac{10^{9}\cdot\left|b_{4}\log n\right|^{2}\cdot E_{4}\cdot(\log n)^{3}\cdot\left(\frac{(\log n)^{5}}{n}\right)^{\frac{-d_{\star}}{d_{\star}+\beta\cdot(1\wedge\beta)^{q}}}}{n}+16D_{4}\cdot\left(\frac{(\log n)^{5}}{n}\right)^{\frac{\beta\cdot(1\wedge\beta)^{q}}{d_{\star}+\beta\cdot(1\wedge\beta)^{q}}}\]
\[=\left(16D_{4}+10^{9}\cdot\left|b_{4}\right|^{2}\cdot E_{4}\right)\cdot\left(\frac{(\log n)^{5}}{n}\right)^{\frac{\beta\cdot(1\wedge\beta)^{q}}{d_{\star}+\beta\cdot(1\wedge\beta)^{q}}}\leq E_{3}\cdot\left(\frac{(\log n)^{5}}{n}\right)^{\frac{\beta\cdot(1\wedge\beta)^{q}}{d_{\star}+\beta\cdot(1\wedge\beta)^{q}}}\]
with
\[E_{3}:=4\cdot\left(16D_{4}+10^{9}\cdot\left|b_{4}\right|^{2}\cdot E_{4}\right)+4\]
only depending on \((\mathbf{a},\mathbf{b},\nu,d,d_{\star},d_{*},\beta,r,q,K)\). We then apply the calibration inequality (2.21) and conclude that
\[\mathbf{E}_{P^{\otimes n}}\left[\mathcal{E}_{P}\left(\hat{f}_{n}^{\mathbf{FNN}}\right)\right]\leq 2\sqrt{2}\cdot\mathbf{E}_{P^{\otimes n}}\left[\sqrt{\mathcal{E}_{P}^{\phi}\left(\hat{f}_{n}^{\mathbf{FNN}}\right)}\right]\leq 4\cdot\sqrt{\mathbf{E}_{P^{\otimes n}}\left[\mathcal{E}_{P}^{\phi}\left(\hat{f}_{n}^{\mathbf{FNN}}\right)\right]}\]
\[\leq 4\cdot\sqrt{\left(16D_{4}+10^{9}\cdot\left|b_{4}\right|^{2}\cdot E_{4}\right)\cdot\left(\frac{(\log n)^{5}}{n}\right)^{\frac{\beta\cdot(1\wedge\beta)^{q}}{d_{\star}+\beta\cdot(1\wedge\beta)^{q}}}}\leq E_{3}\cdot\left(\frac{(\log n)^{5}}{n}\right)^{\frac{\beta\cdot(1\wedge\beta)^{q}}{2d_{\star}+2\beta\cdot(1\wedge\beta)^{q}}}.\] (A.102)
Since \(P\) is arbitrary, the desired bound (A.89) follows. Setting \(\mathrm{c}=E_{1}\) completes the proof of Theorem 2.3.

Now it remains to show Theorem 2.2. Indeed, it follows from (2.33) that
\[\mathcal{H}_{1}^{d,\beta,r}\subset\mathcal{H}_{4,0,1,d,1}^{d,\beta,r}.\]
Then by taking \(q=0\), \(d_{\star}=d\) and \(d_{*}=K=1\) in Theorem 2.3, we obtain that there exists a constant \(\mathrm{c}\in(0,\infty)\) only depending on \((d,\beta,r)\) such that the estimator \(\hat{f}_{n}^{\mathbf{FNN}}\) defined by (2.14) with
\[\mathrm{c}\log n\leq G\lesssim\log n,\;N\asymp\left(\frac{(\log n)^{5}}{n}\right)^{\frac{-d}{d+\beta\cdot(1\wedge\beta)^{0}}}=\left(\frac{(\log n)^{5}}{n}\right)^{\frac{-d}{d+\beta}},\]
\[S\asymp\left(\frac{(\log n)^{5}}{n}\right)^{\frac{-d}{d+\beta\cdot(1\wedge\beta)^{0}}}\cdot\log n=\left(\frac{(\log n)^{5}}{n}\right)^{\frac{-d}{d+\beta}}\cdot\log n,\]
\[1\leq B\lesssim n^{\nu},\;\text{and}\;\frac{\beta}{d+\beta}\cdot\log n=\frac{\beta\cdot(1\wedge\beta)^{0}}{d+\beta\cdot(1\wedge\beta)^{0}}\cdot\log n\leq F\lesssim\log n\]
must satisfy
\[\sup_{P\in\mathcal{H}_{1}^{d,\beta,r}}\mathbf{E}_{P^{\otimes n}}\left[\mathcal{E}_{P}^{\phi}\left(\hat{f}_{n}^{\mathbf{FNN}}\right)\right]\leq\sup_{P\in\mathcal{H}_{4,0,1,d,1}^{d,\beta,r}}\mathbf{E}_{P^{\otimes n}}\left[\mathcal{E}_{P}^{\phi}\left(\hat{f}_{n}^{\mathbf{FNN}}\right)\right]\lesssim\left(\frac{(\log n)^{5}}{n}\right)^{\frac{\beta\cdot(1\wedge\beta)^{0}}{d+\beta\cdot(1\wedge\beta)^{0}}}=\left(\frac{(\log n)^{5}}{n}\right)^{\frac{\beta}{d+\beta}}\]
and
\[\sup_{P\in\mathcal{H}_{1}^{d,\beta,r}}\mathbf{E}_{P^{\otimes n}}\left[\mathcal{E}_{P}\left(\hat{f}_{n}^{\mathbf{FNN}}\right)\right]\leq\sup_{P\in\mathcal{H}_{4,0,1,d,1}^{d,\beta,r}}\mathbf{E}_{P^{\otimes n}}\left[\mathcal{E}_{P}\left(\hat{f}_{n}^{\mathbf{FNN}}\right)\right]\lesssim\left(\frac{(\log n)^{5}}{n}\right)^{\frac{\beta\cdot(1\wedge\beta)^{0}}{2d+2\beta\cdot(1\wedge\beta)^{0}}}=\left(\frac{(\log n)^{5}}{n}\right)^{\frac{\beta}{2d+2\beta}}.\]
This completes the proof of Theorem 2.2. 

#### A.3.5 Proof of Theorem 2.5

This subsection is devoted to the proof of Theorem 2.5. To this end, we need the following lemmas. 
Note that the logistic loss is given by \(\phi(t)=\log(1+\mathrm{e}^{-t})\) with \(\phi^{\prime}(t)=-\frac{1}{1+\mathrm{e}^{t}}\in(-1,0)\) and \(\phi^{\prime\prime}(t)=\frac{\mathrm{e}^{t}}{(1+\mathrm{e}^{t})^{2}}=\frac{1} {\mathrm{e}^{t}+\mathrm{e}^{-t}+2}\in(0,\frac{1}{4}]\) for all \(t\in\mathbb{R}\). **Lemma A.16**.: _Let \(\eta_{0}\in(0,1)\), \(F_{0}\in\left(0,\log\frac{1+\eta_{0}}{1-\eta_{0}}\right)\), \(a\in[-F_{0},F_{0}]\), \(\phi(t)=\log(1+\mathrm{e}^{-t})\) be the logistic loss, \(d\in\mathbb{N}\), and \(P\) be a Borel probability measure on \([0,1]^{d}\times\{-1,1\}\) of which the conditional probability function \([0,1]^{d}\ni z\mapsto P(\{1\}\,|z)\in[0,1]\) is denoted by \(\eta\). Then for any \(x\in[0,1]^{d}\) such that \(|2\eta(x)-1|>\eta_{0}\), there holds_ \[0 \leq|a-F_{0}\mathrm{sgn}(2\eta(x)-1)|\cdot\left(\frac{1-\eta_{0}}{ 2}\phi^{\prime}(-F_{0})-\frac{\eta_{0}+1}{2}\phi^{\prime}(F_{0})\right)\] (A.103) \[\leq|a-F_{0}\mathrm{sgn}(2\eta(x)-1)|\cdot\left(\frac{1-\eta_{0}} {2}\phi^{\prime}(-F_{0})-\frac{\eta_{0}+1}{2}\phi^{\prime}(F_{0})\right)\] \[\qquad+\frac{1}{2\left(\mathrm{e}^{-F_{0}}+\mathrm{e}^{F_{0}}+2 \right)}\left|a-F_{0}\mathrm{sgn}(2\eta(x)-1)\right|^{2}\] \[\leq\int_{\{-1,1\}}\left(\phi(ya)-\phi(yF_{0}\mathrm{sgn}(2\eta( x)-1))\right)\mathrm{d}P(y|x)\] \[\leq|a-F_{0}\mathrm{sgn}(2\eta(x)-1)|+F_{0}^{2}.\] Proof.: Given \(x\in[0,1]^{d}\), recall the function \(V_{x}\) defined in the proof of Lemma A.7. By Taylor expansion, there exists \(\xi\) between \(a\) and \(F_{0}\mathrm{sgn}(2\eta(x)-1)\) such that \[\int_{\{-1,1\}}\left(\phi(ya)-\phi(yF_{0}\mathrm{sgn}(2\eta(x)-1) )\right)\mathrm{d}P(y|x)\] (A.104) \[=V_{x}(a)-V_{x}(F_{0}\mathrm{sgn}(2\eta(x)-1))\] \[=(a-F_{0}\mathrm{sgn}(2\eta(x)-1))\cdot V_{x}^{\prime}(F_{0} \mathrm{sgn}(2\eta(x)-1))+\frac{1}{2}\left|a-F_{0}\mathrm{sgn}(2\eta(x)-1) \right|^{2}\cdot V_{x}^{\prime\prime}(\xi).\] Since \(\xi\in[-F_{0},F_{0}]\), we have \[0 \leq\frac{1}{\mathrm{e}^{-F_{0}}+\mathrm{e}^{F_{0}}+2}=\inf\left\{ \phi^{\prime\prime}(t)\mid t\in[-F_{0},F_{0}]\right\}\] \[\leq V_{x}^{\prime\prime}(\xi)=\eta(x)\phi^{\prime\prime}(\xi)+( 1-\eta(x))\phi^{\prime\prime}(-\xi)\leq\frac{1}{4}\] and then \[0 \leq\frac{1}{2}\left|a-F_{0}\mathrm{sgn}(2\eta(x)-1)\right|^{2} \cdot\frac{1}{\mathrm{e}^{-F_{0}}+\mathrm{e}^{F_{0}}+2}\] (A.105) \[\leq\frac{1}{2}\left|a-F_{0}\mathrm{sgn}(2\eta(x)-1)\right|^{2} \cdot V_{x}^{\prime\prime}(\xi)\] \[\leq\frac{1}{2}\left(|a|+F_{0}\right)^{2}\cdot\frac{1}{4}\leq \frac{1}{2}F_{0}^{2}.\] On the other hand, if \(2\eta(x)-1>\eta_{0}\), then \[(a-F_{0}\mathrm{sgn}(2\eta(x)-1))\cdot V_{x}^{\prime}(F_{0} \mathrm{sgn}(2\eta(x)-1))\] \[=(a-F_{0})\left(\eta(x)\phi^{\prime}(F_{0})-(1-\eta(x))\phi^{ \prime}(-F_{0})\right)\] \[=|a-F_{0}|\left((1-\eta(x))\phi^{\prime}(-F_{0})-\eta(x)\phi^{ \prime}(F_{0})\right)\] \[\geq|a-F_{0}|\left(\left(1-\frac{1+\eta_{0}}{2}\right)\phi^{ \prime}(-F_{0})-\frac{1+\eta_{0}}{2}\phi^{\prime}(F_{0})\right)\] \[=|a-F_{0}\mathrm{sgn}(2\eta(x)-1)|\cdot\left(\frac{1-\eta_{0}}{2} \phi^{\prime}(-F_{0})-\frac{1+\eta_{0}}{2}\phi^{\prime}(F_{0})\right).\] Similarly, if \(2\eta(x)-1<-\eta_{0}\), then \[(a-F_{0}\mathrm{sgn}(2\eta(x)-1))\cdot V_{x}^{\prime}(F_{0} \mathrm{sgn}(2\eta(x)-1))\] \[=(a+F_{0})\left(\eta(x)\phi^{\prime}(-F_{0})-(1-\eta(x))\phi^{ \prime}(F_{0})\right)\] \[=|a+F_{0}|\left(\eta(x)\phi^{\prime}(-F_{0})-(1-\eta(x))\phi^{ \prime}(F_{0})\right)\] \[\geq|a+F_{0}|\left(\frac{1-\eta_{0}}{2}\phi^{\prime}(-F_{0})- \left(1-\frac{1-\eta_{0}}{2}\right)\phi^{\prime}(F_{0})\right)\] 
\[=|a-F_{0}\text{sgn}(2\eta(x)-1)|\cdot\left(\frac{1-\eta_{0}}{2}\phi^{ \prime}(-F_{0})-\frac{1+\eta_{0}}{2}\phi^{\prime}(F_{0})\right).\] Therefore, for given \(x\in[0,1]^{d}\) satisfying \(|2\eta(x)-1|>\eta_{0}\), there always holds \[(a-F_{0}\text{sgn}(2\eta(x)-1))\cdot V_{x}^{\prime}(F_{0}\text{ sgn}(2\eta(x)-1))\] (A.106) \[\geq|a-F_{0}\text{sgn}(2\eta(x)-1)|\cdot\left(\frac{1-\eta_{0}}{ 2}\phi^{\prime}(-F_{0})-\frac{1+\eta_{0}}{2}\phi^{\prime}(F_{0})\right).\] We next show that \(\frac{1-\eta_{0}}{2}\phi^{\prime}(-F_{0})-\frac{1+\eta_{0}}{2}\phi^{\prime}(F_ {0})>0\). Indeed, let \(g(t)=\frac{1-\eta_{0}}{2}\phi^{\prime}(-t)-\frac{1+\eta_{0}}{2}\phi^{\prime}(t)\). Then \(g^{\prime}(t)=-\frac{1-\eta_{0}}{2}\phi^{\prime\prime}(-t)-\frac{1+\eta_{0}} {2}\phi^{\prime\prime}(t)<0\), i.e., \(g\) is strictly decreasing, and thus \[\frac{1-\eta_{0}}{2}\phi^{\prime}(-F_{0})-\frac{1+\eta_{0}}{2}\phi^{\prime}(F _{0})=g(F_{0})>g\left(\log\frac{1+\eta_{0}}{1-\eta_{0}}\right)=0.\] (A.107) Moreover, we also have \[(a-F_{0}\text{sgn}(2\eta(x)-1))\cdot V_{x}^{\prime}(F_{0}\text{ sgn}(2\eta(x)-1))\] (A.108) \[\leq|a-F_{0}\text{sgn}(2\eta(x)-1)|\cdot|V_{x}^{\prime}(F_{0} \text{sgn}(2\eta(x)-1))|\] \[=|a-F_{0}\text{sgn}(2\eta(x)-1)|\cdot|\eta(x)\phi^{\prime}(F_{0} \text{sgn}(2\eta(x)-1))-(1-\eta(x))\phi^{\prime}(-F_{0}\text{sgn}(2\eta(x)-1))|\] \[\leq|a-F_{0}\text{sgn}(2\eta(x)-1)|\,|\eta(x)+(1-\eta(x))|=|a-F_{ 0}\text{sgn}(2\eta(x)-1)|\,.\] Then the first inequality of (A.103) is from (A.107), the third inequality of (A.103) is due to (A.104), (A.105) and (A.106), and the last inequality of (A.103) is from (A.104), (A.105) and (A.108). Thus we complete the proof. **Lemma A.17**.: _Let \(\eta_{0}\in(0,1)\), \(F_{0}\in\left(0,\log\frac{1+\eta_{0}}{1-\eta_{0}}\right)\), \(d\in\mathbb{N}\), and \(P\) be a Borel probability measure on \([0,1]^{d}\times\{-1,1\}\) of which the conditional probability function \([0,1]^{d}\ni z\mapsto P(\{1\}\,|z)\in[0,1]\) is denoted by \(\eta\). Define_ \[\psi:[0,1]^{d}\times\{-1,1\} \to\mathbb{R},\] (A.109) \[(x,y) \mapsto\begin{cases}\phi\left(yF_{0}\text{sgn}(2\eta(x)-1)\right), &\text{ if }\,|2\eta(x)-1|>\eta_{0},\\ \phi\left(y\log\frac{\eta(x)}{1-\eta(x)}\right),&\text{ if }\,|2\eta(x)-1|\leq \eta_{0}.\end{cases}\] _Then there hold_ \[\int_{[0,1]^{d}\times\{-1,1\}}\left(\phi\left(yf(x)\right)-\psi( x,y)\right)^{2}\!\mathrm{d}P(x,y)\] (A.110) \[\leq\frac{8}{1-\eta_{0}^{2}}\cdot\int_{[0,1]^{d}\times\{-1,1\}} \left(\phi\left(yf(x)\right)-\psi(x,y)\right)\mathrm{d}P(x,y)\] _for any measurable \(f:[0,1]^{d}\to[-F_{0},F_{0}]\,,\) and_ \[0\leq\psi(x,y)\leq\log\frac{2}{1-\eta_{0}},\quad\forall(x,y)\in[0,1]^{d}\times \{-1,1\}\,.\] (A.111) Proof.: Recall that given \(x\in[0,1]^{d}\), \(V_{x}(t)=\eta(x)\phi(t)+(1-\eta(x))\phi(-t),\forall t\in\mathbb{R}\). 
Due to inequality (A.103) and Lemma A.7, for any measurable \(f:[0,1]^{d}\to[-F_{0},F_{0}]\), we have
\[\int_{[0,1]^{d}\times\{-1,1\}}\left(\phi\left(yf(x)\right)-\psi(x,y)\right)\mathrm{d}P(x,y)\]
\[=\int_{|2\eta(x)-1|>\eta_{0}}\int_{\{-1,1\}}\phi\left(yf(x)\right)-\phi\left(yF_{0}\mathrm{sgn}(2\eta(x)-1)\right)\mathrm{d}P(y|x)\mathrm{d}P_{X}(x)\]
\[\quad+\int_{|2\eta(x)-1|\leq\eta_{0}}\int_{\{-1,1\}}\phi\left(yf(x)\right)-\phi\left(y\log\frac{\eta(x)}{1-\eta(x)}\right)\mathrm{d}P(y|x)\mathrm{d}P_{X}(x)\]
\[\geq\int_{|2\eta(x)-1|>\eta_{0}}\frac{1}{2\left(\mathrm{e}^{F_{0}}+\mathrm{e}^{-F_{0}}+2\right)}\left|f(x)-F_{0}\mathrm{sgn}(2\eta(x)-1)\right|^{2}\mathrm{d}P_{X}(x)\]
\[\quad+\int_{|2\eta(x)-1|\leq\eta_{0}}\left[\inf_{t\in\left[\log\frac{1-\eta_{0}}{1+\eta_{0}},\log\frac{1+\eta_{0}}{1-\eta_{0}}\right]}\frac{1}{2(\mathrm{e}^{t}+\mathrm{e}^{-t}+2)}\right]\left|f(x)-\log\frac{\eta(x)}{1-\eta(x)}\right|^{2}\mathrm{d}P_{X}(x)\]
\[\geq\frac{1}{2}\cdot\frac{1}{\frac{1+\eta_{0}}{1-\eta_{0}}+\frac{1-\eta_{0}}{1+\eta_{0}}+2}\int_{\{|2\eta(x)-1|>\eta_{0}\}\times\{-1,1\}}\left|\phi\left(yf(x)\right)-\phi\left(yF_{0}\mathrm{sgn}(2\eta(x)-1)\right)\right|^{2}\mathrm{d}P(x,y)\]
\[\quad+\frac{1}{2}\cdot\frac{1}{\frac{1+\eta_{0}}{1-\eta_{0}}+\frac{1-\eta_{0}}{1+\eta_{0}}+2}\int_{\{|2\eta(x)-1|\leq\eta_{0}\}\times\{-1,1\}}\left|\phi\left(yf(x)\right)-\phi\left(y\log\frac{\eta(x)}{1-\eta(x)}\right)\right|^{2}\mathrm{d}P(x,y)\]
\[=\frac{1-\eta_{0}^{2}}{8}\cdot\int_{[0,1]^{d}\times\{-1,1\}}\left(\phi\left(yf(x)\right)-\psi(x,y)\right)^{2}\mathrm{d}P(x,y),\]
where the second inequality is from (A.20) and the fact that \(F_{0}\in\left(0,\log\frac{1+\eta_{0}}{1-\eta_{0}}\right)\). Thus we have proved the inequality (A.110). On the other hand, from the definition of \(\psi\) as well as \(F_{0}\in\left(0,\log\frac{1+\eta_{0}}{1-\eta_{0}}\right)\), we also have
\[0\leq\psi(x,y)\leq\max\left\{\phi(-F_{0}),\phi\left(-\log\frac{1+\eta_{0}}{1-\eta_{0}}\right)\right\}\leq\phi\left(-\log\frac{1+\eta_{0}}{1-\eta_{0}}\right)=\log\frac{2}{1-\eta_{0}},\]
which gives the inequality (A.111). The proof is completed. 

Now we are in the position to prove Theorem 2.5.

Proof of Theorem 2.5.: Let \(\eta_{0}\in(0,1)\cap[0,t_{1}]\), \(F_{0}\in(0,\log\frac{1+\eta_{0}}{1-\eta_{0}})\cap[0,1]\), \(\xi\in(0,\frac{1}{2}\wedge t_{2}]\) and \(P\in\mathcal{H}_{6,t_{1},c_{1},t_{2},c_{2}}^{d,\beta,r,I,\Theta,s_{1},s_{2}}\) be arbitrary. Denote by \(\eta\) the conditional probability function \(P(\{1\}\left|\cdot\right.)\) of \(P\). By definition, there exists a classifier \(\mathtt{C}\in\mathcal{C}^{d,\beta,r,I,\Theta}\) such that (2.24), (2.50) and (2.51) hold. According to Proposition A.4 and the proof of Theorem 3.4 in [22], there exist positive constants \(G_{0},N_{0},S_{0},B_{0}\) only depending on \(d,\beta,r,I,\Theta\) and \(\tilde{f}_{0}\in\mathcal{F}_{d}^{\mathbf{FNN}}(G_{\xi},N_{\xi},S_{\xi},B_{\xi},1)\) such that \(\tilde{f}_{0}(x)=\mathtt{C}(x)\) for \(x\in[0,1]^{d}\) with \(\Delta_{\mathtt{C}}(x)>\xi\), where
\[G_{\xi}=G_{0}\log\frac{1}{\xi},\;N_{\xi}=N_{0}\left(\frac{1}{\xi}\right)^{\frac{d-1}{\beta}},\;S_{\xi}=S_{0}\left(\frac{1}{\xi}\right)^{\frac{d-1}{\beta}}\log\left(\frac{1}{\xi}\right),\;B_{\xi}=\frac{B_{0}}{\xi}.\] (A.112)
Define \(\psi:[0,1]^{d}\times\{-1,1\}\to\mathbb{R}\) by (A.109). 
Then for any measurable function \(f:[0,1]^{d}\to[-F_{0},F_{0}]\), there holds
\[\mathcal{E}_{P}(f)=\mathcal{E}_{P}\left(\frac{f}{F_{0}}\right)\leq\int_{[0,1]^{d}}\left|\frac{f(x)}{F_{0}}-\mathrm{sgn}(2\eta(x)-1)\right|\left|2\eta(x)-1\right|\mathrm{d}P_{X}(x)\]
\[\leq 2P_{X}\left(\left\{x\in[0,1]^{d}\,\middle|\,|2\eta(x)-1|\leq\eta_{0}\right\}\right)+\int_{|2\eta(x)-1|>\eta_{0}}\left|\frac{f(x)}{F_{0}}-\mathrm{sgn}(2\eta(x)-1)\right|\mathrm{d}P_{X}(x)\]
\[\leq 2c_{1}\eta_{0}^{s_{1}}+\frac{1}{F_{0}}\int_{|2\eta(x)-1|>\eta_{0}}|f(x)-F_{0}\mathrm{sgn}(2\eta(x)-1)|\mathrm{d}P_{X}(x)\] (A.113)
\[\leq 2c_{1}\eta_{0}^{s_{1}}+\int_{|2\eta(x)-1|>\eta_{0}}\frac{\int_{\{-1,1\}}\left(\phi(yf(x))-\phi(yF_{0}\mathrm{sgn}(2\eta(x)-1))\right)\mathrm{d}P(y|x)}{F_{0}\cdot\left(\frac{1-\eta_{0}}{2}\phi^{\prime}(-F_{0})-\frac{\eta_{0}+1}{2}\phi^{\prime}(F_{0})\right)}\mathrm{d}P_{X}(x)\]
\[\leq 2c_{1}\eta_{0}^{s_{1}}+\frac{\int_{[0,1]^{d}\times\{-1,1\}}\left(\phi(yf(x))-\psi(x,y)\right)\mathrm{d}P(x,y)}{F_{0}\cdot\left(\frac{1-\eta_{0}}{2}\phi^{\prime}(-F_{0})-\frac{\eta_{0}+1}{2}\phi^{\prime}(F_{0})\right)},\]
where the first inequality is from Theorem 2.31 of [40], the third inequality is due to the noise condition (2.24), and the fourth inequality is from (A.103) in Lemma A.16.

Take \(\mathcal{F}=\mathcal{F}_{d}^{\mathbf{FNN}}(G_{\xi},N_{\xi},S_{\xi},B_{\xi},F_{0})\) with \((G_{\xi},N_{\xi},S_{\xi},B_{\xi})\) given by (A.112), \(\Gamma=\frac{8}{1-\eta_{0}^{2}}\) and \(M=\frac{2}{1-\eta_{0}}\) in Theorem 2.1. We will use this theorem to derive the desired generalization bounds for the \(\phi\)-ERM \(\hat{f}_{n}:=\hat{f}_{n}^{\mathbf{FNN}}\) over \(\mathcal{F}_{d}^{\mathbf{FNN}}(G_{\xi},N_{\xi},S_{\xi},B_{\xi},F_{0})\). Indeed, Lemma A.17 guarantees that the conditions (2.3), (2.4) and (2.5) of Theorem 2.1 are satisfied. Moreover, take \(\gamma=\frac{1}{n}\). Then \(W=\max\left\{3,\;\mathcal{N}\left(\mathcal{F},\gamma\right)\right\}\) satisfies
\[\log W\leq C_{d,\beta,r,I,\Theta}\xi^{-\frac{d-1}{\beta}}\left(\log\frac{1}{\xi}\right)^{2}\left(\log\frac{1}{\xi}+\log n\right).\]
Thus the expectation of \(\int_{[0,1]^{d}\times\{-1,1\}}\left(\phi(y\hat{f}_{n}(x))-\psi(x,y)\right)\mathrm{d}P(x,y)\) can be bounded by inequality (2.6) in Theorem 2.1 as
\[\mathbf{E}_{P^{\otimes n}}\left[\int_{[0,1]^{d}\times\{-1,1\}}\left(\phi(y\hat{f}_{n}(x))-\psi(x,y)\right)\mathrm{d}P(x,y)\right]\leq\frac{4000C_{d,\beta,r,I,\Theta}\xi^{-\frac{d-1}{\beta}}\left(\log\frac{1}{\xi}\right)^{2}\left(\log\frac{1}{\xi}+\log n\right)}{n(1-\eta_{0}^{2})}\]
\[\qquad+2\inf_{f\in\mathcal{F}}\left(\mathcal{R}_{P}^{\phi}(f)-\int_{[0,1]^{d}\times\{-1,1\}}\psi(x,y)\mathrm{d}P(x,y)\right).\] (A.114)
We next estimate the approximation error, i.e., the second term on the right-hand side of (A.114). Take \(f_{0}=F_{0}\tilde{f}_{0}\in\mathcal{F}\), where \(\tilde{f}_{0}\in\mathcal{F}_{d}^{\mathbf{FNN}}(G_{\xi},N_{\xi},S_{\xi},B_{\xi},1)\) satisfies \(\tilde{f}_{0}(x)=\mathtt{C}(x)\) for \(x\in[0,1]^{d}\) with \(\Delta_{\mathtt{C}}(x)>\xi\). 
Then one can bound the approximation error as
\[\inf_{f\in\mathcal{F}}\left(\mathcal{R}_{P}^{\phi}(f)-\int_{[0,1]^{d}\times\{-1,1\}}\psi(x,y)\mathrm{d}P(x,y)\right)\leq\mathcal{R}_{P}^{\phi}(f_{0})-\int_{[0,1]^{d}\times\{-1,1\}}\psi(x,y)\mathrm{d}P(x,y)=I_{1}+I_{2}+I_{3},\] (A.115)
where
\[I_{1}:=\int_{\{|2\eta(x)-1|>\eta_{0},\,\Delta_{\mathtt{C}}(x)>\xi\}\times\{-1,1\}}\phi(yf_{0}(x))-\phi(yF_{0}\mathrm{sgn}(2\eta(x)-1))\,\mathrm{d}P(x,y),\]
\[I_{2}:=\int_{\{|2\eta(x)-1|\leq\eta_{0}\}\times\{-1,1\}}\phi(yf_{0}(x))-\phi\left(y\log\frac{\eta(x)}{1-\eta(x)}\right)\mathrm{d}P(x,y),\]
\[I_{3}:=\int_{\{|2\eta(x)-1|>\eta_{0},\,\Delta_{\mathtt{C}}(x)\leq\xi\}\times\{-1,1\}}\phi(yf_{0}(x))-\phi(yF_{0}\mathrm{sgn}(2\eta(x)-1))\,\mathrm{d}P(x,y).\]
Note that \(f_{0}(x)=F_{0}\tilde{f}_{0}(x)=F_{0}\mathtt{C}(x)=F_{0}\mathrm{sgn}(2\eta(x)-1)\) for \(P_{X}\)-almost all \(x\in[0,1]^{d}\) with \(\Delta_{\mathtt{C}}(x)>\xi\). Thus it follows that \(I_{1}=0\). On the other hand, from Lemma A.7 and the noise condition (2.24), we see that
\[I_{2}\leq\int_{\{|2\eta(x)-1|\leq\eta_{0}\}\times\{-1,1\}}\left|f_{0}(x)-\log\frac{\eta(x)}{1-\eta(x)}\right|^{2}\mathrm{d}P(x,y)\leq\int_{\{|2\eta(x)-1|\leq\eta_{0}\}\times\{-1,1\}}\left(F_{0}+\log\frac{1+\eta_{0}}{1-\eta_{0}}\right)^{2}\mathrm{d}P(x,y)\leq 4\left(\log\frac{1+\eta_{0}}{1-\eta_{0}}\right)^{2}c_{1}\cdot\eta_{0}^{s_{1}}.\] (A.116)
Moreover, due to Lemma A.16 and the margin condition (2.51), we have
\[I_{3}\leq\int_{\{|2\eta(x)-1|>\eta_{0},\,\Delta_{\mathtt{C}}(x)\leq\xi\}}\left(2F_{0}+F_{0}^{2}\right)\mathrm{d}P_{X}(x)\leq 3F_{0}\cdot P_{X}\left(\left\{x\in[0,1]^{d}\,\middle|\,\Delta_{\mathtt{C}}(x)\leq\xi\right\}\right)\leq 3F_{0}\cdot c_{2}\cdot\xi^{s_{2}}.\] (A.117)
The estimates above together with (A.113) and (A.114) give
\[\mathbf{E}_{P^{\otimes n}}\left[\mathcal{E}_{P}(\hat{f}_{n})\right]\leq 2c_{1}\eta_{0}^{s_{1}}+\frac{1}{F_{0}}\cdot\frac{\mathbf{E}_{P^{\otimes n}}\left[\int_{[0,1]^{d}\times\{-1,1\}}\left(\phi(y\hat{f}_{n}(x))-\psi(x,y)\right)\mathrm{d}P(x,y)\right]}{\frac{1-\eta_{0}}{2}\phi^{\prime}(-F_{0})-\frac{\eta_{0}+1}{2}\phi^{\prime}(F_{0})}\]
\[\leq 2c_{1}\eta_{0}^{s_{1}}+\frac{8\left|\log\frac{1+\eta_{0}}{1-\eta_{0}}\right|^{2}c_{1}\eta_{0}^{s_{1}}+6F_{0}c_{2}\xi^{s_{2}}+\frac{4000C_{d,\beta,r,I,\Theta}\,\xi^{-\frac{d-1}{\beta}}\left(\log\frac{1}{\xi}\right)^{2}\left(\log\frac{1}{\xi}+\log n\right)}{n(1-\eta_{0}^{2})}}{F_{0}\cdot\left(\frac{1-\eta_{0}}{2}\phi^{\prime}(-F_{0})-\frac{\eta_{0}+1}{2}\phi^{\prime}(F_{0})\right)}.\] (A.118)
Since \(P\) is arbitrary, we can take the supremum over all \(P\in\mathcal{H}_{6,t_{1},c_{1},t_{2},c_{2}}^{d,\beta,r,I,\Theta,s_{1},s_{2}}\) to obtain from (A.118) that
\[\sup_{P\in\mathcal{H}_{6,t_{1},c_{1},t_{2},c_{2}}^{d,\beta,r,I,\Theta,s_{1},s_{2}}}\mathbf{E}_{P^{\otimes n}}\left[\mathcal{E}_{P}\left(\hat{f}_{n}^{\mathbf{FNN}}\right)\right]\leq 2c_{1}\eta_{0}^{s_{1}}+\frac{8\left|\log\frac{1+\eta_{0}}{1-\eta_{0}}\right|^{2}c_{1}\eta_{0}^{s_{1}}+6F_{0}c_{2}\xi^{s_{2}}+\frac{4000C_{d,\beta,r,I,\Theta}\,\xi^{-\frac{d-1}{\beta}}\left(\log\frac{1}{\xi}\right)^{2}\left(\log\frac{1}{\xi}+\log n\right)}{n(1-\eta_{0}^{2})}}{F_{0}\cdot\left(\frac{1-\eta_{0}}{2}\phi^{\prime}(-F_{0})-\frac{\eta_{0}+1}{2}\phi^{\prime}(F_{0})\right)}.\] (A.119)
(A.119) holds for all \(\eta_{0}\in(0,1)\cap[0,t_{1}]\), \(F_{0}\in(0,\log\frac{1+\eta_{0}}{1-\eta_{0}})\cap[0,1]\), and \(\xi\in(0,\frac{1}{2}\wedge t_{2}]\). We then take suitable \(\eta_{0}\), \(F_{0}\), and \(\xi\) in (A.119) to derive the convergence rates stated in Theorem 2.5. 
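Before the case analysis, the following sketch (not part of the proof) verifies symbolically the exponent balancing used in Case IV below: with \(A=\frac{(\log n)^{3}}{n}\) and the choices of \(\eta_{0}\) and \(\xi\) stated there, the three competing terms of (A.119) carry the same power of \(A\), which equals the claimed rate.

```python
import sympy as sp

# Symbolic check of the Case IV balancing: eta_0 ~ A^{p_eta}, xi ~ A^{p_xi} with
# A = (log n)^3 / n equalize the exponents of eta_0^{s1}, xi^{s2}/eta_0 and
# xi^{-(d-1)/beta} * A / eta_0^2, and the common exponent equals the claimed
# rate s1 / (1 + (s1+1) * (1 + (d-1)/(beta*s2))).

s1, s2, d, beta = sp.symbols('s1 s2 d beta', positive=True)

D = s2 + (s1 + 1) * (s2 + (d - 1) / beta)        # denominator of the stated choices
p_eta = s2 / D                                   # eta_0 ~ A^{p_eta}
p_xi = (s1 + 1) / D                              # xi    ~ A^{p_xi}

t1 = s1 * p_eta                                  # exponent of eta_0^{s1}
t2 = s2 * p_xi - p_eta                           # exponent of xi^{s2} / eta_0
t3 = -(d - 1) / beta * p_xi + 1 - 2 * p_eta      # exponent of xi^{-(d-1)/beta} * A / eta_0^2
rate = s1 / (1 + (s1 + 1) * (1 + (d - 1) / (beta * s2)))

print(sp.simplify(t1 - t2), sp.simplify(t1 - t3), sp.simplify(t1 - rate))  # 0 0 0
```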
**Case I.** When \(s_{1}=s_{2}=\infty\), taking \(\eta_{0}=F_{0}=t_{1}\wedge\frac{1}{2}\) and \(\xi=t_{2}\wedge\frac{1}{2}\) in (A.119) yields
\[\sup_{P\in\mathcal{H}_{6,t_{1},c_{1},t_{2},c_{2}}^{d,\beta,r,I,\Theta,s_{1},s_{2}}}\mathbf{E}_{P^{\otimes n}}\left[\mathcal{E}_{P}\left(\hat{f}_{n}^{\mathbf{FNN}}\right)\right]\lesssim\frac{\log n}{n}.\]
**Case II.** When \(s_{1}=\infty\) and \(s_{2}<\infty\), taking \(\eta_{0}=F_{0}=t_{1}\wedge\frac{1}{2}\) and \(\xi\asymp\left(\frac{(\log n)^{3}}{n}\right)^{\frac{1}{s_{2}+\frac{d-1}{\beta}}}\) in (A.119) yields
\[\sup_{P\in\mathcal{H}_{6,t_{1},c_{1},t_{2},c_{2}}^{d,\beta,r,I,\Theta,s_{1},s_{2}}}\mathbf{E}_{P^{\otimes n}}\left[\mathcal{E}_{P}\left(\hat{f}_{n}^{\mathbf{FNN}}\right)\right]\lesssim\left(\frac{\left(\log n\right)^{3}}{n}\right)^{\frac{1}{1+\frac{d-1}{\beta s_{2}}}}.\]
**Case III.** When \(s_{1}<\infty\) and \(s_{2}=\infty\), take \(\eta_{0}=F_{0}\asymp\left(\frac{\log n}{n}\right)^{\frac{1}{s_{1}+2}}\) and \(\xi=t_{2}\wedge\frac{1}{2}\) in (A.119). From the fact that \(\frac{\eta_{0}}{4}\leq\frac{1-\eta_{0}}{2}\phi^{\prime}(-\eta_{0})-\frac{\eta_{0}+1}{2}\phi^{\prime}(\eta_{0})\leq\eta_{0}\) for all \(0\leq\eta_{0}\leq 1\), the denominator of the second term on the right-hand side of (A.119) is at least \(\frac{1}{4}\eta_{0}^{2}\). Then we have
\[\sup_{P\in\mathcal{H}_{6,t_{1},c_{1},t_{2},c_{2}}^{d,\beta,r,I,\Theta,s_{1},s_{2}}}\mathbf{E}_{P^{\otimes n}}\left[\mathcal{E}_{P}\left(\hat{f}_{n}^{\mathbf{FNN}}\right)\right]\lesssim\left(\frac{\log n}{n}\right)^{\frac{s_{1}}{s_{1}+2}}.\]
**Case IV.** When \(s_{1}<\infty\) and \(s_{2}<\infty\), taking
\[\eta_{0}=F_{0}\asymp\left(\frac{\left(\log n\right)^{3}}{n}\right)^{\frac{s_{2}}{s_{2}+(s_{1}+1)\left(s_{2}+\frac{d-1}{\beta}\right)}}\quad\text{and}\quad\xi\asymp\left(\frac{\left(\log n\right)^{3}}{n}\right)^{\frac{s_{1}+1}{s_{2}+(s_{1}+1)\left(s_{2}+\frac{d-1}{\beta}\right)}}\]
in (A.119) yields
\[\sup_{P\in\mathcal{H}_{6,t_{1},c_{1},t_{2},c_{2}}^{d,\beta,r,I,\Theta,s_{1},s_{2}}}\mathbf{E}_{P^{\otimes n}}\left[\mathcal{E}_{P}\left(\hat{f}_{n}^{\mathbf{FNN}}\right)\right]\lesssim\left(\frac{\left(\log n\right)^{3}}{n}\right)^{\frac{s_{1}}{1+(s_{1}+1)\left(1+\frac{d-1}{\beta s_{2}}\right)}}.\]
Combining the above cases, we obtain the desired results. The proof of Theorem 2.5 is completed. 

#### A.3.6 Proof of Theorem 2.6 and Corollary 2.1

Hereinafter, for \(a\in\mathbb{R}^{d}\) and \(R\in\mathbb{R}\), we define \(\mathscr{B}(a,R):=\left\{x\in\mathbb{R}^{d}\,\middle|\,\|x-a\|_{2}\leq R\right\}\).

**Lemma A.18**.: _Let \(d\in\mathbb{N}\), \(\beta\in(0,\infty)\), \(r\in(0,\infty)\), \(Q\in\mathbb{N}\cap(1,\infty)\),_
\[G_{Q,d}:=\left\{(\frac{k_{1}}{2Q},\dots,\frac{k_{d}}{2Q})^{\top}\,\middle|\,k_{1},\dots,k_{d}\text{ are odd integers}\right\}\cap[0,1]^{d},\]
_and \(T:G_{Q,d}\to\{-1,1\}\) be a map. 
Then there exist a constant \(\mathrm{c}_{1}\in(0,\frac{1}{9999})\) only depending on \((d,\beta,r)\), and an \(f\in\mathcal{B}_{r}^{\beta}\left([0,1]^{d}\right)\) depending on \((d,\beta,r,Q,T)\), such that \(\|f\|_{[0,1]^{d}}=\frac{\mathrm{c}_{1}}{Q^{\beta}}\), and_
\[f(x)=\|f\|_{[0,1]^{d}}\cdot T(a)=\frac{\mathrm{c}_{1}}{Q^{\beta}}\cdot T(a),\;\forall\;a\in G_{Q,d},\;x\in\mathscr{B}(a,\frac{1}{5Q})\cap[0,1]^{d}.\]
Proof.: Let
\[\kappa:\mathbb{R}\to[0,1],\,t\mapsto\frac{\int_{t}^{\infty}\exp\left(-1/(x-1/9)\right)\cdot\exp\left(-1/(1/8-x)\right)\cdot\mathbb{1}_{(1/9,1/8)}(x)\mathrm{d}x}{\int_{1/9}^{1/8}\exp\left(-1/(x-1/9)\right)\cdot\exp\left(-1/(1/8-x)\right)\mathrm{d}x}\]
be a well-defined, infinitely differentiable, decreasing function on \(\mathbb{R}\) with \(\kappa(t)=1\) for \(t\leq 1/9\) and \(\kappa(t)=0\) for \(t\geq 1/8\). Then define \(b:=\lceil\beta\rceil-1\), \(\lambda:=\beta-b\),
\[u:\mathbb{R}^{d}\to[0,1],\,x\mapsto\kappa(\|x\|_{2}^{2}),\]
and \(\mathrm{c}_{2}:=\big{\|}u|_{[-2,2]^{d}}\big{\|}_{\mathcal{C}^{b,\lambda}([-2,2]^{d})}\). Obviously, \(u\) only depends on \(d\), and \(\mathrm{c}_{2}\) only depends on \((d,\beta)\). Since \(u\) is infinitely differentiable and supported in \(\mathscr{B}(\mathbf{0},\sqrt{\frac{1}{8}})\), we have \(0<\mathrm{c}_{2}<\infty\). Take \(\mathrm{c}_{1}:=\frac{r}{4\mathrm{c}_{2}}\wedge\frac{1}{10000}\). Then \(\mathrm{c}_{1}\) only depends on \((d,\beta,r)\), and \(0<\mathrm{c}_{1}<\frac{1}{9999}\). Define
\[f:[0,1]^{d}\to\mathbb{R},\,x\mapsto\sum_{a\in G_{Q,d}}T(a)\cdot\frac{\mathrm{c}_{1}}{Q^{\beta}}\cdot u(Q\cdot(x-a)).\]
We then show that these \(\mathrm{c}_{1}\) and \(f\) defined above have the desired properties. For any \(\boldsymbol{m}\in(\mathbb{N}\cup\{0\})^{d}\), we write \(u_{\boldsymbol{m}}\) for \(\mathrm{D}^{\boldsymbol{m}}u\), i.e., the partial derivative of \(u\) with respect to the multi-index \(\boldsymbol{m}\). An elementary calculation yields
\[\mathrm{D}^{\boldsymbol{m}}f(x)=\sum_{a\in G_{Q,d}}T(a)\cdot\frac{\mathrm{c}_{1}}{Q^{\beta-\|\boldsymbol{m}\|_{1}}}\cdot u_{\boldsymbol{m}}(Q\cdot(x-a)),\;\forall\;\boldsymbol{m}\in(\mathbb{N}\cup\{0\})^{d},\;x\in[0,1]^{d}.\] (A.121)
Note that the supports of the functions \(T(a)\cdot\frac{\mathrm{c}_{1}}{Q^{\beta-\|\boldsymbol{m}\|_{1}}}\cdot u_{\boldsymbol{m}}(Q\cdot(x-a))\) (\(a\in G_{Q,d}\)) in (A.121) are disjoint. Indeed, we have
\[\left\{x\in\mathbb{R}^{d}\,\middle|\,T(a)\cdot\frac{\mathrm{c}_{1}}{Q^{\beta-\|\boldsymbol{m}\|_{1}}}\cdot u_{\boldsymbol{m}}(Q\cdot(x-a))\neq 0\right\}\subset\mathscr{B}(a,\frac{\sqrt{1/8}}{Q})\subset\left\{a+v\,\middle|\,v\in(\frac{-1}{2Q},\frac{1}{2Q})^{d}\right\}\]
\[\subset\mathbb{R}^{d}\setminus\left\{z+v\,\middle|\,v\in[\frac{-1}{2Q},\frac{1}{2Q}]^{d}\right\},\;\forall\,\boldsymbol{m}\in(\mathbb{N}\cup\{0\})^{d},\,a\in G_{Q,d},\,z\in G_{Q,d}\setminus\{a\}\,,\] (A.122)
and the sets \(\mathscr{B}(a,\frac{\sqrt{1/8}}{Q})\) (\(a\in G_{Q,d}\)) are disjoint. 
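The disjointness just observed, together with the plateau values established in (A.125) below, can be checked numerically; the following sketch (with illustrative values of \(Q\), \(\beta\) and \(\mathrm{c}_{1}\), for \(d=1\), not part of the proof) instantiates the construction.

```python
import numpy as np
from scipy.integrate import quad

# Numerical sketch of the bump construction: kappa is the smoothed step above,
# u(x) = kappa(||x||_2^2), and f(x) = sum_a T(a) * (c1/Q^beta) * u(Q*(x-a))
# over the grid G_{Q,d}. For d = 1 we check that f equals (c1/Q^beta)*T(a) on
# B(a, 1/(5Q)), as asserted in (A.125), using the disjointness of the supports.

w = lambda x: np.exp(-1/(x - 1/9)) * np.exp(-1/(1/8 - x)) if 1/9 < x < 1/8 else 0.0
Z, _ = quad(w, 1/9, 1/8)
kappa = lambda t: quad(w, max(t, 1/9), 1/8)[0] / Z if t < 1/8 else 0.0  # = 1 for t <= 1/9
u = lambda x: kappa(float(np.dot(x, x)))

Q, beta, c1 = 3, 1.5, 1e-4                                  # illustrative values
grid = np.array([[k / (2 * Q)] for k in range(1, 2 * Q, 2)])  # G_{Q,1}
rng = np.random.default_rng(0)
T = rng.choice([-1, 1], size=len(grid))                     # an arbitrary sign map

f = lambda x: sum(T[j] * (c1 / Q**beta) * u(Q * (x - a)) for j, a in enumerate(grid))

for j, a in enumerate(grid):                                # plateau values (A.125)
    x = a + 1 / (5 * Q)                                     # a point of B(a, 1/(5Q))
    assert abs(f(x) - T[j] * c1 / Q**beta) < 1e-12
print("plateau values match (A.125)")
```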
Therefore,
\[\|\mathrm{D}^{\boldsymbol{m}}f\|_{[0,1]^{d}}=\sup_{a\in G_{Q,d}}\sup_{x\in[0,1]^{d}}\left|T(a)\cdot\frac{\mathrm{c}_{1}}{Q^{\beta-\left\|\boldsymbol{m}\right\|_{1}}}\cdot u_{\boldsymbol{m}}(Q\cdot(x-a))\right|\]
\[=\sup_{a\in G_{Q,d}}\sup_{x\in\mathscr{B}(a,\frac{\sqrt{1/8}}{Q})}\left|T(a)\cdot\frac{\mathrm{c}_{1}}{Q^{\beta-\left\|\boldsymbol{m}\right\|_{1}}}\cdot u_{\boldsymbol{m}}(Q\cdot(x-a))\right|\]
\[=\sup_{a\in G_{Q,d}}\sup_{x\in\mathscr{B}(\boldsymbol{0},\sqrt{1/8})}\left|\frac{\mathrm{c}_{1}}{Q^{\beta-\left\|\boldsymbol{m}\right\|_{1}}}\cdot u_{\boldsymbol{m}}(x)\right|\leq\sup_{x\in[-2,2]^{d}}\left|\frac{\mathrm{c}_{1}}{Q^{\beta-\left\|\boldsymbol{m}\right\|_{1}}}\cdot u_{\boldsymbol{m}}(x)\right|\]
\[\leq\sup_{x\in[-2,2]^{d}}\left|\mathrm{c}_{1}\cdot u_{\boldsymbol{m}}(x)\right|\leq\mathrm{c}_{1}\mathrm{c}_{2},\;\forall\;\boldsymbol{m}\in(\mathbb{N}\cup\{0\})^{d}\text{ with }\left\|\boldsymbol{m}\right\|_{1}\leq b.\] (A.123)
In particular, we have that
\[\left\|f\right\|_{[0,1]^{d}}=\sup_{a\in G_{Q,d}}\sup_{x\in\mathscr{B}(\boldsymbol{0},\sqrt{1/8})}\left|\frac{\mathrm{c}_{1}}{Q^{\beta}}\cdot u(x)\right|=\frac{\mathrm{c}_{1}}{Q^{\beta}}.\] (A.124)
Besides, for any \(a\in G_{Q,d}\), any \(x\in\mathscr{B}(a,\frac{1}{5Q})\cap[0,1]^{d}\), and any \(z\in G_{Q,d}\setminus\{a\}\), we have
\[\left\|Q\cdot(x-z)\right\|_{2}\geq Q\left\|a-z\right\|_{2}-Q\left\|x-a\right\|_{2}\geq 1-\frac{1}{5}>\sqrt{1/8}>\sqrt{1/9}>\left\|Q\cdot(x-a)\right\|_{2},\]
which means that \(u(Q\cdot(x-z))=0\) and \(u(Q\cdot(x-a))=1\). Thus
\[f(x)=T(a)\cdot\frac{\mathrm{c}_{1}}{Q^{\beta}}\cdot u(Q\cdot(x-a))+\sum_{z\in G_{Q,d}\setminus\{a\}}T(z)\cdot\frac{\mathrm{c}_{1}}{Q^{\beta}}\cdot u(Q\cdot(x-z))\]
\[=T(a)\cdot\frac{\mathrm{c}_{1}}{Q^{\beta}}\cdot 1+\sum_{z\in G_{Q,d}\setminus\{a\}}T(z)\cdot\frac{\mathrm{c}_{1}}{Q^{\beta}}\cdot 0=T(a)\cdot\frac{\mathrm{c}_{1}}{Q^{\beta}},\;\forall\;a\in G_{Q,d},\;x\in\mathscr{B}(a,\frac{1}{5Q})\cap[0,1]^{d}.\] (A.125)
Now it remains to show that \(f\in\mathcal{B}_{r}^{\beta}\left([0,1]^{d}\right)\). Let \(\boldsymbol{m}\in(\mathbb{N}\cup\{0\})^{d}\) be an arbitrary multi-index with \(\left\|\boldsymbol{m}\right\|_{1}=b\), and \(x,y\) be arbitrary points in \(\bigcup_{a\in G_{Q,d}}\left\{a+v\,\middle|\,v\in(-\frac{1}{2Q},\frac{1}{2Q})^{d}\right\}\). Then there exist \(a_{x},a_{y}\in G_{Q,d}\) such that \(x-a_{x}\in(-\frac{1}{2Q},\frac{1}{2Q})^{d}\) and \(y-a_{y}\in(-\frac{1}{2Q},\frac{1}{2Q})^{d}\). 
If \(a_{x}=a_{y}\), then it follows from (A.122) that \[u_{\boldsymbol{m}}(Q\cdot(x-z))=u_{\boldsymbol{m}}(Q\cdot(y-z))=0,\;\forall\;z \in G_{Q,d}\setminus\{a_{x}\}\,,\] which, together with the fact that \(\{Q\cdot(x-a_{x}),Q\cdot(y-a_{y})\}\subset(-\frac{1}{2},\frac{1}{2})^{d}\), yields \[\begin{split}&|\mathrm{D}^{\boldsymbol{m}}f(x)-\mathrm{D}^{ \boldsymbol{m}}f(y)|\\ &=\left|T(a_{x})\cdot\frac{\mathrm{c}_{1}}{Q^{\beta-\left\| \boldsymbol{m}\right\|_{1}}}\cdot u_{\boldsymbol{m}}(Q\cdot(x-a_{x}))-T(a_{ y})\cdot\frac{\mathrm{c}_{1}}{Q^{\beta-\left\|\boldsymbol{m}\right\|_{1}}} \cdot u_{\boldsymbol{m}}(Q\cdot(y-a_{y}))\right|\\ &=\mathrm{c}_{1}\cdot\left|\frac{u_{\boldsymbol{m}}(Q\cdot(x-a_{ x}))-u_{\boldsymbol{m}}(Q\cdot(y-a_{y}))}{Q^{\lambda}}\right|\\ &\leq\frac{\mathrm{c}_{1}}{Q^{\lambda}}\cdot\left\|Q\cdot(x-a_{ x})-Q\cdot(y-a_{y})\right\|_{2}^{\lambda}\cdot\sup_{z,z^{\prime}\in(-\frac{1}{2}, \frac{1}{2})^{d},z\neq z^{\prime},}\left|\frac{u_{\boldsymbol{m}}(z)-u_{ \boldsymbol{m}}(z^{\prime})}{\left\|z-z^{\prime}\right\|_{2}^{\lambda}}\right| \\ &\leq\frac{\mathrm{c}_{1}}{Q^{\lambda}}\cdot\left\|Q\cdot(x-a_{ x})-Q\cdot(y-a_{y})\right\|_{2}^{\lambda}\cdot\mathrm{c}_{2}=\mathrm{c}_{1}\mathrm{c}_{2} \cdot\left\|x-y\right\|_{2}^{\lambda}.\end{split}\] If, otherwise, \(a_{x}\neq a_{y}\), then it is easy to show that \[\{t\cdot x+(1-t)\cdot y|t\in[0,1]\}\cap\left\{a_{x}+v\left|v\in[-\frac{1}{2Q}, \frac{1}{2Q}]^{d}\setminus(-\frac{1}{2Q},\frac{1}{2Q})^{d}\right.\right\}\neq\varnothing,\] \[\{t\cdot x+(1-t)\cdot y|t\in[0,1]\}\cap\left\{a_{y}+v\left|v\in[-\frac{1}{2Q}, \frac{1}{2Q}]^{d}\setminus(-\frac{1}{2Q},\frac{1}{2Q})^{d}\right.\right\}\neq\varnothing.\] In other words, the line segment joining points \(x\) and \(y\) intersects boundaries of rectangles \(\left\{a_{x}+v\left|v\in(-\frac{1}{2Q},\frac{1}{2Q})^{d}\right.\right\}\) and \(\left\{a_{y}+v\left|v\in(-\frac{1}{2Q},\frac{1}{2Q})^{d}\right.\right\}\). Take \[x^{\prime}\in\{t\cdot x+(1-t)\cdot y|t\in[0,1]\}\cap\left\{a_{x}+v\left|v\in[- \frac{1}{2Q},\frac{1}{2Q}]^{d}\setminus(-\frac{1}{2Q},\frac{1}{2Q})^{d}\right.\right\}\] and \[y^{\prime}\in\{t\cdot x+(1-t)\cdot y|t\in[0,1]\}\cap\left\{a_{y}+v\left|v\in[- \frac{1}{2Q},\frac{1}{2Q}]^{d}\setminus(-\frac{1}{2Q},\frac{1}{2Q})^{d}\right.\right\}\] (cf. Figure A.8). 
Obviously, we have that
\[\{Q\cdot(x-a_{x}),Q\cdot(x^{\prime}-a_{x}),Q\cdot(y-a_{y}),Q\cdot(y^{\prime}-a_{y})\}\subset[-\frac{1}{2},\frac{1}{2}]^{d}.\]
By (A.122), we have that
\[u_{\boldsymbol{m}}(Q\cdot(x-z))\cdot(1-\mathbb{1}_{\{a_{x}\}}(z))=u_{\boldsymbol{m}}(Q\cdot(x^{\prime}-z))=u_{\boldsymbol{m}}(Q\cdot(y^{\prime}-z))=u_{\boldsymbol{m}}(Q\cdot(y-z))\cdot(1-\mathbb{1}_{\{a_{y}\}}(z))=0,\;\forall\;z\in G_{Q,d}.\]
Consequently,
\[|\mathrm{D}^{\boldsymbol{m}}f(x)-\mathrm{D}^{\boldsymbol{m}}f(y)|\leq|\mathrm{D}^{\boldsymbol{m}}f(x)|+|\mathrm{D}^{\boldsymbol{m}}f(y)|\]
\[=\left|T(a_{x})\cdot\frac{\mathrm{c}_{1}}{Q^{\beta-\|\boldsymbol{m}\|_{1}}}\cdot u_{\boldsymbol{m}}(Q\cdot(x-a_{x}))\right|+\left|T(a_{y})\cdot\frac{\mathrm{c}_{1}}{Q^{\beta-\|\boldsymbol{m}\|_{1}}}\cdot u_{\boldsymbol{m}}(Q\cdot(y-a_{y}))\right|\]
\[=\frac{\mathrm{c}_{1}}{Q^{\lambda}}\cdot|u_{\boldsymbol{m}}(Q\cdot(x-a_{x}))|+\frac{\mathrm{c}_{1}}{Q^{\lambda}}\cdot|u_{\boldsymbol{m}}(Q\cdot(y-a_{y}))|\]
\[=\frac{\mathrm{c}_{1}}{Q^{\lambda}}\cdot|u_{\boldsymbol{m}}(Q\cdot(x-a_{x}))-u_{\boldsymbol{m}}(Q\cdot(x^{\prime}-a_{x}))|+\frac{\mathrm{c}_{1}}{Q^{\lambda}}\cdot|u_{\boldsymbol{m}}(Q\cdot(y-a_{y}))-u_{\boldsymbol{m}}(Q\cdot(y^{\prime}-a_{y}))|\]
\[\leq\frac{\mathrm{c}_{1}}{Q^{\lambda}}\cdot\left\|Q\cdot(x-a_{x})-Q\cdot(x^{\prime}-a_{x})\right\|_{2}^{\lambda}\cdot\sup_{z,z^{\prime}\in[-\frac{1}{2},\frac{1}{2}]^{d},z\neq z^{\prime}}\left|\frac{u_{\boldsymbol{m}}(z)-u_{\boldsymbol{m}}(z^{\prime})}{\|z-z^{\prime}\|_{2}^{\lambda}}\right|\]
\[\qquad+\frac{\mathrm{c}_{1}}{Q^{\lambda}}\cdot\left\|Q\cdot(y-a_{y})-Q\cdot(y^{\prime}-a_{y})\right\|_{2}^{\lambda}\cdot\sup_{z,z^{\prime}\in[-\frac{1}{2},\frac{1}{2}]^{d},z\neq z^{\prime}}\left|\frac{u_{\boldsymbol{m}}(z)-u_{\boldsymbol{m}}(z^{\prime})}{\|z-z^{\prime}\|_{2}^{\lambda}}\right|\]
\[\leq\mathrm{c}_{1}\mathrm{c}_{2}\cdot\left(\|x-x^{\prime}\|_{2}^{\lambda}+\|y-y^{\prime}\|_{2}^{\lambda}\right)\leq 2\mathrm{c}_{1}\mathrm{c}_{2}\cdot\|x-y\|_{2}^{\lambda},\]
where the last inequality holds because \(x^{\prime}\) and \(y^{\prime}\) lie on the line segment joining \(x\) and \(y\). An analogous (simpler) argument covers the case where \(x\) or \(y\) lies outside \(\bigcup_{a\in G_{Q,d}}\left\{a+v\,\middle|\,v\in(-\frac{1}{2Q},\frac{1}{2Q})^{d}\right\}\), where \(\mathrm{D}^{\boldsymbol{m}}f\) vanishes. Combining the two cases with (A.123) and \(\mathrm{c}_{1}\leq\frac{r}{4\mathrm{c}_{2}}\), we conclude that \(f\in\mathcal{B}_{r}^{\beta}\left([0,1]^{d}\right)\), which completes the proof.

Figure A.8: Illustration of the points \(x\), \(y\), \(a_{x}\), \(a_{y}\), \(x^{\prime}\), \(y^{\prime}\) when \(Q=3\) and \(d=2\).

Here, for a Borel probability measure \(\mathscr{Q}\) on \([0,1]^{d}\) and a measurable \(\eta:[0,1]^{d}\to(0,1)\), we write \(\mathscr{M}_{p}\) for the probability measure on \(\{-1,1\}\) with \(\mathscr{M}_{p}(\{1\})=p\), and \(P_{\eta,\mathscr{Q}}\) for the probability measure on \([0,1]^{d}\times\{-1,1\}\) with marginal distribution \(\mathscr{Q}\) on \([0,1]^{d}\) and conditional probability function \(\eta\).

**Lemma A.19**.: _Let \(d\in\mathbb{N}\), \(\mathscr{Q}\) be a Borel probability measure on \([0,1]^{d}\), and \(\eta_{1}:[0,1]^{d}\to(0,1)\), \(\eta_{2}:[0,1]^{d}\to(0,1)\) be two measurable functions. Then \(P_{\eta_{1},\mathscr{Q}}\) is absolutely continuous with respect to \(P_{\eta_{2},\mathscr{Q}}\), and_
\[\frac{\mathrm{d}P_{\eta_{1},\mathscr{Q}}}{\mathrm{d}P_{\eta_{2},\mathscr{Q}}}(x,y)=\begin{cases}\frac{\eta_{1}(x)}{\eta_{2}(x)},&\text{if $y=1$},\\ \frac{1-\eta_{1}(x)}{1-\eta_{2}(x)},&\text{if $y=-1$}.\end{cases}\]
Proof.: Let \(f:[0,1]^{d}\times\{-1,1\}\to[0,\infty),\;(x,y)\mapsto\begin{cases}\frac{\eta_{1}(x)}{\eta_{2}(x)},&\text{if $y=1$},\\ \frac{1-\eta_{1}(x)}{1-\eta_{2}(x)},&\text{if $y=-1$}.\end{cases}\) Then \(f\) is well defined and measurable. For any Borel subset \(S\) of \([0,1]^{d}\times\{-1,1\}\), let \(S_{1}:=\left\{x\in[0,1]^{d}\,\middle|\,(x,1)\in S\right\}\) and \(S_{2}:=\left\{x\in[0,1]^{d}\,\middle|\,(x,-1)\in S\right\}\). Obviously, \(S_{1}\times\{1\}\) and \(S_{2}\times\{-1\}\) are measurable and disjoint. Besides, it is easy to verify that \(S=(S_{1}\times\{1\})\cup(S_{2}\times\{-1\})\). Therefore,
\[\int_{S}f(x,y)\mathrm{d}P_{\eta_{2},\mathscr{Q}}(x,y)=\int_{S_{1}}\int_{\{1\}}f(x,y)\mathrm{d}\mathscr{M}_{\eta_{2}(x)}(y)\mathrm{d}\mathscr{Q}(x)+\int_{S_{2}}\int_{\{-1\}}f(x,y)\mathrm{d}\mathscr{M}_{\eta_{2}(x)}(y)\mathrm{d}\mathscr{Q}(x)\]
\[=\int_{S_{1}}\eta_{2}(x)f(x,1)\mathrm{d}\mathscr{Q}(x)+\int_{S_{2}}(1-\eta_{2}(x))f(x,-1)\mathrm{d}\mathscr{Q}(x)=\int_{S_{1}}\eta_{1}(x)\mathrm{d}\mathscr{Q}(x)+\int_{S_{2}}(1-\eta_{1}(x))\mathrm{d}\mathscr{Q}(x)\]
\[=\int_{S_{1}}\int_{\{1\}}\mathrm{d}\mathscr{M}_{\eta_{1}(x)}(y)\mathrm{d}\mathscr{Q}(x)+\int_{S_{2}}\int_{\{-1\}}\mathrm{d}\mathscr{M}_{\eta_{1}(x)}(y)\mathrm{d}\mathscr{Q}(x)=P_{\eta_{1},\mathscr{Q}}(S_{1}\times\{1\})+P_{\eta_{1},\mathscr{Q}}(S_{2}\times\{-1\})=P_{\eta_{1},\mathscr{Q}}(S).\]
Since \(S\) is arbitrary, we deduce that \(P_{\eta_{1},\mathscr{Q}}\ll P_{\eta_{2},\mathscr{Q}}\), and \(\frac{\mathrm{d}P_{\eta_{1},\mathscr{Q}}}{\mathrm{d}P_{\eta_{2},\mathscr{Q}}}=f\). This completes the proof. 
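As a numerical sanity check (a sketch, not part of the proofs) of the density formula just established, and of the Kullback–Leibler bound it yields in Lemma A.20 below: for \(\eta_{1},\eta_{2}\) valued in \([\varepsilon,3\varepsilon]\), the KL divergence computed from \(\frac{\mathrm{d}P_{\eta_{1},\mathscr{Q}}}{\mathrm{d}P_{\eta_{2},\mathscr{Q}}}\) stays below \(9\varepsilon\).

```python
import numpy as np

# KL(P_{eta1,Q} || P_{eta2,Q}) computed from the density of Lemma A.19:
# KL = E_Q[ eta1*log(eta1/eta2) + (1-eta1)*log((1-eta1)/(1-eta2)) ].
# We check the bound KL <= 9*eps of Lemma A.20 on random instances, taking
# Q to be the uniform measure on m atoms (an illustrative choice).

rng = np.random.default_rng(0)
eps = 0.05                                   # any eps in (0, 1/5]
for _ in range(1000):
    m = 50
    eta1 = rng.uniform(eps, 3 * eps, size=m)
    eta2 = rng.uniform(eps, 3 * eps, size=m)
    kl = np.mean(eta1 * np.log(eta1 / eta2)
                 + (1 - eta1) * np.log((1 - eta1) / (1 - eta2)))
    assert -1e-12 < kl <= 9 * eps            # nonnegative and within the lemma's bound
print("KL bound of Lemma A.20 verified on random instances")
```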
**Lemma A.20**.: _Let \(\varepsilon\in(0,\frac{1}{5}]\), \(\mathscr{Q}\) be a Borel probability on \([0,1]^{d}\), and \(\eta_{1}:[0,1]^{d}\to[\varepsilon,3\varepsilon]\), \(\eta_{2}:[0,1]^{d}\to[\varepsilon,3\varepsilon]\) be two measurable functions. Then_
\[\mathrm{KL}(P_{\eta_{1},\mathscr{Q}}||P_{\eta_{2},\mathscr{Q}})\leq 9\varepsilon.\]
Proof.: By Lemma A.19,
\[\mathrm{KL}(P_{\eta_{1},\mathscr{Q}}||P_{\eta_{2},\mathscr{Q}})=\int_{[0,1]^{d}\times\{-1,1\}}\log\left(\frac{\eta_{1}(x)}{\eta_{2}(x)}\cdot\mathbb{1}_{\{1\}}(y)+\frac{1-\eta_{1}(x)}{1-\eta_{2}(x)}\cdot\mathbb{1}_{\{-1\}}(y)\right)\mathrm{d}P_{\eta_{1},\mathscr{Q}}(x,y)\]
\[=\int_{[0,1]^{d}}\left(\eta_{1}(x)\log\left(\frac{\eta_{1}(x)}{\eta_{2}(x)}\right)+(1-\eta_{1}(x))\log\left(\frac{1-\eta_{1}(x)}{1-\eta_{2}(x)}\right)\right)\mathrm{d}\mathscr{Q}(x)\]
\[\leq\int_{[0,1]^{d}}\left(3\varepsilon\cdot\left|\log\left(\frac{\eta_{1}(x)}{\eta_{2}(x)}\right)\right|+\left|\log\left(\frac{1-\eta_{1}(x)}{1-\eta_{2}(x)}\right)\right|\right)\mathrm{d}\mathscr{Q}(x)\]
\[\leq\int_{[0,1]^{d}}\left(3\varepsilon\cdot\log\left(\frac{3\varepsilon}{\varepsilon}\right)+\log\left(\frac{1-\varepsilon}{1-3\varepsilon}\right)\right)\mathrm{d}\mathscr{Q}(x)=\log\left(1+\frac{2\varepsilon}{1-3\varepsilon}\right)+3\varepsilon\cdot\log 3\leq\frac{2\varepsilon}{1-3\varepsilon}+4\varepsilon\leq 9\varepsilon.\]

**Lemma A.21**.: _Let \(m\in\mathbb{N}\cap(1,\infty)\), \(\Omega\) be a set with \(\#(\Omega)=m\), and \(\left\{0,1\right\}^{\Omega}\) be the set of all functions mapping from \(\Omega\) to \(\left\{0,1\right\}\). Then there exists a subset \(E\) of \(\left\{0,1\right\}^{\Omega}\), such that \(\#(E)\geq 1+2^{m/8}\), and_
\[\#\left(\left\{x\in\Omega\,\middle|\,f(x)\neq g(x)\right\}\right)\geq\frac{m}{8},\;\forall\;f\in E,\;\forall\;g\in E\setminus\left\{f\right\}.\]
Proof.: If \(m\leq 8\), then \(E=\left\{0,1\right\}^{\Omega}\) has the desired properties. The proof for the case \(m>8\) can be found in Lemma 2.9 of [44]. 

**Lemma A.22**.: _Let \(\phi\) be the logistic loss,_
\[\mathcal{J}:(0,1)^{2}\to\mathbb{R},\quad(x,y)\mapsto(x+y)\log\frac{2}{x+y}+(2-x-y)\log\frac{2}{2-x-y}\]
\[\qquad-\left(x\log\frac{1}{x}+(1-x)\log\frac{1}{1-x}+y\log\frac{1}{y}+(1-y)\log\frac{1}{1-y}\right),\] (A.127)
\(\mathscr{Q}\) _be a Borel probability measure on \([0,1]^{d}\), and \(\eta_{1}:[0,1]^{d}\to(0,1)\), \(\eta_{2}:[0,1]^{d}\to(0,1)\) be two measurable functions. Then there hold_
\[\mathcal{J}(x,y)=\mathcal{J}(y,x)\geq 0,\;\forall\;x\in(0,1),\;y\in(0,1),\] (A.128)
\[\frac{\varepsilon}{4}<\mathcal{J}(\varepsilon,3\varepsilon)=\mathcal{J}(3\varepsilon,\varepsilon)<\varepsilon,\;\forall\;\varepsilon\in(0,\frac{1}{6}],\] (A.129)
_and_
\[\int_{[0,1]^{d}}\mathcal{J}(\eta_{1}(x),\eta_{2}(x))\mathrm{d}\mathscr{Q}(x)\leq\inf\left\{\mathcal{E}_{P_{\eta_{1},\mathscr{Q}}}^{\phi}(f)+\mathcal{E}_{P_{\eta_{2},\mathscr{Q}}}^{\phi}(f)\,\middle|\,f:[0,1]^{d}\to\mathbb{R}\text{ is measurable}\right\}.\] (A.130)
Proof.: Let \(g:(0,1)\to(0,\infty),x\mapsto x\log\frac{1}{x}+(1-x)\log\frac{1}{1-x}\). Then it is easy to verify that \(g\) is concave (i.e., \(-g\) is convex), and
\[\mathcal{J}(x,y)=2g(\frac{x+y}{2})-g(x)-g(y),\;\forall\;x\in(0,1),\;y\in(0,1).\]
This yields (A.128). 
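Before verifying (A.129) formally by Taylor expansion (next), here is a quick numerical check (a sketch, not part of the proof) that \(\varepsilon/4<\mathcal{J}(\varepsilon,3\varepsilon)<\varepsilon\) on \((0,1/6]\).

```python
import numpy as np

# Numerical check of (A.129): the function J from (A.127), written via the
# binary entropy g as J(x,y) = 2 g((x+y)/2) - g(x) - g(y), satisfies
# eps/4 < J(eps, 3*eps) < eps on (0, 1/6].

def g(a):                                      # binary entropy in nats
    return a * np.log(1 / a) + (1 - a) * np.log(1 / (1 - a))

def J(x, y):
    return 2 * g((x + y) / 2) - g(x) - g(y)

for eps in np.linspace(1e-4, 1 / 6, 500):
    val = J(eps, 3 * eps)
    assert eps / 4 < val < eps
print("eps/4 < J(eps, 3 eps) < eps verified numerically")
```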
An elementary calculation gives \[\mathcal{J}(\varepsilon,3\varepsilon)=\mathcal{J}(3\varepsilon,\varepsilon)\] \[=\varepsilon\log\frac{27}{16}-\log\left(\frac{(1-2\varepsilon)^{2}}{(1-\varepsilon)(1-3\varepsilon)}\right)+4\varepsilon\log(1-2\varepsilon)-\varepsilon\log(1-\varepsilon)-3\varepsilon\log(1-3\varepsilon)\] \[\xlongequal{\text{Taylor expansion}}\;\varepsilon\log\frac{27}{16}+\sum_{k=2}^{\infty}\frac{3^{k}+1-2\cdot 2^{k}}{k\cdot(k-1)}\cdot\varepsilon^{k},\;\forall\;\varepsilon\in(0,1/3).\] Therefore, \[\frac{\varepsilon}{4}<\varepsilon\log\frac{27}{16}\leq\varepsilon\log\frac{27}{16}+\sum_{k=2}^{\infty}\frac{1+\left(\left(\frac{3}{2}\right)^{k}-2\right)\cdot 2^{k}}{k\cdot(k-1)}\cdot\varepsilon^{k}=\varepsilon\log\frac{27}{16}+\sum_{k=2}^{\infty}\frac{3^{k}+1-2\cdot 2^{k}}{k\cdot(k-1)}\cdot\varepsilon^{k}\] \[=\mathcal{J}(\varepsilon,3\varepsilon)=\mathcal{J}(3\varepsilon,\varepsilon)\leq\varepsilon\log\frac{27}{16}+\sum_{k=2}^{\infty}\frac{3^{k}-7}{k\cdot(k-1)}\cdot\varepsilon^{k}\] \[=\varepsilon\log\frac{27}{16}+\varepsilon^{2}+\varepsilon\cdot\sum_{k=3}^{\infty}\frac{3^{k}-7}{k\cdot(k-1)}\cdot\varepsilon^{k-1}\leq\varepsilon\log\frac{27}{16}+\varepsilon/6+\varepsilon\cdot\sum_{k=3}^{\infty}\frac{3^{k}}{3\cdot(3-1)}\cdot\left(\frac{1}{6}\right)^{k-1}\] \[=\left(\frac{1}{6}+\frac{1}{4}+\log\frac{27}{16}\right)\cdot\varepsilon<\varepsilon,\;\forall\;\varepsilon\in(0,1/6],\] which proves (A.129). Define \(f_{1}:[0,1]^{d}\to\mathbb{R},x\mapsto\log\frac{\eta_{1}(x)}{1-\eta_{1}(x)}\) and \(f_{2}:[0,1]^{d}\to\mathbb{R},x\mapsto\log\frac{\eta_{2}(x)}{1-\eta_{2}(x)}\). Then it is easy to verify that \[\mathcal{R}^{\phi}_{P_{\eta_{i},\mathscr{Q}}}(f_{i})=\int_{[0,1]^{d}}g(\eta_{i}(x))\mathrm{d}\mathscr{Q}(x)\in(0,\infty),\;\forall\;i\in\left\{1,2\right\},\] and \[\inf\left\{a\phi(t)+(1-a)\phi(-t)\,\middle|\,t\in\mathbb{R}\right\}=g(a),\;\forall\;a\in(0,1).\] Consequently, for any measurable function \(f:[0,1]^{d}\to\mathbb{R}\), there holds \[\mathcal{E}^{\phi}_{P_{\eta_{1},\mathscr{Q}}}(f)+\mathcal{E}^{\phi}_{P_{\eta_{2},\mathscr{Q}}}(f)\geq\mathcal{R}^{\phi}_{P_{\eta_{1},\mathscr{Q}}}(f)-\mathcal{R}^{\phi}_{P_{\eta_{1},\mathscr{Q}}}(f_{1})+\mathcal{R}^{\phi}_{P_{\eta_{2},\mathscr{Q}}}(f)-\mathcal{R}^{\phi}_{P_{\eta_{2},\mathscr{Q}}}(f_{2})\] \[=\int_{[0,1]^{d}}\left((\eta_{1}(x)+\eta_{2}(x))\phi(f(x))+(2-\eta_{1}(x)-\eta_{2}(x))\phi(-f(x))\right)\mathrm{d}\mathscr{Q}(x)\] \[\qquad\qquad\qquad-\mathcal{R}^{\phi}_{P_{\eta_{1},\mathscr{Q}}}(f_{1})-\mathcal{R}^{\phi}_{P_{\eta_{2},\mathscr{Q}}}(f_{2})\] \[\geq\int_{[0,1]^{d}}2\cdot\inf\left\{\frac{\eta_{1}(x)+\eta_{2}(x)}{2}\phi(t)+(1-\frac{\eta_{1}(x)+\eta_{2}(x)}{2})\phi(-t)\,\middle|\,t\in\mathbb{R}\right\}\mathrm{d}\mathscr{Q}(x)\] \[\qquad\qquad\qquad-\mathcal{R}^{\phi}_{P_{\eta_{1},\mathscr{Q}}}(f_{1})-\mathcal{R}^{\phi}_{P_{\eta_{2},\mathscr{Q}}}(f_{2})\] \[=\int_{[0,1]^{d}}2g(\frac{\eta_{1}(x)+\eta_{2}(x)}{2})\mathrm{d}\mathscr{Q}(x)-\mathcal{R}^{\phi}_{P_{\eta_{1},\mathscr{Q}}}(f_{1})-\mathcal{R}^{\phi}_{P_{\eta_{2},\mathscr{Q}}}(f_{2})\] \[=\int_{[0,1]^{d}}\left(2g(\frac{\eta_{1}(x)+\eta_{2}(x)}{2})-g(\eta_{1}(x))-g(\eta_{2}(x))\right)\mathrm{d}\mathscr{Q}(x)\] \[=\int_{[0,1]^{d}}\mathcal{J}(\eta_{1}(x),\eta_{2}(x))\mathrm{d}\mathscr{Q}(x).\] This proves (A.130). Proof of Theorem 2.6 and Corollary 2.1.: We first prove Theorem 2.6. Let \(n\) be an arbitrary integer greater than \(\left|\frac{7}{1-A}\right|^{\frac{d_{*}+\beta\cdot(1\wedge\beta)^{q}}{\beta\cdot(1\wedge\beta)^{q}}}\). 
Take \(b:=\lceil\beta\rceil-1\), \(\lambda:=\beta+1-\lceil\beta\rceil\), \(Q:=\left\lfloor n^{\frac{1}{d_{*}+\beta\cdot(1\wedge\beta)^{q}}}\right\rfloor+1\), \(\mathtt{M}:=\left\lceil 2^{Q^{d_{*}}/8}\right\rceil\), \[G_{Q,d_{*}}:=\left\{(\frac{k_{1}}{2Q},\ldots,\frac{k_{d_{*}}}{2Q})^{\top}\,\middle|\,k_{1},\ldots,k_{d_{*}}\text{ are odd integers}\right\}\cap[0,1]^{d_{*}},\] and \(\mathcal{J}\) to be the function defined in (A.127). Note that \(\#\left(G_{Q,d_{*}}\right)=Q^{d_{*}}\). Thus it follows from Lemma A.21 that there exist functions \(T_{j}:G_{Q,d_{*}}\to\{-1,1\}\), \(j=0,1,2,\ldots,\mathtt{M}\), such that \[\#\left(\left\{a\in G_{Q,d_{*}}\,\middle|\,T_{i}(a)\neq T_{j}(a)\right\}\right)\geq\frac{Q^{d_{*}}}{8},\ \forall\ 0\leq i<j\leq\mathtt{M}.\] (A.131) According to Lemma A.18, for each \(j\in\{0,1,\ldots,\mathtt{M}\}\), there exists an \(f_{j}\in\mathcal{B}_{\frac{r\wedge 1}{777}}^{\beta}\left([0,1]^{d_{*}}\right)\), such that \[\frac{\mathrm{c}_{1}}{Q^{\beta}}=\|f_{j}\|_{[0,1]^{d_{*}}}\leq\|f_{j}\|_{\mathcal{C}^{b,\lambda}([0,1]^{d_{*}})}\leq\frac{1\wedge r}{777},\] (A.132) and \[f_{j}(x)=\frac{\mathrm{c}_{1}}{Q^{\beta}}\cdot T_{j}(a),\ \forall\ a\in G_{Q,d_{*}},\ x\in\mathscr{B}(a,\frac{1}{5Q})\cap[0,1]^{d_{*}},\] (A.133) where \(\mathrm{c}_{1}\in(0,\frac{1}{9999})\) only depends on \((d_{*},\beta,r)\). Define \[g_{j}:[0,1]^{d_{*}}\to\mathbb{R},x\mapsto\frac{\mathrm{c}_{1}}{Q^{\beta}}+f_{j}(x).\] It follows from (A.132) that \[\mathbf{ran}(g_{j})\subset\left[0,\frac{2\mathrm{c}_{1}}{Q^{\beta}}\right]\subset\left[0,2\cdot\frac{1\wedge r}{777}\right]\subset[0,1]\] (A.134) and \[\frac{\mathrm{c}_{1}}{Q^{\beta}}+\|g_{j}\|_{\mathcal{C}^{b,\lambda}([0,1]^{d_{*}})}\leq\frac{2\mathrm{c}_{1}}{Q^{\beta}}+\|f_{j}\|_{\mathcal{C}^{b,\lambda}([0,1]^{d_{*}})}\leq 2\cdot\frac{1\wedge r}{777}+\frac{1\wedge r}{777}<r,\] (A.135) meaning that \[g_{j}\in\mathcal{B}_{r}^{\beta}\left([0,1]^{d_{*}}\right)\text{ and }g_{j}+\frac{\mathrm{c}_{1}}{Q^{\beta}}\in\mathcal{B}_{r}^{\beta}\left([0,1]^{d_{*}}\right).\] (A.136) We then define \[h_{0,j}:[0,1]^{d}\to[0,1],\ (x_{1},\ldots,x_{d})^{\top}\mapsto g_{j}(x_{1},\ldots,x_{d_{*}})\] if \(q=0\), and define \[h_{0,j}:[0,1]^{d}\to[0,1]^{K},\ (x_{1},\ldots,x_{d})^{\top}\mapsto(g_{j}(x_{1},\ldots,x_{d_{*}}),0,0,\ldots,0)^{\top}\] if \(q>0\). Note that \(h_{0,j}\) is well defined because \(d_{*}\leq d\) and \(\mathbf{ran}(g_{j})\subset[0,1]\). 
Take \[\varepsilon=\frac{1}{2}\cdot\left|\frac{1\wedge r}{777}\right|^{\sum_{k=0}^{q-1}(1\wedge\beta)^{k}}\cdot\left|\frac{2\mathrm{c}_{1}}{Q^{\beta}}\right|^{(1\wedge\beta)^{q}}.\] From (A.132) we see that \[0<\varepsilon\leq\frac{1\wedge r}{777}.\] (A.137) For every real number \(t\), define the function \[u_{t}:[0,1]^{d_{*}}\to\mathbb{R},\ (x_{1},\ldots,x_{d_{*}})^{\top}\mapsto t+\frac{1\wedge r}{777}\cdot|x_{1}|^{(1\wedge\beta)}.\] Then it follows from (A.137) and the elementary inequality \[\left|\left|z_{1}\right|^{w}-\left|z_{2}\right|^{w}\right|\leq\left|z_{1}-z_{2}\right|^{w},\;\forall\;z_{1}\in\mathbb{R},z_{2}\in\mathbb{R},w\in(0,1]\] that \[\begin{split}&\max\left\{\left\|u_{\varepsilon}\right\|_{[0,1]^{d_{*}}},\left\|u_{0}\right\|_{[0,1]^{d_{*}}}\right\}\leq\max\left\{\left\|u_{\varepsilon}\right\|_{\mathcal{C}^{b,\lambda}([0,1]^{d_{*}})},\left\|u_{0}\right\|_{\mathcal{C}^{b,\lambda}([0,1]^{d_{*}})}\right\}\\ &\leq\left\|u_{0}\right\|_{\mathcal{C}^{b,\lambda}([0,1]^{d_{*}})}+\varepsilon\leq\frac{1\wedge r}{777}\cdot 2+\varepsilon\leq\frac{1\wedge r}{777}\cdot 2+\frac{1\wedge r}{777}<r\wedge 1,\end{split}\] (A.138) which means that \[\mathbf{ran}(u_{0})\cup\mathbf{ran}(u_{\varepsilon})\subset[0,1],\] (A.139) and \[\left\{u_{0},u_{\varepsilon}\right\}\subset\mathcal{B}_{r}^{\beta}\left([0,1]^{d_{*}}\right).\] (A.140) Next, for each \(i\in\mathbb{N}\), define \[h_{i}:[0,1]^{K}\to\mathbb{R},\ (x_{1},\ldots,x_{K})^{\top}\mapsto u_{0}(x_{1},\ldots,x_{d_{*}})\] if \(i=q>0\), and define \[h_{i}:[0,1]^{K}\to\mathbb{R}^{K},\;(x_{1},\ldots,x_{K})^{\top}\mapsto(u_{0}(x_{1},\ldots,x_{d_{*}}),0,0,\ldots,0)^{\top}\] otherwise. It follows from (A.139) that \(\mathbf{ran}(h_{i})\subset[0,1]\) if \(i=q>0\), and \(\mathbf{ran}(h_{i})\subset[0,1]^{K}\) otherwise. 
Thus, for each \(j\in\{0,1,\ldots,\mathsf{M}\}\), we can well define \[\eta_{j}:[0,1]^{d}\to\mathbb{R},\;x\mapsto\varepsilon+h_{q}\circ h_{q-1}\circ\cdots\circ h_{3}\circ h_{2}\circ h_{1}\circ h_{0,j}(x).\] We then deduce from (A.136) and (A.140) that \[\eta_{j}\in\mathcal{G}_{d}^{\mathbf{CH}}(q,K,d_{*},\beta,r),\;\forall\;j\in\{0,1,\ldots,\mathsf{M}\}\,.\] (A.141) Moreover, an elementary calculation gives \[\begin{split}&\left|\frac{1\wedge r}{777}\right|^{\sum_{k=0}^{q-1}(1\wedge\beta)^{k}}\cdot\left|g_{j}(x_{1},\ldots,x_{d_{*}})\right|^{(1\wedge\beta)^{q}}+\varepsilon\\ &=\eta_{j}(x_{1},\ldots,x_{d}),\;\forall\;(x_{1},\ldots,x_{d})\in[0,1]^{d},\;\forall\;j\in\{0,1,\ldots,\mathsf{M}\}\,,\end{split}\] (A.142) which, together with (A.134), yields \[\begin{split}&0<\varepsilon\leq\eta_{j}(x_{1},\ldots,x_{d})\leq\left|\frac{1\wedge r}{777}\right|^{\sum_{k=0}^{q-1}(1\wedge\beta)^{k}}\cdot\left|\frac{2\mathrm{c}_{1}}{Q^{\beta}}\right|^{(1\wedge\beta)^{q}}+\varepsilon=2\varepsilon+\varepsilon\\ &=3\varepsilon\leq\left|\frac{3\mathrm{c}_{1}}{Q^{\beta}}\right|^{(1\wedge\beta)^{q}}<\frac{1}{Q^{\beta\cdot(1\wedge\beta)^{q}}}\leq\frac{1}{n^{\frac{\beta\cdot(1\wedge\beta)^{q}}{d_{*}+\beta\cdot(1\wedge\beta)^{q}}}}\leq\frac{1-A}{7}<\frac{1-A}{2}\\ &<1,\;\forall\;(x_{1},\ldots,x_{d})\in[0,1]^{d},\;\forall\;j\in\{0,1,\ldots,\mathsf{M}\}\,.\end{split}\] Consequently, \[\mathbf{ran}(\eta_{j})\subset[\varepsilon,3\varepsilon]\subset(0,1),\;\forall\;j\in\{0,1,\ldots,\mathsf{M}\}\,,\] (A.143) and \[\left\{\left.x\in[0,1]^{d}\right|\left|2\eta_{j}(x)-1\right|\leq A\right\}=\varnothing,\;\forall\;j\in\{0,1,\ldots,\mathsf{M}\}\,.\] (A.144) Combining (A.141), (A.143), and (A.144), we obtain \[P_{j}:=P_{\eta_{j}}\in\mathcal{H}_{5,A,q,K,d_{*}}^{d,\beta,r},\;\forall\;j\in\{0,1,2,\ldots,\mathsf{M}\}\,.\] (A.145) By (A.133) and (A.142), for any \(0\leq i<j\leq\mathtt{M}\), any \(a\in G_{Q,d_{*}}\) with \(T_{i}(a)\neq T_{j}(a)\), and any \(x\in[0,1]^{d}\) with \((x)_{\{1,2,\ldots,d_{*}\}}\in\mathscr{B}(a,\frac{1}{5Q})\), there holds \[\mathcal{J}(\eta_{i}(x),\eta_{j}(x))\] \[=\mathcal{J}\left(\left|\frac{1\wedge r}{777}\right|^{\sum_{k=0}^{q-1}(1\wedge\beta)^{k}}\cdot\left|\frac{\mathrm{c}_{1}}{Q^{\beta}}+T_{i}(a)\cdot\frac{\mathrm{c}_{1}}{Q^{\beta}}\right|^{(1\wedge\beta)^{q}}+\varepsilon,\right.\] \[\left.\left|\frac{1\wedge r}{777}\right|^{\sum_{k=0}^{q-1}(1\wedge\beta)^{k}}\cdot\left|\frac{\mathrm{c}_{1}}{Q^{\beta}}+T_{j}(a)\cdot\frac{\mathrm{c}_{1}}{Q^{\beta}}\right|^{(1\wedge\beta)^{q}}+\varepsilon\right)\] \[=\mathcal{J}\left(\left|\frac{1\wedge r}{777}\right|^{\sum_{k=0}^{q-1}(1\wedge\beta)^{k}}\cdot\left|\frac{2\mathrm{c}_{1}}{Q^{\beta}}\right|^{(1\wedge\beta)^{q}}+\varepsilon,\left|\frac{1\wedge r}{777}\right|^{\sum_{k=0}^{q-1}(1\wedge\beta)^{k}}\cdot|0|^{(1\wedge\beta)^{q}}+\varepsilon\right)\] \[=\mathcal{J}(2\varepsilon+\varepsilon,\varepsilon)=\mathcal{J}(\varepsilon,3\varepsilon).\] Thus it follows from Lemma A.22 and (A.131) that \[\inf_{f\in\mathcal{F}_{d}}\left(\mathcal{E}_{P_{i}}^{\phi}(f)+\mathcal{E}_{P_{j}}^{\phi}(f)\right)\geq\int_{[0,1]^{d}}\mathcal{J}(\eta_{i}(x),\eta_{j}(x))\mathrm{d}x\] \[\geq\sum_{a\in G_{Q,d_{*}}:\,T_{j}(a)\neq T_{i}(a)}\int_{[0,1]^{d}}\mathcal{J}(\eta_{i}(x),\eta_{j}(x))\cdot\mathbb{1}_{\mathscr{B}(a,\frac{1}{5Q})}\big{(}(x)_{\{1,\ldots,d_{*}\}}\big{)}\mathrm{d}x\] \[=\sum_{a\in G_{Q,d_{*}}:\,T_{j}(a)\neq T_{i}(a)}\int_{[0,1]^{d}}\mathcal{J}(\varepsilon,3\varepsilon)\cdot\mathbb{1}_{\mathscr{B}(a,\frac{1}{5Q})}\big{(}(x)_{\{1,\ldots,d_{*}\}}\big{)}\mathrm{d}x\] \[\geq\sum_{a\in G_{Q,d_{*}}:\,T_{j}(a)\neq
T_{i}(a)}\int_{[0,1]^{d}}\frac{\varepsilon}{4}\cdot\mathbb{1}_{\mathscr{B}(a,\frac{1}{5Q})}\big{(}(x)_{\{1,\ldots,d_{*}\}}\big{)}\mathrm{d}x\] (A.146) \[=\frac{\#\big{(}\big{\{}a\in G_{Q,d_{*}}|T_{j}(a)\neq T_{i}(a)\big{\}}\big{)}}{Q^{d_{*}}}\cdot\int_{\mathscr{B}(\mathbf{0},\frac{1}{5})}\frac{\varepsilon}{4}\mathrm{d}x_{1}\mathrm{d}x_{2}\cdots\mathrm{d}x_{d_{*}}\] \[\geq\frac{1}{8}\cdot\int_{\mathscr{B}(\mathbf{0},\frac{1}{5})}\frac{\varepsilon}{4}\mathrm{d}x_{1}\mathrm{d}x_{2}\cdots\mathrm{d}x_{d_{*}}\geq\frac{1}{8}\cdot\int_{[-\frac{1}{\sqrt{25d_{*}}},\frac{1}{\sqrt{25d_{*}}}]^{d_{*}}}\frac{\varepsilon}{4}\mathrm{d}x_{1}\mathrm{d}x_{2}\cdots\mathrm{d}x_{d_{*}}\] \[\geq\left|\frac{2}{\sqrt{25d_{*}}}\right|^{d_{*}}\cdot\frac{\varepsilon}{32}=:s,\;\forall\;0\leq i<j\leq\mathtt{M}.\] Let \(\hat{f}_{n}\) be an arbitrary \(\mathcal{F}_{d}\)-valued statistic on \(([0,1]^{d}\times\{-1,1\})^{n}\) from the sample \(\{(X_{i},Y_{i})\}_{i=1}^{n}\), and let \(\mathcal{T}:([0,1]^{d}\times\{-1,1\})^{n}\to\mathcal{F}_{d}\) be the map associated with \(\hat{f}_{n}\), i.e., \(\hat{f}_{n}=\mathcal{T}(X_{1},Y_{1},\ldots,X_{n},Y_{n})\). Take \[\mathcal{T}_{0}:\mathcal{F}_{d}\to\{0,1,\ldots,\mathtt{M}\}\,,f\mapsto\inf\mathop{\arg\min}_{j\in\{0,1,\ldots,\mathtt{M}\}}\mathcal{E}_{P_{j}}^{\phi}(f),\] i.e., \(\mathcal{T}_{0}(f)\) is the smallest integer \(j\in\{0,\ldots,\mathtt{M}\}\) such that \(\mathcal{E}_{P_{j}}^{\phi}(f)\leq\mathcal{E}_{P_{i}}^{\phi}(f)\) for any \(i\in\{0,\ldots,\mathtt{M}\}\). Define \(g_{*}=\mathcal{T}_{0}\circ\mathcal{T}\). Note that, for any \(j\in\{0,1,\ldots,\mathtt{M}\}\) and any \(f\in\mathcal{F}_{d}\) there holds \[\mathcal{T}_{0}(f)\neq j\stackrel{{(\text{A.146})}}{{\Rightarrow}}\mathcal{E}_{P_{\mathcal{T}_{0}(f)}}^{\phi}(f)+\mathcal{E}_{P_{j}}^{\phi}(f)\geq s\Rightarrow\mathcal{E}_{P_{j}}^{\phi}(f)+\mathcal{E}_{P_{j}}^{\phi}(f)\geq s\Rightarrow\mathcal{E}_{P_{j}}^{\phi}(f)\geq\frac{s}{2},\] which, together with the fact that the range of \(\mathcal{T}\) is contained in \(\mathcal{F}_{d}\), yields \[\mathbb{1}_{\mathbb{R}\setminus\{j\}}(g_{*}(z))=\mathbb{1}_{\mathbb{R}\setminus\{j\}}(\mathcal{T}_{0}(\mathcal{T}(z)))\] (A.147) \[\leq\mathbb{1}_{[\frac{s}{2},\infty)}(\mathcal{E}_{P_{j}}^{\phi}(\mathcal{T}(z))),\;\forall\;z\in([0,1]^{d}\times\{-1,1\})^{n},\;\forall\;j\in\{0,1,\ldots,\mathtt{M}\}\,.\] Consequently, \[\sup_{P\in\mathcal{H}^{d,\beta,r}_{5,A,q,K,d_{*}}}\mathbf{E}_{P^{\otimes n}}\left[\mathcal{E}^{\phi}_{P}(\hat{f}_{n})\right]\geq\sup_{j\in\{0,1,\dots,\mathsf{M}\}}\mathbf{E}_{P_{j}^{\otimes n}}\left[\mathcal{E}^{\phi}_{P_{j}}(\hat{f}_{n})\right]\] (A.148) \[=\sup_{j\in\{0,1,\dots,\mathsf{M}\}}\int\mathcal{E}^{\phi}_{P_{j}}(\mathcal{T}(z))\mathrm{d}P_{j}^{\otimes n}(z)\geq\sup_{j\in\{0,1,\dots,\mathsf{M}\}}\int\frac{\mathbb{1}_{[\frac{s}{2},\infty)}(\mathcal{E}^{\phi}_{P_{j}}(\mathcal{T}(z)))}{2/s}\mathrm{d}P_{j}^{\otimes n}(z)\] \[\geq\sup_{j\in\{0,1,\dots,\mathsf{M}\}}\int\frac{\mathbb{1}_{\mathbb{R}\setminus\{j\}}(g_{*}(z))}{2/s}\mathrm{d}P_{j}^{\otimes n}(z)=\sup_{j\in\{0,1,\dots,\mathsf{M}\}}\frac{P_{j}^{\otimes n}\left(g_{*}\neq j\right)}{2/s}\] \[\geq\frac{s}{2}\cdot\inf\left\{\sup_{j\in\{0,1,\dots,\mathsf{M}\}}P_{j}^{\otimes n}\left(g\neq j\right)\,\middle|\,g\text{ is a measurable function from }([0,1]^{d}\times\{-1,1\})^{n}\text{ to }\{0,1,\ldots,\mathtt{M}\}\right\},\] where the first inequality follows from (A.145) and the third inequality follows from (A.147). We then use Proposition 2.3 of [44] to bound the right hand side of (A.148). 
By Lemma A.20, we have that \[\frac{1}{\mathsf{M}}\cdot\sum_{j=1}^{\mathsf{M}}\mathrm{KL}(P_{j}^{\otimes n}||P_{0}^{\otimes n})=\frac{n}{\mathsf{M}}\cdot\sum_{j=1}^{\mathsf{M}}\mathrm{KL}(P_{j}||P_{0})\leq\frac{n}{\mathsf{M}}\cdot\sum_{j=1}^{\mathsf{M}}9\varepsilon=9n\varepsilon,\] which, together with Proposition 2.3 of [44], yields \[\inf\left\{\sup_{j\in\{0,1,\dots,\mathsf{M}\}}P_{j}^{\otimes n}\left(g\neq j\right)\,\middle|\,g\text{ is a measurable function from }([0,1]^{d}\times\{-1,1\})^{n}\text{ to }\{0,1,\ldots,\mathsf{M}\}\right\}\] \[\geq\sup_{\tau\in(0,1)}\left(\frac{\tau\mathsf{M}}{1+\tau\mathsf{M}}\cdot\left(1+\frac{9n\varepsilon+\sqrt{\frac{9n\varepsilon}{2}}}{\log\tau}\right)\right)\geq\frac{\sqrt{\mathsf{M}}}{1+\sqrt{\mathsf{M}}}\cdot\left(1+\frac{9n\varepsilon+\sqrt{\frac{9n\varepsilon}{2}}}{\log\frac{1}{\sqrt{\mathsf{M}}}}\right)\] \[\geq\frac{\sqrt{\mathsf{M}}}{1+\sqrt{\mathsf{M}}}\cdot\left(1-\left|\frac{9n\varepsilon+\sqrt{\frac{9n\varepsilon}{2}}}{\log\frac{1}{\sqrt{\mathsf{M}}}}\right|\right)\geq\frac{\sqrt{\mathsf{M}}}{1+\sqrt{\mathsf{M}}}\cdot\left(1-\left|\frac{9n\varepsilon+\frac{1}{10}+12n\varepsilon}{\log\sqrt{\mathsf{M}}}\right|\right)\] \[\geq\frac{\sqrt{\mathsf{M}}}{1+\sqrt{\mathsf{M}}}\cdot\left(1-\left|\frac{21n\varepsilon}{\frac{1}{2}\log\left(2^{Q^{d_{*}}/8}\right)}\right|-\frac{1/10}{\log\sqrt{2}}\right)=\frac{\sqrt{\mathsf{M}}}{1+\sqrt{\mathsf{M}}}\cdot\left(1-\left|\frac{336n\varepsilon}{Q^{d_{*}}\log 2}\right|-\frac{1/10}{\log\sqrt{2}}\right)\] \[\geq\frac{\sqrt{\mathsf{M}}}{1+\sqrt{\mathsf{M}}}\cdot\left(1-\left|\frac{336}{\log 2}\cdot\frac{1}{2}\cdot\frac{1}{777}\right|-\frac{1/10}{\log\sqrt{2}}\right)\geq\frac{\sqrt{\mathsf{M}}}{1+\sqrt{\mathsf{M}}}\cdot\frac{1}{3}\geq\frac{1}{6},\] where the penultimate inequality uses \(n\varepsilon\leq Q^{d_{*}}\cdot\frac{1}{2}\cdot\frac{1}{777}\), which follows from \(n\leq Q^{d_{*}+\beta\cdot(1\wedge\beta)^{q}}\) and the definition of \(\varepsilon\). Combining this with (A.148), we obtain that \[\sup_{P\in\mathcal{H}^{d,\beta,r}_{5,A,q,K,d_{*}}}\mathbf{E}_{P^{\otimes n}}\left[\mathcal{E}^{\phi}_{P}(\hat{f}_{n})\right]\geq\frac{s}{2}\cdot\frac{1}{6}\] \[=\left|\frac{2}{\sqrt{25d_{*}}}\right|^{d_{*}}\cdot\frac{|2\mathrm{c}_{1}|^{(1\wedge\beta)^{q}}}{768}\cdot\left|\frac{1\wedge r}{777}\right|^{\sum_{k=0}^{q-1}(1\wedge\beta)^{k}}\cdot\left|\frac{1}{Q^{\beta}}\right|^{(1\wedge\beta)^{q}}\] \[\geq\left|\frac{2}{\sqrt{25d_{*}}}\right|^{d_{*}}\cdot\frac{|2\mathrm{c}_{1}|^{(1\wedge\beta)^{q}}}{768}\cdot\left|\frac{1\wedge r}{777}\right|^{\sum_{k=0}^{q-1}(1\wedge\beta)^{k}}\cdot\frac{1}{2^{\beta\cdot(1\wedge\beta)^{q}}}\cdot n^{-\frac{\beta\cdot(1\wedge\beta)^{q}}{d_{*}+\beta\cdot(1\wedge\beta)^{q}}}.\] Since \(\hat{f}_{n}\) is arbitrary, we deduce that \[\inf_{\hat{f}_{n}}\sup_{P\in\mathcal{H}^{d,\beta,r}_{5,A,q,K,d_{*}}}\mathbf{E}_{P^{\otimes n}}\left[\mathcal{E}^{\phi}_{P}(\hat{f}_{n})\right]\geq\mathrm{c}_{0}n^{-\frac{\beta\cdot(1\wedge\beta)^{q}}{d_{*}+\beta\cdot(1\wedge\beta)^{q}}}\] with \(\mathrm{c}_{0}:=\left|\frac{2}{\sqrt{25d_{*}}}\right|^{d_{*}}\cdot\frac{|2\mathrm{c}_{1}|^{(1\wedge\beta)^{q}}}{768}\cdot\left|\frac{1\wedge r}{777}\right|^{\sum_{k=0}^{q-1}(1\wedge\beta)^{k}}\cdot\frac{1}{2^{\beta\cdot(1\wedge\beta)^{q}}}\) only depending on \((d_{*},\beta,r,q)\). Thus we complete the proof of Theorem 2.6. The display below records how the parameters were balanced in the argument above. 
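In a nutshell (a summary of the computation above, not a new estimate): the choice \(Q=\left\lfloor n^{\frac{1}{d_{*}+\beta\cdot(1\wedge\beta)^{q}}}\right\rfloor+1\) yields \[\varepsilon\asymp Q^{-\beta\cdot(1\wedge\beta)^{q}},\qquad\log\mathtt{M}\asymp Q^{d_{*}},\qquad n\varepsilon\lesssim Q^{d_{*}}\asymp\log\mathtt{M},\] so the Kullback-Leibler term \(9n\varepsilon\) stays of the order of \(\log\mathtt{M}\), while the separation satisfies \[s\asymp\varepsilon\asymp n^{-\frac{\beta\cdot(1\wedge\beta)^{q}}{d_{*}+\beta\cdot(1\wedge\beta)^{q}}},\] which is exactly the rate obtained above.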
Now it remains to prove Corollary 2.1. Indeed, it follows from (2.33) that \[\mathcal{H}^{d,\beta,r}_{3,A}=\mathcal{H}^{d,\beta,r}_{5,A,0,1,d}.\] Taking \(q=0\), \(K=1\) and \(d_{*}=d\) in Theorem 2.6, we obtain that there exists a constant \(\mathrm{c}_{0}\in(0,\infty)\) only depending on \((d,\beta,r)\), such that \[\inf_{\hat{f}_{n}}\sup_{P\in\mathcal{H}^{d,\beta,r}_{3,A}}\mathbf{E}_{P^{\otimes n}}\left[\mathcal{E}^{\phi}_{P}(\hat{f}_{n})\right]=\inf_{\hat{f}_{n}}\sup_{P\in\mathcal{H}^{d,\beta,r}_{5,A,0,1,d}}\mathbf{E}_{P^{\otimes n}}\left[\mathcal{E}^{\phi}_{P}(\hat{f}_{n})\right]\geq\mathrm{c}_{0}n^{-\frac{\beta\cdot(1\wedge\beta)^{0}}{d+\beta\cdot(1\wedge\beta)^{0}}}\] \[=\mathrm{c}_{0}n^{-\frac{\beta}{d+\beta}}\text{ provided that }n>\left|\frac{7}{1-A}\right|^{\frac{d+\beta\cdot(1\wedge\beta)^{0}}{\beta\cdot(1\wedge\beta)^{0}}}=\left|\frac{7}{1-A}\right|^{\frac{d+\beta}{\beta}}.\] This proves Corollary 2.1. #### a.3.7 Proof of (3.6) This subsection is devoted to the proof of the bound (3.6). Proof of (3.6).: Fix \(\nu\in[0,\infty)\) and \(\mu\in[1,\infty)\). Let \(P\) be an arbitrary probability in \(\mathcal{H}^{d,\beta}_{7}\). Denote by \(\eta\) the conditional probability function \(P(\{1\}\left|\cdot\right.)\) of \(P\). According to Lemma A.2 and the definition of \(\mathcal{H}^{d,\beta}_{7}\), there exists a function \(f^{*}\in\mathcal{B}^{\beta}_{1}\left([0,1]^{d}\right)\) such that \[f^{*}_{\phi,P}\xlongequal{P_{X}\text{-a.s.}}\log\frac{\eta}{1-\eta}\xlongequal{P_{X}\text{-a.s.}}f^{*}.\] (A.149) Thus there exists a measurable set \(\Omega\) contained in \([0,1]^{d}\) such that \(P_{X}(\Omega)=1\) and \[\log\frac{\eta(x)}{1-\eta(x)}=f^{*}(x),\;\forall\;x\in\Omega.\] (A.150) Let \(\delta\) be an arbitrary number in \((0,1/3)\). Then it follows from Corollary A.2 that there exists \[\tilde{g}\in\mathcal{F}^{\mathbf{FNN}}_{d}\left(C_{d,\beta}\log\frac{1}{\delta},C_{d,\beta}\delta^{-d/\beta},C_{d,\beta}\delta^{-d/\beta}\log\frac{1}{\delta},1,\infty\right)\] (A.151) such that \(\sup_{x\in[0,1]^{d}}|f^{*}(x)-\tilde{g}(x)|\leq\delta\). 
Let \(T:\mathbb{R}\to[-1,1],z\mapsto\min\left\{\max\left\{z,-1\right\},1\right\}\) and \[\tilde{f}:\mathbb{R}^{d}\to[-1,1],\;x\mapsto T(\tilde{g}(x))=\begin{cases}-1,&\text{if }\tilde{g}(x)<-1,\\ \tilde{g}(x),&\text{if }-1\leq\tilde{g}(x)\leq 1,\\ 1,&\text{if }\tilde{g}(x)>1.\end{cases}\] Obviously, \(|T(z)-T(w)|\leq|z-w|\) for any real numbers \(z\) and \(w\), and \[\sup_{x\in[0,1]^{d}}\left|f^{*}(x)-\tilde{f}(x)\right|\xlongequal{\because\;\|f^{*}\|_{[0,1]^{d}}\leq 1}\sup_{x\in[0,1]^{d}}|T(f^{*}(x))-T(\tilde{g}(x))|\] (A.152) \[\leq\sup_{x\in[0,1]^{d}}|f^{*}(x)-\tilde{g}(x)|\leq\delta.\] Besides, it is easy to verify that \[\tilde{f}(x)=\sigma(\tilde{g}(x)+1)-\sigma(\tilde{g}(x)-1)-1,\;\forall\;x\in\mathbb{R}^{d},\] which, together with (A.151), yields \[\begin{split}\tilde{f}&\in\mathcal{F}_{d}^{\mathbf{FNN}}\left(1+C_{d,\beta}\log\frac{1}{\delta},1+C_{d,\beta}\delta^{-d/\beta},4+C_{d,\beta}\delta^{-d/\beta}\log\frac{1}{\delta},1,1\right)\\ &\subset\mathcal{F}_{d}^{\mathbf{FNN}}\left(C_{d,\beta}\log\frac{1}{\delta},C_{d,\beta}\delta^{-d/\beta},C_{d,\beta}\delta^{-d/\beta}\log\frac{1}{\delta},1,1\right).\end{split}\] In addition, it follows from Lemma A.7 that \[\begin{split}&\frac{1}{2(\mathrm{e}^{\mu}+\mathrm{e}^{-\mu}+2)}\left|f(x)-f^{*}(x)\right|^{2}\leq\int_{\{-1,1\}}\left(\phi(yf(x))-\phi(yf^{*}(x))\right)\mathrm{d}P(y|x)\\ &\leq\frac{1}{4}\left|f(x)-f^{*}(x)\right|^{2},\ \forall\text{ measurable }f:[0,1]^{d}\to[-\mu,\mu],\ \forall\ x\in\Omega.\end{split}\] (A.153) Take \(\widetilde{C}:=2(\mathrm{e}^{\mu}+\mathrm{e}^{-\mu}+2)\). Integrating both sides with respect to \(x\) in (A.153) and using (A.152), we obtain \[\begin{split}&\int_{[0,1]^{d}\times\{-1,1\}}\left(\phi(yf(x))-\phi(yf^{*}(x))\right)^{2}\mathrm{d}P(x,y)\\ &\leq\int_{[0,1]^{d}\times\{-1,1\}}\left(f(x)-f^{*}(x)\right)^{2}\mathrm{d}P(x,y)=\int_{[0,1]^{d}}\left|f(x)-f^{*}(x)\right|^{2}\mathrm{d}P_{X}(x)\\ &\xlongequal{\because\;P_{X}(\Omega)=1}\int_{\Omega}\frac{\widetilde{C}}{2(\mathrm{e}^{\mu}+\mathrm{e}^{-\mu}+2)}\left|f(x)-f^{*}(x)\right|^{2}\mathrm{d}P_{X}(x)\\ &\leq\widetilde{C}\int_{\Omega}\int_{\{-1,1\}}\left(\phi(yf(x))-\phi(yf^{*}(x))\right)\mathrm{d}P(y|x)\mathrm{d}P_{X}(x)\\ &\xlongequal{\because\;P_{X}(\Omega)=1}\widetilde{C}\int_{[0,1]^{d}\times\{-1,1\}}\left(\phi(yf(x))-\phi(yf^{*}(x))\right)\mathrm{d}P(x,y)\end{split}\]
\[\xlongequal{\text{by Lemma A.3}}\widetilde{C}\cdot\mathcal{E}^{\phi}_{P}(f),\ \forall\text{ measurable }f:[0,1]^{d}\to[-\mu,\mu].\] (A.154) Moreover, since \(\mathbf{ran}(\tilde{f})\subset[-1,1]\subset[-\mu,\mu]\), we deduce from (A.152), (A.153) and \(P_{X}(\Omega)=1\) that \[\mathcal{E}^{\phi}_{P}(\tilde{f})\leq\int_{[0,1]^{d}}\frac{1}{4}\left|\tilde{f}(x)-f^{*}(x)\right|^{2}\mathrm{d}P_{X}(x)\leq\frac{\delta^{2}}{4}\leq\delta^{2}.\] Since \(\delta\in(0,1/3)\) is arbitrary, it follows that \[\inf\left\{\mathcal{E}_{P}^{\phi}(f)\,\middle|\,f\in\mathcal{F}_{d}^{\mathbf{FNN}}\left(C_{d,\beta}\log\frac{1}{\delta},C_{d,\beta}\delta^{-d/\beta},C_{d,\beta}\delta^{-d/\beta}\log\frac{1}{\delta},1,1\right)\right\}\leq\delta^{2},\ \forall\ \delta\in(0,1/3).\] (A.155) Now take \(\delta_{n}:=\left|\frac{c}{l}\right|^{\beta/d}\cdot\left|\frac{(\log n)^{3}}{n}\right|^{\frac{\beta}{d+2\beta}}\), so that \(c\delta_{n}^{-d/\beta}=l\cdot\left|\frac{(\log n)^{3}}{n}\right|^{\frac{-d}{d+2\beta}}\). It is easy to verify that \(\frac{1}{n}\leq\delta_{n}<1/3\) for \(n>C_{l,c,d,\beta}\). We then deduce from (A.155) that \[\begin{split}&\inf\left\{\mathcal{E}_{P}^{\phi}(f)\left|f\in\mathcal{F}_{d}^{\mathbf{FNN}}\left(G,N,S,B,F\right)\right.\right\}\\ &\leq\inf\left\{\mathcal{E}_{P}^{\phi}(f)\left|f\in\mathcal{F}_{d}^{\mathbf{FNN}}\left(c\log n,l\left|\frac{(\log n)^{3}}{n}\right|^{\frac{-d}{d+2\beta}},l\left|\frac{(\log n)^{3}}{n}\right|^{\frac{-d}{d+2\beta}}\log n,B,F\right)\right.\right\}\\ &=\inf\left\{\mathcal{E}_{P}^{\phi}(f)\left|f\in\mathcal{F}_{d}^{\mathbf{FNN}}\left(c\log n,c\delta_{n}^{-\frac{d}{\beta}},c\delta_{n}^{-\frac{d}{\beta}}\log n,B,F\right)\right.\right\}\\ &\leq\inf\left\{\mathcal{E}_{P}^{\phi}(f)\left|f\in\mathcal{F}_{d}^{\mathbf{FNN}}\left(c\log\frac{1}{\delta_{n}},c\delta_{n}^{-\frac{d}{\beta}},c\delta_{n}^{-\frac{d}{\beta}}\log\frac{1}{\delta_{n}},B,F\right)\right.\right\}\\ &\leq\inf\left\{\mathcal{E}_{P}^{\phi}(f)\left|f\in\mathcal{F}_{d}^{\mathbf{FNN}}\left(C_{d,\beta}\log\frac{1}{\delta_{n}},C_{d,\beta}\delta_{n}^{-\frac{d}{\beta}},C_{d,\beta}\delta_{n}^{-\frac{d}{\beta}}\log\frac{1}{\delta_{n}},1,1\right)\right.\right\}\\ &\leq\delta_{n}^{2},\;\forall\;n>C_{l,c,d,\beta},\end{split}\] (A.156) where we use the fact that the infimum taken over a larger set is smaller. Define \(W:=3\cdot\mathcal{N}\left(\mathcal{F}_{d}^{\mathbf{FNN}}\left(G,N,S,B,F\right),\frac{1}{n}\right)\).
Then by taking \(\mathcal{F}=\left\{\left.f\right|_{[0,1]^{d}}\right|f\in\mathcal{F}_{d}^{\mathbf{FNN}}\left(G,N,S,B,F\right)\right\}\), \(\psi(x,y)=\phi(yf^{*}(x))\), \(\Gamma=\widetilde{C}\), \(M=2\), \(\gamma=\frac{1}{n}\) in Theorem 2.1, and using (A.149), (A.154), (A.156), we deduce that \[\begin{split}&\boldsymbol{E}_{P^{\otimes n}}\left[\left\|\tilde{f}_{n}^{\mathbf{FNN}}-f_{\phi,P}^{*}\right\|_{\mathcal{L}_{P_{X}}^{2}}^{2}\right]=\boldsymbol{E}_{P^{\otimes n}}\left[\left\|\tilde{f}_{n}^{\mathbf{FNN}}-f^{*}\right\|_{\mathcal{L}_{P_{X}}^{2}}^{2}\right]\leq\widetilde{C}\boldsymbol{E}_{P^{\otimes n}}\left[\mathcal{E}_{P}^{\phi}(\tilde{f}_{n}^{\mathbf{FNN}})\right]\\ &\xlongequal{\text{by Lemma A.3}}\;\widetilde{C}\boldsymbol{E}_{P^{\otimes n}}\left[\mathcal{R}_{P}^{\phi}\left(\tilde{f}_{n}^{\mathbf{FNN}}\right)-\int_{[0,1]^{d}\times\{-1,1\}}\psi(x,y)\mathrm{d}P(x,y)\right]\\ &\leq\frac{500\cdot\left|\widetilde{C}\right|^{2}\cdot\log W}{n}+2\widetilde{C}\inf_{f\in\mathcal{F}}\left(\mathcal{R}_{P}^{\phi}(f)-\int_{[0,1]^{d}\times\{-1,1\}}\psi(x,y)\mathrm{d}P(x,y)\right)\\ &\xlongequal{\text{by Lemma A.3}}\;\frac{500\cdot\left|\widetilde{C}\right|^{2}\cdot\log W}{n}+2\widetilde{C}\inf_{f\in\mathcal{F}}\mathcal{E}_{P}^{\phi}(f)\leq\frac{500\cdot\left|\widetilde{C}\right|^{2}\cdot\log W}{n}+2\widetilde{C}\delta_{n}^{2}\end{split}\] for \(n>C_{l,c,d,\beta}\). Taking the supremum over \(P\in\mathcal{H}^{d,\beta}_{7}\), we obtain \[\sup_{P\in\mathcal{H}_{7}^{d,\beta}}\boldsymbol{E}_{P^{\otimes n}}\left[\left\|\tilde{f}_{n}^{\mathbf{FNN}}-f_{\phi,P}^{*}\right\|_{\mathcal{L}_{P_{X}}^{2}}^{2}\right]\leq\frac{500\cdot\left|\widetilde{C}\right|^{2}\cdot\log W}{n}+2\widetilde{C}\delta_{n}^{2},\;\forall\;n>C_{l,c,d,\beta}.\] (A.157) Moreover, it follows from (3.7) and Corollary A.1 that \[\begin{split}&\log W\leq(S+Gd+1)(2G+5)\log\left((\max\left\{N,d\right\}+1)\cdot B\cdot(2nG+2n)\right)\lesssim(G+S)G\log n\\ &\lesssim\left(\left(\frac{n}{\log^{3}n}\right)^{\frac{d}{d+2\beta}}\log n+\log n\right)\cdot(\log n)\cdot(\log n)\lesssim n\cdot\left(\frac{(\log n)^{3}}{n}\right)^{\frac{2\beta}{d+2\beta}}.\end{split}\] Plugging this into (A.157), we obtain \[\begin{split}&\sup_{P\in\mathcal{H}_{7}^{d,\beta}}\boldsymbol{E}_{P^{\otimes n}}\left[\left\|\tilde{f}_{n}^{\mathbf{FNN}}-f_{\phi,P}^{*}\right\|_{\mathcal{L}_{P_{X}}^{2}}^{2}\right]\lesssim\frac{\log W}{n}+\delta_{n}^{2}\\ &\lesssim\left|\frac{(\log n)^{3}}{n}\right|^{\frac{2\beta}{d+2\beta}}+\left|\left(\frac{(\log n)^{3}}{n}\right)^{\frac{\beta}{d+2\beta}}\right|^{2}\lesssim\left|\frac{(\log n)^{3}}{n}\right|^{\frac{2\beta}{d+2\beta}},\end{split}\] which proves the desired result. The exponent arithmetic behind the last bound on \(\log W\) is spelled out below.
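Indeed, with \(\delta_{n}\asymp\left|\frac{(\log n)^{3}}{n}\right|^{\frac{\beta}{d+2\beta}}\) as above, \[\frac{\log W}{n}\lesssim n^{-\frac{2\beta}{d+2\beta}}\cdot(\log n)^{2-\frac{3d}{d+2\beta}},\qquad\left|\frac{(\log n)^{3}}{n}\right|^{\frac{2\beta}{d+2\beta}}=n^{-\frac{2\beta}{d+2\beta}}\cdot(\log n)^{\frac{6\beta}{d+2\beta}},\] and the comparison of the logarithmic exponents reduces to \[2-\frac{3d}{d+2\beta}\leq\frac{6\beta}{d+2\beta}\iff 2(d+2\beta)-3d\leq 6\beta\iff 4\beta-d\leq 6\beta,\] which always holds.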
2308.16665
Fault Injection on Embedded Neural Networks: Impact of a Single Instruction Skip
With the large-scale integration and use of neural network models, especially in critical embedded systems, their security assessment to guarantee their reliability is becoming an urgent need. More particularly, models deployed in embedded platforms, such as 32-bit microcontrollers, are physically accessible by adversaries and therefore vulnerable to hardware disturbances. We present the first set of experiments on the use of two fault injection means, electromagnetic and laser injections, applied on neural networks models embedded on a Cortex M4 32-bit microcontroller platform. Contrary to most of state-of-the-art works dedicated to the alteration of the internal parameters or input values, our goal is to simulate and experimentally demonstrate the impact of a specific fault model that is instruction skip. For that purpose, we assessed several modification attacks on the control flow of a neural network inference. We reveal integrity threats by targeting several steps in the inference program of typical convolutional neural network models, which may be exploited by an attacker to alter the predictions of the target models with different adversarial goals.
Clement Gaine, Pierre-Alain Moellic, Olivier Potin, Jean-Max Dutertre
2023-08-31T12:14:37Z
http://arxiv.org/abs/2308.16665v1
# Fault Injection on Embedded Neural Networks: Impact of a Single Instruction Skip ###### Abstract With the large-scale integration and use of neural network models, especially in critical embedded systems, their security assessment to guarantee their reliability is becoming an urgent need. More particularly, models deployed in embedded platforms, such as 32-bit microcontrollers, are physically accessible by adversaries and therefore vulnerable to hardware disturbances. We present the first set of experiments on the use of two fault injection means, electromagnetic and laser injections, applied on neural network models embedded on a Cortex M4 32-bit microcontroller platform. Contrary to most state-of-the-art works dedicated to the alteration of the internal parameters or input values, our goal is to simulate and experimentally demonstrate the impact of a specific fault model that is _instruction skip_. For that purpose, we assessed several modification attacks on the control flow of a neural network inference. We reveal integrity threats by targeting several steps in the inference program of typical convolutional neural network models, which may be exploited by an attacker to alter the predictions of the target models with different adversarial goals. ## I Introduction Security of Machine Learning (ML) models is one of the most important challenges of modern Artificial Intelligence, amplified by the massive deployment of models (more particularly neural networks) in a large variety of hardware platforms. Those platforms include devices with strong constraints in terms of memory, computing ability, latency or energy (e.g., for IoT-oriented applications). The _adversarial_ and _privacy-preserving_ ML communities have already demonstrated an impressive set of threats that target the integrity, confidentiality and availability of models [18]. However, most of these attacks can be referred to as _theoretical_ or _algorithmic_ since they consider a model as an _abstraction_ and do not rely on the specific features of their software or hardware implementations. Most recently, the attack surface has been significantly widened with _implementation_-based attacks that leverage software or hardware characteristics as well as theoretical backgrounds highlighted by previous attacks. This is the case for _weight-based adversarial attacks_ such as the Bit-Flip Attack (BFA) [20] that directly disturb the internal parameters of a deep neural network model stored in memory (typically, DRAM or Flash). Interestingly, in the BFA, the selection of the most sensitive parameters follows a gradient-based approach similar to classical white-box _adversarial examples_ crafting methods. As a result, only a few bit-flips are needed to drop the accuracy of a state-of-the-art convolutional neural network to a random-guess level. Another example is the use of side-channel analysis [12] or fault injection attacks (as rowhammer in [19]), to totally or partially recover the values of parameters so as to significantly increase the efficiency of a _model extraction_ attack that aims at stealing a black-box protected model. Except for passive side-channel analysis, most of these new implementation-based threats are data-oriented fault injection attacks targeting the stored parameters. In this work, we highlight another important attack vector caused by fault injection that targets the instruction flow, more particularly with _instruction skips_. 
To the best of our knowledge, this work is the first to demonstrate instruction skip with laser and electromagnetic fault injection in the inference of neural network models deployed in a Cortex-M 32-bit microcontroller. **Our contributions** are the following: * We used two injection means on a Cortex-M4 platform, electromagnetic and laser injections, and targeted the inference of a standard convolutional neural network performing an image classification task. * We demonstrate and analyze the impact of a single instruction skip at different critical paths of the inference: convolutional layers, bias additions, activation functions. * We highlight two potential adversarial exploitations: memory effect and forced prediction. ## II Background & Related works ### _Background_ **Neural network models.** A supervised neural network model \(M_{\Theta}(x)\) is a parametric model trained to optimally map an input space \(\mathcal{X}=\mathbb{R}^{n}\) (e.g., images) to an output space \(\mathcal{Y}\). For a classification task, \(\mathcal{Y}\) is a finite set of labels \(\{1,...,C\}\). The neural network model \(M_{\Theta}:\mathcal{X}\rightarrow\mathcal{Y}\), with parameters \(\Theta\) (also referred to as _weights_), classifies an input \(x\in\mathcal{X}\) to a set of raw or normalized scores in \(\mathbb{R}^{C}\) so that the predicted label is \(\hat{y}=\arg\max(M_{\Theta}(x))\). \(M_{\Theta}\) is trained by minimizing a loss function \(\mathcal{L}\big{(}M_{\Theta}(x),y\big{)}\) that quantifies the error between the prediction \(M_{\Theta}(x)\) and the _groundtruth_ \(y\). The training process aims at finding the best parameters that minimize the loss on the training dataset. **A Perceptron** (also called _neuron_) is the basic functional element of a neural network. It first processes a weighted sum of the input \(x=(x_{0},...,x_{n-1})\in\mathbb{R}^{n}\) with its trainable parameters \(\theta\) and \(b\) (called _bias_), then it non-linearly maps the output thanks to an _activation function_ \(\sigma\): \(a(x)=\sigma(\theta_{0}x_{0}+...+\theta_{n-1}x_{n-1}+b)\), where \(a\) is the perceptron output. A classical activation function is the rectified linear unit (ReLU) defined as \(\sigma(x)=max(0,x)\). **MultiLayer Perceptrons (MLP)** are deep neural networks composed of several vertically stacked neurons called _layers_. These layers are called _fully-connected_ or _dense_ or even _linear_. For a MLP, a neuron from layer \(l\) gets information from all neurons belonging to the previous layer \(l-1\), therefore the output of a neuron is defined as in Eq. (1): \[a_{j}^{l}(x)=\sigma\Big{(}\sum_{i\in(l-1)}\theta_{i,j}a_{i}^{l-1}+b_{j}\Big{)} \tag{1}\] where \(\theta_{i,j}\) is the weight that connects the \(j^{\text{th}}\) neuron of the \(l^{\text{th}}\) layer and the \(i^{\text{th}}\) neuron of the previous layer \((l-1)\), \(b_{j}\) is the bias of neuron \(j\) of layer \(l\) and \(a_{i}^{l-1}\) and \(a_{j}^{l}\) are the outputs of neuron \(i\) of layer \((l-1)\) and neuron \(j\) of layer \(l\), respectively. 
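To make the dense-layer computation of Eq. (1) concrete, the following minimal C sketch implements it in floating point (function and variable names are ours for illustration; this is not the NNoM code, which works on 8-bit fixed-point values):

```c
#include <stddef.h>

/* One fully-connected (dense) layer with ReLU, as in Eq. (1). */
static float relu(float x) { return x > 0.0f ? x : 0.0f; }

void dense_forward(const float *a_prev, size_t n_in,
                   const float *theta,  /* n_out x n_in weights, row-major */
                   const float *b,      /* n_out biases                    */
                   float *a_out, size_t n_out)
{
    for (size_t j = 0; j < n_out; j++) {
        float acc = b[j];                           /* b_j                    */
        for (size_t i = 0; i < n_in; i++)
            acc += theta[j * n_in + i] * a_prev[i]; /* theta_{i,j} * a_i^{l-1} */
        a_out[j] = relu(acc);                       /* sigma(.)               */
    }
}
```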
**Convolutional Neural Network (CNN)** is another type of neural network that uses convolutions with a set of _kernels_ (also called _filters_). The trainable weights are the parameters of the kernels and are shared among the input. The kernels are usually square with low dimensions, typically 3x3 for image classification. Therefore, for a _convolutional layer_ composed of \(K\) kernels of size \(Z\) applied on an input tensor of size \(H\times W\times C\), the weights tensor \(\Theta\) will have the shape \([K,Z,Z,C]\) (i.e., \(KCZ^{2}\) parameters without biases, \(K(CZ^{2}+1)\) otherwise). A naive implementation of the convolution of an input tensor \(X\) and a set of \(K\) kernels is detailed in algorithm 1. **Input:** Tensor \(X\) of size \(H\times H\times C\), parameters tensor \(\Theta\) of size \(Z\times Z\times C\times K\), bias tensor \(B\) of size \(K\) **Output:** Tensor \(Y\) of size \(H\times H\times K\) ``` 1:for\(k\) in \([1,K]\)do 2:for\(x\) in \([1,H]\)do 3:for\(y\) in \([1,H]\)do 4:\(Y_{x,y,k}=B_{k}\) 5:for\(m\) in \([1,Z]\)do 6:for\(n\) in \([1,Z]\)do 7:for\(c\) in \([1,C]\)do 8:\(Y_{x,y,k}\mathrel{+}=\Theta_{m,n,c,k}\cdot X_{x+m,y+n,c}\) return\(Y\) ``` **Algorithm 1** Convolution layer (\(K\) kernels) The output is also mapped with an activation function such as ReLU. Then, a third operation is applied with a _pooling_ process that aims at reducing the dimensions of the output tensor by locally summing it up with some statistics. A classical approach is to apply a _Max pooling_ or an _Average pooling_ with a 2x2 kernel over the output tensor \(Y\) of size \(H\times H\times K\) so that the resulting tensor is half the size \((H/2)\times(H/2)\times K\). Pooling also provides an interesting (small) translation invariance property. **Embedded models.** For a typical 32-bit microcontroller, the model parameters are stored in the Flash memory and internal computations (i.e., mainly multiply-accumulations and non-linear activation) are processed in SRAM. To embed complex ML models and fit the memory and latency requirements, classical compression techniques encompass model pruning [26] and quantization [25, 5]. For 32-bit MCU, 8-bit quantization of the weights is standard, performed as a post-processing step (after training) or during training with quantization-aware training methods. Post-training 8-bit quantization is proposed as the default configuration in many deployment tools (e.g., TF-Lite, CubeMX.AI, NNoM, MCUNet) and may be applied for both weights and activation outputs. We used the NNoM (Neural Network on Microcontroller)1 deployment framework, an open-source library with full access to the source code (C) that enables 8-bit quantization for the weights, biases, activation values and output scores. The quantization is performed in the same way as in ARM CMSIS-NN [14] and relies on a uniform symmetric powers-of-two scheme (Eq. 2) that avoids division operations, using only integer additions, multiplications and bit shifting. Footnote 1: [https://majianjia.github.io/nnom/](https://majianjia.github.io/nnom/) \[x_{i}=\left\lfloor x_{f}\cdot 2^{7-dec}\right\rceil,\,dec=\left\lceil log_{2}\big{(}max\big{(}|X_{f}|\big{)}\big{)}\right\rceil \tag{2}\] where \(X_{f}\) is a 32-bit floating point tensor, \(x_{f}\) a value of \(X_{f}\), \(x_{i}\) its 8-bit counterpart and \(2^{7-dec}\) the quantization scale. 
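Putting Algorithm 1 and the fixed-point arithmetic of Eq. (2) together, a quantized convolution layer can be sketched in C as follows (an illustrative sketch with our own names and layout conventions, not the NNoM or CMSIS-NN implementation):

```c
#include <stdint.h>

/* Naive q7 convolution: 8-bit inputs/weights, 32-bit accumulator,
   power-of-two requantization by right shift, saturation to int8. */
void conv_q7(const int8_t *X, const int8_t *W, const int32_t *B,
             int8_t *Y, int H, int C, int K, int Z, int out_shift)
{
    const int Ho = H - Z + 1;                    /* output height/width  */
    for (int k = 0; k < K; k++)                  /* loop over K kernels  */
        for (int x = 0; x < Ho; x++)
            for (int y = 0; y < Ho; y++) {
                int32_t acc = B[k];
                for (int m = 0; m < Z; m++)
                    for (int n = 0; n < Z; n++)
                        for (int c = 0; c < C; c++)
                            acc += (int32_t)W[((m * Z + n) * C + c) * K + k]
                                 * X[((x + m) * H + (y + n)) * C + c];
                acc >>= out_shift;               /* Eq. (2)-style scaling */
                if (acc > 127)  acc = 127;       /* saturate to int8      */
                if (acc < -128) acc = -128;
                Y[(x * Ho + y) * K + k] = (int8_t)acc;
            }
}
```

Note that skipping the backward branch of the outer `k` loop, as studied in Section IV, leaves all the kernels after the faulted one unprocessed.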
**Fault injection attacks** (FIA) are active hardware threats [2] that usually require a physical access to the victim device [4]. Fault injection techniques gather global approaches such as voltage or clock glitching and moderate/high-cost methods such as laser (LFI) or electromagnetic (EMFI) injections that reach high temporal and spatial accuracy [1]. EMFI involves generating a magnetic field that causes voltage variations in the circuit, altering the propagation times of the signals through logic gates. This can result in faults where assembly instructions are modified or skipped. For LFI, a laser diode emits photons that create a photocurrent when they reach the sensitive points of the targeted microcontroller, resulting in voltage variations. This can change bit values, leading to instruction opcode modifications, which can transform one instruction into another or into the nop instruction. In this case, we obtain an _instruction skip_ similar to those obtained with EMFI. Interestingly, even though these two methods use different physical mechanisms, the results can be similar. Importantly, because of their precision and effectiveness, EMFI and LFI are standard fault injection means used in hardware security testing laboratories for security assessment or certification purposes [23]. ### _Related works and positioning_ Fault injection against deployed neural network models mainly focuses on the alteration of the internal parameters stored in memory or of the instruction flow. A reference for parameter-based attacks is the Bit-Flip Attack (BFA) [20] that has been practically demonstrated with rowhammer on a DRAM platform [22]. Typically, BFA succeeds in dropping the average accuracy of a state-of-the-art CNN model to a random-guess level with only a few tens of bit-flips. Rowhammer and bit-flips on the parameters have also been exploited for a model extraction scenario [19] where the adversary knows the model architecture but has only access to less than 10% of the training dataset. The attack first uses rowhammer (as in the rambleed attack [13]) to guess the value of the most significant bit of almost 90% of the parameters of a victim model. Then, the adversary trains a _substitute_ model by constraining the weight values with the information previously extracted. In the context of embedded neural network models, laser fault injection has been demonstrated on an 8-bit microcontroller (ATmega328P) by Breier et.al. [4] by targeting the instructions of some activation functions (ReLU, sigmoid, tanh implemented in C). Then, simulations on a 4-layer MLP trained on MNIST showed that it is necessary to perform a lot of faults (more than 50% of the neurons faulted) on the last hidden layer to reach a reasonable attack success rate (\(>50\%\)). Other simulation works by Jap et.al. [11] showed that a single bit modification on the Softmax activation function at the end of a neural network can lead to a misclassification. Liu et.al. [16] achieved misclassification using clock glitches on an FPGA-based deep learning accelerator. Changing other physical parameters such as the supply voltage can also decrease accuracy: Salami et.al. [21] demonstrated on FPGA-based CNN accelerators that in order to decrease the accuracy, it was necessary to reduce the supply voltage by at least 25%. To the best of our knowledge, our work is the first to demonstrate the impact of a single fault disrupting the instruction flow on the performance of a CNN model deployed in a Cortex-M platform. Contrary to [4] we consider a full inference program embedded with a state-of-the-art deployment tool and analyze different attack paths. Additionally, with this scope, our experiments are the first to demonstrate both electromagnetic and laser injections for a complete inference program (CNN trained on the standard Fashion MNIST dataset) on a 32-bit ARM Cortex-M4 platform. ## III Experimental setup ### _Device under test, model and dataset_ We used a 32-bit ARM Cortex-M4 microcontroller as target, which can operate at a frequency of up to 100 MHz. 
The device has a 512KB Flash memory and a RAM of 128KB. We focused our experiments on a typical convolutional neural network model trained for a supervised image classification task. We used the standard Fashion-MNIST dataset, which consists of 70,000 (60K for training and 10K for testing) 28x28 grayscale images divided into 10 clothing categories. Our model is composed of two convolutional layers with respectively 32 and 48 kernels of size \([3,3]\) with ReLU as activation. Each layer is followed by a Max pooling layer of size \([2,2]\). The end of the model is composed of two fully-connected layers with respectively 24 and 10 neurons. The activation function is ReLU except for the last layer which typically uses Softmax to provide normalized outputs. The model (illustrated in Fig 1) has a total of 70,914 parameters and reaches an accuracy of 91% on the complete test set. The trained model (with TensorFlow v2) is deployed on our target device with the NNoM library [17] that offers an 8-bit model quantization of the parameters, activation values and output prediction scores, and a complete white-box access to the inference code. The accuracy of the deployed 8-bit model is evaluated directly on the development board over limited random sets of 100 inputs. We observed that the implementation of the quantization scheme in NNoM raises integer overflows that may impact the accuracy depending on the test sets. Over different test sets the model has an accuracy from 77% to 88% (i.e., close to the accuracy of the full-precision model over the 10K test set). However, we noticed that our results are similar from one test set to another (even after fixing the overflow issue). Therefore, for our fault injection experiments, we keep the same 100-input test set that reaches the lowest precision (77%). Fig. 1: Illustration of the CNN model. Red lightning bolts indicate the three investigated attack paths with instruction skip. Fig. 2: EMFI (left) and LFI (right) setups. ### _Fault injection setup_ For EMFI, we used a voltage pulse generator that can deliver 200 V pulses, connected to an injection probe as illustrated in Figure 2. The control voltage of the pulse generator is 200 V with a rise time of 2 ns for a pulse width of 10 ns. For LFI, we use a 1,064 nm infrared laser beam with a pulse energy of 0.1 W, a pulse duration of 50 ns and a spot size of 5 um. The laser attacks are performed on the rear face of the silicon, which requires decapsulating the component. For the laser experiments, the operating frequency of the board is reduced from 100 MHz to 50 MHz. The triggering of the laser and electromagnetic shots is synchronized by a signal generated by the target device, and the delay between its rising edge and the triggering of the shot is adjustable. This makes it possible to aim the magnetic field or the laser beam at a specific instruction. ### _Test code_ First, we need to map the sensitive areas of our device, that is to say the locations where exploitable faults are obtained. We used a simple assembly test code (Fig. 3) to identify the locations where fault injections can be successfully performed. This code performs simple register manipulations and is long enough to not require precise time synchronization. By comparing the readback values of the registers after execution of the test code with and without fault injection, we aimed at identifying a location where a _sub_ instruction is not executed. We successfully obtained an _instruction skip_ type fault. 
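The test code of Fig. 3 is not reproduced here; the sketch below illustrates, under our own assumptions, the kind of register-manipulation sequence that makes a skipped _sub_ visible in the readback value:

```c
#include <stdint.h>

volatile uint32_t readback; /* compared against the fault-free value (13) */

void fi_test_code(void)
{
    uint32_t r = 0;
    __asm volatile (
        "movs %0, #10 \n\t"
        "adds %0, #5  \n\t"   /* r = 15                                     */
        "subs %0, #3  \n\t"   /* r = 12; if this sub is skipped, r stays 15 */
        "adds %0, #1  \n\t"   /* r = 13 nominally, 16 under the skip        */
        : "+r" (r));
    readback = r;
}
```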
With EMFI, the result of this mapping procedure is a 200 um by 100 um sensitive area within a chip size of 4 mm by 4 mm. It was necessary to use a very precise electromagnetic probe as described in [9]. The positions of the probe for successful injections are indicated in blue on Figure 4. The positions used for LFI are distinct from those for EMFI and depicted in red in Figure 4. These results are consistent with other reference works such as [7]. ## IV Instruction skip on a CNN inference After characterizing the sensitive areas according to our fault model (instruction skip) and injection means (EM and laser pulse), we detail, in this section, three experiments on the inference program of our convolutional neural network model trained and tested on FashionMNIST. We targeted three attack paths that correspond to critical elements of a CNN model: * the first convolutional layer that extracts low-level _features_ from the input; * the bias addition in the first fully-connected layer; * the ReLU activation function of the first fully-connected layer. An illustration of these attack paths on our CNN model is presented in Fig. 1. ### _Targeting the first convolution layer_ **Experiments.** Parameter-based attacks such as the BFA [20] have highlighted the sensitivity of the first convolutional layers of CNN models against adversarial perturbations [10]: an alteration of the initial feature maps grows and propagates through the network, leading to a misprediction. However, this is achieved by targeting some specific kernels since others are resistant against the perturbation of their parameters. With only one instruction skip on the main convolution loop, we aimed at analyzing the sensitivity of this critical part of the inference. During a convolution operation, a loop over the filters is executed to carry out the convolution computations (as in Algorithm 1). The assembly code of this loop is given in Fig. 5. With an instruction skip, our objective is to prematurely interrupt the loop over the filters, thereby halting the execution of the convolutions. The impact of the instruction skip strongly depends on the implementation. In our case, such a fault completely breaks the convolution process: if a fault is injected for kernel \(j\) then the convolutions with kernels \([j+1;K]\) are not processed. We discuss that point in Section V. Through simulation, we observed that a valid attack path is to skip the branch instruction (highlighted in red). Fig. 4: Sensitive areas according to the different injection means. Fig. 5: Assembly code of the loop over convolution filters. Fig. 6(a) reports the impact on the model accuracy of simulated fault injections (the skip of the branch instruction, hence ending the computation loop) as a function of the last processed filter index (the remaining filters were skipped). We used five different random test sets of 100 images each. Despite minor variations, the results on each dataset are comparable. Thus, we can consider that the 100 images of the first test set are representative of the general behavior (even if the accuracy is slightly below the average). Logically, the accuracy of the model decreases as the filters loop is exited earlier. However, we observe that the last kernels do not have a significant impact: exiting the loop from the \(17^{th}\) kernel - i.e. not performing almost half of the kernels - has a limited effect on the accuracy (from 75% to 82%). 
A possible explanation is that most deep neural network models are over-parametrized, which can be observed when applying compression techniques such as _model pruning_ that typically remove, for a standard CNN, a large part of the kernels without any significant drop of performance. Here, we can make the hypothesis that most of the useful features are _captured_ by the first kernels. To demonstrate this attack in practice, we conducted EMFI on the 100 images of dataset 1. The experimental and simulation results, presented in Fig. 6 (b), are almost identical. However, since the repeatability of the EMFI is not perfect, it happens that the fault injection is not successful, which explains some results slightly above the simulation curve (accuracy higher by a few percent). At other times, the EMFI causes the board to crash, which occurs at filters 14 and 19, explaining the significant accuracy drops at these indexes. For LFI, it appeared that setting the position and the injection delay to target different filters within the loop was very challenging. Therefore, we only processed one laser fault injection on the first filter to see if the LFI result was similar to our simulation and EMFI results. We logically reached a random-guess level (10%) since the forward pass is completely altered. **Exploitation.** Interestingly, we observed that exiting the loop prematurely at the first filters causes the inference results to match the last correctly executed inference, as exemplified in Figure 7. This _memory effect_ can be exploited in critical applications, such as the authentication of an unauthorized person right after one with the appropriate permissions. As this type of attack leaves no trace or fault in the circuit and will be overwritten by correct data after the next prediction, it can be difficult to detect, except by monitoring the processing time (e.g. by using an instruction counter). This attack is possible because the result of the convolution layer calculations is stored in RAM. Therefore, to prevent such attacks, it is compulsory to perform a RAM memory reset between two inferences to clear the stored results. ### _Targeting the bias values_ **Experiments.** The first dense layer in the model contains biases that can be modified to alter the inference results. By modifying the store instruction that initializes the biases, we observed significant corruption of bias values resulting in mispredictions in our simulations. Specifically, we wrote an address value to the register instead of the bias so that the bias takes a significantly higher value. Fault injections were performed using laser injection, and the faulted value is similar to the simulated one. Although the induced fault differs with EMFI, it results in bias values that are different from the initial values. This has significant effects on the inference results as shown in Table I. The accuracy is only detailed for the first 4 biases, but the results are comparable for all 24 neurons. **Exploitation.** The accuracy of the model was found to be highly sensitive to a single injection, with modifications to the bias of a neuron favoring certain predictions over others, as presented in Figure 8. For example, modifying the bias of the first neuron resulted in the model mostly predicting _T-shirt_ (label 0) about 50 times out of 100 tests, regardless of whether we used simulation, electromagnetic or laser injection. The results for the other bias calculations are similar for the accuracy and majority inferences. 
These results demonstrate that it is possible to significantly bias the model predictions. An attacker can then easily force a prediction by choosing to fault the calculation of a single bias. To protect against this type of attack, the bias value of the dense layer can be reset if it exceeds a certain bound. In simulation on the first 4 bias calculations, with bounds of -2048 and 2048 applied to the output value of the neuron, we obtained the same results as those without fault injection: we recovered the initial accuracy, which had been reduced to between 26% and 40% by the fault injection. \begin{table} \begin{tabular}{c c c c} \hline \hline Neuron tested & Accuracy Simulation & Accuracy Laser & Accuracy EM \\ \hline 0 & 38\% & 37\% & 40\% \\ 1 & 26\% & 26\% & 37\% \\ 2 & 40\% & 41\% & 56\% \\ 3 & 28\% & 28\% & 35\% \\ \hline no fault & 77\% & 77\% & 77\% \\ \hline \hline \end{tabular} \end{table} TABLE I: Accuracy when faulting the bias of one of the 4 first neurons Fig. 6: **a.** Simulations over 5 different test sets (the dataset used in **a** is referred to as _Dataset 1_) **b.** Accuracy with an instruction skip on the first convolution layer (simulation and EMFI) Fig. 7: Illustration of the _memory effect_ when skipping the first convolutional layer. ### _Targeting the activation function_ **Experiments.** We targeted the activation function of the first _dense_ layer composed of 24 neurons (with instruction skip faults). For this layer, the activation function is ReLU (\(\sigma(x)=\max(0;x)\)) which is generally used in most state-of-the-art deep convolutional network models. Fig. 9 (a) presents the assembly code of ReLU used in our NNoM implementation. We first simulated the impact of instruction skips. It appears that it is possible to alter ReLU in two different ways: * force a reset (target the blue instructions in Fig. 9 (a)): the output is zero even when the input is positive. Therefore, the activation is constantly null: \(\sigma(x)=0\). * skip the reset (target the red instructions in Fig. 9 (a)): the activation is turned into the identity function and is not set to zero when the input is negative: \(\sigma(x)=x\). Interestingly, our simulations showed that skipping the reset causes more mispredictions than forcing a reset. Consequently, we focused on this kind of attack for our experiments. **Exploitation.** Skipping the reset of the ReLU activation function has a minor impact on the accuracy of the model, as depicted in Figure 9 (b). Therefore, when targeting the first fully-connected layer, our experiments show that it is less effective to target the ReLU activation function than the biases if we seek to reduce the accuracy. This observation is coherent with the first experiments from Breier et.al. [4] that demonstrated the need to perform a significant number of faults on ReLU to alter the overall accuracy of a 5-layer MLP model (faults were injected on the penultimate layer). We observe that the results of simulation experiments are similar to those obtained with LFI, with an accuracy decreasing to 74%. The accuracy was reduced to 61% with EMFI, where we observed a certain number of outputs equal to zero (fewer than without faults) that indicates a few forced resets (i.e. forcing a reset while the input is positive). This behavior did not appear in simulation or with LFI. That highlights the fact that, even though simulations can predict the majority of behaviors, experimental studies are compulsory to accurately account for the effect of a complex fault model. 
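The two ReLU corruptions above, together with the bound check suggested earlier for the biases, can be summarized by the following illustrative C sketch (our own code, not the NNoM q7 implementation):

```c
#include <stdint.h>

/* The two single-skip corruptions of the ReLU discussed above. */
typedef enum { FAULT_NONE, FAULT_SKIP_RESET, FAULT_FORCE_RESET } relu_fault_t;

int8_t relu_q7(int8_t x, relu_fault_t f)
{
    if (f == FAULT_SKIP_RESET)  return x;   /* sigma(x) = x (no zeroing)   */
    if (f == FAULT_FORCE_RESET) return 0;   /* sigma(x) = 0 (always reset) */
    return x > 0 ? x : 0;                   /* nominal ReLU                */
}

/* Bound check on a dense-layer output, as in the bias countermeasure:
   values outside [-2048, 2048] are treated as faulted and reset. */
int32_t check_neuron_output(int32_t acc)
{
    return (acc > 2048 || acc < -2048) ? 0 : acc;
}
```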
## V Discussions

### _Comparison of injection techniques and limitations_

LFI is known to be a very effective injection means because of its high temporal and spatial accuracy [23]. However, unlike EMFI, the silicon has to be visible to perform LFI, so the components must be decapsulated, a delicate step that complicates the implementation of the attack. Another practical difference between LFI and EMFI is the cost of the characterization environment, significantly higher for LFI (approximately 100k€, compared to 30k€ for the EMFI bench). Experimentally, the search for the sensitive zone was more tedious with EMFI, as we encountered many freezes and restarts of the target device, which was less common with laser pulses. Moreover, since our fault model is a single instruction skip, it was straightforward to simulate the impact of such faults on the inference process and to compare the simulation outputs with those observed under real injections. We observed a higher similarity between simulations and LFI than with EMFI (typically for the experiments on ReLU in IV-C). This difference is explained by the lower repeatability of EMFI, due to its lower spatial accuracy compared to laser. A classical limitation of our security characterization is related to the synchronization of the injections: dedicated instructions inserted in our test programs activated an output of the target that was used to trigger our fault injectors after a programmable delay (i.e., our experiments were performed in a white-box setting). However, since the implementation code includes many loops that generate an electromagnetic leak with a particular and detectable signature, one could use a device that triggers the injections temporally on this signature, as presented in [8]. We leave the use of such techniques for future experiments.

Fig. 8: Majority output inference when faulting the bias of one of the first 4 neurons.

### _Neural network implementation_

Our experiments are based on a classical CNN model deployed with the NNoM platform, which has the advantage of being open source, covering the classical layer types, and reaching good inference performance. Future work will investigate how different implementations (e.g., MCUNet [15]) behave when exposed to instruction skips, including for challenging black-box tools (e.g., STMCubeMX.AI).

**Targeting a single convolutional kernel.** Based on what we observed with the NNoM implementation, we can highlight some interesting outcomes that pave the way for further analysis of how different implementations handle the most critical functional structures of the inference (and hence for more robust neural network inference implementations). For example, we simulated an implementation that allows the non-execution of a single filter in the convolution layer, rather than producing a premature exit as presented in Section IV-A. This attack is also possible on the NNoM implementation, by replaying the instruction that increments the loop counter. As a result, an adversary could skip a single filter and thereby exclude one channel from the resulting feature map. Our results show that the accuracy can be reduced by up to 20% depending on the targeted filter; a minimal simulation of this single-filter skip is sketched below. This significant accuracy drop is consistent with what is observed with only a few bit-flips under the BFA [20], which usually amount to _turning off_ an initially important kernel.
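The following NumPy sketch (a deliberately naive convolution of our own, not the NNoM implementation) simulates this fault by never executing one filter of the layer, so that the corresponding output channel keeps its initial (zero) content:

```python
import numpy as np

def conv_forward(x, kernels, skipped=None):
    """Toy 'valid' 2-D convolution; channel `skipped` is never computed,
    emulating a filter excluded from the loop by an instruction fault."""
    n_k, kh, kw = kernels.shape
    H, W = x.shape
    out = np.zeros((n_k, H - kh + 1, W - kw + 1))
    for k in range(n_k):
        if k == skipped:
            continue  # feature map k keeps its zero-initialized content
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[k, i, j] = np.sum(x[i:i+kh, j:j+kw] * kernels[k])
    return out

rng = np.random.default_rng(0)
x = rng.normal(size=(28, 28))
kernels = rng.normal(size=(32, 3, 3))
clean = conv_forward(x, kernels)
faulty = conv_forward(x, kernels, skipped=5)
print("channels differing:",
      np.flatnonzero(np.any(clean != faulty, axis=(1, 2))))  # -> [5]
```

Measuring the accuracy of a trained model while varying the skipped index gives the per-filter sensitivity discussed above.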
**Using CMSIS-NN.** With NNoM, it is possible to use the CMSIS-NN [14] library from ARM2 as a backend and, in that case, the implementations may differ. Although our results were obtained without using the CMSIS-NN library, the bias loading code remains the same, making the attack transferable. For the convolution operation, CMSIS-NN uses the im2col algorithm [14], which transforms the input image and the set of filters into a new matrix representation so that the convolution is processed through efficient matrix multiplications. We performed a first set of simulation tests on an implementation using the CMSIS-NN library, which showed that a single instruction skip in the loop over the dimension of the output tensor (i.e., the number of kernels) of the first convolutional layer causes a forced output: we obtained label 0 (T-shirt) for 98 out of 100 inputs. This behavior is also highly critical and may be exploited by an adversary to impact the integrity of the model. Thus, further experiments are necessary to analyze potential vulnerabilities of the im2col implementation.

Footnote 2: also used in STMCubeMX.AI from STMicroelectronics

### _Protections and exploitation for confidentiality concerns_

As a first step, we focused our experiments on the direct impact of faults on the model performance (here, the standard accuracy), i.e., our work mainly addresses task integrity. However, recent milestone works such as [19] demonstrated how fault injection techniques can be exploited to leak critical information about a model, which may help an adversary mount a model extraction attack. An interesting future work is to analyze what kind of information about the parameters can be revealed by one or several instruction skips. A first insight is that skipping filters gives important information about the importance of these filters for the prediction. This is what we observed in our experiments on the first convolutional layer (Section IV-A), where we highlighted the fact that exiting the loop from the \(17^{th}\) kernel onward has a limited effect on the accuracy, meaning that most of the important kernels are the first ones. Therefore, an adversary may focus their effort on recovering only a small part of the parameters, which can significantly facilitate the attack.

Many software countermeasures have been proposed by the hardware security community to protect critical algorithms (e.g., cryptographic modules) [3], and we mentioned some straightforward implementation advice to reduce the impact of the faults we performed in Section IV. However, the main challenge in terms of defense is the length of the inference code, which contains many loops. Therefore, many protections based on local verification (including those relying on redundancy) can lead to prohibitive additional costs and significantly reduce the performance of the system. Promising defense schemes include protections based on Control Flow Integrity (CFI), which aim at checking a program's execution flow and detecting potential alterations [24, 6].

Fig. 9: Targeting the ReLU activation function. (a) Assembly code. (b) Impact on accuracy when skipping the zeroing of the ReLU activation function (the laser and simulation curves overlap).

## VI Conclusion

We investigated the effect of single instruction skip fault attacks on a neural network model embedded in an ARM Cortex-M4 microcontroller. We used two standard and powerful injection means, laser and electromagnetic injection, as well as simulations.
We identified several vulnerabilities at different positions in the model architecture. In particular, it is possible to prematurely exit the loop over the convolutional filters, leading to incorrect predictions, and even to a so-called memory effect if the whole convolution loop is skipped (in that case, the faulted inference outputs the prediction of the previous one). Additionally, we demonstrated that instruction skips can alter the bias computation in a fully-connected layer, which may force the output prediction towards a chosen label. In a context of critical security concerns related to the large-scale deployment of AI systems, with upcoming regulation and certification actions, these results (the first with such an experimental scope) highlight the urgent need to properly evaluate the intrinsic robustness of embedded models. They pave the way for further analysis covering more model types and devices, as well as for assessing the relevance of state-of-the-art protections for embedded machine learning inference programs.

## Acknowledgment

This work is supported by (CEA-Leti) the European project InSecTT (ECSEL JU 876038) and by the French ANR in the _Investissements d'avenir_ program (ANR-10-AIRT-05, irt nanoelec); and (MSE) by the ANR PICTURE program. This work benefited from the French Jean Zay supercomputer through the AI dynamic access program.
2309.08799
SHAPNN: Shapley Value Regularized Tabular Neural Network
We present SHAPNN, a novel deep tabular data modeling architecture designed for supervised learning. Our approach leverages Shapley values, a well-established technique for explaining black-box models. Our neural network is trained using standard backward propagation optimization methods, and is regularized with real-time estimated Shapley values. Our method offers several advantages, including the ability to provide valid explanations with no computational overhead for data instances and datasets. Additionally, prediction with explanation serves as a regularizer, which improves the model's performance. Moreover, the regularized prediction enhances the model's capability for continual learning. We evaluate our method on various publicly available datasets and compare it with state-of-the-art deep neural network models, demonstrating the superior performance of SHAPNN in terms of AUROC, transparency, as well as robustness to streaming data.
Qisen Cheng, Shuhui Qu, Janghwan Lee
2023-09-15T22:45:05Z
http://arxiv.org/abs/2309.08799v1
# SHAPNN: Shapley Value Regularized Tabular Neural Network

###### Abstract

We present SHAPNN, a novel deep tabular data modeling architecture designed for supervised learning. Our approach leverages Shapley values, a well-established technique for explaining black-box models. Our neural network is trained using standard backward propagation optimization methods, and is regularized with real-time estimated Shapley values. Our method offers several advantages, including the ability to provide valid explanations with no computational overhead for data instances and datasets. Additionally, prediction with explanation serves as a regularizer, which improves the model's performance. Moreover, the regularized prediction enhances the model's capability for continual learning. We evaluate our method on various publicly available datasets and compare it with state-of-the-art deep neural network models, demonstrating the superior performance of SHAPNN in terms of AUROC, transparency, as well as robustness to streaming data.

## 1 Introduction

Tabular data is widely used in real-world applications like scientific analysis [Kehrer and Hauser (2012)], financial transactions [Andriosopoulos et al. (2019)], industrial planning [Hecklau et al. (2016)], etc. Tabular data are commonly presented in a structured and heterogeneous form [Borisov et al. (2022)], with data points or samples in rows, and features in columns, corresponding to particular dimensions of information. In the past decade, machine learning algorithms have been used to efficiently analyze tabular data, with most research focusing on classification and regression tasks [Athmaja et al. (2017)]. Gradient-boosted decision trees (GBDT) [Chen and Guestrin (2016)] and their extensions, such as LightGBM [Ke et al. (2017)] and CatBoost [Dorogush et al. (2018)], have emerged as dominant methods. However, these methods have limitations in practice due to their data-specific learning paradigm [Arik and Pfister (2021)]. Firstly, gradient-based tree structures impede continual learning, which is crucial in situations where live data streams in. Secondly, these models are typically data-specific and must be learned in a fully supervised manner, which hinders their ability to fuse with other models and data modalities under different degrees of label availability [Ke et al. (2019)].

Recently, deep learning has been explored as an alternative to GBDT-based models for analyzing tabular data [Huang et al. (2020)]. DNNs employ adaptable weights that can be gradually updated to learn almost any mapping from inputs to targets, and they have proven to be effective and flexible in handling various types of data modalities. DNN models can also conveniently learn from and adapt to continuously streaming data [Ke et al. (2019)]. However, despite these promising features, the performance of DNNs on tabular data often falls short of that of GBDT-based methods [Gorishniy et al. (2021)]. Additionally, DNN models are often considered a "black box" approach, lacking transparency in how they transform input data into model outputs [Klambauer et al. (2017)]. Due to these limitations of both GBDT and DNN, there is no clear winner for tabular data tasks [Kadra et al. (2021), Shwartz-Ziv and Armon (2022)].
In comparison to GBDT-based models, DNNs lack two crucial capabilities, which degrades their performance on various tabular tasks: (1) the ability to effectively utilize the most informative features through the splitting mechanism based on information gain, and (2) the capacity to progressively discover feature sets that lead to fine-grained enhancements through the boosting-based ensemble. We would argue that both capabilities contribute to evaluating feature utility and selecting relevant features during model training [Grinsztajn et al. (2022)]. In this study, we aim to address these challenges faced by current deep learning methods for tabular data. Our objective is to develop a DNN-based model that accomplishes the following goals: (i) achieves superior performance on tabular data tasks, (ii) provides quantitative explanations of model decisions, and (iii) facilitates effective continual learning. To achieve these goals, we introduce SHAPNN, which leverages the Shapley value as a bridge between GBDTs and DNNs. The Shapley value is a model-agnostic approach used for generating post-hoc explanations by quantifying the influence of each feature on predictions based on game theory. In SHAPNN, we incorporate Shapley value estimation into the DNN training process and use it as additional supervision to transfer feature evaluation and selection guidelines from GBDT-based priors. However, Shapley value estimation is time-consuming due to the exponentially growing number of feature subsets [Lundberg and Lee (2017)]. To overcome this obstacle, we utilize the recent FastSHAP framework [Jethani et al. (2021)] to efficiently estimate Shapley values and generate model predictions in a single forward propagation. Our approach also allows us to ensemble multiple prior models to provide comprehensive feature evaluation and selection guidelines. Moreover, at inference time, we utilize the estimated Shapley values to obtain feature-level explanations of how the model makes decisions.

We extend the utilization of Shapley values to enhance the continual learning of DNNs by using them as proxies that memorize the mapping from features to predictions at a certain time step. We can then use them to regulate the updating of models to achieve overall stability, eliminating the need for collecting and accessing all historical data during inference. Our extensive experiments demonstrate the effectiveness of the SHAPNN approach. Our contributions are threefold: 1) to the best of our knowledge, this is the first work to incorporate Shapley value estimation in DNN training for tabular data; 2) we demonstrate that the approach improves overall stability in continual learning of DNNs; 3) this method can be applied to different backbone models, resulting in performance improvements and quantitative explanations in a single feedforward pass.

In this paper, our motivations are introduced in Section 2. The background of Shapley values is presented in Section 3. Our proposed methodology is shown in Section 4. The experiment details and results are presented in Section 5. Related work is shown in Section 6. Section 7 concludes our paper.

## 2 An empirical study on tabular data

We provide an empirical example to explain our motivation for introducing Shapley-based regularization into deep neural network (DNN) training for tabular data. This example illustrates the shortcomings of a Multilayer Perceptron (MLP) in feature evaluation and selection compared to a Gradient Boosting Decision Tree (GBDT) model; a minimal script for reproducing this kind of comparison is sketched below.
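As a hedged illustration, the comparison described next can be reproduced along the following lines with scikit-learn and LightGBM; all hyperparameters and the noise-column counts here are placeholder choices of ours, not the exact setup behind Figure 1:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from lightgbm import LGBMClassifier

rng = np.random.default_rng(0)
X, y = load_iris(return_X_y=True)

for n_noise in [0, 4, 8, 16]:
    # Append uniformly distributed noise columns to the 4 real features.
    noise = rng.uniform(size=(X.shape[0], n_noise))
    Xn = np.hstack([X, noise]) if n_noise else X
    Xtr, Xte, ytr, yte = train_test_split(Xn, y, test_size=0.3,
                                          random_state=0)

    gbdt = LGBMClassifier(n_estimators=100).fit(Xtr, ytr)
    mlp = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=2000,
                        random_state=0).fit(Xtr, ytr)
    print(f"{n_noise:2d} noise cols | LGBM {gbdt.score(Xte, yte):.3f} "
          f"| MLP {mlp.score(Xte, yte):.3f}")
```

Running KernelSHAP on both fitted models then gives per-feature attributions analogous to Figure 1(b).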
We compared the classification accuracy of LGBM (a GBDT-based model) and an MLP on a customized Iris dataset [Fisher (1936)], to which we purposefully attach extra numerical features (columns) whose values are sampled from a uniform distribution. As demonstrated in Figure 1(a), we observe a significant decrease in the MLP's classification accuracy as the proportion of extra features grows. We further investigate the effect of each feature on the model prediction by examining their Shapley values using KernelSHAP [Lundberg and Lee (2017)]. As shown in Figure 1(b), we observe that the extra features have a larger impact on the MLP's predictions, which explains its performance degradation. In contrast, the GBDT model almost completely disregards the extra features, and its performance remains stable even with the introduction of the new features.

The aforementioned example suggests a potential remedy for the comparatively weaker feature evaluation and selection ability of DNNs. As Shapley values provide a measure of the contribution of each feature, we can align the values obtained by DNNs with those obtained by GBDTs, in order to supervise the training process. This approach has the potential to enhance DNN training by reducing the impact of irrelevant features and prioritizing the learning of useful ones.

## 3 Background

### Shapley value

The Shapley value aims to distribute the gain and cost fairly among the players in a coalition game to achieve a desired outcome or payoff. In a coalition game, there are \(N\) players and a characteristic function \(v(S)\) that maps a subset of players \(S\in\{0,1\}^{N}\) to a real number, representing the expected sum of the payoffs that the subset \(S\) can obtain through cooperation. The Shapley value \(\phi(v)\in R^{N}\) distributes the gains to the players. The contribution \(\phi_{i}(v)\) of player \(i\) is calculated as:

\[\phi_{i}(v)=\frac{1}{N}\sum_{S\subseteq N\setminus i}\binom{N-1}{|S|}^{-1}\left(v(S\cup i)-v(S)\right) \tag{1}\]

In the context of machine learning explanation, the characteristic function shows how the prediction for a sample changes when different subsets of features are removed. More specifically, given a sample \((x,y)\in\mathcal{D}\) from dataset \(\mathcal{D}\), where \(x=(x_{1},...,x_{N})\) is the input vector and \(y\in\{1,...,K\}\) is the output among \(K\) classes for a classification problem, the characteristic function \(v\) is defined as follows:

\[v_{x,y}(S)=E_{p(x_{1-S})}[\mathrm{Softmax}(f_{\theta}(x_{S},x_{1-S}))_{y}] \tag{2}\]

Here, \(f_{\theta}\) represents the machine learning model. The cost of exactly computing the Shapley value increases exponentially with the number of players (features) \(N\). Various approximate solutions have been proposed to improve efficiency [Lundberg and Lee (2017)]. Despite these, accurately estimating the Shapley value can still be extremely slow for large-scale and high-dimensional cases.

### FastSHAP

Due to the computational cost of Shapley value estimation, we adopt the FastSHAP approach introduced in [Jethani et al. (2021)] to perform amortized estimation of Shapley values. Specifically, we first learn a FastSHAP function \(\phi_{fast,\gamma}(x,y):X\times Y\to R^{N}\), with \(\gamma\) being the Shapley value generation model that maps each feature to a Shapley value.
The function is learned in a single forward pass by penalizing predictions using the following loss:

\[\mathcal{L}_{\gamma}=E_{p(x)}E_{U(y)}E_{p(S)}[(v_{x,y}(S)-v_{x,y}(\emptyset)-S^{T}\phi_{fast,\gamma}(x,y))^{2}] \tag{3}\]

where \(S\) denotes a subset of features, \(p(S)\propto\frac{N-1}{\binom{N}{\mathbb{1}^{T}S}\,(\mathbb{1}^{T}S)(N-\mathbb{1}^{T}S)}\) is the Shapley kernel distribution, and \(U(y)\) denotes uniform sampling over the \(K\) classes.

Figure 1: Comparison between LGBM and MLP under the impact of noisy features. (a) shows the prediction accuracy comparison between the 2 models with different amounts of noisy features. (b) shows the Shapley values of each feature for both models.

To further improve training efficiency, we use additive efficient normalization to obtain the Shapley value estimation function \(\phi_{fast,\gamma}^{eff}(x,y)\):

\[\phi_{fast,\gamma}^{eff}(x,y)=\phi_{fast,\gamma}(x,y)+\frac{1}{N}\left(v_{x,y}(\mathbb{1})-v_{x,y}(\emptyset)-\mathbb{1}^{T}\phi_{fast,\gamma}(x,y)\right) \tag{4}\]

Here, \(\phi_{fast,\gamma}(x,y)\) denotes the original FastSHAP function, and \(v_{x,y}(\mathbb{1})\) and \(v_{x,y}(\emptyset)\) are the prediction values with all features present and with no features present, respectively. FastSHAP consists of three steps: 1) Train a machine learning model \(f_{\theta}(x)\to y\) to be explained. 2) Train a surrogate model \(f_{surr,\beta}(x,m)\to y\) that approximates the original prediction model under a masking function \(m(x,S)\), which replaces each feature \(x_{i}\) outside the support of \(S\) with a default value. 3) Train the Shapley value generation model \(\phi_{\gamma}(x)\to v_{x}(S)\).

## 4 Methodology

### SHAPNN

This section presents the Shapley-based Neural Network (SHAPNN), which is built upon FastSHAP. By utilizing estimated Shapley values as intermediate features, SHAPNN is designed to construct a machine-learning prediction model that achieves both high prediction accuracy and interpretability. Both model predictions and Shapley value estimations are obtained in a single forward pass. The neural network serves as the foundation for (1) the Shapley value generation model \(\phi_{\gamma}(x)\in R^{N\times K}\), which takes the input feature vector and generates the Shapley value vector for each possible class, and (2) the surrogate model \(f_{\beta}(x,S)\to y\), which takes the input feature vector and support \(S\) to produce the predicted label. The Concat SHAPNN \(f_{w}(x)\) is constructed by incorporating the estimated Shapley value \(v_{x}(S)\) as part of the input feature to the prediction model \(f_{w^{\prime}}:f_{w^{\prime}}(f_{w\setminus w^{\prime}}(x),v_{x}(S))\to y\), where \(v_{x}(S)=\phi_{\gamma}(x)\in R^{N,K}\) represents the estimated Shapley value. The Concat SHAPNN's loss function (\(\mathcal{L}\)) is:

\[\mathcal{L}=\mathcal{L}_{\gamma}+CE(y^{\prime},y) \tag{5}\]

Here, \(CE\) denotes the cross-entropy loss function for classification.

Figure 2: The SHAPNN framework involves generating surrogate models from prior models. During the training of the DNN, the input data \(X\) is perturbed, and the Shapley values for the perturbed dimensions are estimated in an intermediate block. The original data and the estimated Shapley values are then passed to the prediction block for classification.

### Ensemble prior

The SHAPNN with ensemble prior is developed by aligning the estimated Shapley values to a series of GBDT models, such as an ensemble prior that combines XGBoost and LightGBM. These Shapley value
estimations are then integrated into the input feature for the prediction model, \(f_{w^{\prime}}(f_{w\setminus w^{\prime}}(x),v_{x}(S))\to y^{\prime}\), where \(v_{x}(S)=\phi_{\gamma}(x)\in R^{N,K}\) is the estimated Shapley value. The overall SHAPNN loss function is defined as:

\[\mathcal{L}=agg_{k}(\mathcal{L}_{\gamma}^{k})+CE(y^{\prime},y) \tag{6}\]

where the aggregation operator \(agg\) combines the losses from each prior model of the ensemble, indexed by \(k\). In practice, we use a weighted sum for aggregation. This design of SHAPNN enables explainability while also achieving higher performance.

### Continual learning

The concept of continual learning can be framed as follows: given a data stream composed of a series of data batches \(x^{t}\), indexed by \(t\in[0,1,...,T]\), and a model \(f_{w}(x)\to y\) that is sequentially trained on each data batch \(x^{t}\) and recorded at each time step as \(f_{w}^{t}\), the task is to make two predictions at each time step. Firstly, using the most recently recorded model (\(f_{w}^{t-1}\)), we make predictions (\(\hat{y}^{t}\)) on the current data batch \(x^{t}\). Note that this batch of data is not available for model training before making the prediction. Secondly, we make backward predictions (\(\hat{y}^{t-1}\)) on data batches that precede \(t-1\) using \(f_{w}^{t-1}\). Our aim is to ensure that both \(\hat{y}^{t}\) and \(\hat{y}^{t-1}\) are accurate predictions of their respective true labels \(y\). During each time step \(t\), the model is trained using a combination of the model prediction loss and the Shapley estimation regularization, as described in previous sections. To ensure the model remains robust to concept drift, we generate pseudo labels for time step \(t\) by applying mixup (Zhang et al. (2017)) between the true label and all predictions from surrogate models of previous steps. This involves combining the true label (\(y^{t}\)) with a weighted average of the predictions (\(f_{w}^{i}(x^{t})\)) from previous steps \(i\in\{1,...,t-1\}\), where the weight is controlled by a parameter \(\alpha\):

\[\widetilde{y^{t}}=\alpha\cdot y^{t}+(1-\alpha)\cdot\sum_{i}^{t-1}f_{w}^{i}(x^{t}) \tag{7}\]

To ensure stable feature selection and evaluation during continual learning, we also extend the regularization by including all the explanation models \(\gamma_{t}\) from previous time steps. Thus, the model at time step \(t\) is trained using the following loss:

\[\mathcal{L}^{t}=\sum_{i}^{t-1}\lambda^{i}\cdot\mathcal{L}_{\gamma}^{i}+CE(\hat{y}^{t},\widetilde{y^{t}}) \tag{8}\]

where \(\lambda^{i}\) is a discount factor for the loss from each time step, and \(\sum_{i}^{t-1}\lambda^{i}=1\). In practice, we use a decaying schedule that emphasizes recent steps and reduces the effect of distant steps.

## 5 Experiments

### Implementation and setup

To evaluate the generalizability of our SHAPNN approach, we conducted our experiments with two popular DNN models for processing tabular data: the Multi-Layer Perceptron (Kadra et al. (2021)) and the recently published FT-Transformer (Gorishniy et al. (2021)), which has demonstrated state-of-the-art performance on various tabular datasets. The MLP has 3 hidden layers, each containing 512 neurons, while the FT-Transformer's hyperparameters follow (Gorishniy et al. (2021)). The Shapley estimation block for both implementations consists of a 2-layer MLP with an output dimension equal to the number of features in each dataset. The prediction layer is a linear projection layer without nonlinear activation functions.
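To make the combined objective of Eq. 5 concrete, the following PyTorch-style sketch outlines one training step. The module interfaces (`backbone`, `shap_head`, `pred_head`, `surrogate`) are our own placeholder names, and the feature subset \(S\) is drawn uniformly here for brevity rather than from the Shapley kernel \(p(S)\); treat this as a sketch under those assumptions, not the authors' exact implementation:

```python
import torch
import torch.nn.functional as F

def shapnn_step(backbone, shap_head, pred_head, surrogate, x, y, opt,
                n_classes):
    """One SHAPNN training step sketching Eq. 5: L = L_gamma + CE."""
    B, N = x.shape
    feats = backbone(x)                              # shared representation
    phi = shap_head(feats).view(B, N, n_classes)     # Shapley estimates
    logits = pred_head(torch.cat([feats, phi.flatten(1)], dim=1))

    # FastSHAP-style regression loss on a sampled feature subset S
    # (uniform Bernoulli masks here instead of the Shapley kernel).
    S = (torch.rand(B, N, device=x.device) < 0.5).float()
    y1h = F.one_hot(y, n_classes).float()
    v_S = (surrogate(x, S) * y1h).sum(dim=1)                   # v_{x,y}(S)
    v_0 = (surrogate(x, torch.zeros_like(S)) * y1h).sum(dim=1) # v_{x,y}(0)
    phi_y = (phi * y1h[:, None, :]).sum(dim=-1)      # (B, N) class-y values
    l_gamma = ((v_S - v_0 - (S * phi_y).sum(dim=1)) ** 2).mean()

    loss = l_gamma + F.cross_entropy(logits, y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

For the ensemble-prior variant of Eq. 6, one such regression loss would be computed per prior model and combined with a weighted sum before adding the cross-entropy term.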
We employed the standard Stochastic Gradient Descent (SGD) optimizer and followed the hyper-parameter settings outlined in (Gorishniy et al. (2021)), including the learning rate selection. More detail is given in the Appendix.

### Tabular data analysis and datasets

We conducted experiments on several well-known benchmark datasets, including: 1) the Adult Income dataset (Kohavi et al. (1996)), which comprises 48842 instances of adult income data with 14 attributes; 2) the Electricity dataset [Hoiem et al. (2009)], which contains 45312 instances of electricity consumption with 8 real-valued attributes; 3) the Iris dataset [Fisher (1936)], consisting of 3 types of Iris flowers, each with 50 samples; 4) the Epsilon dataset [Blackard and Dean (1999)], comprising 40000 objects with 2001 columns of simulated experiments; and 5) the Covertype dataset [Hulten et al. (2001)], which includes 581012 instances of tree samples, each with 54 attributes. We specifically chose the Epsilon and Covertype datasets for their higher dimensionality, which allows us to demonstrate the efficiency and scalability of our method. The evaluation metric used for all analyses in this section is the Area Under the Receiver Operating Characteristic curve (AUROC). We chose this metric to ensure a fair comparison and to account for label imbalance bias.

### Model prediction results

Table 1 shows that our SHAPNN approach applied to the MLP consistently improves performance over the vanilla MLP baseline on all tabular data benchmarks. The magnitude of improvement appears to be associated with the difficulty of the datasets. On the challenging Adult Income dataset, which has missing values and different data types in its features (Shwartz-Ziv and Armon (2022)), we achieve an improvement in AUROC of 1.3%. We observe a 0.6% increase in AUROC over the original 94.6% on the Iris dataset, which has the smallest size and fewest features among the five datasets. Table 1 also shows the test results on the FT-Transformer backbone, where we also observe improvements over the baseline model in all 5 test cases. Notably, the FT-Transformer is a stronger baseline than the MLP, potentially due to its attention mechanism that effectively weighs the features based on their pairwise correlations. Nevertheless, our approach still benefits the FT-Transformer by enhancing feature evaluation and selection. Additionally, we compare against two widely used models for tabular classification tasks, Logistic Regression (LR) and Random Forest (RF), to further contextualize the FT-Transformer's performance. The results show that the FT-Transformer's performance is comparable to, or better than, that of LR and RF.
\begin{table}
\begin{tabular}{l l c c c c c}
\hline \hline
 & & Adult & Electricity & Iris & Epsilon & Covertype \\
\hline
\multirow{6}{*}{Models} & Logistic Regression & 0.793 & 0.774 & 0.935 & 0.854 & 0.945 \\
 & Random Forest & 0.837 & 0.822 & 0.959 & 0.892 & 0.957 \\
\cline{2-7}
 & MLP & 0.839 & 0.790 & 0.946 & 0.883 & 0.955 \\
 & **SHAPNN (MLP)** & **0.852** & **0.818** & **0.952** & **0.892** & **0.961** \\
\cline{2-7}
 & FT-Transformer & 0.849 & 0.824 & 0.954 & 0.890 & 0.960 \\
 & **SHAPNN (FT-Transformer)** & **0.857** & **0.835** & **0.957** & **0.894** & **0.969** \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Prediction results (AUROC)

\begin{table}
\begin{tabular}{l l c c c c c}
\hline \hline
 & & Adult & Electricity & Iris & Epsilon & Covertype \\
\hline
\multirow{2}{*}{Models} & SHAPNN (single prior) & 0.849 & 0.807 & 0.952 & 0.889 & 0.961 \\
 & SHAPNN (ensemble prior) & **0.852** & **0.818** & 0.952 & **0.892** & 0.961 \\
\hline \hline
\end{tabular}
\end{table}
Table 2: Performance comparison between single and ensemble priors (AUROC)

\begin{table}
\begin{tabular}{l c c}
\hline \hline
Dataset & SHAPNN & KernelSHAP \\
\hline
Epsilon & **4.7 s** & 34.9 s \\
Covertype & **0.8 s** & 5.2 s \\
\hline \hline
\end{tabular}
\end{table}
Table 3: Inference speed comparison

#### 5.3.1 Single prior vs. ensemble priors

The performance comparison between a DNN trained with a single prior model and one trained with an ensemble of prior models is presented in Table 2. The results show that on 3 of the 5 datasets, including the more challenging Adult Income and Electricity datasets, using ensemble priors leads to better performance than using a single prior. However, on the Iris and Covertype datasets, where the original performance is already high, the performance with ensemble priors is the same as with a single prior. The observed improvement in performance may be attributed to the ensemble priors providing a more comprehensive evaluation of features than a single prior.

### Model explanation results

Figures 3(a) and 3(b) illustrate SHAPNN's ability to provide quantitative explanations at both the sample-wise and population-wise levels, respectively, using the Adult Income dataset as an example. For each type of explanation, SHAPNN presents the impact of each feature on the model prediction, along with its magnitude and polarity. In sample-wise explanations, the magnitude indicates the importance of each feature, while the polarity reflects the direction in which a feature influences the model prediction for a particular sample. For example, the education length feature appears to be important for predicting personal income, with a positive contribution for high earners and a negative contribution for low earners. Notably, negative-class samples (i.e., low earners) are associated with features of overwhelmingly negative impact, while positive-class samples have more diverse feature influences. Similarly, population-wise explanations demonstrate the general relationship between feature values and their influence within a given population. In this example, relationship and marital status are identified as two important factors. We can interpret from the plot that not being in a relationship or being married almost always contributes positively to earning status, whereas the influence is more diverse for the opposite conditions.
It is also worth mentioning that only a few features have high Shapley values, which could be an effect of the proposed regularization. To evaluate the efficiency of our method in generating explanations, we conducted a wall-clock experiment comparing the inference time consumed by SHAPNN and KernelSHAP (Lundberg and Lee (2017)) for generating sample-wise explanations. We tested the Covertype and Epsilon datasets due to their relatively higher dimensionality. We report the average inference time over 100 randomly sampled data points in Table 3. Our method was found to provide a 7-8x speedup over KernelSHAP.

Figure 3: Explanation examples on the Adult Income dataset. (a) shows the sample-wise explanation examples, and (b) gives the population-wise explanation examples.

### Continual learning analysis

We further analyze the ability of SHAPNN to handle streaming data through the continual learning framework. Continual learning presents two conflicting challenges (De Lange et al. (2021)): the model should quickly adapt to incoming data, which often exhibits concept drift, but it should not forget the knowledge learned from previous data and become biased towards the newest data. To comprehensively evaluate the model's performance in both aspects, we conduct both online adaptation and retrospective tests. We use three synthetic streaming datasets with controlled levels of concept drift for this analysis: the STA dataset (Gama et al. (2004)), the SEA dataset (Street and Kim (2001)), and the ROT dataset (Hulten et al. (2001)). In all three datasets, the mapping between features and predictors changes over time with different concept drifts defined by certain functions. Recurring and abrupt concept drift is introduced into each time window by randomly shuffling the parameters of the functions. The function definitions can be found in the Appendix. These datasets pose a significant challenge to the model.

#### 5.5.1 Online adaptation

For all three datasets, we conduct an adaptation test by assuming that only the most recent data is available for re-training. This means that we test the model on each time step \(t\) after updating it with the most recent data (i.e., data batch \(t-1\)). We compare two scenarios: one with SHAPNN and one without SHAPNN, using an MLP as the backbone model (see Appendix) in both cases. Figures 4(a) to 4(c) depict the online adaptation results on these streaming datasets. The comparison between the baseline case and the SHAPNN approach reveals that the latter provides much more stable performance across all time steps. The fluctuations are reduced, and the average performance is substantially higher. These results demonstrate SHAPNN's capability for online adaptation to streaming data.

#### 5.5.2 Retrospective test

For this test, we update the MLP model (see Appendix) using the same approach as in the online adaptation test. We assess the model's performance by predicting the historic data it was trained on and report the average AUROC over all past time steps. The retrospective testing outcomes are displayed in Table 4 and are reported at timesteps 10 and 50. Since no historical data is used in model retraining, the MLP baseline performs poorly on previous data batches after updating its weights at the evaluation time step. At timestep 50, the MLP model barely outperforms random guessing, which clearly indicates the catastrophic forgetting issue. The model's weights are biased toward the latest data and lose previously learned concepts.
On the other hand, SHAPNN consistently maintains a higher model performance on previous data batches, which shows the efficacy of SHAPNN in mitigating the catastrophic forgetting issue.

Figure 4: Online adaptation performance for (a) STA, (b) SEA, and (c) ROT datasets.

\begin{table}
\begin{tabular}{l l c c c c c c}
\hline \hline
 & Dataset & \multicolumn{2}{c}{STA} & \multicolumn{2}{c}{SEA} & \multicolumn{2}{c}{ROT} \\
 & Timestep & 10 & 50 & 10 & 50 & 10 & 50 \\
\hline
\multirow{2}{*}{Models} & MLP & 0.647 & 0.493 & 0.627 & 0.563 & 0.692 & 0.583 \\
 & SHAPNN (MLP) & **0.715** & **0.673** & **0.902** & **0.757** & **0.881** & **0.785** \\
\hline \hline
\end{tabular}
\end{table}
Table 4: Retrospective testing results (AUROC)

## 6 Related work

**Neural networks for tabular data** Several approaches have been proposed to enhance the performance of tree-based models for analyzing tabular data, either by extending them with deep learning techniques or by designing new neural architectures (Borisov et al. (2022)). Two main categories of model architectures have emerged from these efforts: differentiable trees and attention-based models. For instance, TabNet leverages sequential attention to perform feature selection and learning (Arik and Pfister (2021)), while NODE uses an ensemble of shallow neural nets connected in a tree fashion (Popov et al. (2019)). Another example is Net-DNF, which utilizes disjunctive normal neural form blocks to achieve feature splitting and selection (Katzir et al. (2020)). More recently, researchers have explored applying transformer-based models to tabular data, with TabTransformer being the first attempt to do so (Huang et al. (2020)). This approach has been further improved upon in SAINT, which introduced additional row-wise attention (Sompenalli et al. (2021)). The state-of-the-art method in this category is the Feature-Tokenizer Transformer, which enhances the learning of embeddings from tabular data with a tailored tokenizer (Gorishniy et al. (2021)).

**Interpretable machine learning** The importance of generating interpretable tabular neural networks has gained increasing attention in recent years, particularly for critical applications where explanations are essential (Sahakyan et al. (2021)). Existing work in this area often relies on attention-based mechanisms to generate feature-level explanations (Konstantinov and Utkin (2022)). Another line of research involves using model-agnostic approaches to explain trained models, such as KernelSHAP and its extensions (Lundberg and Lee (2017); Covert and Lee (2021)). While most Shapley-based explanations are performed post-hoc, Wang et al. (2021) proposed a Shapley Explanation Network that incorporates Shapley values during training by adding extra Shapley value estimation modules to the neural net. In contrast, our approach uses amortized estimation to generate and leverage Shapley-based representations, which largely reduces the complexity of incorporating Shapley values.

**Continual learning** Concept drift handling and adapting to new data after model training have been extensively discussed and explored even before the advent of deep learning (Widmer and Kubat (1996); Gama et al. (2014)). Typically, existing work relies on collectively re-training a new model on the aggregated historical data. With deep learning, this concept has been extended to continual learning, which focuses on learning new tasks while preventing the model from forgetting what has been learned on old tasks (Chen and Liu (2018)). As summarized in (De Lange et al. (2021)), prior work has introduced additional regularization terms during training (Aljundi et al. (2018); Zhang et al. (2020)), learned separate sets of parameters for different tasks (Aljundi et al. (2017); Rosenfeld and Tsotsos (2018)), or retained sampled historical data in a memory buffer to compensate for new task data during re-training (Rolnick et al. (2019); Lopez-Paz and Ranzato (2017)). For instance, ASER (Shim et al.
(2021)) leverages Shapley values to adversarially select buffered data samples for effective re-training. In a similar vein, we also utilize Shapley values for continual learning. However, unlike ASER, we directly leverage the Shapley value estimators of past models as a medium for retaining knowledge from past training without accessing any historical data. Since the Shapley value estimators already contain the information on the mapping between features and predictions, we use them to regularize the parameter updates.

## 7 Conclusion, Broader Impact, Limitations, LLM Statement

We introduce SHAPNN, a new deep-learning architecture for supervised learning tasks on tabular data. The neural network incorporates a real-time Shapley value estimation module, which is trained through standard backward propagation. The estimation module provides enhanced regularization for model training, which leads to performance improvements and enables valid explanations with no extra computational cost. Furthermore, the Shapley-based regularization improves the ability to perform continual learning. We extensively evaluate SHAPNN on publicly available datasets and compare it to state-of-the-art deep learning models, demonstrating its superior performance. We also show that SHAPNN is effective in continual learning, adapting to concept drift and remaining robust to noisy data. Our work could potentially facilitate general data analysis and improve the transparency and trustworthiness of AI. Some limitations of our method include: 1) prior models need to be trained separately, ahead of the training of the neural network; 2) our model may have an upper limit on its capacity to adapt to new concepts or drifts. In this paper, we used an LLM to correct grammatical mistakes.
2301.13376
Quantized Neural Networks for Low-Precision Accumulation with Guaranteed Overflow Avoidance
We introduce a quantization-aware training algorithm that guarantees avoiding numerical overflow when reducing the precision of accumulators during inference. We leverage weight normalization as a means of constraining parameters during training using accumulator bit width bounds that we derive. We evaluate our algorithm across multiple quantized models that we train for different tasks, showing that our approach can reduce the precision of accumulators while maintaining model accuracy with respect to a floating-point baseline. We then show that this reduction translates to increased design efficiency for custom FPGA-based accelerators. Finally, we show that our algorithm not only constrains weights to fit into an accumulator of user-defined bit width, but also increases the sparsity and compressibility of the resulting weights. Across all of our benchmark models trained with 8-bit weights and activations, we observe that constraining the hidden layers of quantized neural networks to fit into 16-bit accumulators yields an average 98.2% sparsity with an estimated compression rate of 46.5x all while maintaining 99.2% of the floating-point performance.
Ian Colbert, Alessandro Pappalardo, Jakoba Petri-Koenig
2023-01-31T02:46:57Z
http://arxiv.org/abs/2301.13376v1
# Quantized Neural Networks for Low-Precision Accumulation with Guaranteed Overflow Avoidance

###### Abstract

Quantizing the weights and activations of neural networks significantly reduces their inference costs, often in exchange for minor reductions in model accuracy. This is in large part due to compute and memory cost savings in operations like convolutions and matrix multiplications, whose resulting products are typically accumulated into high-precision registers, referred to as accumulators. While many researchers and practitioners have taken to leveraging low-precision representations for the weights and activations of a model, few have focused attention on reducing the size of accumulators. Part of the issue is that accumulating into low-precision registers introduces a high risk of numerical overflow which, due to wraparound arithmetic, can significantly degrade model accuracy. In this work, we introduce a quantization-aware training algorithm that guarantees avoiding numerical overflow when reducing the precision of accumulators during inference. We leverage weight normalization as a means of constraining parameters during training using accumulator bit width bounds that we derive. We evaluate our algorithm across multiple quantized models that we train for different tasks, showing that our approach can reduce the precision of accumulators while maintaining model accuracy with respect to a floating-point baseline. We then show that this reduction translates to increased design efficiency for custom FPGA-based accelerators. Finally, we show that our algorithm not only constrains weights to fit into an accumulator of user-defined bit width, but also increases the sparsity and compressibility of the resulting weights. Across all of our benchmark models trained with 8-bit weights and activations, we observe that constraining the hidden layers of quantized neural networks to fit into 16-bit accumulators yields an average 98.2% sparsity with an estimated compression rate of 46.5x all while maintaining 99.2% of the floating-point performance.

## 1 Introduction

Quantization is the process of reducing the range and precision of the numerical representation of data. Among the many techniques used to reduce the inference costs of neural networks (NNs), integer quantization is one of the most widely applied in practice (Gholami et al., 2021). The reduction in compute and memory requirements resulting from low-precision quantization provides increased throughput, power savings, and resource efficiency, usually in exchange for minor reductions in model accuracy (Hubara et al., 2017).

During inference, information is propagated through the layers of an NN, where most of the compute workload is concentrated in the multiply-and-accumulates (MACs) of operators such as convolutions and matrix multiplications. It has been shown that reducing the bit width of the accumulator can increase throughput and bandwidth efficiency for general-purpose processors by creating more opportunities to increase parallelism (de Bruin et al., 2020; Ni et al., 2021; Xie et al., 2021). However, exploiting such an optimization is non-trivial, as doing so incurs a high risk of overflow, which can introduce numerical errors that significantly degrade model accuracy due to wraparound two's-complement arithmetic (Ni et al., 2021). Previous work has sought to either reduce the risk of numerical overflow (Xie et al., 2021; Sakr et al., 2019) or mitigate its impact on model accuracy (Ni et al., 2021).
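To see why a narrow accumulator is risky, consider the following NumPy sketch (our illustration, not code from the paper), which models a 16-bit two's-complement register with explicit modular arithmetic and accumulates products of unsigned 8-bit activations with signed 8-bit weights into it:

```python
import numpy as np

def wrap(v, bits):
    """Interpret v modulo 2**bits as a signed two's-complement integer."""
    m = 1 << bits
    v = v % m
    return v - m if v >= (m >> 1) else v

rng = np.random.default_rng(0)
K = 256
x = rng.integers(0, 256, size=K)       # unsigned 8-bit activations
w = rng.integers(-128, 128, size=K)    # signed 8-bit weights

exact = int(np.dot(x, w))
acc16 = 0
for xi, wi in zip(x, w):
    acc16 = wrap(acc16 + int(xi) * int(wi), 16)  # 16-bit accumulator

print("exact dot product :", exact)
print("16-bit accumulator:", acc16)    # generally differs once partial
                                       # sums exceed the 16-bit range
```

With \(K=256\), the exact dot product can exceed the \([-32768, 32767]\) range by orders of magnitude, so the wrapped result is essentially unrelated to the true value, which is the kind of numerical error that motivates this work.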
In this work, we train quantized NNs (QNNs) to avoid numerical overflow altogether when using low-precision accumulators during inference. To fully exploit the wider design space exposed by considering low-precision weights, activations, and accumulators, we target model deployment on FPGA accelerators with custom spatial streaming dataflow rather than general-purpose platforms like CPUs or GPUs. The flexibility of FPGAs makes them ideal devices for low-precision inference engines as they allow for bit-level control over every part of a network; the precision of weights, activations, and accumulators can be individually tuned to custom data types for each layer without being restricted to power-of-2 bit widths like a CPU or a GPU would be. The contributions of our work are summarized as follows:

* We show that reducing the bit width of the accumulator can reduce the resource utilization of custom low-precision QNN inference accelerators.
* We derive comprehensive bounds on accumulator bit widths with finer granularity than existing literature.
* We introduce a novel quantization-aware training (QAT) algorithm that constrains learned parameters to avoid numerical overflow when reducing the precision of accumulators during inference.
* We show that our algorithm not only constrains weights to fit into an accumulator of user-defined bit width, but also significantly increases the sparsity and compressibility of the resulting weights.
* We integrate our algorithm into the Brevitas quantization library (Pappalardo, 2021) and the FINN compiler (AMD-Xilinx, 2023) to demonstrate an end-to-end flow for training and deploying QNNs using low-precision accumulators when targeting custom streaming architectures on AMD-Xilinx FPGAs.

To the best of our knowledge, we are the first to explore the use of low-precision accumulators to improve the design efficiency of programmable QNN inference accelerators. However, our results have implications outside of the accelerators generated by FINN. Constraining the accumulator bit width to a user-defined upper bound has been shown to increase throughput and bandwidth efficiency on general-purpose processors (de Bruin et al., 2020; Ni et al., 2021; Xie et al., 2021) and to reduce the compute overhead of homomorphic encryption arithmetic (Lou and Jiang, 2019). Furthermore, our experiments show that our algorithm can offer a better trade-off between resource utilization and model accuracy than existing approaches, confirming the benefit of including the accumulator bit width in the overall hardware-software (HW-SW) co-design space.

## 2 Related Work

As activations propagate through the layers of a QNN, the intermediate partial sums resulting from convolutions and matrix multiplications are typically accumulated in a high-precision register before being requantized and passed to the next layer, which we depict in Fig. 1. While many researchers and practitioners have taken to leveraging reduced-precision representations for weights and activations (Jacob et al., 2018; Gholami et al., 2021; Nagel et al., 2021; Zhang et al., 2022), few works have focused attention on reduced-precision accumulators (Sakr et al., 2019; de Bruin et al., 2020; Xie et al., 2021; Ni et al., 2021). One approach to training QNNs to use low-precision accumulators is to mitigate the impact of overflow on model accuracy. Xie et al.
2021 sought to reduce the risk of overflow using an adaptive scaling factor tuned during training; however, their approach relies on distributional assumptions that cannot guarantee overflow avoidance during inference. Alternatively, Ni et al. 2021 proposed training QNNs to be robust to overflow using a cyclic activation function based on expensive modulo arithmetic. They also use a regularization penalty to control the amount of overflow. In both approaches, overflow is accounted for at the outer-most level, which fails to consider possible overflow when accumulating intermediate partial sums. Moreover, modeling overflow at the inner-most accumulation level during QAT is not easily supported by off-the-shelf deep learning frameworks, as it is not directly compatible with fake-quantization over pre-existing floating-point backends. As such, the current practice is to either use high-precision registers or simply saturate values as they are accumulated; however, such clipping can still: (1) introduce errors that cascade when propagated through a QNN; and (2) require saturation logic, which can break associativity and add to latency and area requirements (AMD-Xilinx, 2023). Thus, in our work, we train QNNs to completely avoid overflow rather than simply reducing its impact on model accuracy.

Most similar to our work is that of de Bruin et al. 2020, which proposed an iterative layer-wise optimization strategy to select mixed-precision bit widths to avoid overflow, using computationally expensive heuristics that assume signed bit widths for all input data types. Our proposed method constrains weights to avoid numerical overflow through the construction of our weight normalization-based quantization formulation, which accounts for both signed and unsigned input data types while adding negligible training overhead. Tangential to our work, Wang et al. 2018 and Sakr et al. 2019 study the impact of reduced-precision floating-point accumulators for the purpose of accelerating training. Such methods do not directly translate to fixed-point arithmetic, which is the focus of this work.

Figure 1: A simplified illustration of fixed-point arithmetic in neural network inference. Quantized weights are frozen during inference. Input/output data is dynamic and thus scaled then clipped as the hidden representations (_i.e._, activations) are passed through the network. The accumulator needs to be big enough to fit the dot product of the learned weights with input data vectors, which are assumed to both be \(K\)-dimensional.

## 3 Background

Our work explores the use of weight normalization as a means of constraining weights during QAT for the purpose of avoiding overflow when using low-precision accumulators. Here, we provide background related to this objective.

### Quantization-Aware Training (QAT)

The standard operators used to emulate quantization during training rely on uniform affine mappings from a high-precision real number to a low-precision quantized number, allowing the core computations to use integer-only arithmetic (Jacob et al., 2018). The quantizer (Eq. 1) and dequantizer (Eq. 2) are parameterized by zero-point \(z\) and scaling factor \(s\). Here, \(z\) is an integer value that maps to the real zero such that the real zero is exactly represented in the quantized domain, and \(s\) is a strictly positive real scalar that corresponds to the resolution of the quantization function.
Scaled values are rounded to the nearest integers using half-way rounding, denoted by \(\lfloor\cdot\rceil\), and elements that exceed the largest supported values in the quantized domain are clipped: \(\text{clip}(x;n,p)=\min(\max(x;n);p)\), where \(n\) and \(p\) are dependent on the data type of \(x\). For signed integers of bit width \(b\), we assume \(n=-2^{b-1}\) and \(p=2^{b-1}-1\) and assume \(n=0\) and \(p=2^{b}-1\) when unsigned. \[\text{quantize}(x;s,z) :=\text{clip}(\left\lfloor\frac{x}{s}\right\rceil+z;n,p) \tag{1}\] \[\text{dequantize}(x;s,z) :=s\cdot(x-z) \tag{2}\] It has become increasingly common to use unique scaling factors for each of the output channels of the learned weights to adjust for varied dynamic ranges (Nagel et al., 2019). However, extending this strategy to the activations incurs additional overhead as it requires either storing partial sums or introducing additional control logic. As such, it is standard practice to use per-tensor scaling factors for activations and per-channel scaling factors on only the weights. It is also common to constrain the weight quantization scheme such that \(z=0\), which is referred to as symmetric quantization. Eliminating these zero points reduces the computational overhead of cross-terms when executing inference using integer-only arithmetic (Jain et al., 2020). During training, the straight-through estimator (STE) (Bengio et al., 2013) is used to allow local gradients to permeate the rounding function such that \(\nabla_{x}\lfloor x\rceil=1\) everywhere, where \(\nabla_{x}\) denotes the local gradient with respect to \(x\). ### Weight Normalization Weight normalization reparameterizes each weight vector \(\mathbf{w}\) in terms of a parameter vector \(\mathbf{v}\) and a scalar parameter \(g\) as given in Eq. 3, where \(\|\mathbf{v}\|_{2}\) is the Euclidean norm of the \(K\)-dimensional vector \(\mathbf{v}\)(Salimans and Kingma, 2016). This simple reparameterization fixes the Euclidean norm of weight vector \(\mathbf{w}\) such that \(\|\mathbf{w}\|_{2}=g\), which enables the magnitude and direction to be independently learned. \[\mathbf{w}=g\cdot\frac{\mathbf{v}}{\|\mathbf{v}\|_{2}} \tag{3}\] Tangential to our work, prior research has sought to leverage weight normalization as a means of regularizing long-tail weight distributions during QAT (Cai and Li, 2019). They replace the standard \(\ell_{2}\)-norm with an \(\ell_{\infty}\)-norm and derive a projection operator to map real values into the quantized domain. In our work, we replace the \(\ell_{2}\)-norm with an \(\ell_{1}\)-norm to use the weight normalization parameterization as a means of constraining learned weights during training to use a pre-defined accumulator bit width during inference. Figure 2: We adapt images from (Umuroglu et al., 2017; Blott et al., 2018) to provide: (a) an overview of the FINN framework; and (b) an abstraction of the matrix-vector-activation unit (MVAU), which is one of the primary building blocks used by the FINN compiler to generate custom streaming architectures. ## 4 Motivation To motivate our research objective, we evaluate the impact of accumulator bit width on the resource utilization of custom FPGA accelerators with spatial dataflow architectures. To do so, we adopt FINN (Umuroglu et al., 2017; Blott et al., 2018), an open-source framework designed to generate specialized streaming architectures for QNN inference acceleration on AMD-Xilinx FPGAs. ### Generating Streaming Architectures with FINN The FINN framework, depicted in Fig. 
### Weight Normalization

Weight normalization reparameterizes each weight vector \(\mathbf{w}\) in terms of a parameter vector \(\mathbf{v}\) and a scalar parameter \(g\) as given in Eq. 3, where \(\|\mathbf{v}\|_{2}\) is the Euclidean norm of the \(K\)-dimensional vector \(\mathbf{v}\) (Salimans and Kingma, 2016). This simple reparameterization fixes the Euclidean norm of weight vector \(\mathbf{w}\) such that \(\|\mathbf{w}\|_{2}=g\), which enables the magnitude and direction to be independently learned.

\[\mathbf{w}=g\cdot\frac{\mathbf{v}}{\|\mathbf{v}\|_{2}} \tag{3}\]

Tangential to our work, prior research has sought to leverage weight normalization as a means of regularizing long-tail weight distributions during QAT (Cai and Li, 2019). They replace the standard \(\ell_{2}\)-norm with an \(\ell_{\infty}\)-norm and derive a projection operator to map real values into the quantized domain. In our work, we replace the \(\ell_{2}\)-norm with an \(\ell_{1}\)-norm to use the weight normalization parameterization as a means of constraining learned weights during training to use a pre-defined accumulator bit width during inference.

Figure 2: We adapt images from (Umuroglu et al., 2017; Blott et al., 2018) to provide: (a) an overview of the FINN framework; and (b) an abstraction of the matrix-vector-activation unit (MVAU), which is one of the primary building blocks used by the FINN compiler to generate custom streaming architectures.

## 4 Motivation

To motivate our research objective, we evaluate the impact of accumulator bit width on the resource utilization of custom FPGA accelerators with spatial dataflow architectures. To do so, we adopt FINN (Umuroglu et al., 2017; Blott et al., 2018), an open-source framework designed to generate specialized streaming architectures for QNN inference acceleration on AMD-Xilinx FPGAs.

### Generating Streaming Architectures with FINN

The FINN framework, depicted in Fig. 2a, generates specialized QNN accelerators for AMD-Xilinx FPGAs using spatial streaming dataflow architectures that are individually customized for the network topology and the data types used. At the core of FINN is its compiler, which empowers flexible hardware-software (HW-SW) co-design by allowing a user to have per-layer control over the generated accelerator. Weight and activation precisions can be individually specified for each layer in a QNN, and each layer is instantiated as its own dedicated compute unit (CU) that can be independently optimized with fine-grained parallelism. As an example of how a layer is instantiated as its own CU, we provide a simplified abstraction of the matrix-vector-activation unit (MVAU) in Fig. 2b. The MVAU is one of the primary building blocks used by the FINN compiler for linear and convolutional layers (Blott et al., 2018). Each CU consists of processing elements (PEs), which parallelize work along the data-independent output dimension, and single-instruction multiple-data (SIMD) lanes, which parallelize work along the data-dependent input dimension. Execution over SIMDs and PEs within a layer is concurrent (_i.e._, spatial parallelism), while execution over layers within a network is pipelined (_i.e._, temporal parallelism). All quantized monotonic activation functions in the network are implemented as threshold comparisons that map high-precision accumulated results from the preceding layer into low-precision output values. During compilation, batch normalization, biases, and even scaling factors are absorbed into this threshold logic via mathematical manipulation (Blott et al., 2018). The input and output data for the generated accelerators are streamed into and out of the chip using AXI-Stream protocols while on-chip data streams are used to interconnect these CUs to propagate intermediate activations through the layers of the network. During inference, all network parameters are stored on-chip to avoid external memory bottlenecks. For more information on the FINN framework, we refer the interested reader to (Umuroglu et al., 2017; Blott et al., 2018; AMD-Xilinx, 2023a).

### Accumulator Impact on Resource Utilization

FINN typically relies on look-up tables (LUTs) to perform MACs at low precision; in such scenarios, LUTs are often the resource bottleneck for the low-precision streaming accelerators it generates. Furthermore, because activation functions are implemented as threshold comparisons, their resource utilization exponentially grows with the precision of the accumulator and output activations (Blott et al., 2018). Thus, reducing the size of the accumulator has a direct influence on both the compute and memory requirements. To evaluate the impact of accumulator bit width on LUT utilization, we consider a fully connected QNN with one hidden layer that is parameterized by a matrix of signed integers. The QNN takes as input a \(K\)-dimensional vector of unsigned integers and gives as output a 10-dimensional vector of signed integers. We use the FINN compiler to generate a streaming architecture with a single MVAU targeting an AMD-Xilinx PYNQ-Z2 board with a frequency of 100 MHz. We report the resource utilization of the resulting RTL post-synthesis. To simplify our analysis, we assume that LUTs are the only type of resources available and configure the FINN compiler to target LUTs for both compute and memory so that we can evaluate the impact of accumulator bit width on resource utilization using just one resource.
As further discussed in Section 5, the minimum accumulator bit width that can be used to avoid overflow is a function of the size of the dot product (\(K\)) as well as the bit widths of the input and weight vectors \(\mathbf{x}\) and \(\mathbf{w}\), respectively denoted as \(N\) and \(M\).

Figure 3: Reducing the size of the accumulator in turn reduces LUT utilization as we vary the size of the dot product (\(K\)) and the input and weights bit widths, \(N\) and \(M\) respectively. For simplicity, we use the same bit width for the weights and activations such that \(N=M\) for all data points, and jointly refer to them as "data bit width." For a given dot product size and data bit width, we normalize the LUT utilization to the largest lower bound on the accumulator bit width as determined by the data types of the inputs and weights.

In Fig. 3, we visualize how further reducing the accumulator bit width in turn decreases resource utilization as we vary \(K\), \(N\), and \(M\). For a given dot product size and data bit width, we normalize the LUT utilization to the largest lower bound on the accumulator bit width as determined by the data types of the inputs and weights. To control for the resource utilization of dataflow logic, we use a single PE without applying optimizations such as loop unrolling, which increase the amount of SIMD lanes. We observe that the impact of accumulator bit width on resource utilization grows exponentially with the precision of the data (_i.e._, \(M\) and \(N\)). As we reduce the size of the accumulator, we observe up to a 25% reduction in the LUT utilization of a layer when \(N=M=8\), but only up to a 1% reduction in LUT utilization when \(N=M=3\). This is expected as compute and memory requirements exponentially grow with precision and thus have larger proportional savings opportunities. We also observe that \(K\) has a dampening effect on the impact of accumulator bit width reductions that is also proportional to the precision of the data. When \(N=M=8\) and \(K=32\), we observe on average a 2.1% LUT reduction for every bit that we reduce the accumulator, but only a 1.5% LUT reduction when \(K=512\). Conversely, we observe on average a 0.2% LUT reduction for every bit that we reduce the accumulator when \(N=M=3\) regardless of \(K\). We hypothesize that this dampening effect is in part due to the increased storage costs of larger weight matrices because, unlike threshold storage, the memory requirements of weights are not directly impacted by the accumulator bit width. Therefore, the relative savings from accumulator bit width reductions are diluted by the constant memory requirements of the weights. We explore this further in Section 7, where we break down the resource utilization of FPGA accelerators generated for our benchmark models.

## 5 Accumulator Bit Width Bounds

Figure 1 illustrates a simplified abstraction of accumulation in QNN inference. As activations are propagated through the layers, the intermediate partial sums resulting from operations such as convolutions or matrix multiplications are accumulated into a register before being requantized and passed to the next layer. To avoid numerical overflow, the register storing these accumulated values needs to be wide enough to not only store the result of the dot product, but also all intermediate partial sums. Consider the dot product of input data \(\mathbf{x}\) and learned weights \(\mathbf{w}\), which are each \(K\)-dimensional vectors of integers.
Let \(y\) be the scalar result of their dot product given by Eq. 4, where \(x_{i}\) and \(w_{i}\) denote element \(i\) of vectors \(\mathbf{x}\) and \(\mathbf{w}\), respectively. Since the representation range of \(y\) is bounded by that of \(\mathbf{x}\) and \(\mathbf{w}\), we use their ranges to derive lower bounds on the bit width \(P\) of the accumulation register, or accumulator.

\[y=\sum_{i=1}^{K}x_{i}w_{i} \tag{4}\]

It is common for input data to be represented with unsigned integers either when following activation functions with non-negative dynamic ranges (_e.g._, rectified linear units, or ReLUs), or when an appropriate zero point is adopted (_i.e._, asymmetric quantization). Otherwise, signed integers are used. Since weights are most often represented with signed integers, we assume the accumulator is always signed in our work. Therefore, given that the scalar result of the dot product between \(\mathbf{x}\) and \(\mathbf{w}\) is a \(P\)-bit integer defined by Eq. 4, it follows that \(\sum_{i=1}^{K}x_{i}w_{i}\) is bounded such that:

\[-2^{P-1}\leq\sum_{i=1}^{K}x_{i}w_{i}\leq 2^{P-1}-1 \tag{5}\]

To satisfy the right-hand side of this double inequality, it follows that \(|\sum_{i=1}^{K}x_{i}w_{i}|\leq 2^{P-1}-1\). However, the accumulator needs to be wide enough to not only store the result of the dot product, but also all intermediate partial sums. Since input data is not known _a priori_, our bounds must consider the worst-case values for every MAC. Thus, because the magnitude of the sum of products is upper-bounded by the sum of the product of magnitudes, it follows that if \(\sum_{i=1}^{K}|x_{i}||w_{i}|\leq 2^{P-1}-1\), then the dot product between \(\mathbf{x}\) and \(\mathbf{w}\) fits into a \(P\)-bit accumulator without numerical overflow, as shown below.

\[|\sum_{i}x_{i}w_{i}|\leq\sum_{i}|x_{i}w_{i}|\leq\sum_{i}|x_{i}||w_{i}|\leq 2^{P-1}-1 \tag{6}\]

### Deriving Lower Bounds Using Data Types

The worst-case values for each MAC can naively be inferred from the representation range of the data types used. When \(x_{i}\) and \(w_{i}\) are signed integers, their magnitudes are bounded such that \(|x_{i}|\leq 2^{N-1}\) and \(|w_{i}|\leq 2^{M-1}\), respectively. In scenarios where \(x_{i}\) is an unsigned integer, the magnitude of each input value is upper-bounded such that \(|x_{i}|\leq 2^{N}-1\); however, we simplify this upper bound to be \(|x_{i}|\leq 2^{N}\) for convenience of notation1. Combining the signed and unsigned upper bounds, it follows that \(|x_{i}|\leq 2^{N-\mathbb{1}_{\text{signed}}(\mathbf{x})}\), where \(\mathbb{1}_{\text{signed}}(\mathbf{x})\) is an indicator function that returns 1 if and only if \(\mathbf{x}\) is a vector of signed integers.

Footnote 1: Note that our simplification of the upper bound for unsigned input data does not compromise overflow avoidance.

Building from Eq. 6, it follows that the sum of the product of the magnitudes is bounded such that:

\[\sum_{i=1}^{K}|x_{i}||w_{i}|\leq K\cdot 2^{N+M-1-\mathbb{1}_{\text{signed}}(\mathbf{x})}\leq 2^{P-1}-1 \tag{7}\]

Taking the log of both sides of Eq. 7, we can derive a lower bound on the accumulator bit width \(P\):

\[\log_{2}\left(2^{\log_{2}(K)+N+M-1-\mathbb{1}_{\text{signed}}(\mathbf{x})}+1\right)+1\leq P \tag{8}\]

This simplifies to the following lower bound on \(P\):

\[P\geq\alpha+\phi(\alpha)+1 \tag{9}\]
\[\alpha=\log_{2}(K)+N+M-1-\mathbb{1}_{\text{signed}}(\mathbf{x}) \tag{10}\]
\[\phi(\alpha)=\log_{2}(1+2^{-\alpha}) \tag{11}\]
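As a sanity check, the bound in Eqs. 9-11 can be evaluated directly; the sketch below is our illustration, not code from the paper, and returns the smallest integer accumulator bit width that satisfies the data type bound.

```python
import math

def min_accumulator_bit_width(K, N, M, x_signed=False):
    """Smallest signed accumulator width P satisfying Eqs. 9-11 for a
    K-dimensional dot product of N-bit inputs and M-bit signed weights."""
    alpha = math.log2(K) + N + M - 1 - int(x_signed)   # Eq. 10
    phi = math.log2(1 + 2 ** (-alpha))                 # Eq. 11
    return math.ceil(alpha + phi + 1)                  # Eq. 9

# Example: 8-bit unsigned inputs, 8-bit signed weights, K = 128.
print(min_accumulator_bit_width(128, 8, 8))  # -> 24
```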
In Fig. 4, we visualize this bound assuming that \(\mathbf{x}\) is a vector of unsigned integers such that \(\mathbb{1}_{\text{signed}}(\mathbf{x})=0\). There, we show how the lower bound on the accumulator bit width increases as we vary both the size of the dot product (\(K\)) and the bit width of the weights and activations.

### Deriving Lower Bounds Using Learned Weights

Since learned weights are frozen during inference time, we can use knowledge of their magnitudes to derive a tighter lower bound on the accumulator bit width. Building again from Eq. 6, the sum of the product of magnitudes is bounded by Eq. 12, where \(\|\mathbf{w}\|_{1}\) denotes the standard \(\ell_{1}\)-norm over vector \(\mathbf{w}\).

\[\sum_{i=1}^{K}|x_{i}||w_{i}|\leq 2^{N-\mathbb{1}_{\text{signed}}(\mathbf{x})}\cdot\|\mathbf{w}\|_{1}\leq 2^{P-1}-1 \tag{12}\]

Here, we define a tighter lower bound on \(P\):

\[P\geq\beta+\phi(\beta)+1 \tag{13}\]
\[\beta=\log_{2}(\|\mathbf{w}\|_{1})+N-\mathbb{1}_{\text{signed}}(\mathbf{x}) \tag{14}\]
\[\phi(\beta)=\log_{2}(1+2^{-\beta}) \tag{15}\]

In Fig. 4, we visualize this bound again assuming that \(\mathbf{x}\) is a vector of unsigned integers. Because Eq. 14 is dependent on the values of the learned weights, we randomly sample each \(K\)-dimensional vector from a discrete Gaussian distribution and show the median accumulator bit width along with the minimum and maximum observed over 300 random samples. We show that using learned weights (right) provides a tighter lower bound on the bit width of the accumulator than using data types (left) as we vary both the size of the dot product (\(K\)) and the bit width of the weights and input activations.

Figure 4: We visualize the differences between our accumulator bit width bounds as we vary the size of the dot product (\(K\)) as well as the bit width of the weights (\(M\)) and input activations (\(N\)), which we jointly refer to as "data bit width" such that \(M=N\).
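Continuing the previous sketch, the tighter bound of Eqs. 13-15 can be computed the same way once the learned weights are known; here `w` is assumed to be the integer-valued weight vector of one output channel.

```python
import math

def min_accumulator_bit_width_from_weights(w, N, x_signed=False):
    """Eqs. 13-15: accumulator width from the l1-norm of learned weights."""
    l1_norm = sum(abs(v) for v in w)
    beta = math.log2(l1_norm) + N - int(x_signed)      # Eq. 14
    phi = math.log2(1 + 2 ** (-beta))                  # Eq. 15
    return math.ceil(beta + phi + 1)                   # Eq. 13
```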
## 6 Training QNNs to Avoid Overflow

To train QNNs to use low-precision accumulators without overflow, we use weight normalization as a means of constraining learned weights \(\mathbf{w}\) to satisfy the bound derived in Section 5.2. Building from Eq. 12, we transform our lower bound on accumulator bit width \(P\) to be the upper bound on the \(\ell_{1}\)-norm of \(\mathbf{w}\) given by Eq. 16. Note that because each output neuron requires its own accumulator, this upper bound needs to be enforced channelwise.

\[\|\mathbf{w}\|_{1}\leq\left(2^{P-1}-1\right)\cdot 2^{\mathbb{1}_{\text{signed}}(\mathbf{x})-N} \tag{16}\]

### Constructing Our Quantization Operator

To enforce this constraint during QAT, we reparameterize our quantizer such that each weight vector \(\mathbf{w}\) is represented in terms of parameter vectors \(\mathbf{g}\) and \(\mathbf{v}\). Similar to the standard weight normalization formulation discussed in Section 3.2, this reparameterization decouples the norm from the weight vector; however, unlike the standard formulation, the norm is learned for each output channel. For a given layer with \(C\) output channels, we replace the per-tensor \(\ell_{2}\)-norm of the standard formulation (Eq. 3) with a per-channel \(\ell_{1}\)-norm. This reparameterization, given by Eq. 17, allows for the \(\ell_{1}\)-norm of weight vector \(\mathbf{w}\) to be independently learned per-channel such that \(g_{i}=\|\mathbf{w}_{i}\|_{1}\) for all \(i\in\{1,\cdots,C\}\), where \(\mathbf{w}_{i}\) denotes the weights of channel \(i\) and \(g_{i}\) denotes element \(i\) in parameter vector \(\mathbf{g}\).

\[\mathbf{w}_{i}=g_{i}\cdot\frac{\mathbf{v}_{i}}{\|\mathbf{v}_{i}\|_{1}}\quad\forall\;i\in\{1,\cdots,C\} \tag{17}\]

Similar to the standard quantization operator, our weight normalization-based quantization relies on a uniform affine mapping from the high-precision real domain to the low-precision quantized domain using learned per-channel scaling factors \(\mathbf{s}=\{s_{i}\}_{i=1}^{C}\). Thus, by constraining \(g_{i}\) to satisfy Eq. 18, we can learn quantized weights that satisfy our accumulator bit width bound and avoid overflow.

\[g_{i}\leq s_{i}\cdot\left(2^{P-1}-1\right)\cdot 2^{\mathbb{1}_{\text{signed}}(\mathbf{x})-N} \tag{18}\]

Below, we articulate our weight normalization-based quantization operator. For clarity and convenience of notation, we consider a layer with one output channel (_i.e._, \(C=1\)) such that parameter vectors \(\mathbf{g}=\{g_{i}\}_{i=1}^{C}\) and \(\mathbf{s}=\{s_{i}\}_{i=1}^{C}\) can be represented as scalars \(g\) and \(s\), respectively.

\[\text{quantize}(\mathbf{w};s,z):=\text{clip}(\left\lfloor\frac{g}{s}\frac{\mathbf{v}}{\|\mathbf{v}\|_{1}}\right\rfloor+z;n,p) \tag{19}\]

During training, our weight quantization operator applies the following four elementwise operations in order: scale, round, clip, then dequantize. As with the standard operator, we eliminate the zero points in our mapping such that \(z=0\). We use an exponential parameterization of both the scaling factor \(s=2^{d}\) and the norm parameter \(g=2^{t}\), where \(d\) and \(t\) are both log-scale parameters to be learned through stochastic gradient descent. This is similar to the work of (Jain et al., 2020) with the caveat that we remove integer power-of-2 constraints, which provide no added benefit to the streaming architectures generated by FINN as floating-point scaling factors can be absorbed into the threshold logic via mathematical manipulation (Blott et al., 2018). The scaled tensors are then rounded toward zero, which we denote by \(\lfloor\cdot\rfloor\). This prevents any upward rounding that may cause the norm to increase past our constraint. Note that this is another difference from the conventional quantization operators, which use half-way rounding. Finally, once scaled and rounded, the elements in the tensor are then clipped and dequantized using Eq. 2. Our resulting quantization operator used during training is given by Eq. 20, where \(n\) and \(p\) depend on the representation range of weight bit width \(M\).

\[q(\mathbf{w};s):=\text{clip}\left(\left\lfloor\frac{g}{s}\frac{\mathbf{v}}{\|\mathbf{v}\|_{1}}\right\rfloor;n,p\right)\cdot s \tag{20}\]
\[\text{where }s=2^{d} \tag{21}\]
\[\text{and }g=2^{\min(T,t)} \tag{22}\]
\[\text{and }T=\mathbb{1}_{\text{signed}}(\mathbf{x})+\log_{2}(2^{P-1}-1)+d-N \tag{23}\]

When quantizing our activations, we use the standard quantization operators discussed in Section 3.1. All activations that follow non-negative functions (_i.e._, ReLU) are represented using unsigned integers, otherwise they are signed. To update our learnable parameters during training, we use the straight-through estimator (STE) (Bengio et al., 2013) to allow local gradients to permeate our rounding function such that \(\nabla_{x}\left\lfloor x\right\rfloor=1\) everywhere, as is common practice.
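Putting Eqs. 20-23 together, a minimal single-channel PyTorch sketch of this operator might look as follows; this is our illustration under the stated assumptions, not the paper's Brevitas implementation.

```python
import math
import torch

def a2q_weight_quantize(v, t, d, P, N, M, x_signed=False):
    """Sketch of Eqs. 20-23 for one output channel (C = 1).

    v: unconstrained weight parameter tensor; t, d: learnable scalar
    tensors for the log-norm g = 2^t and log-scale s = 2^d.
    """
    # Eq. 23: cap on the log-norm implied by accumulator bit width P.
    T = int(x_signed) + math.log2(2 ** (P - 1) - 1) + d - N
    g = 2 ** torch.minimum(T, t)                 # Eq. 22: clipped norm
    s = 2 ** d                                   # Eq. 21: positive scale
    n, p = -2 ** (M - 1), 2 ** (M - 1) - 1       # signed M-bit weight range
    w = (g / s) * v / v.abs().sum()              # scale g * v / ||v||_1 by 1/s
    # Round toward zero with a straight-through estimator, so the norm
    # can never round upward past the constraint of Eq. 16.
    w = w + (torch.trunc(w) - w).detach()
    return torch.clamp(w, n, p) * s              # clip, then dequantize
```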
### Regularization with Lagrangian Penalties

To avoid \(t\) getting stuck when \(t>T\) in Eq. 22, we introduce the Lagrangian penalty \(\mathcal{L}_{\text{penalty}}\) given by Eq. 24. For a neural network with \(L\) layers, each with \(C_{l}\) output channels, \(t_{i,l}\) denotes the log-scale parameter of the norm for channel \(i\) in layer \(l\) and \(T_{i,l}\) denotes its upper bound as given by Eq. 23. Note that, even when \(\mathcal{L}_{\text{penalty}}>0\), we still satisfy our accumulator constraints by clipping the norm in Eq. 22. However, our Lagrangian penalty encourages norm \(g_{i}\) and scale \(s_{i}\) of each channel \(i\) in each layer to jointly satisfy the bound without clipping, which allows their log-scale parameters to be updated with respect to only the task error.

\[\mathcal{L}_{\text{penalty}}=\sum_{l=1}^{L}\sum_{i=1}^{C_{l}}\left(t_{i,l}-T_{i,l}\right)_{+} \tag{24}\]

It is important to note that our formulation is task agnostic. Assuming base task loss \(\mathcal{L}_{\text{task}}\), our total loss \(\mathcal{L}_{\text{total}}\) is given by Eq. 25, where \(\lambda\) is a Lagrange multiplier. In our experiments, we fix \(\lambda\) to be a constant scalar, although adaptive approaches such as (Roy et al., 2021) could be explored.

\[\mathcal{L}_{\text{total}}=\mathcal{L}_{\text{task}}+\lambda\mathcal{L}_{\text{penalty}} \tag{25}\]
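A sketch of this penalty in PyTorch is shown below; `t_params` and `T_bounds` are hypothetical containers collecting the per-channel log-norm parameters and their Eq. 23 bounds across layers.

```python
import torch

def accumulator_penalty(t_params, T_bounds):
    # Eq. 24: hinge penalty (x)_+ = max(x, 0) summed over all channels of
    # all layers; it vanishes once every norm satisfies its bound.
    return sum(torch.relu(t - T).sum() for t, T in zip(t_params, T_bounds))

# Eq. 25: total loss with a fixed Lagrange multiplier lam.
# loss = task_loss + lam * accumulator_penalty(t_params, T_bounds)
```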
## 7 Experiments

We evaluate our algorithm using image classification and single-image super resolution benchmarks. Because our algorithm is the first of its kind, we compare our approach to the standard QAT formulation discussed in Section 3.1. We implement all algorithms in PyTorch using Brevitas v0.7.2 (Pappalardo, 2021), where single-GPU quantization-aware training times range from 5 minutes (_e.g._, ESPCN) to 2 hours (_e.g._, MobileNetV1) on an AMD MI100 accelerator. To generate custom FPGA architectures for the resulting QNNs, we work from FINN v0.8.1 (AMD-Xilinx, 2023a) and add extensions to support our work.

### Image Classification Benchmarks

We train MobileNetV1 (Howard et al., 2017) and ResNet18 (He et al., 2016a) to classify images using the CIFAR10 dataset (Krizhevsky et al., 2009). We closely follow the network architectures originally proposed by the respective authors, but introduce minor variations that yield more amenable intermediate representations given the image size, as we discuss below. As is common practice, we fix the input and output layers to 8-bit weights and activations for all configurations, and initialize all models from floating-point counterparts pre-trained to convergence on CIFAR10. We evaluate all models by the observed top-1 test accuracy.

For MobileNetV1, we use a stride of 2 for both the first convolution layer and the final average pooling layer. This reduces the degree of downscaling to be more amenable to training over smaller images. All other layer configurations remain the same as proposed in (Howard et al., 2017). We use the stochastic gradient descent (SGD) optimizer to fine-tune all models for 100 epochs in batches of 64 images using a weight decay of 1e-5. We use an initial learning rate of 1e-3 that is reduced by a factor of 0.9 every epoch.

For ResNet18, we alter the first convolution layer to use a stride and padding of 1 with a kernel size of 3. Similar to MobileNetV1, we remove the preceding max pool layer to reduce the amount of downscaling throughout the network. We also use a convolution shortcut (He et al., 2016b) rather than the standard identity as it empirically proved to yield superior results in our experiments. All other layer configurations remain the same as proposed in (He et al., 2016a). We use the SGD optimizer to fine-tune all models for 100 epochs in batches of 256 using a weight decay of 1e-5. We use an initial learning rate of 1e-3 that is reduced by a factor of 0.1 every 30 epochs.

### Single-Image Super Resolution Benchmarks

We train ESPCN (Shi et al., 2016) and UNet (Ronneberger et al., 2015) to upscale single images by a factor of 3x using the BSD300 dataset (Martin et al., 2001). Again, we closely follow the network architectures originally proposed by the respective authors, but introduce minor variations that yield more hardware-friendly network architectures. As is common practice, we fix the input and output layers to 8-bit weights and activations for all configurations; however, we train all super resolution models from scratch. We empirically evaluate all models by the peak signal-to-noise ratio (PSNR) observed over the test dataset.

For ESPCN, we replace the sub-pixel convolution with a nearest neighbor resize convolution (NNRC), which has been shown to reduce checkerboard artifacts during training (Odena et al., 2016) and can be efficiently executed during inference (Colbert et al., 2021). All other layer configurations remain the same as proposed in (Shi et al., 2016). We use the Adam optimizer (Kingma and Ba, 2014) to fine-tune all models for 100 epochs in batches of 16 images using a weight decay of 1e-4. We use an initial learning rate of 1e-4 that is reduced by a factor of 0.98 every epoch.

For UNet, we use only 3 encoders and decoders to create a smaller architecture than originally proposed by (Ronneberger et al., 2015). We replace transposed convolutions with NNRCs, which have been shown to be functionally equivalent during inference (Colbert et al., 2021), but have more favorable behavior during training (Odena et al., 2016). We replace all concatenations with additions and reduce the input channels accordingly. We use the Adam optimizer to fine-tune all models for 200 epochs in batches of 16 images using a weight decay of 1e-4. We use an initial learning rate of 1e-3 that is reduced by a factor of 0.3 every 50 epochs.

### Experiment Setup and Research Questions

We design our experiments around the following questions:

* How does reducing the accumulator bit width impact model performance (Section 7.4)?
* What are the trade-offs between resource utilization and model performance (Section 7.5)?
* Where do our resource savings come from as we reduce accumulator bit width (Section 7.6)?

Similar to the experiments described in Section 4, we simplify our analysis and assume that LUTs are the only type of resources available. We configure the FINN compiler to target LUTs for both compute and memory wherever possible so that we can evaluate the impact of accumulator bit width on resource utilization using just one type of resource. Throughout our experiments, we focus our attention on data bit widths between 5 and 8 bits for two reasons: (1) reducing precision below 5 bits often requires uniquely tailored hyperparameters, which would not make for an even comparison across bit widths; and (2) reducing the size of the accumulator has a negligible impact on LUT utilization at lower data bit widths, as shown in Section 4.
Still, even with a reduced set of possible data bit widths, it is computationally intractable to test every combination of weight, activation, and accumulator bit widths within the design space exposed by the QNNs that we use as benchmarks. Thus, for weight and activation bit widths, we uniformly enforce precision for each hidden layer in the network such that \(M\) and \(N\) are constant scalars, aside from the first and last layers that remain at 8 bits such that \(M=N=8\). The accumulator bit width, however, is dependent on not only the weight and activation bit widths, but also the size of the dot product (\(K\)), as discussed in Section 5. To simplify our design space, we constrain all layers in a given network to use the same accumulator bit width that we denote as \(P^{*}\) such that the maximum accumulator bit width for any layer is \(P^{*}\) bits. Recall that the value for \(P^{*}\) is used by Eq. 16 to upper bound the \(\ell_{1}\)-norm of each weight per-channel and used by Eq. 23 to enforce this constraint during training.

Figure 5: Using image classification and single-image super resolution benchmarks, we show that we are able to maintain model performance with respect to the floating-point baseline even with significant reductions to the accumulator bit width. We visualize this trade-off using pareto frontiers estimated using a grid search over various weight, activation, and accumulator bit widths. Here, we use \(P^{*}\) to denote the largest accumulator bit width allowed across all layers in the network. We compare our algorithm (**green dots**) against the standard quantization baseline algorithm (**blue stars**) and repeat each experiment 3 times, totaling over 500 runs per model to form each set of pareto frontiers. We observe that our algorithm dominates the baseline in all benchmarks, showing that we can reduce the accumulator bit width without sacrificing significant model performance even with respect to a floating-point baseline.

To collect enough data to investigate our research questions, we perform a grid search over weight and activation bit widths from 5 to 8 bits. For each of these 16 combinations, we calculate the largest lower bound on the accumulator bit width as determined by the data type bound (Eq. 9) of the largest layer in the network. Using this to initialize \(P^{*}\), we evaluate up to a 10-bit reduction in the accumulator bit width to create a total of 160 configurations. Finally, we benchmark our results against the standard quantization algorithm discussed in Section 3.1; however, because it does not expose control over accumulator bit width, this grid search is restricted to the 16 combinations of weight and activation bit widths. We run each configuration 3 times to form a total of 528 runs per model. In the following sections, we summarize our findings.

### Accumulator Impact on Model Performance

Our algorithm introduces a novel means of constraining the weights of a QNN to use a pre-defined accumulator bit width without overflow. As an alternative to our algorithm, a designer can choose to heuristically manipulate data bit widths based on our data type bound given by Eq. 9. Such an approach would still guarantee overflow avoidance when using a pre-defined accumulator bit width \(P\), but is an indirect means of enforcing such a constraint. Given a pre-defined accumulator bit width upper bound \(P^{*}\), we compare the performance of models trained with our algorithm against this heuristic approach.
We visualize this comparison as a pareto frontier in Fig. 5. It is important to note that, while this is not a direct comparison against the algorithm proposed by (de Bruin et al., 2020), the experiment is similar in principle. Unlike (de Bruin et al., 2020), we use the more advanced quantization techniques detailed in Section 3.1, and replace the computationally expensive loss-guided search technique with an even more expensive, but more comprehensive grid search. The pareto frontier shows the maximum observed model performance for a given \(P^{*}\). We observe that our algorithm can push the accumulator bit width lower than what is attainable using current methods while also maintaining model performance. Furthermore, most models show less than a 1% performance drop from even the floating-point baseline with a 16-bit accumulator, which is most often the target bit width for low-precision accumulation in general-purpose processors (de Bruin et al., 2020; Xie et al., 2021).

### Trade-Offs Between Resources and Accuracy

To understand the impact that accumulator bit width can have on the design space of the accelerators generated by FINN, we evaluate the trade-offs between resource utilization and model performance. For each of the models trained in the grid search detailed in Section 7.3, we use the FINN compiler to generate resource utilization estimates and use pareto frontiers to visualize the data. In Fig. 6, we provide the maximum observed model performance for the total LUTs used by the accelerator. We evaluate these pareto frontiers with three optimization configurations. First, we instantiate each layer in each model as a CU without any spatial parallelism optimization and visualize the pareto frontiers in Fig. 6(a). Second, we maximize the number of PEs used in each layer in each model and visualize the pareto frontiers in Fig. 6(b). Finally, we maximize both the number of PEs and the SIMD lanes used and visualize the pareto frontiers in Fig. 6(c).

Figure 6: To evaluate the trade-off between resource utilization and accuracy, we visualize the pareto frontier observed over our image classification and single-image super resolution benchmarks. We evaluate these pareto frontiers in the following scenarios: (a) no spatial parallelism; (b) maximizing the PEs for each layer; and (c) maximizing the PEs and SIMDs for each layer. In each scenario, we observe that our algorithm (**green dots**) provides a dominant pareto frontier across each model when compared to the standard quantization algorithm (**blue stars**), showing that our algorithm can reduce LUT utilization without sacrificing significant model performance.

To ensure that the trends we analyze from these estimates are meaningful, we return to the experiments carried out in Section 4. We compare the absolute LUT utilization reported from post-synthesis RTL against the corresponding FINN estimates and observe a 94% correlation. Reducing the precision of weights and activations provides resource utilization savings in exchange for model performance; however, we observe that adding the accumulator bit width to the design space provides a better overall trade-off. Our results show that, for a given target accuracy or resource budget, our algorithm can offer a better trade-off between LUT utilization and model performance than existing baselines for various optimization strategies, confirming the benefit of including the accumulator bit width in the overall HW-SW co-design space.
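For reference, a pareto frontier over (LUTs, accuracy) pairs can be extracted with a simple sweep; this helper is our illustration of how such frontiers are computed, not tooling from FINN.

```python
def pareto_frontier(points):
    """Keep the (luts, accuracy) pairs that are not dominated by a
    cheaper configuration with equal or better accuracy."""
    front = []
    for luts, acc in sorted(points, key=lambda p: (p[0], -p[1])):
        if not front or acc > front[-1][1]:  # strictly better accuracy
            front.append((luts, acc))
    return front
```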
### Evaluating Resource Savings

Because we force the FINN compiler to use LUTs for compute and memory resources wherever possible, we evaluate where our resource savings come from. To do so, we separate LUT utilization into compute, memory, and control flow. For compute, we aggregate the LUTs used for adder trees, MACs, and comparison logic; for memory, the LUTs used to store weights, thresholds, and intermediate representations; and for control flow, the LUTs used for on-chip interconnects and AXI-Stream protocols. In Fig. 7, we visualize this breakdown for each of the pareto optimal models that correspond to our pareto frontier in Fig. 6.

Figure 7: We break down LUT utilization into compute, memory, and control flow for each of our pareto optimal models from Fig. 6.

We observe that the majority of LUT savings come from reductions to memory resources when accelerators are generated without spatial parallelism, but primarily come from compute resources as parallelism is increased. Without parallelism, the reductions in LUT utilization are largely from the reduced storage costs of thresholds and intermediate activations, which are directly impacted by the precision of the accumulator and output activations. As parallelism is increased, the reductions in compute LUTs primarily come from the reduced cost of MACs, which are directly impacted by the precision of the weights, inputs, and accumulators. Finally, we observe that the control flow LUTs largely remain constant for each network, which is expected as the network architecture is not impacted by changes to the data types used. Noticeably, the relative share of LUTs contributed by control flow logic is higher for networks with skip connections (_e.g._, ResNet18 and UNet) than without (_e.g._, MobileNetV1 and ESPCN), and is relatively less impactful as parallelism is increased.

### A Deeper Look at the Impact of Our Constraints

As a byproduct of our weight normalization formulation, our quantization algorithm provides a means of not only constraining weights to fit into an accumulator of user-defined bit width, but also of increasing the sparsity and compressibility of the resulting weights, as shown in Fig. 8.

**Sparsity** is the proportion of zero-valued elements in a tensor. The most common use of sparsity in machine learning workloads is to accelerate inference by reducing compute and memory requirements (Gale et al., 2020). We direct the interested reader to (Hoefler et al., 2021) for a recent survey of prior work on sparsity in deep learning. Among this work are various studies regarding the use of \(\ell_{1}\)-norm weight regularization as a means of introducing sparsity (Yang et al., 2019; Chao et al., 2020). Our quantization algorithm has a similar effect. By replacing the \(\ell_{2}\)-norm of the standard weight normalization formulation with our log-scale \(\ell_{1}\)-norm parameter, we introduce a novel means of encouraging unstructured weight sparsity. Recall that the value of \(P^{*}\) is used by Eq. 16 to upper bound the \(\ell_{1}\)-norm of the weights. Consequently, reducing \(P^{*}\) further constrains this upper bound to encourage weight sparsity as a form of \(\ell_{1}\)-norm weight regularization. In Fig. 8 on the left, we visualize the average sparsity across all of our benchmark models as we reduce the accumulator bit width \(P^{*}\).

**Compressibility** is often estimated using the theoretical lower bound on the number of bits per element as measured by the entropy (Shannon, 1948).
Reducing the entropy reduces the amount of information required for lossless compression, increasing the compression rate. Prior work has studied the use of entropy regularization as a means of improving weight compression (Agustsson et al., 2017; Aytekin et al., 2019). We observe that our \(\ell_{1}\)-norm constraints have a similar effect as these techniques. As shown in Fig. 8 in the middle, we observe that the entropy decreases as we reduce the accumulator bit width \(P^{*}\).

In Fig. 8, we visualize how the sparsity, compressibility, and relative model performance are affected by reductions to \(P^{*}\) using the models from our grid search described in Section 7.3. To simplify our analysis, we focus on configurations where the weight and input activation bit widths were the same (_i.e._, \(M=N\)), and plot the averages observed across all 4 of our benchmark models. For models with 8-bit weights and activations, we observe that reducing \(P^{*}\) to 16 bits yields an average sparsity of 98.2% with an estimated compression rate of 46.5x while maintaining 99.2% of the floating-point performance.

Figure 8: As a result of our \(\ell_{1}\)-norm constraints, reducing the accumulator bit width exposes opportunities to exploit unstructured sparsity (left) and weight compression (middle) without sacrificing model performance relative to the floating-point baseline (right).
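Both quantities are straightforward to measure on an integer weight tensor; the following is a small NumPy sketch, of our own construction, of the metrics plotted in Fig. 8.

```python
import numpy as np

def sparsity(w_int):
    # Proportion of zero-valued elements in the tensor.
    return float(np.mean(w_int == 0))

def entropy_bits_per_element(w_int):
    # Shannon entropy: the theoretical lower bound on bits per element
    # for lossless compression of the weight tensor.
    _, counts = np.unique(w_int, return_counts=True)
    probs = counts / counts.sum()
    return float(-np.sum(probs * np.log2(probs)))
```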
## 8 Conclusion & Future Work

We propose a novel quantization algorithm to train QNNs for low-precision accumulation. Our algorithm leverages weight normalization as a means of constraining learned parameters to fit into an accumulator of a pre-defined bit width. Unlike previous work, which has sought to merely reduce the risk of overflow or mitigate its impact on model accuracy, our approach guarantees overflow avoidance. Our study is the first to our knowledge that explores the use of low-precision accumulators as a means of improving the design efficiency of programmable hardware used as QNN inference accelerators. As such, we theoretically evaluate overflow and derive comprehensive bounds on accumulator bit width with finer granularity than existing literature. Our experiments show that using our algorithm to train QNNs for AMD-Xilinx FPGAs improves the trade-offs between resource utilization and model accuracy when compared to the standard baseline. Our results inform the following takeaways:

* While reducing the size of the accumulator invariably degrades model accuracy, our algorithm significantly alleviates this trade-off.
* Using our algorithm to train QNNs for lower precision accumulators yields higher performing models for the same resource budget when compared to the baseline.
* Without spatial parallelism, the majority of our resource savings come from reductions to memory requirements because reducing the accumulator bit width also reduces the cost of storing thresholds and intermediate activations.
* As spatial parallelism is increased, reductions in compute costs dominate our resource savings because reducing the accumulator bit width reduces the cost of creating more MACs.
* Our algorithm inherently encourages extreme unstructured sparsity and increased compressibility of the resulting weights of the QNN while maintaining performance relative to the floating-point baseline.

The flexibility of FPGAs is a double-edged sword. The bit-level control allows for the precisions of weights, activations, and now accumulators to be individually tuned for each layer in a QNN; however, the design space exposed by so many degrees of freedom introduces a complex optimization problem. Our algorithm increases the flexibility of HW-SW co-design by exposing the accumulator bit width as yet another parameter that can be tuned when simultaneously optimizing QNNs and their corresponding inference accelerators. In future work, we hope to explore the use of state-of-the-art neural architecture search algorithms as a means of navigating this large design space more efficiently.

## Acknowledgements

We would like to thank Gabor Sines, Michaela Blott, Nicholas Fraser, Yaman Umuroglu, Thomas Preusser, Mehdi Saeedi, Alex Cann, and the rest of the AMD Software Technology, Architecture, and AECG Research teams for insightful discussions and infrastructure support.
2308.00615
Cardiac MRI Orientation Recognition and Standardization using Deep Neural Networks
Orientation recognition and standardization play a crucial role in the effectiveness of medical image processing tasks. Deep learning-based methods have proven highly advantageous in orientation recognition and prediction tasks. In this paper, we address the challenge of imaging orientation in cardiac MRI and present a method that employs deep neural networks to categorize and standardize the orientation. To cater to multiple sequences and modalities of MRI, we propose a transfer learning strategy, enabling adaptation of our model from a single modality to diverse modalities. We conducted comprehensive experiments on CMR images from various modalities, including bSSFP, T2, and LGE. The validation accuracies achieved were 100.0\%, 100.0\%, and 99.4\%, confirming the robustness and effectiveness of our model. Our source code and network models are available at https://github.com/rxzhen/MSCMR-orient
Ruoxuan Zhen
2023-07-31T00:01:49Z
http://arxiv.org/abs/2308.00615v1
# Cardiac MRI Orientation Recognition and Standardization using Deep Neural Networks

###### Abstract

Orientation recognition and standardization play a crucial role in the effectiveness of medical image processing tasks. Deep learning-based methods have proven highly advantageous in orientation recognition and prediction tasks. In this paper, we address the challenge of imaging orientation in cardiac MRI and present a method that employs deep neural networks to categorize and standardize the orientation. To cater to multiple sequences and modalities of MRI, we propose a transfer learning strategy, enabling adaptation of our model from a single modality to diverse modalities. We conducted comprehensive experiments on CMR images from various modalities, including bSSFP, T2, and LGE. The validation accuracies achieved were 100.0%, 100.0%, and 99.4%, confirming the robustness and effectiveness of our model. Our source code and network models are available at [https://github.com/rxzhen/MSCMR-orient](https://github.com/rxzhen/MSCMR-orient)

## 1 Introduction

Cardiac Magnetic Resonance (CMR) images may exhibit variations in image orientations when recorded in DICOM format and stored in PACS systems. Recognizing and comprehending these differences is of crucial importance in deep neural network (DNN)-based image processing and computation, as DNN systems typically treat images merely as matrices or tensors, disregarding the imaging orientation and real-world coordinates. This study aims to investigate CMR image orientation, with a focus on referencing human anatomy and a standardized real-world coordinate system. The goal is to develop an efficient method for recognizing and standardizing the orientation of CMR images. By achieving this goal, we can ensure consistency and enhance the accuracy of DNN-based image analysis in the context of cardiac MRI.

For CMR images, standardization of their orientations is a prerequisite for subsequent computing tasks utilizing DNN-based methodologies, such as image segmentation [4] and myocardial pathology analysis [1]. Deep learning methods have found widespread use in orientation recognition and prediction tasks. For instance, Wolterink et al. introduced an algorithm that employs a Convolutional Neural Network (CNN) to extract coronary artery centerlines in cardiac CT angiography (CCTA) images [5]. Building upon CMR orientation recognition, our work focuses on developing a method for standardizing and adjusting the image orientations. This study aims to design a DNN-based approach for achieving orientation recognition and standardization across multiple CMR modalities. Figure 1 illustrates the pipeline of our proposed method. The key contributions of this work are summarized as follows:

1. We propose a scheme to standardize the CMR image orientations and categorize them for classification purposes.
2. We present a DNN-based orientation recognition method tailored for CMR images and demonstrate its transferability to other modalities.
3. We develop a CMR image orientation adjustment tool embedded with an orientation recognition network. This tool greatly facilitates CMR image orientation recognition and standardization in clinical and medical image processing practices.

## 2 Method

In this section, we introduce our proposed method for orientation recognition and standardization. Our proposed framework is built on the categorization of CMR image orientations.
We propose a DNN to recognize the orientation of CMR images and embed it into the CMR orientation adjustment tool.

### CMR Image Orientation Categorization

Due to differences in data sources and scanning habits, the orientation of cardiac magnetic resonance images may vary, and the orientation vector recorded with an image may not correspond correctly to the image itself. This may cause problems in tasks such as image segmentation or registration. Taking a 2D image as an example, if we set one orientation as the initial state and label the four corners of the image \(\begin{bmatrix}1&2\\ 3&4\end{bmatrix}\), then the orientation of the 2D MR image may have the following 8 variations, which are listed in Table 1.

\begin{table} \begin{tabular}{c c c c} \hline Label & Operation & Image & Correspondence of coordinates \\ \hline 0 & Initial state & \(\begin{bmatrix}1&2\\ 3&4\end{bmatrix}\) & Target\([x,y,z]=\text{Source}[x,y,z]\) \\ 1 & Horizontal flip & \(\begin{bmatrix}2&1\\ 4&3\end{bmatrix}\) & Target\([x,y,z]=\text{Source}[sx-x,y,z]\) \\ 2 & Vertical flip & \(\begin{bmatrix}3&4\\ 1&2\end{bmatrix}\) & Target\([x,y,z]=\text{Source}[x,sy-y,z]\) \\ 3 & Rotate \(180^{\circ}\) clockwise & \(\begin{bmatrix}4&3\\ 2&1\end{bmatrix}\) & Target\([x,y,z]=\text{Source}[sx-x,sy-y,z]\) \\ 4 & Flip along the main diagonal & \(\begin{bmatrix}1&3\\ 2&4\end{bmatrix}\) & Target\([x,y,z]=\text{Source}[y,x,z]\) \\ 5 & Rotate \(90^{\circ}\) clockwise & \(\begin{bmatrix}3&1\\ 4&2\end{bmatrix}\) & Target\([x,y,z]=\text{Source}[sx-y,x,z]\) \\ 6 & Rotate \(270^{\circ}\) clockwise & \(\begin{bmatrix}2&4\\ 1&3\end{bmatrix}\) & Target\([x,y,z]=\text{Source}[y,sy-x,z]\) \\ 7 & Flip along the secondary diagonal & \(\begin{bmatrix}4&2\\ 3&1\end{bmatrix}\) & Target\([x,y,z]=\text{Source}[sx-y,sy-x,z]\) \\ \hline \end{tabular} \end{table} Table 1: Orientation Categorization of 2D CMR Images. Here, \(sx\), \(sy\), and \(sz\) respectively denote the size of the image along the X-axis, Y-axis, and Z-axis.

For each image-label pair \((X_{t},\ Y_{t})\), we can flip \(X_{t},\ Y_{t}\) toward a chosen orientation to get a new image-label pair. If we correctly recognize the orientation of an image, we can perform the reverse flip to standardize it.

### Deep Neural Network

We employ a classical convolutional neural network for orientation recognition. It is a widely adopted approach in image classification tasks, adhering to the standard design pattern for CNNs. The neural network architecture comprises 3 convolutional blocks, each housing a convolutional layer, batch normalization, ReLU activation, and max pooling. These blocks effectively capture features from the input images. Additionally, an average pooling layer and 2 fully connected layers, with 8 units in the output layer, complete the network, enabling orientation prediction.

Figure 1: The pipeline of the proposed CMR orientation recognition and standardization method. Initially, the image undergoes pre-processing. Subsequently, the image is input into a CNN to generate an orientation prediction. Guided by this orientation prediction, the adjustment tool can standardize the image, ensuring its alignment with the desired orientation.

To train the model effectively, we utilize the cross-entropy loss, which efficiently measures the discrepancy between the predicted orientation and the ground truth orientation label.

### Transfer Learning

When adapting the proposed orientation recognition network to new datasets of different modalities, we employ a transfer learning approach to obtain the transferred model. Initially, we freeze the weights of the convolutional layers and fine-tune the fully connected layers. We repeat this process for the subsequent fine-tuning steps until the model converges. Afterwards, we unfreeze the weights of all layers and proceed to fine-tune the entire model.

## 3 Experiment

### Dataset

We experiment with the MyoPS dataset [1, 2], which provides three-sequence CMR (LGE, T2, and bSSFP) from 45 patients. We divide the CMR data of the 45 patients into training and validation sets at a ratio of 80% to 20%.

### Data Pre-processing

For each CMR volume, we initially apply the 7 transformations listed in Table 1 to ensure the dataset encompasses all 8 orientations. Subsequently, we slice the 3D CMR data into multiple 2D data instances. Given an image-label pair \((X_{t},Y_{t})\), for each \(X_{t}\), we identify the maximum gray value, denoted as \(G\). Subsequently, three truncation operations are performed on \(X_{t}\) using thresholds of \(0.6G\), \(0.8G\), and \(G\) to generate \(X_{1t}\), \(X_{2t}\), and \(X_{3t}\), respectively. In this operation, pixels with gray values higher than the threshold are mapped to the threshold gray value. The utilization of different thresholds allows us to capture image characteristics under various gray value window widths, mitigating the impact of extreme gray values. Additionally, grayscale histogram equalization is applied to \(X_{1t}\), \(X_{2t}\), and \(X_{3t}\), resulting in \(X^{\prime}_{1t}\), \(X^{\prime}_{2t}\), and \(X^{\prime}_{3t}\). Finally, we concatenate these three 2D images into a 3-channel image \([X^{\prime}_{1t},X^{\prime}_{2t},X^{\prime}_{3t}]\), which serves as the input to our proposed DNN.
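A compact NumPy sketch of this pipeline is given below; it is our illustration, with the orientation transforms following the corner diagrams in Table 1 under row-major indexing, and a plain histogram equalization standing in for whichever implementation the authors used.

```python
import numpy as np

# The 8 orientations of Table 1 as index operations on a 2D slice,
# matching the corner diagrams (row-major indexing).
ORIENTATIONS = {
    0: lambda a: a,                  # initial state
    1: lambda a: a[:, ::-1],         # horizontal flip
    2: lambda a: a[::-1, :],         # vertical flip
    3: lambda a: a[::-1, ::-1],      # rotate 180 degrees
    4: lambda a: a.T,                # flip along the main diagonal
    5: lambda a: np.rot90(a, -1),    # rotate 90 degrees clockwise
    6: lambda a: np.rot90(a, 1),     # rotate 270 degrees clockwise
    7: lambda a: a[::-1, ::-1].T,    # flip along the secondary diagonal
}

def equalize(img, bins=256):
    """Plain grayscale histogram equalization, mapping through the CDF."""
    hist, edges = np.histogram(img.ravel(), bins=bins)
    cdf = hist.cumsum().astype(np.float64)
    cdf /= cdf[-1]
    return np.interp(img.ravel(), edges[:-1], cdf).reshape(img.shape)

def preprocess_slice(x):
    """Truncate a slice at 0.6G, 0.8G, and G, equalize each result,
    and stack the three results into the 3-channel network input."""
    g = x.max()
    channels = [equalize(np.minimum(x, t * g)) for t in (0.6, 0.8, 1.0)]
    return np.stack(channels, axis=0)  # shape (3, H, W)
```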
We perform data augmentation by randomly rotating the image slightly and applying random crops and resizing. These approaches introduce variability in the orientation of the images, which aids in improving model generalization and enhances robustness to varying image sizes and aspect ratios.

### Results

We initially train the model on the bSSFP modality and subsequently fine-tune it on the T2 and LGE modalities. The training process is depicted in Figure 2. The average accuracy on the dataset is presented in Table 2. The results highlight the model's ability to transfer learning to other modalities, showcasing a remarkable level of accuracy.

## 4 Conclusion

We have introduced a DNN-based orientation recognition and standardization method for multi-sequence CMR images. The experimental results validate the effectiveness of the orientation recognition network in accurately classifying the orientation of multi-sequence CMR images. Our future research will focus on expanding the categorization of the CMR image orientation and refining the classification network to further enhance the classification accuracy.
2309.05646
A Novel Supervised Deep Learning Solution to Detect Distributed Denial of Service (DDoS) attacks on Edge Systems using Convolutional Neural Networks (CNN)
Cybersecurity attacks are becoming increasingly sophisticated and pose a growing threat to individuals, and private and public sectors. Distributed Denial of Service attacks are one of the most harmful of these threats in today's internet, disrupting the availability of essential services. This project presents a novel deep learning-based approach for detecting DDoS attacks in network traffic using the industry-recognized DDoS evaluation dataset from the University of New Brunswick, which contains packet captures from real-time DDoS attacks, creating a broader and more applicable model for the real world. The algorithm employed in this study exploits the properties of Convolutional Neural Networks (CNN) and common deep learning algorithms to build a novel mitigation technique that classifies benign and malicious traffic. The proposed model preprocesses the data by extracting packet flows and normalizing them to a fixed length, which is fed into a custom architecture containing layers regulating node dropout, normalization, and a sigmoid activation function to output a binary classification. This allows the model to process the flows effectively and look for the nodes that contribute to DDoS attacks while dropping the "noise" or the distractors. The results of this study demonstrate the effectiveness of the proposed algorithm in detecting DDoS attacks, achieving an accuracy of .9883 on 2000 unseen flows in network traffic, while being scalable for any network environment.
Vedanth Ramanathan, Krish Mahadevan, Sejal Dua
2023-09-11T17:37:35Z
http://arxiv.org/abs/2309.05646v1
# A Novel Supervised Deep Learning Solution to Detect Distributed Denial of Service (DDoS) attacks on Edge Systems using Convolutional Neural Networks (CNN)

###### Abstract

Cybersecurity attacks are becoming increasingly sophisticated and pose a growing threat to individuals, and private and public sectors. Distributed Denial of Service attacks are one of the most harmful of these threats in today's Internet, disrupting the availability of essential services. This project presents a novel deep learning-based approach for detecting DDoS attacks in network traffic using the industry-recognized DDoS evaluation dataset from the University of New Brunswick, which contains packet captures from real-time DDoS attacks, creating a broader and more applicable model for the real world. The algorithm employed in this study exploits the properties of Convolutional Neural Networks (CNN) and common deep learning algorithms to build a novel mitigation technique that classifies benign and malicious traffic. The proposed model preprocesses the data by extracting packet flows and normalizing them to a fixed length, which is fed into a custom architecture containing layers regulating node dropout, normalization, and a sigmoid activation function to output a binary classification. This allows the model to process the flows effectively and look for the nodes that contribute to DDoS attacks while dropping the "noise" or the distractors. The results of this study demonstrate the effectiveness of the proposed algorithm in detecting DDoS attacks, achieving an accuracy of .9883 on 2000 unseen flows in network traffic, while being scalable for any network environment.

**Keywords: Distributed Denial of Service, Convolutional Neural Networks, Deep Learning, Intrusion Detection, Network Traffic Analysis**

## 1 Introduction

In today's interconnected world, the internet plays a vital role in various domains such as communication, education, business, government, and more. However, with its widespread usage, the prevalence of cyber crimes has also increased, including activities such as spreading misinformation, hacking, and various types of attacks. Among these attacks, Distributed Denial of Service (DDoS) attacks have emerged as a significant threat, posing risks to basic internet standards and security. These attacks can cause temporary paralysis of business processes, disrupt critical services, and flood networks with malicious traffic [1].

### Impact of DDoS Attacks

In the first half of 2022, the world witnessed a staggering \(6,019,888\) DDoS attacks alone [2]. The sheer volume of these attacks has resulted in substantial financial losses and a lack of consumer trust. A recent study revealed that a single DDoS attack can cost a company over $1.6 million, a huge cost for companies of any size [3]. Also, the financial impact of DDoS attacks goes beyond immediate revenue loss, affecting various aspects of a corporation's operations. During an attack, the targeted service or website becomes inaccessible, leading to a loss of potential revenue and customers. Moreover, reputation damage and loss of consumer trust can have long-term consequences for businesses. The increasing frequency of these attacks necessitates the development of effective mitigation techniques to safeguard services and prevent revenue loss. Aside from business damages, DDoS attacks have emerged as a significant factor in geopolitics, demonstrating their potential to impact international relations and national security.
For example, state-sponsored threat actors targeted 128 governmental organizations in 42 countries supporting Ukraine during the Russia-Ukraine conflict [4]. By targeting such entities, the threat actors seek to create a sense of chaos, confusion, and instability within the geopolitical landscape.

Figure 1: DDoS Attack Architecture

### Legacy Detection Methods

Current DDoS detection methods often rely on traditional approaches such as IP filtering or rate-limiting techniques [5], as shown in Figure 2. While these methods have been used for some time and have shown some effectiveness in certain scenarios, they also come with notable limitations that hinder their ability to provide comprehensive protection against evolving DDoS attack techniques.

Figure 2: Current State of DDoS Mitigation Techniques

**Lack of Adaptability:** Traditional methods can struggle to adapt to new and sophisticated DDoS attack patterns. As attackers continuously develop novel methods to bypass existing defenses, traditional techniques may fail to keep up with these dynamic threats [6]. This can lead to an increased number of false negatives, allowing malicious traffic to go undetected.

**Resource Intensiveness:** Some traditional solutions, such as rate-limiting, can consume significant network resources and processing power. Implementing these techniques may impact the overall performance and responsiveness of the network, potentially affecting legitimate user traffic and leading to service degradation [7]. Further, in commonplace networks, existing defense mechanisms against DDoS attacks have limited success because they cannot meet the considerable challenge of achieving simultaneously efficient detection, effective response, acceptable rate of false alarm, and the real-time transfer of all packets.

**Dependency on Attack Signatures:** Some legacy systems rely heavily on signature-based detection, which involves matching incoming traffic patterns against known attack signatures. They use an index of patterns to match the incoming traffic with the known signatures to identify attacks [8]. While this can be effective against known attack types, it falls short against zero-day attacks or variants that have not been previously identified [9]. Entropy-based techniques face a related drawback: the requirement to select an appropriate detection threshold. Given the variation in traffic type and volume across different networks, it is a challenge to identify the appropriate detection threshold that minimizes false positive and false negative rates in different attack scenarios.

**Limited Scalability:** Traditional solutions face scalability issues, particularly when dealing with large-scale attacks that generate massive amounts of traffic. Scaling up these methods to handle such attacks is challenging and resource-intensive [10].

### Deep Learning Based Detection Methods

In the modern state of DDoS detection, there is an increasing usage of neural networks and deep learning techniques (Figure 2). This section provides an overview of some recent research contributions in this space. In de Assis et al. [11], the authors used an SDN (Software-Defined Networking) model to detect and mitigate DDoS attacks over a targeted server. The proposed SDN model was compared to baseline logistic regression (LR) models, multilayer perceptron (MLP) networks, and Dense MLP [12]. The authors tested the above detection methods over two test scenarios: one using simulated SDN data, and the other using a broader dataset.
The overall results showed that CNN is efficient in detecting DDoS attacks for all these test scenarios and operates autonomously to allow for speed in the detection and mitigation processes. However, a key weakness of this model is its weak result on a more comprehensive dataset, such as CICDDoS 2019 [13]. In Shaaban et al. [14], the authors proposed a neural network-based approach for DDoS attack detection. They compared their proposed model with classification algorithms such as k-nearest neighbors (KNN) and decision trees (DT). It was found that their proposed model performed well compared to other classification algorithms, achieving 99% accuracy on both datasets. However, the data was converted to matrix form by single-column padding, which may affect the learning of the model [12], as the spatial dimensions of the input data changed the way the convolution filters interacted with the data. In addition, their dataset lacked many common DDoS attacks (such as Man-in-the-Middle), while only TCP and HTTP flood DDoS attacks were considered for their dataset 1. Based on these deep learning-based models, this research aims to build upon them by using better datasets that are specialized, such as the CIC DDoS 2019 dataset [13]. By incorporating key improvements such as scalability, flexibility, and reliability (see subsection 1.5), this research will work to improve upon existing models to effectively detect and mitigate DDoS attacks on edge systems.

### Proposed Solution

Motivated by the limitations of current approaches and the demand for an advanced DDoS detection solution, this research aims to develop a novel supervised machine learning model capable of handling any data size and accurately differentiating between malicious and benign traffic. In commonplace networks, existing defense mechanisms against DDoS attacks have limited success because they cannot meet the considerable challenge of simultaneously achieving efficient detection and the real-time transfer of packets [6]. To meet this objective, we leverage Convolutional Neural Networks (CNNs), a deep learning approach that has shown promising success in malware detection [15] but remains relatively under-researched and underutilized in the field of cybersecurity. Convolutional Neural Networks (CNNs) are a type of deep learning algorithm that has demonstrated success in various applications, including pattern recognition and in industries such as medicine and biology [16]. Specifically well-suited for analyzing visual imagery, CNNs can learn and extract features from raw data, making them a powerful tool for image classification and object recognition tasks. In the context of cybersecurity, CNNs can be effectively employed to detect and classify malicious network traffic. By analyzing network traffic data, CNNs can learn to identify patterns and features associated with DDoS attacks, enabling them to accurately differentiate between benign and malicious traffic.

### Benchmark Standards

To address the challenge of DDoS attacks, state-of-the-art mitigation techniques should possess certain characteristics.

**Scalability:** Allows the solution to adapt to business growth and handle the increasing size of attacks. Attacks larger than 2 terabits per second (Tbps) have occurred, and there's no indication that attack traffic size will plateau or trend downward in the future.1 For this reason, attacks of large magnitudes should be expected and mitigated.
Footnote 1: https://www.cloudflare.com/learning/ddos/ddos-mitigation

**Flexibility:** Enabling the creation of ad hoc policies and patterns to respond to emerging threats in real time. The system must be adaptable to recognize attacks even when there are large fluctuations in legitimate traffic.2

Footnote 2: https://www.fortinet.com/resources/cyberglossary/implementation-dos-mitigation-strategy

**Reliability:** Ensuring the functionality of the DDoS protection system. Although various methods have been proposed to detect and identify DDoS attacks, many existing approaches do not fully meet these requirements.

**Predictability:** DL methods exhibit the capability to extract features and classify data even with incomplete information [17]. By learning long-term dependencies of temporal patterns, DL methods should effectively identify low-rate attacks.

Based on these standards, this paper aims to contribute to the field by introducing a DL-based DDoS detection architecture on edge systems. By employing CNNs, our proposed model reduces the need for extensive feature engineering and exhibits high detection accuracy. This novel solution has the potential to be deployed by customers and organizations to effectively detect and mitigate DDoS attacks on edge systems. The objective of this project is to create a supervised model capable of handling any data size and consistently and accurately differentiating malicious from benign traffic. Furthermore, this model can be implemented on any network size and is functional on private and public networks alike. The engineering goal of this project is to design and develop a dynamic deep learning model that can accurately identify malicious and benign network traffic across a wide range of attack methods and situations, even when dealing with large amounts of real-time data under short time constraints. Leveraging the capabilities of CNNs and their proven success in other domains, this study aims to develop a state-of-the-art model that can effectively detect and mitigate DDoS attacks on edge systems.

## 2 Methodology

The CNN this study proposes is designed to learn malicious activity from traffic and identify DDoS patterns regardless of their topological positioning. This is a key advantage of CNNs in classic examples of image recognition [18], as they produce consistent output regardless of where a pattern appears in the input. By exploiting this property in the preprocessing method, this research carries the key advantage of CNNs into the context of anomaly detection. This feature learning during model training eliminates the need for extensive feature engineering, ranking, and selection. We employ a novel network traffic preprocessing technique that creates a spatial data representation as input to the CNN to support real-time attack detection. This section introduces the network traffic preprocessing method, the CNN model architecture, and the learning process.

### Dataset

The DDoS evaluation dataset (CIC-DDoS2019) is a dataset of PCAP files that contains both benign and DDoS traffic. This dataset is beneficial to our DDoS attack detection task because it contains real-world examples of DDoS traffic that provide more realistic and accurate results than synthetic datasets.
The CIC-DDoS2019 dataset has several features that are helpful for our analysis, including the inclusion of benign and DDoS traffic and the use of multiple types of DDoS attacks, including SYN (synchronize) floods, UDP (User Datagram Protocol) floods, and HTTP floods [19]. The Canadian Institute for Cybersecurity has also split the various attacks into unique timestamps that are used to visualize the dataset in a CSV format. While this research aims to make our model as dataset-agnostic as possible, this allows us to create a comprehensive frame of reference to visualize, train, and test on (Figure 3).

Figure 3: Illustration of Proposed Procedure

### Preprocessing Procedure

This section elucidates the process of rendering input data amenable to the Convolutional Neural Network (CNN) model, all the while ensuring that this preprocessing is non-specific to any particular dataset. The essence of this procedure is to construct a dataset-agnostic preprocessing mechanism capable of generating traffic observations resembling those observed in contemporary online systems, thereby broadening the scope for testing and training and enhancing the model's effectiveness. To rigorously analyze the dataset and efficiently implement our CNN model, data preprocessing is a requisite preliminary step. The primary objective herein is to ascertain fairness and equal distribution of data to attain the utmost precision in results. To achieve this, we utilize PCAP (Packet Capture) files housing network traffic data and employ the Pyshark library for data extraction. These extracted data components are then organized into discrete "flows." This structuring of input data in the form of packet flows gives rise to a spatial data representation, thereby endowing the CNN model with the capacity to discern salient features characterizing both DDoS (Distributed Denial of Service) attacks and benign network traffic [20]. In this preprocessing endeavor, we present Algorithm 1, designed to effectuate the transformation of raw PCAP data into labeled samples, aptly tailored for CNN input. This algorithm ingests multiple parameters, including the original PCAP data, a user-defined time interval (\(t\)) for aggregating packets into flows, the maximum permissible number of packets per sample (\(m\)), and labels (\(l\)) that serve to classify each packet, discerning between categories such as DDoS attacks and benign traffic. The algorithm's core aim is to standardize the input data format, thereby simplifying the training and testing of CNN models while preserving data fairness. Symbols and their respective definitions are presented comprehensively in Table 1. The procedural sequence of the Data Preprocessing algorithm unfolds as follows: commencing with the initialization of an empty set (\(s\)) designated for storing flow data, the algorithm proceeds to establish a local variable (\(t_{0}\)), initially set to zero, to function as a time counter. Simultaneously, an identifier (\(id\)) is introduced for packet labeling. Subsequently, the algorithm iteratively processes each packet from the PCAP data, continually updating the identifier (\(id\)) with pertinent packet headers, including Source IP and Destination IP, thereby facilitating accurate labeling. It ascertains whether the current packet signifies the commencement of a fresh time flow, contingent on the evaluation of the time counter (\(t_{0}\)) against the user-specified time interval (\(t\)).
Should the number of packets within the current time flow fall below the stipulated maximum (\(m\)), the algorithm appends the packet to the ongoing flow. Consequently, the resultant sample is normalized and zero-padded to a fixed length, ensuring uniformity. Finally, the algorithm assigns labels to each flow within the model, contingent on the labels (\(l\)) provided, based on their respective identifiers, thereby culminating in the production of a labeled sample, aptly primed for CNN input. Furthermore, the intrinsic advantages of this algorithm extend to the emulation of the traffic-capturing process inherent to online Intrusion Detection Systems (IDSs). In this context, traffic is collected over a specified time interval (\(t\)) before being submitted to the anomaly detection algorithm. Consequently, such algorithms are necessitated to make decisions based on subsets of traffic flows, devoid of comprehensive knowledge regarding their entire lifespan. To replicate this operational paradigm, attributes of packets associated with the same bi-directional traffic flow are methodically grouped in chronological order. A rigorous normalization and zero-padding procedure is employed to ensure homogeneity in input sequence lengths. Herein, each attribute value is normalized to a \([0,1]\) scale. Additionally, the samples are augmented with zero-padding to ensure uniformity, with each sample achieving a fixed length (\(n\)), a prerequisite for effective CNN learning over the entire sample set. To preempt any inherent bias towards one class or the other, a balancing procedure is instituted, affording more weight to the minority class or vice versa.

\begin{table} \begin{tabular}{|c|p{284.5pt}|} \hline **Symbol** & **Description** \\ \hline \(pcap\) & Input PCAP data, which contains network traffic information \\ \hline \(t\) & Time interval for grouping packets into flows \\ \hline \(m\) & Maximum number of packets per sample (flow) \\ \hline \(l\) & Label for each packet (e.g., distinguishing between DDoS attacks and benign traffic) \\ \hline \(sample\) & Output, labeled samples for input to a CNN model \\ \hline \(s\) & Temporary storage for flow data \\ \hline \(t_{0}\) & Local variable representing the current time counter \\ \hline \(id\) & Identifier for each packet (e.g., based on packet headers like Source IP, Dest IP) \\ \hline \(packet\) & Individual packets within a flow \\ \hline \(flows\) & Individual flow extracted from the network traffic \\ \hline \end{tabular} \end{table} Table 1: Symbols for the preprocessing algorithm

### Final Model Architecture

In the following phase, we proceed with the implementation of our Convolutional Neural Network (CNN) model. The architecture of our CNN model, as illustrated in Figure 5, encompasses a sequence of designed layers, each of which has been rigorously substantiated in a plethora of publications.

**Input Layer:** The initiation of our CNN model involves taking the output generated by Algorithm 1 as the input (Figure 4) for the express purpose of online attack detection. This model functions to classify traffic flows into one of two distinct categories: malicious (i.e., representing Distributed Denial of Service (DDoS) attacks) or benign. The paramount aim here is to optimize the model's simplicity and computational efficiency, rendering it suitable for deployment on resource-constrained devices. In terms of size, the input layer is \(n\times m\), where \(m=11\) since 11 features are read by the algorithm.
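To make the preprocessing concrete before detailing the remaining layers, the sketch below mirrors the flow-grouping, normalization, zero-padding, and labeling steps of Algorithm 1. It is a minimal sketch, not the exact implementation: the `extract_features` helper standing in for the 11 per-packet attributes is hypothetical, PyShark field access is simplified to IP packets, and the per-sample min-max scaling is one reasonable reading of the normalization step.

```python
import numpy as np
import pyshark

def preprocess(pcap_path, t=10.0, m=100, labels=None):
    """Group packets into bi-directional flows per time window t, cap at m packets."""
    flows, t0 = {}, None
    capture = pyshark.FileCapture(pcap_path)
    for packet in capture:
        ts = float(packet.sniff_timestamp)
        t0 = ts if t0 is None else t0                        # start of the capture
        window = int((ts - t0) // t)                         # index of the time window
        fid = tuple(sorted([packet.ip.src, packet.ip.dst]))  # bi-directional flow id
        pkts = flows.setdefault((fid, window), [])
        if len(pkts) < m:                                    # cap packets per sample
            pkts.append(extract_features(packet))            # hypothetical: 11 attributes
    capture.close()

    samples, y = [], []
    for (fid, _), pkts in flows.items():
        x = np.asarray(pkts, dtype=np.float32)
        x = (x - x.min(0)) / (x.max(0) - x.min(0) + 1e-9)    # scale each attribute to [0, 1]
        pad = np.zeros((m - len(x), x.shape[1]), np.float32)
        samples.append(np.vstack([x, pad]))                  # zero-pad to fixed length m
        y.append(labels.get(fid, 0))                         # 1 = DDoS, 0 = benign
    return np.stack(samples), np.asarray(y)
```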
The output produced by the preprocessing algorithm serves as the input for the proposed CNN architecture to undergo training (Figure 5).

**2D Convolutional Layer:** Our architecture incorporates a 2D convolutional layer equipped with 64 filters, each having a kernel size of \(3\times 3\). This layer assumes the responsibility of feature extraction from the input data. It achieves this by sliding the filters over the input and calculating dot products [20]. It should be noted that this layer is attuned to accommodate the modified array data detailed in section 2.2.

**Dropout Layer:** Following the convolutional layer, we introduce a dropout layer, employing a recommended dropout rate of 0.5 [21]. This layer's role is to randomly deactivate a certain percentage of input units during each training update, mitigating the risk of overfitting. Within this layer, we employ the Rectified Linear Unit (ReLU) activation function to introduce non-linearity into the model. The ReLU function is expressed mathematically as \[f(x)=\max(0,x)\] where it essentially replaces negative inputs with zero, thereby turning off these neurons. This layer discerns the relevance of input nodes in the model's decision-making process.

**GlobalMaxPooling2D Layer:** A pivotal component, the GlobalMaxPooling2D layer, executes max pooling on the input data, serving to reduce spatial dimensions while preserving salient features. By including max pooling, the model can focus on the most important features that separate benign traffic from a DDoS attack, making it much more efficient. After the max pooling, the output is then flattened to produce a final one-dimensional feature vector, which is used as input to the classification layer. This allows the model to make its final prediction on whether the input represents a benign or malicious traffic flow.

**Final Fully Connected Layer:** The ultimate layer, in the form of a fully connected layer, is equipped with a sigmoid activation function, as described in Roopak's study on malicious traffic using CNN [22]. This layer serves the critical function of computing the final output, delivering a probability estimation regarding the input being a DDoS attack. The sigmoid function is formally represented as \[f(z)=\frac{1}{1+e^{-z}}\] The output of this function, denoted as \(p\), ranges between 0 and 1, making it particularly suited for models wherein probability prediction is pivotal. When \(p\) exceeds 0.5, the traffic is classified as a DDoS attack; otherwise, it is classified as benign.

In summary, this model architecture holds notable advantages, especially its fully connected structure. It exhibits enhanced computational efficiency, with biases and weights exerting a less pronounced impact on the model's performance. This structural attribute augments its suitability for resource-constrained environments and applications.

Figure 4: Output of Preprocessing Algorithm

## 3 Experimental Findings

This section comprehensively outlines the training and evaluation procedures for our CNN model, accompanied by a report of commonly used evaluation metrics for measuring the performance of the model. Supervised learning serves as the foundation of our methodology, leveraging labeled datasets where each network traffic flow is distinctly categorized as either a DDoS attack or benign traffic (see 2.2).
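For reference, the layer sequence of section 2.3 that is trained in this section can be expressed compactly in code. This is a minimal Keras-style sketch under stated assumptions: the framework itself, the flow length \(n=100\), and the padding mode are not specified in the text and are illustrative; only the 64 filters, \(3\times 3\) kernels, 0.5 dropout, and sigmoid output come from the architecture description.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model(n_packets=100, n_features=11):
    """Sketch of the proposed pipeline: conv -> ReLU -> dropout
    -> global max pooling -> sigmoid classifier."""
    return models.Sequential([
        layers.Input(shape=(n_packets, n_features, 1)),  # one flow per sample
        layers.Conv2D(64, (3, 3), padding="same"),       # 64 filters, 3x3 kernels
        layers.Activation("relu"),                       # f(x) = max(0, x)
        layers.Dropout(0.5),                             # recommended rate of 0.5
        layers.GlobalMaxPooling2D(),                     # keep the salient features
        layers.Dense(1, activation="sigmoid"),           # p in [0, 1]; DDoS if p > 0.5
    ])

model = build_model()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```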
### Common Performance Metrics

To evaluate the performance of our CNN model in the realm of DDoS attack detection, we report a battery of well-established performance metrics. These metrics provide invaluable insights into the model's capacity to accurately distinguish between benign and DDoS traffic. Our analysis commences with a confusion matrix, a cornerstone for assessing classification algorithm performance. This matrix comprises four essential values: True Positives (\(TP\)), False Positives (\(FP\)), True Negatives (\(TN\)), and False Negatives (\(FN\)), where a positive prediction is used to flag potential malicious traffic. These values serve as the building blocks for widely used evaluation metrics:

**Precision:** Precision quantifies the proportion of true positive predictions relative to the total positive predictions \[Precision=\frac{TP}{TP+FP}\] It reflects the model's ability to accurately classify DDoS attacks without mislabeling benign traffic.

**Recall:** Recall measures the model's ability to correctly identify all actual positive instances \[Recall=\frac{TP}{TP+FN}\] It highlights the model's effectiveness in capturing all DDoS attacks present in the dataset.

\(F1\) **Score:** The \(F1\) score represents a harmonic mean between precision and recall, offering a balanced assessment of the model's performance. \[F1=\frac{2\cdot(Precision\cdot Recall)}{Precision+Recall}\] The F1 score takes into account both false positives and false negatives, making it a valuable measure of overall performance.

**Accuracy \((A)\):** Accuracy is defined as the proportion of correctly classified instances and is calculated using the formula: \[A=\frac{TP+TN}{TP+TN+FP+FN}\] It provides an overarching view of the model's correctness in its predictions.

These performance metrics, derived from the confusion matrix, allow us to assess the CNN model's ability to distinguish between benign and DDoS traffic effectively. Furthermore, they enable comparisons with other state-of-the-art DDoS detection methods and provide insights into areas for potential improvement. Results are presented in tables and graphs, complemented by statistical analysis to determine the significance of observed differences.

Figure 5: Architecture for CNN Model

### Training

To train the model, we used the CIC-DDoS2019 dataset, as discussed in 2.1, renowned as the standard benchmark dataset in the domain of anomaly detection [19]. Following convention, we split the dataset into a training, validation, and testing distribution of \(80:10:10\), respectively (Table 2). The inclusion of a validation set allows the optimal hyperparameters that fine-tune the model's predictions to be selected during training; otherwise, such a split would not be necessary [23]. For optimization during training, we employ the Adam optimizer, wherein key hyperparameters such as learning rate, batch size, and the number of epochs are tuned. Cross-validation is incorporated to assess the model's performance while mitigating the effects of overfitting. Training and evaluation occur on the preprocessed dataset, utilizing the common performance metrics described above. The inclusion of a validation dataset consistently enhanced accuracy over each epoch, highlighting the model's robustness and capacity to generalize effectively. The training process involved grid search cross-validation to perform hyperparameter tuning. A maximum of 1000 epochs is permitted for each grid point.
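A hedged sketch of this capped-epoch, patience-based setup follows, assuming the Keras model from the earlier sketch; the split variables (`X_train`, `y_train`, `X_val`, `y_val`), the monitored quantity, and the batch size shown are illustrative rather than taken from the text.

```python
from tensorflow.keras.callbacks import EarlyStopping

# Stop when the loss shows no improvement for ten consecutive epochs.
stopper = EarlyStopping(monitor="val_loss", patience=10, restore_best_weights=True)

history = model.fit(
    X_train, y_train,                 # 80% training split
    validation_data=(X_val, y_val),   # 10% validation split
    epochs=1000,                      # maximum permitted per grid point
    batch_size=128,                   # illustrative; tuned via grid search
    callbacks=[stopper],
)
```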
Training halts if no discernible improvement in loss minimization is observed for ten consecutive epochs, as determined by the patience variable preset to 10. Through this process, the model attained a training accuracy of 0.987. Performance is gauged by the F1 score, which reached a maximum of 0.984. It was observed that the inclusion of more samples (\(n\)) contributed to higher F1 scores and accuracy.

## 4 Results

The proposed CNN model demonstrates proficiency in classifying previously unseen traffic flows, distinguishing them as benign or malicious, and specifically identifying DDoS attacks. The model was evaluated against a dataset comprising 2000 previously unseen DDoS flow samples from the CIC dataset. We used a confusion matrix (Figure 6) to calculate the metrics that were outlined in section 3.1.

\begin{table} \begin{tabular}{|c|c|} \hline **Dataset** & **Number of Samples** \\ \hline Training Set & 18,735 \\ Validation Set & 2,082 \\ Test Set & 2,313 \\ \hline Total & 23,130 \\ \hline \end{tabular} \end{table} Table 2: Dataset Distribution

\begin{table} \begin{tabular}{|l|c|} \hline **Metric** & **Value** \\ \hline Precision & 0.9864 \\ \hline Recall & 0.9784 \\ \hline F1 Score & 0.9824 \\ \hline Accuracy & 0.9883 \\ \hline \end{tabular} \end{table} Table 3: Performance Metrics

Figure 6: Confusion Matrix for Results of Proposed Model

The results outlined in Table 3 underscore the model's capability to effectively classify network traffic flows, distinguishing between benign and malicious (DDoS) attacks with remarkable precision. Notably, the recall value (0.9784) emphasizes the model's proficiency in correctly identifying a substantial proportion of actual malicious flows. The model's accuracy of 0.9883, while maintaining a high True Positive Rate (\(TPR\)) and low False Positive Rate (\(FPR\)) (less than 0.01), further highlights its robustness in distinguishing between benign and malicious traffic flows. Moreover, the \(F1\) score of 0.9824 attests to the model's equilibrium between precision and recall. One of the unique features of this model is its efficiency in processing data, as elucidated in Section 2. The testing set, which encompassed a significantly larger number of packets, was processed in just 0.28 seconds while consistently achieving high positive rates and minimizing both false positives and false negatives (both less than 0.01), as seen in Figure 6. Collectively, these exceptional metrics illustrate the model's potential for practical deployment in network security, particularly concerning DDoS attack detection, where timely identification and mitigation are paramount.

## 5 Discussion

The successful implementation and evaluation of our Convolutional Neural Network (CNN) model for DDoS attack detection in network traffic data exemplifies the promising potential of deep learning techniques in the cybersecurity domain. In this section, we compare the model against state-of-the-art methods, deliberate on the strengths and weaknesses of our approach, and offer avenues for future exploration.

### Comparison with State-of-the-Art Methods

In this subsection, we draw comparisons with the studies referenced in 1.3. In comparison to De Assis et al.'s work [11], which achieved an accuracy of 0.954 on the CIC-DDoS 2019 dataset, the proposed CNN model significantly outperforms it across all categories, showcasing its distinguished performance for the task of DDoS attack detection.
While efficient, their model demonstrated less accuracy when tested on datasets with a greater variety and volume of attacks, such as the dataset used in this research. Concerning Shaaban et al.'s work [14], no specific efficiency or performance ratings were reported for comparison. The proposed CNN model in this study contributes to the existing research landscape by providing a robust and high-performing solution for DDoS attack detection, demonstrating its potential applicability in various cybersecurity contexts.

### Strengths and Limitations

#### Strengths:

* **Effective Feature Identification:** The preprocessing algorithm adeptly extracts critical features from network traffic data, empowering the CNN model to acquire robust feature representations. This significantly contributed to the model's high accuracy in distinguishing DDoS attacks from benign traffic.
* **Automated Hyperparameter Tuning:** Our approach incorporates automated hyperparameter tuning, optimizing the model for the specific characteristics of the dataset. This adaptability ensures that the model attains peak performance.
* **Validation-Test Split:** Through the deployment of a validation-test split, our model can adapt to different features within PCAP files, rendering it versatile and adaptable to diverse network conditions. Further research could establish how the number of tuned hyperparameters should guide the size of this split [24].
* **ReLU Activation and Kernel Technique:** The utilization of the Rectified Linear Unit (ReLU) activation function and kernel techniques proved effective in discerning the significance of specific features, enhancing the model's interpretability and predictive capabilities.
* **Generalizability:** Our model demonstrated its ability to generalize beyond the training dataset, showcasing its potential for identifying unseen attack patterns effectively.

#### Limitations:

* **Dataset Dependency:** The model's performance is heavily contingent on the quality and diversity of the training dataset. Enhancing its robustness necessitates the inclusion of more diverse data sources and attacks.
* **Zero-day Attacks:** Like many machine learning models [25], our CNN-based approach may grapple with the detection of zero-day attacks or those featuring previously unseen patterns. Continual model updates are imperative to address this limitation.

### Future Work

Though this study shows highly promising outcomes, there are still critical considerations regarding the impact of the data preprocessing techniques and other decisions chosen in this model. While our current methodology, which includes normalization and padding, has yielded favorable results, there is still room for exploration in evaluating alternative preprocessing techniques and optimizing these procedures. Furthermore, although our model exhibits proficiency within a controlled laboratory environment and a structured dataset, there is ample scope for its deployment in more complex real-world scenarios. Our model's role as a detection system has the potential to expand towards proactive detection and quarantine mechanisms, significantly contributing to network security enhancement. Further, the high accuracy achieved by the model in controlled testing environments suggests its potential effectiveness in real-world scenarios. The logical next step involves deploying and integrating the model within a practical network security system where efficient and accurate DDoS threat detection is imperative.
Additionally, assessing and enhancing the model's performance requires careful attention and the adoption of supplementary performance metrics. In this regard, we propose incorporating Receiver Operating Characteristic (ROC) curves and computing the Area Under the ROC Curve (AUC). These metrics extend the model evaluation toolkit and offer a nuanced perspective on its discrimination capabilities. Addressing the inherent challenge of zero-day attacks, characterized by novel and previously unseen patterns, is imperative for ongoing research. While machine learning models excel under training and evaluation conditions that mirror known patterns, the dynamic nature of cybersecurity necessitates regular model updates to effectively accommodate emerging threats [12].

## 6 Conclusion

This research highlights the potential of Convolutional Neural Networks in the realm of security and anomaly detection broadly. We have fashioned an efficient and accurate DDoS attack detection model that surpasses state-of-the-art methodologies in key metrics. Our approach's adaptability, versatility, and generalization capabilities position it as a promising candidate for real-world deployment in network security systems, where the timely identification and mitigation of DDoS threats are paramount. DDoS attacks pose a challenge not only to business servers but to individuals as well. In this work, we have presented a CNN-based DDoS detection architecture that not only offers an effective solution but also advances the field of threat detection and network security in the digital age. The robust performance of our model paves the way for enhanced security measures, protecting critical networks and systems from evolving cybersecurity threats.

Acknowledgments. Many thanks to Gayathri Easwaran and Ravi Ramanathan for providing constant support through the entire process.

## Declarations

### Funding

Not applicable.

### Competing interests

The authors declare that they have no competing interests.

### Ethics approval

Not applicable.

### Consent to participate

Not applicable.

### Consent for publication

Not applicable.

### Availability of data and materials

The dataset supporting the conclusions of this article is available in the DDoS Evaluation Dataset repository, https://doi.org/10.1109/CCST.2019.888841.

### Authors' contributions

VR and KM outlined the motivation and procedure of the research. VR collaborated with KM to produce the algorithm and preprocessing methods. VR, KM, and SD joined the discussion of the work and provided suggestions for future work regarding algorithms and data processing. VR and SD reviewed the manuscript and gave suggestions on the revision of the details of the article. All authors read and approved the final manuscript.
2309.04558
Towards Interpretable Solar Flare Prediction with Attention-based Deep Neural Networks
Solar flare prediction is a central problem in space weather forecasting and recent developments in machine learning and deep learning accelerated the adoption of complex models for data-driven solar flare forecasting. In this work, we developed an attention-based deep learning model as an improvement over the standard convolutional neural network (CNN) pipeline to perform full-disk binary flare predictions for the occurrence of $\geq$M1.0-class flares within the next 24 hours. For this task, we collected compressed images created from full-disk line-of-sight (LoS) magnetograms. We used data-augmented oversampling to address the class imbalance issue and used true skill statistic (TSS) and Heidke skill score (HSS) as the evaluation metrics. Furthermore, we interpreted our model by overlaying attention maps on input magnetograms and visualized the important regions focused on by the model that led to the eventual decision. The significant findings of this study are: (i) We successfully implemented an attention-based full-disk flare predictor ready for operational forecasting where the candidate model achieves an average TSS=0.54$\pm$0.03 and HSS=0.37$\pm$0.07. (ii) we demonstrated that our full-disk model can learn conspicuous features corresponding to active regions from full-disk magnetogram images, and (iii) our experimental evaluation suggests that our model can predict near-limb flares with adept skill and the predictions are based on relevant active regions (ARs) or AR characteristics from full-disk magnetograms.
Chetraj Pandey, Anli Ji, Rafal A. Angryk, Berkay Aydin
2023-09-08T19:21:10Z
http://arxiv.org/abs/2309.04558v1
# Towards Interpretable Solar Flare Prediction with Attention-based Deep Neural Networks

###### Abstract

Solar flare prediction is a central problem in space weather forecasting and recent developments in machine learning and deep learning accelerated the adoption of complex models for data-driven solar flare forecasting. In this work, we developed an attention-based deep learning model as an improvement over the standard convolutional neural network (CNN) pipeline to perform full-disk binary flare predictions for the occurrence of \(\geq\)M1.0-class flares within the next 24 hours. For this task, we collected compressed images created from full-disk line-of-sight (LoS) magnetograms. We used data-augmented oversampling to address the class imbalance issue and used true skill statistic (TSS) and Heidke skill score (HSS) as the evaluation metrics. Furthermore, we interpreted our model by overlaying attention maps on input magnetograms and visualized the important regions focused on by the model that led to the eventual decision. The significant findings of this study are: (i) We successfully implemented an attention-based full-disk flare predictor ready for operational forecasting where the candidate model achieves an average TSS=0.54\(\pm\)0.03 and HSS=0.37\(\pm\)0.07. (ii) we demonstrated that our full-disk model can learn conspicuous features corresponding to active regions from full-disk magnetogram images, and (iii) our experimental evaluation suggests that our model can predict near-limb flares with adept skill and the predictions are based on relevant active regions (ARs) or AR characteristics from full-disk magnetograms.

space weather, solar flares, deep neural networks, attention, and interpretability.

+ Footnote †: publicationid: _This is a preprint accepted at the 6th International Conference on Artificial Intelligence and Knowledge Engineering (AIKE), 2023. ©IEEE_

## I Introduction

Solar flares are relatively short-lasting events, manifested as the sudden release of huge amounts of energy with significant increases in extreme ultraviolet (EUV) and X-ray fluxes, and are one of the central phenomena in space weather forecasting. They are detected by the X-ray Sensors (XRS) instrument onboard the Geostationary Operational Environmental Satellite (GOES) [1] and classified according to their peak X-ray flux level, measured in watts per square meter (\(Wm^{-2}\)), into the following five categories by the National Oceanic and Atmospheric Administration (NOAA): X (\(\geq 10^{-4}Wm^{-2}\)), M (\(\geq 10^{-5}\) and \(<10^{-4}Wm^{-2}\)), C (\(\geq 10^{-6}\) and \(<10^{-5}Wm^{-2}\)), B (\(\geq 10^{-7}\) and \(<10^{-6}Wm^{-2}\)), and A (\(\geq 10^{-8}\) and \(<10^{-7}Wm^{-2}\)) [2]. In solar flare forecasting, M- and X-class flares are large and relatively scarce events and are usually considered to be the class of interest as they are more likely to have a near-Earth impact that can affect both space-based systems (e.g., satellite communication systems) and ground-based infrastructures (e.g., electricity supply chain and airline industry) and even pose radiation hazards to astronauts in space. Therefore, it is essential to have a precise and reliable approach for predicting solar flares to mitigate the associated life risks and infrastructural damages. Active regions (ARs) are the areas on the Sun (visually indicated by scattered red flags in the full-disk magnetogram image, shown in Fig.
1) with disturbed magnetic fields and are considered to be the initiators of various solar activities such as coronal mass ejections (CMEs), solar energetic particle (SEP) events, and solar flares [3]. The majority of the approaches for flare prediction primarily target these ARs as regions of interest and generate predictions for each AR individually. The magnetic field measurements, which are the dominant feature employed by the AR-based forecasting techniques, are susceptible to severe projection effects as ARs get closer to limbs, to the degree that beyond \(\pm\)60\({}^{\circ}\) the magnetic field readings are distorted [4]. Therefore, the aggregated flare occurrence probability (for the whole disk), in fact, is restricted by the capabilities of AR-based models. This is because the input data is restricted to ARs located in an area within \(\pm\)30\({}^{\circ}\) (e.g., [5]) to \(\pm\)70\({}^{\circ}\) (e.g., [6]) from the center due to severe projection effects [7]. As AR-based models include data up to \(\pm\)70\({}^{\circ}\), in the context of this paper, this upper limit (\(\pm\)70\({}^{\circ}\)) is used as a boundary for the central location (within \(\pm\)70\({}^{\circ}\)) and near-limb regions (beyond \(\pm\)70\({}^{\circ}\)), as shown in Fig. 1.

Fig. 1: An annotated full-disk magnetogram image as observed on 2013-05-13 at 02:00:00 UTC, showing the approximate central location (within \(\pm\)70\({}^{\circ}\)) and near-limb (beyond \(\pm\)70\({}^{\circ}\) to \(\pm\)90\({}^{\circ}\)) region with all the visible active regions present at the noted timestamp, indicated by the red flags. Note that the directions East (E) and West (W) are reversed in solar coordinates.

Furthermore, to issue a full-disk forecast using an AR-based model, the usual approach involves aggregating the flare probabilities from each AR by applying a heuristic function, as outlined in [8]. This aggregated result estimates the probability of at least one AR experiencing a flare event, assuming that the occurrence of flares in different ARs is conditionally independent and assigning equal weights to each AR during full-disk aggregation. This uniform weighting approach may not accurately capture the true impact of each AR on the probability of predicting full-disk flares [9]. It is essential to note that the specific weights for these ARs are generally unknown, and there are no established methods for precisely determining these weights. While AR-based models are limited to central locations and require a heuristic to aggregate and issue comprehensive forecasts, full-disk models use complete, often compressed, magnetograms corresponding to the entire solar disk. These magnetograms are used for shape-based parameters such as size, directionality, sunspot borders [10], and polarity inversion lines [11]. Although projection effects still prevail in the original magnetogram rasters, deep-learning models can learn from the compressed full-disk images, as observed in [12, 13, 14], and issue the flare forecast for the entire solar disk. Therefore, a full-disk model is appropriate to complement the AR-based counterparts, as these models can predict the flares that appear in the near-limb regions of the Sun and add a crucial element to the operational systems. Deep learning-based approaches have significantly improved results in generic image classification tasks; however, these models are not easily interpretable due to the complex modeling that obscures the rationale behind the model's decision.
Understanding the decision-making process is critical for operational flare forecasting systems. Recently, several empirical methods have been developed to explain and interpret the decisions made by deep neural networks. These are post hoc analysis (attribution) methods (e.g., [15]), meaning they focus on the analysis of trained models and do not contribute to the model's parameters during training. In this work, we primarily focus on developing a convolutional neural network (CNN) based full-disk model with trainable attention modules that can amplify the relevant features and suppress the misleading ones while predicting \(\geq\)M1.0-class solar flares, as well as evaluating and explaining our model's performance by visualizing the attention maps overlaid on the input magnetograms to understand which regions on the magnetogram were considered relevant for the corresponding decision. To validate and compare our results, we train a baseline model with the same architecture as our attention model, which, however, follows the standard CNN pipeline, where a global image descriptor for an input image is obtained by flattening the activations of the last convolutional layer. By integrating attention modules into the standard CNN pipeline, we attain two significant advantages: enhanced model performance and the ability to gain insight into the decision-making process. This integration not only improves the predictive abilities but also provides an interpretable model that reveals the significant features influencing the model's decisions. The architecture combines the CNN pipeline with trainable attention modules as mentioned in [16]. Both of our models' architectures are based on the general CNN pipeline; details are described later in Sec. IV.

The novel contributions of this paper are as follows: (i) we introduce a novel approach of a light-weight attention-based model that improves the predictive performance of traditional CNNs for full-disk solar flare prediction; (ii) we utilize the attention maps from the model to understand the model's rationale behind a prediction decision and show that the model's decisions are linked to relevant ARs; and (iii) we show that our models can tackle the prediction of flares appearing in near-limb regions of the Sun.

The remainder of this paper is organized as follows: In Sec. II, we outline the various approaches used in solar flare prediction with contemporary work using deep learning. In Sec. III, we explain our data preparation and class-wise distribution for binary prediction mode. In Sec. IV, we present a detailed description of our flare prediction model. In Sec. V, we present our experimental design and evaluations. In Sec. VI, we present case-based qualitative interpretations of attention maps, and, lastly, in Sec. VII, we provide our concluding remarks with avenues for future work.
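As a concrete reference for the NOAA class thresholds listed above and the 24-hour labeling scheme detailed in Sec. III, the mapping can be written out directly. This is a minimal sketch (flux values in \(Wm^{-2}\); the function names are hypothetical):

```python
# Map GOES peak X-ray flux (W/m^2) to a NOAA flare class, and derive the
# binary label used in this paper (FL: >= M1.0, NF: < M1.0).
def noaa_class(peak_flux: float) -> str:
    if peak_flux >= 1e-4:
        return "X"
    if peak_flux >= 1e-5:
        return "M"
    if peak_flux >= 1e-6:
        return "C"
    if peak_flux >= 1e-7:
        return "B"
    if peak_flux >= 1e-8:
        return "A"
    return "FQ"  # flare-quiet

def binary_label(max_peak_flux_next_24h: float) -> str:
    """'FL' if the strongest flare within the 24-hour prediction window
    reaches M1.0 (1e-5 W/m^2); 'NF' otherwise."""
    return "FL" if max_peak_flux_next_24h >= 1e-5 else "NF"
```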
## II Related Work Solar flare prediction currently, to the best of our knowledge, relies on four major strategies: (i) empirical human prediction (e.g., [17, 18]), which involves manual monitoring and analysis of solar activity using various instruments and techniques, to obtain real-time information about changes in the Sun's magnetic field and surface features, which are often precursors to flare activity; (ii) physics-based numerical simulations (e.g., [19, 20]), which involves a detailed understanding of the Sun's magnetic field and the processes that drive flare activity and running simulations models to predict the occurrence of flares; (iii) statistical prediction (e.g., [21, 22]), which involves studying the historical behavior of flares to predict their likelihood in the future using statistical analysis and is closely related to (iv) machine learning and deep learning approaches (e.g., [23, 24, 25, 26, 27, 28, 29, 30]), which involves training algorithms with vast amount of historical data and creating data-driven models that detects subtle patterns associated with flares in solar activity and make predictions. The rapid advancements in deep learning techniques have significantly accelerated research in the field of solar flare prediction. A CNN-based flare forecasting model trained with solar AR patches extracted from line-of-sight (LoS) magnetograms within \(\pm\)30\({}^{\circ}\) of the central meridian to predict \(\geq\)C-, \(\geq\)M-, and \(\geq\)X-class flares was presented in [5]. Similarly, [26] use a CNN-based model to issue binary class predictions for both \(\geq\)C- and \(\geq\)M-class flares within 24 hours using Space-Weather Helioseismic and Magnetic Imager Active Region Patches (SHARP) data [31] extracted from solar magnetograms using AR patches located within \(\pm 45^{\circ}\) of the central meridian. Both of these models are limited to a small portion of the observable disk in central locations (\(\pm 30^{\circ}\) and \(\pm 45^{\circ}\)) and thus have limited operational capability. Moreover, in our previous studies [27, 28], we presented deep learning-based full-disk flare prediction models. These models were trained using smaller datasets and these proof-of-concept models served as initial investigations into their potential as a supplementary component for operational forecasting systems. More recently, we presented explainable full-disk flare prediction models [12, 13], utilizing attribution methods to comprehend the models' effectiveness for near-limb flare events. We observed that the deep learning-based full-disk models are capable of identifying relevant areas in a full-disk magnetogram, which eventually translates into the model's prediction. However, these models utilized a post-hoc approach for model explanation, which does not contribute to further improving the model's performance. In recent years, attention-based models, particularly Vision Transformers (ViTs) [32], have emerged as powerful contenders for image classification tasks, achieving competent results on large-scale datasets. ViTs leverage self-attention mechanisms to effectively capture long-range dependencies in images, enabling them to excel in complex visual recognition tasks. While ViTs offer state-of-the-art performance, they often come with a large number (86 to 632 million) of trainable parameters, making them resource-intensive and less practical for scenarios with limited computational resources or small-sized datasets. 
To address this issue, for our specific use case with a small dataset, we are exploring alternative models that strike a balance between accuracy and efficiency. By incorporating attention blocks into a standard CNN pipeline, we obtain a much lighter model, consisting of \(\sim\)7.5 million parameters. This approach allows for computationally efficient near-real-time predictions with relatively less resource demand on deployment infrastructure while ensuring competent performance for solar flare prediction, compared to our prior work [13, 14] with a customized AlexNet-based [33] full-disk model with \(\sim\)57.25 million parameters and a fine-tuned VGG16 [34] full-disk model in [12] with \(\sim\)134 million parameters.

## III Data

We use full-disk line-of-sight (LoS) magnetogram images obtained from the Helioseismic and Magnetic Imager (HMI) [35] instrument onboard the Solar Dynamics Observatory (SDO) [36], publicly available from Helioviewer [37]. We collected hourly instances of magnetogram images at [00:00, 01:00,...,23:00] each day from December 2010 to December 2018. We labeled the magnetogram images for binary prediction mode (\(\geq\)M1.0-class flares) based on the peak X-ray flux converted to NOAA flare classes with a prediction window of the next 24 hours. To elaborate, if the maximum of the GOES observed peak X-ray flux of a flare is weaker than M1.0, the corresponding magnetogram instances are labeled as "No Flare" (NF: \(<\)M1.0), and larger ones are labeled as "Flare" (FL: \(\geq\)M1.0), as shown in Fig. 2. Our dataset includes a total of 63,649 full-disk LoS magnetogram images, where 54,649 instances belong to the NF-class and 9,000 instances (8,120 instances of M-class and 880 instances of X-class flares) to the FL-class.1 We finally create a non-chronological split of our data into four temporally non-overlapping tri-monthly partitions introduced in [27] for our cross-validation experiments. This partitioning of the dataset is created by dividing the data timeline from Dec 2010 to Dec 2018 into four partitions, where Partition-1 contains data from January to March, Partition-2 contains data from April to June, Partition-3 contains data from July to September, and finally, Partition-4 contains data from October to December, as shown in Table I. Because \(\geq\)M1.0-class flares are scarce, the data distribution exhibits a significant imbalance, with the highest imbalance occurring in Partition-2 (FL:NF \(\sim\)1:9). Overall, the imbalance ratio stands at \(\sim\)1:6 for FL to NF class.

Footnote 1: The current total count of 63,649 magnetogram observations in our dataset is lower than it should be for the period of December 2010 to December 2018. This is due to the unavailability of some instances from Helioviewer.

## IV Model

In this work, we develop two deep learning models: (i) a standard CNN model as a baseline (denoted as _M1_), and (ii) an attention-based model (denoted as _M2_) proposed in [16], to perform and compare in the task of solar flare prediction. The M1 model shown in Fig. 3 follows the intuition of a standard CNN architecture, where a global image descriptor (\(g\)) is derived from the input image from the activations of the last convolutional layer and passed through a fully connected layer to obtain class prediction probabilities. On the other hand, the attention-based full-disk model (M2) encourages the filters earlier in the CNN pipeline to learn similar mappings compatible with the one that produces a global image descriptor in the original architecture.
Furthermore, it focuses on identifying salient image regions and amplifying their influence while suppressing the irrelevant and potentially spurious information in other regions during training, thus utilizing a trainable attention estimator by integrating it into the standard CNN pipeline. The architecture of our attention-based model is shown in Fig. 4.

Fig. 2: A visual representation of the data labeling process using hourly observations of full-disk LoS magnetograms and a prediction window of 24 hours considered to label the magnetograms. Here, ‘FL’ and ‘NF’ indicate ‘Flare’ and ‘No Flare’ for binary prediction mode (\(\geq\)M1.0-class flares).

The architecture of the attention model proposed in [16] integrates the trainable attention modules in a modified VGG-16 [34] architecture. We use a simpler VGG-like architecture with a reduced number of convolutional layers, which also reduces the number of parameters. Our first convolutional layer accepts a 1-channel input magnetogram image resized to 256\(\times\)256. Each convolutional layer (except the last one) is followed by a batch normalization layer before max pooling. The final convolutional layer outputs feature maps of size 512\(\times\)1\(\times\)1 that are squeezed into a fully connected layer (FC-1) with a 512-dimensional vector, which is the global representation (\(g\)) of the input image. The M2 model follows the same architecture as M1, except it has three trainable attention modules integrated after the third, fourth, and fifth convolution blocks before the max-pool layer. The similarity between the architectures is intentional, to demonstrate the impact of the attention estimators on model performance. Similarly, integrating attention modules in the middle of the network is also a deliberate design choice. As the early layers in a CNN primarily focus on low-level features [38], we position the attention modules further into the pipeline to capture higher-level features. However, there is a tradeoff involved, as pushing attention to the last layers is hindered by the significantly reduced spatial resolution of the feature maps. Consequently, placing attention modules in the middle strikes a balance, making it a more suitable and pragmatic approach. In the M2 model, outputs from the convolutional blocks (denoted as \(L^{s}\)) are passed to the attention estimators. In other words, \(L^{s}\) is a set of feature vectors: \[L^{s}=\{l_{1}^{s},l_{2}^{s},...,l_{n}^{s}\}\] extracted at a given convolutional layer to serve as input to the \(s^{th}\) attention estimator, where \(l_{i}^{s}\) is the vector of output activations at the \(i^{th}\) of \(n\) total spatial locations in the layer. \(g\) represents a global feature vector obtained by flattening the feature maps at the first fully connected layer, located at the end of the convolution blocks (referred to as FC-1 in Fig. 4). The attention mechanism aims to compute a compatibility score, denoted as \(C(L^{s},g)\), utilizing the local features (\(L^{s}\)) and global feature representations (\(g\)), and replaces the final feature vector with a set of attention-weighted local features. As computing the compatibility scores requires \(l_{i}^{s}\) and \(g\) to have the same dimension, dimension matching is performed by a linear mapping of the vectors \(l_{i}^{s}\) to the dimension of \(g\).
Then, the compatibility function \(C(L^{s},g)=\{c_{1}^{s},c_{2}^{s},...,c_{n}^{s}\}\) is a set of scores, one for each vector \(l_{i}^{s}\), computed as an addition operation (additive attention) as follows: \[c_{i}^{s}=(l_{i}^{s},g),\text{ for }i\in\{1,2,...,n\}.\] The computed compatibility scores are then normalized using a softmax operation and represented as: \[A^{s}=\{a_{1}^{s},a_{2}^{s},...,a_{n}^{s}\}.\] The normalized compatibility scores are then used to compute an element-wise weighted average, which results in a vector: \[g_{a}^{s}=\sum_{i=1}^{n}a_{i}^{s}\cdot l_{i}^{s}\] for each attention layer, \(s\). Finally, the individual \(g_{a}^{s}\) vectors of size 512 are concatenated to get a new attention-based global representation to perform the binary classification in the (second) fully connected layer (FC-2). This approach allows the activations from earlier layers to influence and contribute to the final global feature representation, thereby enhancing the model's ability to capture relevant spatial information.

Fig. 3: The architecture of our baseline model (M1). Note: Each convolutional layer (except the last one) is followed by a batch normalization layer.

Fig. 4: The architecture of our attention-based flare prediction model (M2). The model has three trainable attention modules integrated after the third, fourth, and fifth convolution blocks before the max-pool layer. Note: Each convolutional layer (except the last one) is followed by a batch normalization layer.

## V Experimental Evaluation

### _Experimental Settings_

We trained both of our models (M1 and M2) with stochastic gradient descent (SGD) as the optimizer and cross-entropy as the objective function. Both models are initialized using Kaiming initialization from a uniform distribution [39], and we then use a dynamic learning rate (initialized at 0.001 and halved every 3 epochs) to train the model for 40 epochs with a batch size of 128. We regularized our models with a weight decay parameter tuned at 0.5 to prevent overfitting. As mentioned earlier in Sec. III, we are dealing with an imbalanced dataset. Therefore, we address the class imbalance problem through data augmentation and oversampling exclusively for the training set while maintaining the imbalanced nature of the test set for realistic evaluation. Firstly, we use three augmentation techniques: vertical flipping, horizontal flipping, and +5\({}^{\circ}\) to -5\({}^{\circ}\) rotations on the minority class (FL-class), which decreases the imbalance from 1:6 to approximately 2:3. Finally, we randomly oversampled the minority class to match the instances of the NF-class, resulting in a balanced dataset. We prefer augmentation and oversampling over undersampling, as flare prediction models trained with undersampled data are shown to lead to inferior performance [40] (usually transpiring as one-sided predictions). We employed a 4-fold cross-validation schema for validating our models, using the tri-monthly partitions (described in Sec. III), where we applied three partitions for training the model and one for testing. We evaluate the performance of our models using two widely used forecast skill scores: True Skill Statistics (TSS, in Eq. 1) and Heidke Skill Score (HSS, in Eq. 2), derived from the elements of the confusion matrix: True Positives (TP), True Negatives (TN), False Positives (FP), and False Negatives (FN). In the context of this paper, the "FL-class" is considered the positive outcome, while the "NF-class" is negative.
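These skill scores, defined formally in Eqs. 1 and 2 below, follow directly from the four confusion-matrix counts; a minimal sketch:

```python
def skill_scores(TP: int, FP: int, TN: int, FN: int):
    """TSS (Eq. 1) and HSS (Eq. 2), with FL-class as the positive outcome."""
    P, N = TP + FN, TN + FP
    tss = TP / (TP + FN) - FP / (FP + TN)
    hss = 2 * (TP * TN - FN * FP) / (P * (FN + TN) + (TP + FP) * N)
    return tss, hss
```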
\[TSS=\frac{TP}{TP+FN}-\frac{FP}{FP+TN} \tag{1}\]

\[HSS=2\times\frac{TP\times TN-FN\times FP}{P\times(FN+TN)+(TP+FP)\times N} \tag{2}\]

where \(N=TN+FP\) and \(P=TP+FN\). TSS and HSS values range from -1 to 1, where 1 indicates all correct predictions, -1 represents all incorrect predictions, and 0 represents no skill. In contrast to TSS, HSS is an imbalance-aware metric, and it is common practice to use HSS for solar flare prediction models due to the high class-imbalance ratio present in the datasets. For a balanced test dataset, these metrics are equivalent [40]. Lastly, we report the subclass and overall recall for flaring instances (M- and X-class), which is calculated as \(\frac{TP}{TP+FN}\), to demonstrate the prediction sensitivity.

### _Evaluation_

We perform a 4-fold cross-validation using the tri-monthly separated dataset for evaluating our models. With the baseline model (M1), we obtain on average TSS\(\sim\)0.35\(\pm\)0.13 and HSS\(\sim\)0.30\(\pm\)0.09. The M1 model, which follows the standard CNN pipeline, fluctuates across folds and hence carries a high margin of error on skill scores, represented by the standard deviation. Model M2 improves over the performance of model M1 by \(\sim\)20% and \(\sim\)7% in terms of TSS and HSS, respectively. Furthermore, it improves on the performance of [12, 13] by \(\sim\)3% in terms of TSS, shows comparable results in terms of HSS, and is more robust, as indicated by the deviations across the folds shown in Table II. Moreover, the performance of model M2 becomes even more noteworthy when considering its parameter efficiency. With only \(\sim\)7.5 million parameters, it outperforms an AlexNet-based model [13] and a VGG16-based model [12] with much higher parameter counts of \(\sim\)57.25 and \(\sim\)134 million, respectively, showcasing the effectiveness of attention mechanisms in achieving superior results while maintaining a significantly leaner model architecture. This highlights the potential of this approach to provide both performance gains and resource optimization. The findings of this study emphasize the significance of optimizing attention configurations to enhance model performance, taking into account both parameter complexities and the strategic combination of attention patterns for effective pattern recognition. In addition, we evaluate our results for correctly predicted and missed flare counts for class-specific flares (X-class and M-class) in central locations (within \(\pm\)70\({}^{\circ}\)) and near-limb locations (beyond \(\pm\)70\({}^{\circ}\)) of the Sun, as shown in Table III. We observe that the attention-based model (M2) shows significantly better results compared to the baseline (M1). The M2 model made correct predictions for \(\sim\)95% of the X-class flares and \(\sim\)83% of the M-class flares in central locations. Similarly, it shows a compelling performance for flares appearing in near-limb locations of the Sun, where \(\sim\)77% of the X-class and \(\sim\)51% of the M-class flares are predicted correctly. This is important because, to our knowledge, the prediction of near-limb flares is often overlooked, although vital for predicting Earth-impacting space weather events. More false negatives in the M-class are expected because of the model's inability to distinguish bordering-class (C4+ to C9.9) flares from \(\geq\)M1.0-class flares, as shown in Fig. 5.
We observed an upward trend in the false positive rate for sub-classes (SFPR) within C-class flares when compared to other sub-classes, such as Flare-Quiet (FQ), A-class, and B-class flares. More specifically, we note that the count of false positives (FP) surpasses that of true negatives (TN) for flare classes ranging from \(\geq\)C4 to \(\leq\)C9. The prevalence of FP in \(\geq\)C4-class flares suggests a need for improved predictive capabilities between border classes. Overall, we observed that our model predicted \(\sim\)89% of the flares in central locations and \(\sim\)64% of the flares in near-limb locations. Furthermore, class-wise analysis shows that \(\sim\)91% and \(\sim\)74% of the X-class and M-class flares, respectively, are predicted correctly by our model. To reproduce this work, the source code is available in our open-source repository [41].

## VI Discussion

In this section, we visualize the attention maps learned by the M2 model to qualitatively analyze and understand the regions in input magnetogram images that are considered relevant. We applied three attention layers in our model M2, where the attention maps \((L1,L2,L3)\) have spatial dimensions \(\frac{1}{4}\), \(\frac{1}{8}\), and \(\frac{1}{16}\) of the input size, respectively. To visualize the relevant features learned by the model through the attention layers, we upscale these maps to the size of the magnetogram image using bilinear interpolation and overlay the maps on top of the original image. We present the attention maps from Attention Estimator-2, because the first attention layer focuses on lower-level features, which are scattered and do not provide a globally detailed explanation. On the other hand, Attention Estimator-3 focuses on higher-level features, and due to the high reduction in spatial dimension (\(\frac{1}{16}\) of the original input), upscaling through interpolation results in a spatial resolution that is insufficient for generating interpretable activation maps. As the primary focus of this study is to understand the capability of full-disk models on near-limb flares, we showcase a near-limb (East) X3.2-class flare observed on 2013-05-14T00:00:00 UTC. Note that East and West are reversed in solar coordinates. The location of the flare is shown by a green flag in Fig. 6 (a)(i), along with the ARs (red flags). For this case-based qualitative analysis, we use an input image at 2013-05-13T06:00:00 UTC (\(\sim\)18 hours prior to the flare event), shown in Fig. 6 (a)(ii). In Fig. 6 (a)(iii), we show the overlaid attention map, which pinpoints important regions in the input image where specific ARs are activated as relevant features, suppressing a large section of the full-disk magnetogram even though there are 10 ARs (red flags). More specifically, the model focuses on the same AR that is responsible for initiating a flare 18 hours later. Similarly, we analyze another case of a correctly predicted near-limb (West) X1.0-class flare observed on 2013-11-19T10:14:00 UTC, shown in Fig. 6 (b)(i). For this, we used an input image at 2013-11-18T17:00:00 UTC (\(\sim\)17 hours prior to the flare event), shown in Fig. 6 (b)(ii). We again observed that the model focuses on the relevant AR even though other, relatively large ARs are present in the magnetogram image, as shown in Fig. 6 (b)(iii). Furthermore, we provide an example to analyze a case of false positives as well.
For this, we use an example of a C7.9 flare observed on 2014-02-03T00:12:43 UTC, shown in Fig. 6 (c)(i), and to explain the result, we used an input magnetogram instance at 2014-02-02T23:00:00 UTC (\(\sim\)14 hours prior to the event), shown in Fig. 6 (c)(ii). For the given time, there are 7 ARs indicated by the red flags; however, on interpreting this prediction with the attention maps shown in Fig. 6 (c)(iii), we observed that the model considers only one region as a relevant feature for the corresponding prediction, which is indeed the location of the C7.9 flare. This incorrect prediction can be attributed to interference caused by bordering C-class flares, as shown earlier in Fig. 5, where we noted that among the 25,150 C-class flares observed, \(\sim\)43% (10,935) resulted in incorrect predictions, constituting \(\sim\)91% of the total false positives.

## VII Conclusion and Future Work

In this work, we presented an attention-based full-disk model to predict \(\geq\)M1.0-class flares in binary mode and compared its performance with standard CNN-based models. We observed that the trainable attention modules play a crucial role in directing the model to focus on pertinent features associated with ARs while suppressing irrelevant features in a magnetogram during training, resulting in an enhancement of model performance. Furthermore, we demonstrated, both quantitatively through recall scores and qualitatively by overlaying attention maps on input magnetogram images, that our model effectively identifies and localizes relevant AR locations, which are more likely to initiate a flare. This prediction capability extends to near-limb regions, making it crucial for operational systems. As an extension, we plan to include temporal aspects in our dataset and create a spatiotemporal model to capture the evolution of solar activity leading to solar flares. Furthermore, we plan to extend this work by developing an automated way of analyzing the interpretation results to identify the main causes of incorrect predictions.

Fig. 5: A bar-line plot showing the true negatives (TN), false positives (FP), and false positive rate for sub-classes in the NF-class (SFPR) obtained from model M2. The results are aggregated from the validation sets of the 4-fold experiments.

## Acknowledgments

This project is supported in part under two NSF awards #2104004 and #1931555, jointly by the Office of Advanced Cyberinfrastructure within the Directorate for Computer and Information Science and Engineering, the Division of Astronomical Sciences within the Directorate for Mathematical and Physical Sciences, and the Solar Terrestrial Physics Program and the Division of Integrative and Collaborative Education and Research within the Directorate for Geosciences. This work is also partially supported by the National Aeronautics and Space Administration (NASA) grant award #80NSSC22K0272. The data used in this study is courtesy of NASA/SDO and the AIA, EVE, and HMI science teams, and the NOAA National Geophysical Data Center (NGDC).
2301.13821
Complete Neural Networks for Complete Euclidean Graphs
Neural networks for point clouds, which respect their natural invariance to permutation and rigid motion, have enjoyed recent success in modeling geometric phenomena, from molecular dynamics to recommender systems. Yet, to date, no model with polynomial complexity is known to be complete, that is, able to distinguish between any pair of non-isomorphic point clouds. We fill this theoretical gap by showing that point clouds can be completely determined, up to permutation and rigid motion, by applying the 3-WL graph isomorphism test to the point cloud's centralized Gram matrix. Moreover, we formulate a Euclidean variant of the 2-WL test and show that it is also sufficient to achieve completeness. We then show how our complete Euclidean WL tests can be simulated by a Euclidean graph neural network of moderate size and demonstrate their separation capability on highly symmetrical point clouds.
Snir Hordan, Tal Amir, Steven J. Gortler, Nadav Dym
2023-01-31T18:07:26Z
http://arxiv.org/abs/2301.13821v4
# Complete Neural Networks for Euclidean Graphs ###### Abstract We propose a \(2\)-WL-like geometric graph isomorphism test and prove it is complete when applied to Euclidean graphs in \(\mathbb{R}^{3}\). We then use recent results on multiset embeddings to devise an efficient geometric GNN model with equivalent separation power. We verify empirically that our GNN model is able to separate particularly challenging synthetic examples, and demonstrate its usefulness for a chemical property prediction problem.

Equivariant machine-learning models are models that respect data symmetries. Notable examples include Convolutional Neural Networks, which respect the translation symmetries of images, and Graph Neural Networks (GNNs), which respect the symmetry of graphs to permutations of their vertices. In this paper we focus on equivariant networks for point clouds (which are often also called Euclidean or geometric graphs). Point clouds are sets of \(n\) points in \(\mathbb{R}^{d}\), whose symmetries include permutation of the \(n\) points, as well as translation, rotation, and possibly also reflection. We denote the group of permutations by \(S_{n}\), the group of translations and rotations by \(SE(d)\), and the group obtained by also including reflections by \(E(d)\). Our interest is thus in functions on \(\mathbb{R}^{d\times n}\) that are invariant or equivariant to the action of \(E(d)\times S_{n}\) or \(SE(d)\times S_{n}\). In the past few years many works have focused on symmetry-preserving networks for point clouds in \(\mathbb{R}^{3}\), and on their applications for 3D computer vision and graphics (Deng et al., 2021), chemistry (Gasteiger et al., 2020), and physics simulation (Kondor, 2018). There are also applications for \(d>3\) in graph generation (Victor Garcia Satorras, 2021) and processing of Laplacian eigendecompositions (Lim et al., 2022).

The search for equivariant networks with good empirical performance is complemented by the theoretical study of these networks and their approximation power. These studies typically focus on two strongly related concepts: (i) _separation_ - the ability of a given invariant architecture to distinguish between two objects that are not related by a group symmetry, and (ii) _universality_ - the ability of the architecture to approximate any continuous equivariant function. These two concepts are intimately related, and one typically implies the other, as discussed in Section 4 and in (Chen et al., 2019). Recent results (Pozdnyakov and Ceriotti, 2022) show that distance-based message passing networks cannot separate point clouds better than a geometric variant of the 1-WL test (1-Geo), and that this test cannot separate all point clouds. On the other extreme, several works describe equivariant architectures that are universal, but these rely on high-dimensional representations of \(SO(d)\) (Dym and Maron, 2020; Finkelshtein et al., 2022; Gasteiger et al., 2021) or \(S_{n}\) (Lim et al., 2022). Thus, there is a large gap between the architectures needed for universality and those used in practice. For example, (Lim et al., 2022) requires hidden tensors of dimension \(n^{n}\) for universality, but uses tensors of dimension \(n^{2}\) in practice.

### Main results

In this paper, we make a substantial step towards closing this gap, by showing that complete separation can be obtained using efficient architectures of practical size.
We begin by expanding upon the notion of _geometric graph isomorphism_ tests, recently presented in (Pozdnyakov and Ceriotti, 2022; Anonymous, 2023). We show that while 1-Geo is not complete, it does separate _almost all_ distinct pairs. We then build on ideas from (Kurlin, 2022) to propose a _geometric \(2\)-WL_ test (2-Geo), which separates _any_ pair of 3D point clouds. Similarly, for general \(d\), we achieve separation using a geometric \(d-1\)-WL test. These results and some variations are discussed in Section 2.

In Section 3 we explain how to construct invariant architectures whose separation power is equivalent to 2-Geo (or 1-Geo), and which are thus separating. This problem has been addressed successfully for graphs with discrete labels (see discussion in Section 7), but is more challenging for geometric graphs and other graphs with continuous labels. The main difficulty in this construction is the ability to construct efficient continuous injective multiset-valued mappings. We will show that the complexity of computing standard injective multiset mappings is very high, and show how this complexity can be considerably improved using recent results from (Dym and Gortler, 2022). As a result, we obtain \(\mathcal{SO}[3,n]\)- and \(\mathcal{O}[3,n]\)-separating invariant architectures with a computational complexity of \(O(n^{4}\log(n))\) and an embedding dimension of \(6n+1\), which is approximately \(n^{2}\) times lower than what can be obtained with standard approaches. This advantage is even more pronounced when considering 'deep' 1-Geo tests (see Figure 2).

In Section 4 we use our separation results to prove the universality of our models, which is obtained by applying appropriate postprocessing steps to our separating architectures.

To empirically validate our findings, we present a dataset of point-cloud pairs that are difficult to separate, based on examples from (Pozdnyakov and Ceriotti, 2022; Pozdnyakov et al., 2020) and new challenging examples we construct. We verify that our architectures can separate all of these examples, and evaluate the performance of some competing architectures as well. We also show that our architectures achieve improved results on a benchmark chemical property regression task, in comparison to similar non-universal architectures. These results are described in Section 5.

## 1 Mathematical notation

A (finite) _multiset_ \(\{\!\!\{y_{1},\ldots,y_{N}\}\!\!\}\) is an unordered collection of elements where repetitions are allowed. Let \(\mathcal{G}\) be a group acting on a set \(\mathcal{X}\) and \(f:\mathcal{X}\to\mathcal{Y}\) a function. We say that \(f\) is _invariant_ if \(f(gx)=f(x)\) for all \(x\in\mathcal{X},g\in\mathcal{G}\), and we say that \(f\) is _equivariant_ if \(\mathcal{Y}\) is also endowed with some action of \(\mathcal{G}\) and \(f(gx)=gf(x)\) for all \(x\in\mathcal{X},g\in\mathcal{G}\). A separating invariant mapping is an invariant mapping that is injective up to group equivalence. Formally, we denote \(X\underset{\mathcal{G}}{=}Y\) if \(X\) and \(Y\) are related by a group transformation from \(\mathcal{G}\), and we define

**Definition 1.1** (Separating Invariant).: Let \(\mathcal{G}\) be a group acting on a set \(\mathcal{X}\). We say \(F:\mathcal{X}\to\mathbb{R}^{K}\) is a _\(\mathcal{G}\)-separating invariant_ if for all \(X,Y\in\mathcal{X}\), 1. **(Invariance)** \(X\underset{\mathcal{G}}{=}Y\Rightarrow F(X)=F(Y)\) 2. **(Separation)** \(F(X)=F(Y)\Rightarrow X\underset{\mathcal{G}}{=}Y\). We call \(K\) the _embedding dimension_ of \(F\).
We focus on the case where \(\mathcal{X}\) is some Euclidean domain and require the separating mapping to be continuous and differentiable almost everywhere, so that it can be incorporated in deep learning models -- which typically require this type of regularity for gradient-descent-based learning.

The natural symmetry group of point clouds \((x_{1},\ldots,x_{n})\in\mathbb{R}^{d\times n}\) is generated by a translation vector \(t\in\mathbb{R}^{d}\), a rotation matrix \(R\in\mathcal{SO}(d)\), and a permutation \(\sigma\in S_{n}\). These act on a point cloud by \[(R,t,\sigma)_{*}(x_{1},\ldots,x_{n})=(Rx_{\sigma^{-1}(1)}{+}t,\ldots,Rx_{\sigma^{-1}(n)}{+}t).\] We denote this group by \(\mathcal{SO}[d,n]\). In some instances, reflections \(R\in\mathcal{O}(d)\) are also permitted, leading to a slightly larger symmetry group, which we denote by \(\mathcal{O}[d,n]\). For simplicity of notation, throughout this paper we focus on the case \(d=3\). In Appendix D we explain how our constructions and theorems can be generalized to \(d>3\).

## 2 Geometric Graph isomorphism tests

In this section we discuss geometric graph isomorphism tests, namely, tests for checking whether two given point clouds \(X,Y\in\mathbb{R}^{3\times n}\) are related by a permutation, rotation and translation (and possibly also reflection). Given two point clouds \(X,Y\), these tests typically compute some feature \(F(X),F(Y)\) and check whether \(F(X)=F(Y)\). This feature is \(\mathcal{G}\)-invariant, with \(\mathcal{G}\) denoting our symmetry group of choice, so that \(F(X)\neq F(Y)\) automatically implies that \(X\underset{\mathcal{G}}{\neq}Y\). Ideally, we would like to have _complete_ tests, meaning that \(X\underset{\mathcal{G}}{\neq}Y\) implies that \(F(X)\neq F(Y)\). Typically, these require more computational resources than _incomplete tests_.

### Incomplete geometric graph isomorphism test

Perhaps the most well-known graph isomorphism test is 1-WL. Based on this test, (Pozdnyakov and Ceriotti, 2022) formulated the following test, which we refer to as the 1-Geo test: Given two point clouds \(X=(x_{1},\ldots,x_{n})\) and \(Y=(y_{1},\ldots,y_{n})\) in \(\mathbb{R}^{3\times n}\), this test iteratively computes for each point \(x_{i}\) an \(\mathcal{O}(3)\)-invariant feature \(h_{i}^{t}\) via \[h_{i}^{t}=\mathbf{Embed}^{(t)}\left(h_{i}^{t-1},\{\!\!\{(h_{j}^{t-1},\|x_{i}-x_{j}\|)\mid j\neq i\}\!\!\}\right), \tag{1}\] using an arbitrary initialization \(h_{i}^{0}\). This process is repeated \(T\) times, and then a final global feature for the point cloud is computed via \[F^{\text{1-Geo}}(X)=\mathbf{Embed}^{(T+1)}\{\!\!\{h_{i}^{T}\mid i=1,\ldots,n\}\!\!\}.\] A similar computation is performed on \(Y\) to obtain \(F^{\text{1-Geo}}(Y)\). In this test, \(\mathbf{Embed}^{(t)}\) are hash functions, namely, discrete mappings of multisets to vectors, defined such that they assign distinct values to the finite number of multisets encountered during the computation of \(F^{\text{1-Geo}}\) for \(X\) and \(Y\). Note that by this construction, these functions are defined differently for different pairs \(X,Y\). The motivation in (Pozdnyakov & Ceriotti, 2022) for considering this test is that many distance-based symmetry-preserving networks for point clouds are in fact a realization of this test, though they use **Embed** functions that are continuous, defined globally on \(\mathbb{R}^{3\times n}\), and in general may assign the same value to different multisets.
Consequently, the separation power of these architectures is at most that of \(F^{\text{1-Geo}}\) with discrete hash functions. We note that continuous multiset functions _can_ be used to construct architectures with the separation power of geometric isomorphism tests. This will be discussed in Section 3.

The separation power of \(F^{\text{1-Geo}}\) is closely linked to the notion of _geometric degree_: For a point cloud \(X=(x_{1},\ldots,x_{n})\), we define the geometric degree \(d(i,X)\) to be the multiset \[d(i,X)=\{\!\!\{\|x_{1}-x_{i}\|,\ldots,\|x_{n}-x_{i}\|\}\!\!\},\quad i\in[n],\] and the geometric degree histogram \(d_{H}(X)\) to be \[d_{H}(X)=\{\!\!\{d(1,X),\ldots,d(n,X)\}\!\!\}.\] It is not difficult to see that if \(d_{H}(X)\neq d_{H}(Y)\) then \(X\) and \(Y\) can be separated by \(F^{\text{1-Geo}}\), even with a single iteration \(T=1\). With \(T=2\), as we show in the following theorem, \(F^{\text{1-Geo}}\) can do even more, and separate \(X\) and \(Y\) even if \(d_{H}(X)=d_{H}(Y)\), provided that the values in their histograms are all distinct, namely that \(X\) and \(Y\) belong to \[\mathbb{R}^{3\times n}_{distinct}=\{X\in\mathbb{R}^{3\times n}\mid d(i,X)\neq d(j,X)\;\;\forall i\neq j\}.\] Figure 1 depicts the distance matrices of two point clouds \(A,B\) that belong to \(\mathbb{R}^{3\times n}_{distinct}\) while having the same degree histogram.

**Theorem 2.1**.: _Suppose that \(X,Y\in\mathbb{R}^{3\times n}_{distinct}\), and \(\textbf{Embed}^{(t)},t=1,2,3\), are multiset-to-vector functions that assign distinct values to the finite number of multisets encountered when computing \(F^{\text{1-Geo}}(X)\) and \(F^{\text{1-Geo}}(Y)\). Then \(F^{\text{1-Geo}}(X)=F^{\text{1-Geo}}(Y)\) if and only if \(X\underset{\mathcal{O}[3,n]}{=}Y\)._

While _almost any_ pair of point clouds \(X,Y\) (in the Lebesgue sense) belongs to \(\mathbb{R}^{3\times n}_{distinct}\), and thus can be separated by \(F^{\text{1-Geo}}\), this test is not complete in general. This was shown in (Pozdnyakov & Ceriotti, 2022) by providing an example of point clouds \(C,D\in\mathbb{R}^{3\times 6}\) (see Figure 1) that cannot be distinguished by \(F^{\text{1-Geo}}\).

### \(\mathcal{SO}[3,n]\)-isomorphism test

We now describe a \(2\)-WL-like geometric graph isomorphism test for \(\mathcal{SO}[d,n]\), which we name 2-Geo. Unlike 1-Geo, this test is _complete_. It is inspired by a similar test described in (Kurlin, 2022). The relationship between this work and ours is discussed in Section 7.

Figure 1: Distance matrices (Left) and degree histograms \(d_{H}\) (Right) of three pairs of point clouds \((A,B)\), \((C,D)\), \((E,F)\). These pairs are hard to separate by distance-based methods, as they have the same degree histogram. Nonetheless, \((A,B)\) can be separated by two iterations of 1-Geo, since \(A,B\in\mathbb{R}^{3\times n}_{distinct}\). Each of \(C\), \(D\) is comprised of three pairs of points, each of which share the same degree. While it was shown in (Pozdnyakov & Ceriotti, 2022) that 1-Geo cannot separate \(C\) from \(D\), our 2-Geo can separate _any_ distinct pair of 3D point clouds. \(E\) and \(F\) are especially challenging 6-dimensional point clouds in which all points have the same geometric degree.

As a first step, we eliminate the translation symmetry by centering the point clouds \(X\) and \(Y\).
The centering of \(X=(x_{1},\dots,x_{n})\) is the point cloud \((x_{1}^{c},\dots,x_{n}^{c})\) defined by \(x_{i}^{c}=x_{i}-\frac{1}{n}\sum_{j=1}^{n}x_{j}\). It is known (Dym & Gortler, 2022) that the original \(X\) and \(Y\) are related by a symmetry in \(\mathcal{SO}[3,n]\) if and only if the centralized point clouds are related by a rotation and permutation. Now, let us make two simplifying assumptions that we shall later dispose of: (a) the first two points of \(X\) and \(Y\) are in correspondence, meaning that if \(X\) and \(Y\) can be aligned by an \(\mathcal{SO}[3,n]\) transformation, then the permutation component assigns \(x_{1}\) to \(y_{1}\) and \(x_{2}\) to \(y_{2}\); and (b) the first two points in each point cloud are linearly independent. Under these assumptions, we can define bases for \(\mathbb{R}^{3}\) by \(x_{1},x_{2},x_{1}\times x_{2}\) and \(y_{1},y_{2},y_{1}\times y_{2}\). These two bases are related by a rotation if and only if the Gram matrices of the two bases are identical. If indeed they are, this still only implies that the first two points are related by a rotation. To check whether the remaining points are related by a rotation and permutation, it suffices to check that the unordered collections of inner products of the remaining points with the bases we defined are identical. Formally, we define for \((i,j)=(1,2)\) or any other pair of indices \((i,j)\) \[X_{[i,j]} =\big{[}x_{i}^{c},x_{j}^{c},x_{i}^{c}\times x_{j}^{c}\big{]}\in\mathbb{R}^{3\times 3} \tag{2}\] \[P_{[i,j,k]} =X_{[i,j]}^{T}x_{k}^{c}\] (3) \[G_{[i,j]}(X) =X_{[i,j]}^{T}X_{[i,j]}\] (4) \[h_{[i,j]}(X) =\textbf{Embed}^{(1)}\{\!\!\{P_{[i,j,k]}\;\mid\;k=3,\dots,n\}\!\!\}\] (5) \[m_{[i,j]}(X) =\big{(}G_{[i,j]}(X),h_{[i,j]}(X)\big{)} \tag{6}\] and we define \(m_{[i,j]}(Y)\) in a similar manner, where **Embed** is some multiset-valued hash function. The above construction guarantees that if \(X,Y\) satisfy the simplifying assumptions (a)-(b), then \(X\) and \(Y\) are related by a symmetry in \(\mathcal{SO}[3,n]\) if and only if \(m_{[1,2]}(X)=m_{[1,2]}(Y)\).

Let us now remove the simplifying assumptions (a)-(b). Since we no longer know the correspondence, instead of just considering \(m_{[1,2]}\), we consider the multiset of all possible \(m_{[i,j]}\) and define \[F^{2\text{-Geo}}(X)=\textbf{Embed}^{(2)}\{\!\!\{m_{[i,j]}(X)\;\mid\;1\leq i\neq j\leq n\}\!\!\}. \tag{7}\] We define \(F^{2\text{-Geo}}(Y)\) similarly. In the appendix we prove

**Theorem 2.2**.: _Let \(X,Y\in\mathbb{R}^{3\times n}\), and let \(\textbf{Embed}^{(1)},\textbf{Embed}^{(2)}\) be multiset-to-vector functions that assign distinct values to the finite number of multisets encountered when computing \(F^{2\text{-Geo}}(X)\) and \(F^{2\text{-Geo}}(Y)\). Then \(F^{2\text{-Geo}}(X)=F^{2\text{-Geo}}(Y)\) if and only if \(X\underset{\mathcal{SO}[3,n]}{=}Y\)._

Proof idea: If the centralized point cloud \(X^{c}\) has rank \(\geq 2\), there are some \(i,j\) such that \(x_{i}^{c},x_{j}^{c}\) are linearly independent. If \(F^{2\text{-Geo}}(X)=F^{2\text{-Geo}}(Y)\) then there are some \(s,t\) such that \(m_{[i,j]}(X)=m_{[s,t]}(Y)\), and the argument above then shows that \(X\underset{\mathcal{SO}[3,n]}{=}Y\). The full proof (which does not make any rank assumptions on \(X^{c}\)) is given in the appendix.
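To make the construction above concrete, the following NumPy sketch (ours, not the authors' implementation) computes the 2-Geo invariant of Equations (2)-(7). Sorted, rounded tuples serve as a stand-in for the exact multiset hash functions \(\textbf{Embed}^{(1)}\) and \(\textbf{Embed}^{(2)}\):

```python
import numpy as np

def two_geo_features(X: np.ndarray, decimals: int = 6) -> frozenset:
    """Compute the 2-Geo invariant of a point cloud X of shape (3, n)."""
    n = X.shape[1]
    Xc = X - X.mean(axis=1, keepdims=True)  # remove translation
    feats = []
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            basis = np.column_stack(
                [Xc[:, i], Xc[:, j], np.cross(Xc[:, i], Xc[:, j])]
            )  # X_[i,j], Eq. (2)
            G = basis.T @ basis                      # Gram matrix, Eq. (4)
            P = basis.T @ np.delete(Xc, [i, j], 1)   # projections, Eq. (3)
            # h_[i,j]: multiset of projected points (Eq. (5)), made hashable
            h = tuple(sorted(map(tuple, np.round(P.T, decimals))))
            feats.append((tuple(np.round(G, decimals).ravel()), h))
    # F^2-Geo: multiset over all ordered pairs (i, j), Eq. (7)
    return frozenset((f, feats.count(f)) for f in feats)

# Two point clouds related by rotation + permutation get equal features
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 5))
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(Q) < 0:
    Q[:, 0] *= -1  # force det = +1, i.e., a proper rotation
Y = (Q @ X)[:, rng.permutation(5)]
assert two_geo_features(X) == two_geo_features(Y)
```

Here the rounding threshold plays the role of the hash: clouds related by a symmetry in \(\mathcal{SO}[3,n]\) produce identical feature multisets up to floating-point error, while the continuous architectures of Section 3 replace this hashing with separating multiset embeddings.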
### \(\mathcal{O}[3,n]\)-isomorphism test

The 2-Geo test described above can be modified for the scenario where reflections are also considered symmetries of the point cloud, so that the test distinguishes point clouds up to \(\mathcal{O}[3,n]\) symmetries. A simple way to achieve this is to consider for each pair \(x_{i},x_{j}\) both orientations of the vector product \[X_{[i,j]}^{pos} =X_{[i,j]}=[x_{i},x_{j},x_{i}\times x_{j}]\] \[X_{[i,j]}^{neg} =[x_{i},x_{j},-x_{i}\times x_{j}].\] The details of this construction are given in Appendix B. Ultimately this leads to a complete \(\mathcal{O}[3,n]\) test with twice the time and space complexity of the 2-Geo test for \(\mathcal{SO}[3,n]\). An interesting but less efficient alternative to the above test is to use the standard \(3\)-WL graph isomorphism test, with the initial label for each triplet of indices taken to be the Gram matrix corresponding to those indices. The details of this construction are described in Appendix B as well.

## 3 Separating Architectures

In the previous section we described incomplete and complete geometric graph isomorphism tests for \(\mathcal{SO}[3,n]\) and \(\mathcal{O}[3,n]\) symmetries. Our goal now is to build separating architectures based on these tests. Let us first focus on the 2-Geo test for \(\mathcal{SO}[3,n]\). To construct a separating \(\mathcal{SO}[3,n]\)-invariant architecture based on our 2-Geo test, we first need to choose a realization for the multiset-to-vector functions \(\textbf{Embed}^{(1)}\) and \(\textbf{Embed}^{(2)}\). To this end, we use parametric functions \(\textbf{Embed}_{\alpha}:\mathbb{R}^{3\times(n-2)}\rightarrow\mathbb{R}^{K_{1}}\) and \(\textbf{Embed}_{\beta}:\mathbb{R}^{K_{1}\times(n^{2}-n)}\rightarrow\mathbb{R}^{K_{2}}\) which are invariant to permutations of their second coordinate. Note that this also renders \(F^{2\text{-Geo}}\) parametric: \(F^{2\text{-Geo}}_{\alpha,\beta}\). We will also want them to be continuous and piecewise differentiable with respect to \(X,\alpha\) and \(\beta\). The main challenge is of course to guarantee that for some parameters \(\alpha,\beta\), the function \(F^{2\text{-Geo}}_{\alpha,\beta}\) is \(\mathcal{SO}[3,n]\)-separating. A standard way (see (Anonymous, 2023; Maron et al., 2019)) to achieve this is to require that for some \(\alpha,\beta\), the functions \(\textbf{Embed}_{\alpha}\) and \(\textbf{Embed}_{\beta}\) are injective as functions of multisets, or equivalently, that they are permutation-invariant and separating. Note that by Theorem 2.2, this requirement suffices to guarantee that \(F^{2\text{-Geo}}\) is an \(\mathcal{SO}[3,n]\)-separating invariant. Our next step is therefore to choose permutation-invariant and separating \(\mathbf{Embed}_{\alpha}\), \(\mathbf{Embed}_{\beta}\). Naturally, we will want to choose these so that the dimensions \(K_{1},K_{2}\) and the complexity of computing these mappings are as small as possible.

### \(S_{n}\) separating invariants

We now consider the problem of finding separating invariants for the action of \(S_{N}\) on \(\mathbb{R}^{D\times N}\). Let us begin with the scalar case \(D=1\).
Two well-known separating invariant mappings in this setting are the power-sum polynomials \(\Psi_{pow}\) and the sort mapping \(\Psi_{sort}\), defined by \[\Psi_{sort}(s_{1},\dots,s_{N}) =\mathrm{sort}(s_{1},\dots,s_{N})\] \[\Psi_{pow}(s_{1},\dots,s_{N}) =\left(\sum_{j=1}^{N}s_{j},\sum_{j=1}^{N}s_{j}^{2},\dots,\sum_{j= 1}^{N}s_{j}^{N}\right).\] It is clear that \(\mathrm{sort}\) is permutation-invariant and separating. It is also continuous and piecewise linear, and thus meets the regularity conditions we set out for separating invariant mappings. The power-sum polynomials are clearly smooth. Their separation can be obtained from the separation of the elementary symmetric polynomials, as discussed e.g., in (Zaheer et al., 2017). We now turn to the case \(D>1\), which is our case of interest. One natural idea is to use lexicographical sorting. However, for \(D>1\), this sorting is not continuous. The power-sum polynomials can be generalized to multi-dimensional input, and these were used in the invariant learning literature (Maron et al., 2019). However, a key disadvantage is that to achieve separation, they require an extremely high embedding dimension \(K=\binom{N+D}{D}\). A more efficient approach was recently proposed in (Dym and Gortler, 2022). This method initially applies linear projections to obtain \(N\) scalars and then applies a continuous \(1\times N\)-separating mapping \(\Psi=\Psi_{pow}\) or \(\Psi=\Psi_{sort}\), namely, one-dimensional power-sum polynomials or sorting. In more detail, for some natural \(K\), the function \(\mathbf{Embed}_{\theta}:\mathbb{R}^{D\times N}\rightarrow\mathbb{R}^{K}\) is determined by a vector \(\theta=(a_{1},\dots,a_{K},b_{1},\dots,b_{K})\in\mathbb{R}^{K(D+N)}\) where each \(a_{i}\) and \(b_{i}\) are \(D\)- and \(N\)-dimensional respectively, and \[\mathbf{Embed}_{\theta}(X)=\langle b_{j},\ \Psi\left(a_{j}^{T}X\right) \rangle,\ j=1,\dots,K. \tag{8}\] The following theorem shows that this mapping is permutation invariant and separating. **Theorem 3.1** ((Dym and Gortler, 2022)).: _Let \(\mathcal{X}\) be an \(S_{N}\)-invariant semi-algebraic subset of \(\mathbb{R}^{D\times N}\) of dimension \(D_{\mathcal{X}}\). Denote \(K=2D_{\mathcal{X}}+1\). Then for Lebesgue almost every \(\theta\in\mathbb{R}^{K(D+N)}\) the mapping \(\mathbf{Embed}_{\theta}:\mathcal{X}\rightarrow\mathbb{R}^{K}\) is \(S_{N}\) invariant and separating._ When choosing \(\mathcal{X}=\mathbb{R}^{D\times N}\) we get that \(D_{\mathcal{X}}=N\cdot D\). The embedding dimension of \(\mathbf{Embed}_{\theta}\) would then be \(2N\cdot D+1\). This already is a significant improvement over the cardinality of the power-sum polynomials. Another important point that we will soon use is that if \(\mathcal{X}\) is a strict subset of \(\mathbb{R}^{D\times N}\) the number of separators will depend linearly on the intrinsic dimension \(D_{\mathcal{X}}\), and not on the ambient dimension \(ND\). To conclude this subsection, we note that sort-based permutation invariants such as those we obtain when choosing \(\Psi=\Psi_{sort}\) are common in the invariant learning literature (Zhang et al., 2018, 2019). In contrast, polynomial-based choices such as \(\Psi=\Psi_{pow}\) are not so popular. However, this choice does provide us with the following corollary. 
**Corollary 3.2**.: _Under the assumptions of the previous theorem, there exists a smooth parametric function \(q_{\theta}:\mathbb{R}^{3}\rightarrow\mathbb{R}^{2D_{\mathcal{X}}+1}\) such that the separating permutation-invariant mapping \(\mathbf{Embed}_{\theta}:\mathbb{R}^{3\times N}\rightarrow\mathbb{R}^{2D_{\mathcal{X}}+1}\) defined using \(\Psi=\Psi_{pow}\) is given by_ \[\mathbf{Embed}_{\theta}(x_{1},\dots,x_{N})=\sum_{i=1}^{N}q_{\theta}(x_{i})\in\mathbb{R}^{2D_{\mathcal{X}}+1}.\] Accordingly, in all approximation results that we obtain based on the embedding \(\Psi_{pow}\), we can approximate \(\mathbf{Embed}_{\theta}\) with a function of the form \(\sum_{i=1}^{n}\mathcal{N}(x_{i})\), where \(\mathcal{N}\) is a neural network whose input and output dimensions are the same as those of \(\mathbf{Embed}_{\theta}\).

### Dimensionality of separation

From the discussion above we see that we can choose \(\mathbf{Embed}_{\alpha}\) to be a separating invariant mapping on \(\mathbb{R}^{3\times(n-2)}\), with an embedding dimension of \(K_{1}=6n-11\). It would then seem natural to choose the embedding dimension of \(\mathbf{Embed}_{\beta}\) so that separation on \(\mathbb{R}^{K_{1}\times(n^{2}-n)}\) is guaranteed. This would require a rather high embedding dimension of \(\sim n^{3}\). We note that any permutation-invariant and separating mapping on \(\mathbb{R}^{D\times N}\) with reasonable regularity will have an embedding dimension of at least \(N\cdot D\) (see (Anonymous, 2023)), and as a result we will always obtain an embedding dimension of \(\sim n^{3}\) when requiring \(\mathbf{Embed}_{\alpha},\mathbf{Embed}_{\beta}\) to be separating on all of their (ambient) domain.

Significant savings can be obtained by the following observation: while the ambient dimension of the domain of \(\mathbf{Embed}_{\beta}\) is large, we only need injectivity for multisets in this domain which were obtained from some point cloud in \(\mathbb{R}^{3\times n}\). Accordingly (once \(\alpha\) is fixed), \(\mathbf{Embed}_{\beta}\) needs only to be injective on a subset \(\mathcal{X}\) of the domain whose intrinsic dimension is at most \(3n\). Using Theorem 3.1 (see details in the proof of Theorem 3.4 stated below) we can take the embedding dimension of \(\mathbf{Embed}_{\beta}\) to be \(K_{2}=6n+1\). This idea is visualized in Figure 2(a).

We note that the advantage of the intrinsic separation technique presented here is even more pronounced when considering the implementation of \(F^{\text{1-Geo}}\) with \(T\) large. If we require the mappings \(\textbf{Embed}^{(t)}\) to be separating and invariant on their (ambient) domain, the embedding dimension \(K_{t}\) of the \(t\)-th mapping is at least \(n+1\) times larger than the previous embedding dimension \(K_{t-1}\), so that the final embedding dimension is roughly \(\sim n^{T+1}\). In contrast, since the intrinsic dimension at each step is \(3n\), we get a constant embedding dimension of \(\sim 6n\) for all \(t\), by using a variation of Theorem 3.1 for vector-multiset pairs. See Appendix E for a full explanation, and Figure 2(b) for an illustration.
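As an illustration of Equation (8) with \(\Psi=\Psi_{sort}\), the following NumPy sketch (ours, for exposition only) draws the parameters \(\theta=(a_{j},b_{j})\) at random, which by Theorem 3.1 suffices for separation for almost every \(\theta\):

```python
import numpy as np

def embed_theta(X: np.ndarray, A: np.ndarray, B: np.ndarray) -> np.ndarray:
    """Sort-based multiset embedding of Eq. (8).

    X : (D, N) point multiset;  A : (K, D) projections a_j;
    B : (K, N) mixing vectors b_j.  Returns a vector in R^K.
    """
    projected = A @ X                      # row j holds a_j^T X, shape (K, N)
    sorted_rows = np.sort(projected, 1)    # Psi_sort applied row-wise
    return np.sum(B * sorted_rows, 1)      # <b_j, Psi(a_j^T X)>, j = 1..K

D, N = 3, 10
K = 2 * D * N + 1                          # 2 * (ambient dimension) + 1
rng = np.random.default_rng(1)
A, B = rng.normal(size=(K, D)), rng.normal(size=(K, N))
X = rng.normal(size=(D, N))
# Permutation invariance: shuffling columns leaves the embedding unchanged
assert np.allclose(embed_theta(X, A, B),
                   embed_theta(X[:, rng.permutation(N)], A, B))
```

On a smaller semi-algebraic subset, as in the intrinsic-dimension argument above, \(K\) can be reduced to twice the intrinsic dimension plus one.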
### Separation by feed-forward Neural Network Architectures

To summarize our discussion, the \(\mathcal{SO}[3,n]\) geometric graph isomorphism test discussed in Theorem 2.2 can be realized as a separating invariant architecture by replacing \(\textbf{Embed}^{(1)}\) and \(\textbf{Embed}^{(2)}\) with the parametric functions \(\textbf{Embed}_{\alpha}\) and \(\textbf{Embed}_{\beta}\), respectively, as in (8), with embedding dimensions \(K_{1}=6n-11\) and \(K_{2}=6n+1\) as discussed above, and \(\Psi=\Psi_{sort}\) or \(\Psi=\Psi_{pow}\). This leads to the architecture described in Algorithm 1.

```
Input: \(X=(x_{1},...,x_{n})\in\mathbb{R}^{3\times n}\)
\(X_{[i,j]}\leftarrow\big[x_{i}^{c},x_{j}^{c},x_{i}^{c}\times x_{j}^{c}\big]\)
\(h_{[i,j]}(X)\leftarrow\textbf{Embed}_{\alpha}\{\!\!\{X_{[i,j]}^{T}x_{k}^{c}\mid k=3,\dots,n\}\!\!\}\)
\(G_{[i,j]}(X)\leftarrow X_{[i,j]}^{T}X_{[i,j]}\)
\(m_{[i,j]}(X)\leftarrow\big(G_{[i,j]}(X),h_{[i,j]}(X)\big)\)
\(F(X)\leftarrow\textbf{Embed}_{\beta}\{\!\!\{m_{[i,j]}(X)\mid 1\leq i\neq j\leq n\}\!\!\}\)
Output: \(F(X)\)
```

### Universality for Rotation-Equivariant models

Separating invariants are useful not only for proving universality of _invariant_ models, but also for _equivariant_ models. In our context, we use a result from (Villar et al., 2021), which showed that permutation-invariant and equivariant functions can be written as combinations of general invariant functions and simple equivariant functions. Since this result requires \(\mathcal{O}(3)\) invariance, we use a modification of Algorithm 2 to \(\mathcal{O}(3)\) invariance, which is described formally in the appendix in Algorithm 3. We then obtain the following equivariant universality result:

**Theorem 4.2** (Equivariant Universality).: _Let \(f:\mathbb{R}^{3\times n}\rightarrow\mathbb{R}^{3}\) be continuous, \(\mathcal{O}(3)\)-equivariant and translation and permutation invariant. Then for any compact \(M\subset\mathbb{R}^{3\times n}\) and any \(\varepsilon>0\), \(f\) can be approximated to \(\varepsilon\)-accuracy uniformly on \(M\) by functions of the form_ \[\tilde{f}(X)=\sum_{k=1}^{n}\mathcal{N}(h_{k},h_{global})x_{k}^{c},\] _where \(h_{k}(X),h_{global}(X)\) are the output of Algorithm 3 and \(\mathcal{N}\) is a fully connected neural network._

## 5 Experiments

### _Separation experiment_

To evaluate the separation power of different architectures, we constructed a dataset consisting of pairs of point clouds that are particularly difficult to separate. This dataset will be made available for public use. Each pair of point clouds \(X_{1},X_{2}\) is used as a prototype, from which we generate data samples for a binary classification task. Samples are generated by randomly choosing one of \(X_{1},X_{2}\), applying a random rotation and permutation to it, and adding noise. The task is to determine whether each sample originates from \(X_{1}\) or \(X_{2}\). We used the following challenging pairs: (i) **Hard1-Hard3**: Three pairs of 3D point clouds from (Pozdnyakov et al., 2020).
In each pair, both clouds have the same degree histogram, but are members of the set \(\mathbb{R}^{3\times n}_{distinct}\) -- which can be separated by 1-Geo according to Theorem 2.1. The distance matrices for one such pair are visualized in \(A,B\) of Figure 1. (ii) **Harder**: A pair of 3D point clouds from (Pozdnyakov and Ceriotti, 2022) that are not in \(\mathbb{R}^{3\times n}_{distinct}\), and provably cannot be separated by 1-Geo. These are \(C,D\) in Figure 1. (iii) **Cholesky dim=d**: Pairs \(X_{1},X_{2}\) of \(d\) points in \(\mathbb{R}^{d}\), with \(d=6,8,12\). All points in \(X_{1}\), \(X_{2}\) have the same degree histogram. The point clouds \(E,F\) for \(d=6\) appear in Figure 1. Further details appear in Appendix A.

\begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline Point Clouds & GramNet & GeoEGNN & EGNN & LinearEGNN & MACE & TFN & DimeNet & GVPGNN \\ \hline Hard1 [2] & 1.0 & 0.998 & 0.5 & 1.0 & 1.0 & 0.5 & 1.0 & 1.0 \\ Hard2 [2] & 1.0 & 0.97 & 0.5 & 1.0 & 1.0 & 0.5 & 1.0 & 1.0 \\ Hard3 [2] & 1.0 & 0.85 & 0.5 & 1.0 & 1.0 & 0.55 & 1.0 & 1.0 \\ Harder [1] & 1.0 & 0.899 & 0.5 & 0.5 & 1.0 & 0.5 & 1.0 & 1.0 \\ Cholesky dim=6 & 1.0 & Irrelevant & 0.5 & 0.5 & 1.0 & Irrelevant & Irrelevant & Irrelevant \\ Cholesky dim=8 & 1.0 & Irrelevant & 0.5 & 0.5 & 1.0 & Irrelevant & Irrelevant & Irrelevant \\ Cholesky dim=12 & N/A & Irrelevant & 0.5 & 0.5 & 0.5 & Irrelevant & Irrelevant & Irrelevant \\ \hline \hline \end{tabular} \end{table} Table 1: Results of our models on challenging point clouds. [1] (Pozdnyakov and Ceriotti, 2022); [2] (Pozdnyakov et al., 2020), Fig. S4.

Figure 2: The standard method for constructing architectures that are equivalent to isomorphism tests uses injective multiset mappings on the ambient domain. The embedding dimension of such mappings increases exponentially with depth. In contrast, Theorem 3.1 allows for injective multiset functions whose dimensionality is twice the intrinsic dimension of the features, which is always \(3n\). The figure shows the implementation of these two approaches for the computation of \(F^{\text{2-Geo}}\) and \(F^{\text{1-Geo}}\).

The results appear in Table 1. First, as our theory predicts, \(\mathbf{GramNet}\) achieves perfect separation on all examples, while \(\mathbf{GeoEGNN}\) achieves good but not perfect separation at the \(0.1\) noise level we use. Next, note that EGNN (Victor Garcia Satorras, 2021) fails on all examples. Surprisingly, replacing the neural networks in EGNN with simple linear functions (LinearEGNN) does yield successful separation of examples in \(\mathbb{R}^{3\times n}_{distinct}\), as predicted in Theorem 2.1. Finally, note that Tensor Field Networks (Thomas et al., 2018) does not separate our examples well, while MACE (Batatia et al.), DimeNet (Gasteiger et al., 2020) and GVPGNN (Jing et al.) do. Of these methods, only MACE is applicable to problems with \(d>3\), and it fails to separate in the \(d=12\) case. \(\mathbf{GramNet}\) was unable to run for \(d=12\) due to memory constraints.

### Invariant regression on QM9

We evaluated our architectures on the QM9 dataset for molecular property prediction (Ramakrishnan et al., 2014). To implement \(\mathbf{GeoEGNN}\) we used the original implementation of \(\mathbf{EGNN}\) (Victor Garcia Satorras, 2021), augmented by the addition of our \(h_{[i,j]}\) of Equation (5) as edge features. As shown in Table 2, this minor modification of EGNN typically leads to improved results.
In contrast, despite its excellent separation properties, \(\mathbf{GramNet}\) is not competitive on this task.

## 6 Conclusion

We presented fully separating architectures whose embedding dimension depends linearly on the point cloud's dimension, in stark contrast to contemporary approaches with an exponential dependence. Our implementation of these architectures achieves good separation results in practice and yields improved results on several tasks on the QM9 benchmark. We believe these results will open the door to further improvements, both in the theoretical understanding of what is necessary for separation and in the development of separating architectures with good practical performance.

## 7 Related Work

WL equivalence. The relationship between the \(k\)-WL test and GNNs is very well studied. Proofs of equivalence of GNNs and \(k\)-WL often assume a countable domain (Xu et al., 2018) or require separation only for a single graph (Morris et al., 2018). To the best of our knowledge, our separation result is the first one in which, for a fixed parameter vector (and in fact for almost all such vectors), separation is guaranteed for _all_ graphs with features in a _continuous_ domain. This type of separation could be obtained using the power-sum methodology in (Maron et al., 2019), but the complexity of this construction is exponentially worse than ours (see Subsections 3.1 and 3.2).

Complete invariants and universality. As mentioned earlier, several works describe \(\mathcal{SO}[3,n]\)- and \(\mathcal{O}[3,n]\)-equivariant point-cloud architectures that are universal. However, these rely on high-dimensional representations of \(SO(d)\) (Dym and Maron, 2020; Finkelshtein et al., 2022; Gasteiger et al., 2021) or \(S_{n}\) (Lim et al., 2022). In the planar case \(d=2\), universality using low-dimensional features was achieved in (Bokman et al., 2022). For general \(d\), a complete test similar to our 2-Geo was proposed in (Kurlin, 2022). However, it uses Gram-Schmidt orthogonalization, which leads to discontinuities at point clouds with linearly-dependent points. Moreover, the complete invariant features defined there are not vectors, but rather sets of sets. As a result, measuring invariant distances for \(d=3\) requires \(O(n^{7.5}+n^{3.5}\log^{3}(n))\) arithmetic operations, whereas using GramNet invariant features only requires \(O(n^{4}\log(n))\) operations. Finally, we note that more efficient tests for equivalence of geometric graphs were suggested in (Brass and Knauer, 2000), but there does not seem to be a straightforward way to modify these constructions to efficiently compute a complete, continuous invariant feature.

Weaker notions of universality. We showed that 1-Geo is complete on the subset \(\mathbb{R}^{3\times n}_{distinct}\). Similar results for a simpler algorithm, and with additional restrictions, were obtained in (Widdowson and Kurlin, 2022). Efficient separation/universality can also be obtained for point clouds with distinct principal axes (Puny et al., 2021; Kurlin, 2022), or when only considering permutation (Qi et al., 2017) or rigid (Wang et al., 2022) symmetries, rather than considering both symmetries simultaneously.
\begin{table} \begin{tabular}{l c c c c c c c c c c c c} \hline \hline Property & \(\alpha\) & \(\varepsilon_{HOMO}\) & \(H\) & \(\varepsilon_{LUMO}\) & \(\Delta\varepsilon\) & \(\mu\) & \(C_{\nu}\) & \(G\) & \(R^{2}\) & \(U\) & \(U_{0}\) & \(ZPVE\) \\ \([units]\) & \(bohr^{3}\) & meV & meV & meV & meV & D & cal/mol K & meV & \(bohr^{3}\) & meV & meV & meV \\ \hline **EGNN** & 0.073 & 29 & 12 & **25** & 48 & **0.029** & **0.031** & 12 & 0.106 & 12 & 11 & **1.55** \\ **GeoEGNN** & **0.068** & **27.9** & **11.6** & 38.3 & **45.8** & 0.032 & **0.031** & **10.75** & **0.1004** & **11.5** & **10.5** & 1.61 \\ \hline \hline \end{tabular} \end{table} Table 2: Results on the QM9 dataset.

**Acknowledgements.** N.D. acknowledges the support of the Horev Fellowship.
2309.11810
Extragalactic Test of General Relativity from Strong Gravitational Lensing by using Artificial Neural Networks
This study aims to test the validity of general relativity (GR) on kiloparsec scales by employing a newly compiled galaxy-scale strong gravitational lensing (SGL) sample. We utilize the distance sum rule within the Friedmann-Lema\^{\i}tre-Robertson-Walker metric to obtain cosmology-independent constraints on both the parameterized post-Newtonian parameter $\gamma_{\rm PPN}$ and the spatial curvature $\Omega_{k}$, which overcomes the circularity problem induced by the presumption of a cosmological model grounded in GR. To calibrate the distances in the SGL systems, we introduce a novel nonparametric approach, Artificial Neural Network (ANN), to reconstruct a smooth distance--redshift relation from the Pantheon+ sample of type Ia supernovae. Our results show that $\gamma_{\rm PPN}=1.16_{-0.12}^{+0.15}$ and $\Omega_k=0.89_{-1.00}^{+1.97}$, indicating a spatially flat universe with the conservation of GR (i.e., $\Omega_k=0$ and $\gamma_{\rm PPN}=1$) is basically supported within $1\sigma$ confidence level. Assuming a zero spatial curvature, we find $\gamma_{\rm PPN}=1.09_{-0.10}^{+0.11}$, representing an agreement with the prediction of 1 from GR to a 9.6\% precision. If we instead assume GR holds (i.e., $\gamma_{\rm PPN}=1$), the curvature parameter constraint can be further improved to be $\Omega_k=0.11_{-0.47}^{+0.78}$. These resulting constraints demonstrate the effectiveness of our method in testing GR on galactic scales by combining observations of strong lensing and the distance--redshift relation reconstructed by ANN.
Jing-Yu Ran, Jun-Jie Wei
2023-09-21T06:28:39Z
http://arxiv.org/abs/2309.11810v2
# Extragalactic Test of General Relativity from Strong Gravitational Lensing by using Artificial Neural Networks ###### Abstract This study aims to test the validity of general relativity (GR) on kiloparsec scales by employing a newly compiled galaxy-scale strong gravitational lensing (SGL) sample. We utilize the distance sum rule within the Friedmann-Lemaitre-Robertson-Walker metric to obtain cosmology-independent constraints on both the parameterized post-Newtonian parameter \(\gamma_{\rm PPN}\) and the spatial curvature \(\Omega_{k}\), which overcomes the circularity problem induced by the presumption of a cosmological model grounded in GR. To calibrate the distances in the SGL systems, we introduce a novel nonparametric approach, the Artificial Neural Network (ANN), to reconstruct a smooth distance-redshift relation from the Pantheon+ sample of type Ia supernovae. Our results show that \(\gamma_{\rm PPN}=1.16^{+0.15}_{-0.12}\) and \(\Omega_{k}=0.89^{+1.97}_{-1.00}\), indicating that a spatially flat universe with the conservation of GR (i.e., \(\Omega_{k}=0\) and \(\gamma_{\rm PPN}=1\)) is basically supported within the \(1\sigma\) confidence level. Assuming a zero spatial curvature, we find \(\gamma_{\rm PPN}=1.09^{+0.11}_{-0.10}\), representing an agreement with the prediction of 1 from GR to a 9.6% precision. If we instead assume GR holds (i.e., \(\gamma_{\rm PPN}=1\)), the curvature parameter constraint can be further improved to be \(\Omega_{k}=0.11^{+0.78}_{-0.47}\). These resulting constraints demonstrate the effectiveness of our method in testing GR on galactic scales by combining observations of strong lensing and the distance-redshift relation reconstructed by ANN.

## I Introduction

As an important cornerstone of modern physics, Einstein's theory of general relativity (GR) has withstood very strict tests (e.g., [1; 2; 3; 4]). But testing GR at a much higher precision is still a vital task, because any possible violation of GR would have profound effects on our understanding of fundamental physics. Within the parameterized post-Newtonian (PPN) formalism, GR predicts that the PPN parameter \(\gamma_{\rm PPN}\), which describes the amount of space curvature produced per unit rest mass, should be exactly 1 [5]. Measuring \(\gamma_{\rm PPN}\) therefore serves as a test of the validity of GR on large scales. That is, any deviation from \(\gamma_{\rm PPN}=1\) implies a possible violation of GR. On solar system scales, the GR prediction for \(\gamma_{\rm PPN}\) has been confirmed with high accuracy. By measuring the round-trip travel time of radar signals passing near the Sun, the Cassini spacecraft yielded \(\gamma_{\rm PPN}=1+(2.1\pm 2.3)\times 10^{-5}\) [6]. However, extragalactic tests of GR are still insufficient and much less precise. On galactic scales, strong gravitational lensing (SGL), combined with stellar kinematics in the lensing galaxy, provides an effective way to test the validity of GR by constraining the PPN parameter \(\gamma_{\rm PPN}\). The pioneering work by Ref. [7] first utilized this approach and reported a result of \(\gamma_{\rm PPN}=0.98\pm 0.07\) based on observations of 15 elliptical lensing galaxies from the Sloan Lens ACS Survey. Since then, numerous studies have been conducted to test GR using different SGL samples [8; 9; 10; 11; 12; 13; 14]. In this paper, we further explore the validity of GR by employing a newly compiled SGL sample [15], which consists of 161 galaxy-scale strong lensing systems.
This larger SGL sample allows us to perform a more comprehensive analysis and obtain further insights into the behavior of gravity on galactic scales.

In practice, in order to constrain the PPN parameter \(\gamma_{\rm PPN}\) using SGL systems, one has to know a ratio of three angular diameter distances (i.e., the distances from the observer to the lens, \(D_{l}\), the observer to the source, \(D_{s}\), and the lens to the source, \(D_{ls}\)). In most previous works, the required distance ratio is calculated within the context of the standard \(\Lambda\)CDM cosmological model. However, \(\Lambda\)CDM itself is established on the framework of GR, which leads to a circularity problem in testing GR [13; 14]. To circumvent this problem, we will introduce the distance sum rule (DSR) in the Friedmann-Lemaitre-Robertson-Walker (FLRW) metric. The two distances \(D_{l}\) and \(D_{s}\) can be directly determined from observations of type Ia supernovae (SNe Ia), but not the distance \(D_{ls}\). The DSR enables us to convert \(D_{ls}\) into a relationship with \(D_{l}\), \(D_{s}\), and the spatial curvature \(\Omega_{k}\). Based on the DSR in the FLRW metric, cosmology-independent constraints on both \(\gamma_{\rm PPN}\) and \(\Omega_{k}\) can thus be obtained by combining observations of strong lensing and SNe Ia [10; 14].

Very recently, by employing the Gaussian Process (GP) method, Liu et al. [13] reconstructed a smooth distance-redshift relation directly from SN Ia observations to calibrate the distances in the SGL sample. GP allows for the reconstruction of a function from a dataset without assuming a specific model or parameterization, and it has been widely used in cosmological research [16; 17; 18; 19; 20; 21]. In the GP analysis, the errors in the observational data are assumed to follow a Gaussian distribution [22]. However, the actual observations might not follow Gaussian distributions, so this may be a strong assumption for reconstructing a function from observational data. Moreover, due to the sparsity and scatter of data points at high redshifts, the GP-reconstructed function from SN Ia data exhibits strange oscillations with large uncertainties. To address these concerns and ensure the reliability of the reconstructed function, we employ the Artificial Neural Network (ANN) method, which is a machine learning technique that has been proven to be a "universal approximator" capable of reconstructing a great variety of functions [23; 24]. Thanks to this powerful property of neural networks, methods based on ANNs have been widely used in regression and estimation tasks. In this work, we will reconstruct the distance-redshift relation from SN Ia data using the ANN method, utilizing a code developed in Ref. [25].

This paper is organized as follows: in Section II, we introduce the methodology and observations used for testing GR on galactic scales. Cosmology-independent constraints on \(\gamma_{\rm PPN}\) and \(\Omega_{k}\) are shown in Section III. In Section IV, we make a summary and end with some discussions.

## II Methodology and data

In the weak-field limit, the metric of space-time can be characterized as \[{\rm d}s^{2}=c^{2}{\rm d}t^{2}\left(1-\frac{2GM}{c^{2}r}\right)-{\rm d}r^{2}\left(1+\frac{2\gamma_{\rm PPN}GM}{c^{2}r}\right)-r^{2}{\rm d}\Omega^{2}\, \tag{1}\] where \(\gamma_{\rm PPN}\) is the PPN parameter, \(M\) is the mass of the central object, and \(\Omega\) is the angle in the invariant orbital plane. In the framework of GR, \(\gamma_{\rm PPN}\) is equal to unity.
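To illustrate the ANN reconstruction step described above (schematically only; this is not the code of Ref. [25], and the network width, learning rate, and variable names are our assumptions), a small fully connected network can be fit to the SN Ia distance-redshift data as follows:

```python
import torch
import torch.nn as nn

# z_sn, d_sn, d_err: 1-D tensors of SN Ia redshifts, dimensionless comoving
# distances, and their uncertainties (assumed prepared from Pantheon+).
def train_ann(z_sn, d_sn, d_err, epochs=3000):
    net = nn.Sequential(                # a single hidden layer; the width
        nn.Linear(1, 128), nn.ReLU(),   # is an illustrative choice
        nn.Linear(128, 1),
    )
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    z = z_sn.reshape(-1, 1)
    for _ in range(epochs):
        opt.zero_grad()
        pred = net(z).squeeze(-1)
        # chi^2-like loss: weight each SN by its distance uncertainty
        loss = torch.mean(((pred - d_sn) / d_err) ** 2)
        loss.backward()
        opt.step()
    return net

# After training, the smooth d(z) is evaluated at each SGL system's
# redshifts, e.g. d_l = net(torch.tensor([[z_l]])).item(), and likewise d_s.
```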
### Gravitational Lensing Theory

The main idea of testing the validity of GR via SGL systems is that the mass enclosed within the Einstein radius derived separately from the gravitational theory and the dynamical theory should be equivalent, i.e., \(M_{\rm E}^{\rm eff}=M_{\rm E}^{\rm dyn}\). From the theory of gravitational lensing [26], the Einstein angle \(\theta_{\rm E}\) reflecting the angular separations between multiple images is related to the gravitational mass \(M_{\rm E}^{\rm eff}\) through \[\theta_{\rm E}=\sqrt{\frac{1+\gamma_{\rm PPN}}{2}}\left(\frac{4GM_{\rm E}^{\rm eff}}{c^{2}}\frac{D_{ls}}{D_{l}D_{s}}\right)^{1/2}\, \tag{2}\] where \(D_{l}\), \(D_{s}\), and \(D_{ls}\) are, respectively, the angular diameter distances from the observer to the lens, the observer to the source, and the lens to the source. By introducing the Einstein radius \(R_{\rm E}=D_{l}\theta_{\rm E}\), Equation (2) can be rearranged as \[\frac{GM_{\rm E}^{\rm eff}}{R_{\rm E}}=\frac{2}{1+\gamma_{\rm PPN}}\frac{c^{2}}{4}\frac{D_{s}}{D_{ls}}\theta_{\rm E}. \tag{3}\]

To estimate the dynamical mass \(M_{\rm E}^{\rm dyn}\) from the spectroscopic measurement of the lens velocity dispersion, one must first set a mass distribution model for the lensing galaxy. Here we use the common mass model with power-law density profiles [27; 15]: \[\left\{\begin{array}{l}\rho(r)=\rho_{0}\left(\frac{r}{r_{0}}\right)^{-\alpha}\\ \nu(r)=\nu_{0}\left(\frac{r}{r_{0}}\right)^{-\delta}\\ \beta(r)=1-\sigma_{t}^{2}/\sigma_{r}^{2}\,\end{array}\right. \tag{4}\] where \(r\) is defined as the spherical radial coordinate from the lens centre, \(\rho\left(r\right)\) is the total (including luminous and dark matter) mass density distribution, and \(\nu\left(r\right)\) represents the distribution of luminous density. The parameter \(\beta\left(r\right)\) describes the anisotropy of the stellar velocity dispersion, where \(\sigma_{t}\) and \(\sigma_{r}\) are the velocity dispersions in the tangential and radial directions, respectively. In the literature, \(\beta\) is commonly assumed to be independent of \(r\) (e.g., [28; 27]). Following previous studies [13; 14; 15; 9; 7; 10], we set a Gaussian prior \(\beta=0.18\pm 0.13\), informed by the constraint from a well-studied sample of elliptical galaxies [29]. That is, \(\beta\) will be marginalized using a Gaussian prior of \(\beta=0.18\pm 0.13\) over the \(2\sigma\) range of \([-0.08,\ 0.44]\). Also, \(\alpha\) and \(\delta\) are the power-law indices of the total mass density profile and the luminosity density profile, respectively. It has been confirmed in previous works [15; 30] that \(\alpha\) is significantly correlated with the lens redshift \(z_{l}\) and the surface mass density of the lensing galaxy. Therefore, we treat the parametrized model of \(\alpha\) as [15] \[\alpha=\alpha_{0}+\alpha_{z}z_{l}+\alpha_{s}\log_{10}\tilde{\Sigma}\, \tag{5}\] where \(\alpha_{0}\), \(\alpha_{z}\) and \(\alpha_{s}\) are arbitrary constants. Here \(\tilde{\Sigma}\) stands for the normalized surface mass density, and is expressed as \(\tilde{\Sigma}=\frac{\left(\sigma_{0}/100\ {\rm km\ s}^{-1}\right)^{2}}{R_{\rm eff}/10\ h^{-1}\ {\rm kpc}}\), where \(\sigma_{0}\) is the observed velocity dispersion, \(R_{\rm eff}\) is the lensing galaxy's half-light radius, and \(h=H_{0}/(100\ {\rm km\ s}^{-1}\ {\rm Mpc}^{-1})\) is the reduced Hubble constant.
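For illustration, Equation (5) and the normalized surface mass density can be evaluated as in the short Python sketch below (the numerical parameter values are placeholders, not fitted results):

```python
import numpy as np

def power_law_index(z_l, sigma0, R_eff, h, alpha0, alpha_z, alpha_s):
    """Evaluate Eq. (5): alpha = alpha0 + alpha_z*z_l + alpha_s*log10(Sigma~).

    sigma0 in km/s; R_eff in kpc, normalized by 10 h^-1 kpc as above.
    """
    sigma_tilde = (sigma0 / 100.0) ** 2 / (R_eff / (10.0 / h))
    return alpha0 + alpha_z * z_l + alpha_s * np.log10(sigma_tilde)

# Illustrative values only (not the constraints reported in this work):
alpha = power_law_index(z_l=0.3, sigma0=250.0, R_eff=5.0, h=0.7,
                        alpha0=2.0, alpha_z=-0.3, alpha_s=0.6)
print(f"alpha = {alpha:.3f}")
```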
Following the well-known radial Jeans equation in spherical coordinates [31], the radial velocity dispersion of the luminous matter \(\sigma_{r}\) in early-type lens galaxies takes the form \[\sigma_{r}^{2}\left(r\right)=\frac{G\int_{r}^{\infty}{\rm d}r^{\prime}r^{ \prime 2\beta-2}\nu\left(r^{\prime}\right)M\left(r^{\prime}\right)}{r^{2 \beta}\nu\left(r\right)}\, \tag{6}\] where \(M\left(r\right)\) is the total mass contained within a sphere of radius \(r\), \[M\left(r\right)=\int_{0}^{r}{\rm d}r^{\prime}4\pi r^{\prime 2}\rho\left(r^{ \prime}\right)=4\pi\frac{\rho_{0}}{r_{0}^{-\alpha}}\frac{r^{3-\alpha}}{3- \alpha}. \tag{7}\] The dynamical mass \(M_{\rm E}^{\rm dyn}\) enclosed within a cylinder of radius equal to the Einstein radius \(R_{\rm E}\) can be written as [15] \[M_{\rm E}^{\rm dyn}=2\pi^{3/2}\frac{R_{\rm E}^{3-\alpha}}{3-\alpha}\frac{ \Gamma\left(\frac{\alpha-1}{2}\right)}{\Gamma\left(\frac{\alpha}{2}\right)} \frac{\rho_{0}}{r_{0}^{-\alpha}}\, \tag{8}\] where \(\Gamma(x)\) is Euler's Gamma function. By combining Equations (7) and (8), we get the relation between \(M\left(r\right)\) and \(M_{\rm E}^{\rm dyn}\): \[M(r)=\frac{2}{\sqrt{\pi}}\frac{1}{\lambda(\alpha)}\left(\frac{r}{R_{\rm E}} \right)^{3-\alpha}M_{\rm E}^{\rm dyn}\, \tag{9}\] where \(\lambda(\alpha)=\Gamma\left(\frac{\alpha-1}{2}\right)/\Gamma\left(\frac{\alpha} {2}\right)\). By substituting Equations (9) and (4) into Equation (6), we obtain \[\sigma_{r}^{2}\left(r\right)=\frac{2}{\sqrt{\pi}}\frac{GM_{\rm E}^{\rm dyn}}{R_{ \rm E}}\frac{1}{\xi-2\beta}\frac{1}{\lambda(\alpha)}\left(\frac{r}{R_{\rm E}} \right)^{2-\alpha}\, \tag{10}\] where \(\xi=\alpha+\delta-2\). The actual velocity dispersion of the lensing galaxy is the luminosity-weighted average along the line of sight, measured over the effective spectroscopic aperture \(R_{\rm A}\); it can be expressed as (see Ref. [15] for more details) \[\sigma_{0}^{2}\left(\leq R_{\rm A}\right)=\frac{c^{2}}{2\sqrt{\pi}}\frac{2}{1+ \gamma_{\rm PPN}}\frac{D_{s}}{D_{ls}}\theta_{\rm E}F\left(\alpha,\ \delta,\ \beta\right)\left(\frac{R_{\rm A}}{R_{\rm E}}\right)^{2-\alpha}\, \tag{11}\] where \[F\left(\alpha,\ \delta,\ \beta\right)=\frac{3-\delta}{\left(\xi-2\beta\right) \left(3-\xi\right)}\frac{\lambda\left(\xi\right)-\beta\lambda\left(\xi+2 \right)}{\lambda\left(\alpha\right)\lambda\left(\delta\right)}. \tag{12}\] The theoretical value of the velocity dispersion inside the radius \(R_{\rm eff}/2\) can then be calculated by [27] \[\sigma_{0}^{\rm th}=\sqrt{\frac{c^{2}}{2\sqrt{\pi}}\frac{2}{1+\gamma_{\rm PPN}} \frac{D_{s}}{D_{ls}}\theta_{\rm E}F\left(\alpha,\ \delta,\ \beta\right)\left(\frac{\theta_{\rm eff}}{2\theta_{\rm E}}\right)^{2-\alpha}}\, \tag{13}\] where \(\theta_{\rm eff}=R_{\rm eff}/D_{l}\) denotes the effective angular radius of the lensing galaxy. Based on the spectroscopic data, one can measure the luminosity-weighted average of the line-of-sight velocity dispersion \(\sigma_{\rm ap}\) within the circular aperture of angular radius \(\theta_{\rm ap}\). In practice, \(\sigma_{\rm ap}\) should be normalized to the velocity dispersion within the typical physical aperture with a radius \(R_{\rm eff}/2\), \[\sigma_{0}^{\rm obs}=\sigma_{\rm ap}\left[\theta_{\rm eff}/(2\theta_{\rm ap}) \right]^{\eta}\, \tag{14}\] where the value of the correction factor is taken as \(\eta=-0.066\pm 0.035\)[32]. 
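Equations (12)-(14) translate straightforwardly into code. A hedged sketch (our function names; angles in radians and \(c\) in km s\(^{-1}\), so the dispersions come out in km s\(^{-1}\)):

```python
import numpy as np
from scipy.special import gamma as Gamma

C_KMS = 299792.458  # speed of light in km/s

def lam(x):
    """lambda(x) = Gamma((x-1)/2) / Gamma(x/2), as defined below Equation (9)."""
    return Gamma((x - 1) / 2) / Gamma(x / 2)

def F(alpha, delta, beta):
    """Equation (12)."""
    xi = alpha + delta - 2
    return (3 - delta) / ((xi - 2 * beta) * (3 - xi)) \
        * (lam(xi) - beta * lam(xi + 2)) / (lam(alpha) * lam(delta))

def sigma0_theory(gamma_ppn, Dr, theta_E, theta_eff, alpha, delta, beta):
    """Theoretical velocity dispersion, Equation (13); Dr = D_s / D_ls."""
    return np.sqrt(C_KMS**2 / (2 * np.sqrt(np.pi)) * 2 / (1 + gamma_ppn)
                   * Dr * theta_E * F(alpha, delta, beta)
                   * (theta_eff / (2 * theta_E))**(2 - alpha))

def sigma0_observed(sigma_ap, theta_eff, theta_ap, eta=-0.066):
    """Aperture-corrected velocity dispersion, Equation (14)."""
    return sigma_ap * (theta_eff / (2 * theta_ap))**eta
```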
Then, the total uncertainty of \(\sigma_{0}^{\rm obs}\) can be obtained by \[\left(\Delta\sigma_{0}^{\rm SGL}\right)^{2}=\left(\Delta\sigma_{0}^{\rm stat} \right)^{2}+\left(\Delta\sigma_{0}^{\rm AC}\right)^{2}+\left(\Delta\sigma_{0} ^{\rm sys}\right)^{2}\, \tag{15}\] where \(\Delta\sigma_{0}^{\rm stat}\) is the statistical error propagated from the measurement error of \(\sigma_{\rm ap}\), and \(\Delta\sigma_{0}^{\rm AC}\) is the aperture-correction-induced error propagated from the uncertainty of \(\eta\). The systematic error due to the extra mass contribution from matter outside the lensing galaxy along the line of sight, \(\Delta\sigma_{0}^{\rm sys}\), is taken to be \(\sim 3\%\) of the velocity dispersion [33]. Once we know the ratio of the angular diameter distances \(D_{s}/D_{ls}\), the constraints on the PPN parameter \(\gamma_{\rm PPN}\) can be derived by comparing the observational and theoretical values of the velocity dispersions (see Equations (13) and (14)). Conventionally, the distance ratio \(D_{s}/D_{ls}\) is calculated within the standard \(\Lambda\)CDM cosmological model [9; 10]. However, \(\Lambda\)CDM itself is built on the framework of GR, and this leads to a circularity problem [13; 14]. To avoid this problem, we will use a cosmological-model-independent method based upon the sum rule of distances in the FLRW metric to constrain \(\gamma_{\rm PPN}\). ### Distance Sum Rule In a homogeneous and isotropic space, the dimensionless comoving distance \(d\left(z_{l},\ z_{s}\right)\equiv\left(H_{0}/c\right)\left(1+z_{s}\right)D_{ A}\left(z_{l},\ z_{s}\right)\) can be written as \[d(z_{l},z_{s})=\frac{1}{\sqrt{|\Omega_{k}|}}{\rm sinn}\left(\sqrt{|\Omega_{k}| }\int_{z_{l}}^{z_{s}}\frac{{\rm d}z^{\prime}}{E(z^{\prime})}\right)\, \tag{16}\] where \(\Omega_{k}\) denotes the spatial curvature density parameter at the present time and \(E(z)=H(z)/H_{0}\) is the dimensionless expansion rate. Also, \({\rm sinn}(x)\) is \(\sinh(x)\) when \(\Omega_{k}>0\), \(x\) when \(\Omega_{k}=0\), and \(\sin(x)\) when \(\Omega_{k}<0\). By applying the notations \(d(z)\equiv d\left(0,\ z\right)\), \(d_{ls}\equiv d\left(z_{l},\ z_{s}\right)\), \(d_{l}\equiv d\left(0,\ z_{l}\right)\), and \(d_{s}\equiv d\left(0,\ z_{s}\right)\), one can derive a sum rule of distances along the null geodesics of the FLRW metric as [34; 35; 36] \[\frac{d_{ls}}{d_{s}}=\sqrt{1+\Omega_{k}d_{l}^{2}}-\frac{d_{l}}{d_{s}}\sqrt{1+ \Omega_{k}d_{s}^{2}}. \tag{17}\] This relation provides a cosmology-independent probe to test both the spatial curvature and the FLRW metric. The validity of the FLRW metric can be tested by comparing the \(\Omega_{k}\) values derived from the three distances (\(d_{l}\), \(d_{s}\), and \(d_{ls}\)) for any two pairs of (\(z_{l}\), \(z_{s}\)). With Equation (17), the distance ratio \(D_{s}/D_{ls}\)1 in Equation (13) is only related to the curvature parameter \(\Omega_{k}\) and the dimensionless distances \(d_{l}\) and \(d_{s}\). If independent measurements of \(d_{l}\) and \(d_{s}\) are given, we can put constraints on both \(\gamma_{\rm PPN}\) and \(\Omega_{k}\) from Equations (13) and (17) without assuming any specific cosmological model. Footnote 1: Note that \(D_{s}/D_{ls}\) is actually equal to the dimensionless distance ratio \(d_{s}/d_{ls}\). 
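Both relations fit in a few lines of Python. Note that in our analysis \(d_{l}\) and \(d_{s}\) come from the ANN-reconstructed \(d(z)\) rather than from any assumed \(E(z)\); the `E` argument below is included only to illustrate Equation (16), and all names are ours:

```python
import numpy as np
from scipy.integrate import quad

def d_comoving(z1, z2, omega_k, E):
    """Dimensionless comoving distance of Equation (16); E(z) = H(z)/H0."""
    chi, _ = quad(lambda z: 1.0 / E(z), z1, z2)
    if omega_k > 0:
        return np.sinh(np.sqrt(omega_k) * chi) / np.sqrt(omega_k)
    if omega_k < 0:
        return np.sin(np.sqrt(-omega_k) * chi) / np.sqrt(-omega_k)
    return chi

def dls_over_ds(dl, ds, omega_k):
    """Distance sum rule, Equation (17): d_ls/d_s from d_l, d_s, and Omega_k."""
    return np.sqrt(1.0 + omega_k * dl**2) - (dl / ds) * np.sqrt(1.0 + omega_k * ds**2)
```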
### Artificial Neural Network To calibrate the distances \(d_{l}\) and \(d_{s}\) of the SGL systems (i.e., the \(d_{l}\) and \(d_{s}\) terms on the right side of Equation (17)), we use a new nonparametric approach, the ANN, to reconstruct a smooth distance-redshift relation \(d(z)\) from SN Ia observations. ANNs possess several desirable properties, including high-level abstraction of neural input-output transformation, the ability to generalize from learned instances to new unseen data, adaptability, self-learning, fault tolerance, and nonlinearity [37]. According to the universal approximation theorem [38; 23], ANNs can serve as universal function approximators, simulating arbitrary input-output relationships using multilayer feedforward networks with a sufficient number of hidden units. Therefore, we can input the redshift \(z\) into the neural network, with the corresponding comoving distance \(d(z)\) and its associated error \(\sigma_{d(z)}\) as the desired outputs. Once the network has been trained using the Pantheon+ sample, we obtain an approximate function capable of predicting both \(d(z)\) and its error \(\sigma_{d(z)}\) at any given redshift \(z\). Ref. [25] has developed a Python code for the reconstruction of functions from observational data employing an ANN. They substantiated the reliability of the reconstructed functions by estimating cosmological parameters from the reconstructed Hubble parameter \(H(z)\) and luminosity distance \(D_{L}(z)\), in direct comparison with observational data. In our study, we employ this code to reconstruct the distance-redshift relation. The general structure of an ANN consists of an input layer, one or more hidden layers, and an output layer. The basic units of these layers are referred to as neurons, each applying a linear transformation followed by a nonlinear activation function to its input vector. In accordance with Ref. [25], we employ the Exponential Linear Unit (ELU) as the activation function, defined as [39]: \[f\left(x\right)=\left\{\begin{array}{cc}x&x>0\\ \alpha\left(e^{x}-1\right)&x\leq 0\end{array}\right., \tag{18}\] where the hyperparameter \(\alpha\) is set to 1. The network is trained by minimizing a loss function, which quantitatively measures the discrepancy between the ground truth and predicted values. In this analysis, we adopt the mean absolute error (MAE), also known as the L1 loss, as our loss function. The linear weights and biases within the network are optimized using the back-propagation algorithm. We employ the Adam optimizer [40], a gradient-based optimization technique, to iteratively update the network parameters during training; this choice of optimizer also contributes to faster convergence. After multiple iterations, the network parameters are adjusted to minimize the loss. We consider training converged when the loss no longer decreases, which we find requires \(3\times 10^{5}\) iterations. Batch Normalization [41] is a technique designed to stabilize the distribution of inputs within each layer, allowing for higher learning rates and reduced sensitivity to initialization. To determine the optimal network model, we train the network using 1701 SNe Ia from the Pantheon+ sample (more on this below) and assess the fitting effect through K-fold cross-validation [42]. 
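As a schematic of this setup, here is a minimal PyTorch sketch mirroring the stated configuration (one hidden layer, ELU activation, L1 loss, Adam optimizer). It is our illustration, not the actual code of Ref. [25]; the 4096-neuron width anticipates the architecture selected in the next paragraph.

```python
import torch
import torch.nn as nn

class DistanceNet(nn.Module):
    """Map redshift z to the outputs d(z) and sigma_d(z)."""
    def __init__(self, n_hidden=4096):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, n_hidden),   # input: redshift z
            nn.ELU(alpha=1.0),        # Equation (18) with alpha = 1
            nn.Linear(n_hidden, 2),   # outputs: d(z) and its error
        )

    def forward(self, z):
        return self.net(z)

model = DistanceNet()
loss_fn = nn.L1Loss()                        # MAE (L1) loss
optimizer = torch.optim.Adam(model.parameters())

def train_step(z, target):
    """One gradient step; z is an (N,1) tensor, target an (N,2) tensor."""
    optimizer.zero_grad()
    loss = loss_fn(model(z), target)
    loss.backward()
    optimizer.step()
    return loss.item()
```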
In K-fold cross-validation, the training set is divided into \(k\) smaller sets, with \(k-1\) folds used as training data for model training and the remaining fold used for validation. This process is repeated \(k\) times, with each fold serving as the validation data once. The final performance of the model is determined by averaging the performance across these \(k\) iterations. This approach is particularly useful when the number of samples available for learning is insufficient to split into traditional train, validation, and test sets, as is the case in our analysis. Additionally, it helps mitigate issues arising from the randomness in data partitioning. As a general guideline, we select \(k=10\) for our cross-validation procedure and use the mean squared error (MSE) as the metric for validating the performance of the model. Through our experimentation, we find that the network model with a single hidden layer comprising 4096 neurons and without batch normalization yields the best results. We conduct comparisons with models having varying numbers of hidden layers and observe diminished performance as the number of hidden layers increases, accompanied by increased computational resource consumption. Regarding the number of neurons in the hidden layer, we observe a negligible impact on the results: the final MSE values consistently hover around 0.0042, regardless of whether the number of neurons is set to 1024, 2048, 4096, or 8192. Since the final MSE value with 4096 neurons is slightly smaller than in the other three configurations, we select this configuration. The validation MSE with and without batch normalization is 0.0049 and 0.0042, respectively. Subsequently, we employ the optimal network model, as described above, to reconstruct our distance-redshift curve. ### Supernova Data In order to reconstruct the distance function \(d(z)\), we choose the latest combined sample of SNe Ia, called Pantheon+ [43], which consists of 1701 light curves of 1550 SNe Ia covering the redshift range \(0.001<z<2.3\). For each SN Ia, the distance modulus \(\mu\) is related to the luminosity distance \(D_{L}\) by \[\mu(z)=5\log_{10}\left[\frac{D_{L}(z)}{\text{Mpc}}\right]+25\, \tag{19}\] and the observed distance modulus is \[\mu_{\text{obs}}(z)=m_{B}(z)+\kappa\cdot X_{1}-\omega\cdot\mathcal{C}-M_{B}\, \tag{20}\] where \(m_{B}\) is the rest-frame \(B\)-band peak magnitude, \(X_{1}\) and \(\mathcal{C}\) represent the light-curve stretch and the SN color at maximum brightness, respectively, and \(M_{B}\) is the absolute \(B\)-band magnitude. Through the BEAMS with Bias Corrections method [44], the two nuisance parameters \(\kappa\) and \(\omega\) can be calibrated to zero. Then, the observed distance modulus can be simplified as \[\mu_{\text{obs}}(z)=m_{\text{corr}}(z)-M_{B}\, \tag{21}\] where \(m_{\text{corr}}\) is the corrected apparent magnitude. The absolute magnitude \(M_{B}\) is exactly degenerate with the Hubble constant \(H_{0}\). Once the value of \(M_{B}\) or \(H_{0}\) is known, the luminosity distances \(D_{L}(z)\) can be obtained from SNe Ia. In this work, we adopt \(H_{0}=70\) km s\({}^{-1}\) Mpc\({}^{-1}\) to normalize the SN Ia \(D_{L}(z)\) data as the observational \(d(z)\); that is, \(d(z)=(H_{0}/c)D_{L}(z)/(1+z)\). Note that the choice of \(H_{0}\) has no impact on our results, since the required distance ratio \(D_{s}/D_{ls}\) (see Equation (13)) is completely independent of \(H_{0}\). 
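The conversion from the Pantheon+ catalog to the dimensionless training targets is then one line per equation. A sketch (our names; \(M_{B}\) must be the value consistent with the adopted \(H_{0}=70\) km s\(^{-1}\) Mpc\(^{-1}\)):

```python
import numpy as np

H0 = 70.0           # km/s/Mpc; a normalization only, it drops out of D_s/D_ls
C_KMS = 299792.458  # km/s

def mu_to_d(z, m_corr, M_B):
    """Corrected SN Ia magnitudes -> dimensionless comoving distance d(z).

    Equations (19)-(21): D_L in Mpc from the distance modulus,
    then d(z) = (H0/c) * D_L(z) / (1 + z).
    """
    mu = m_corr - M_B                  # Equation (21)
    D_L = 10.0 ** ((mu - 25.0) / 5.0)  # Equation (19), inverted; D_L in Mpc
    return (H0 / C_KMS) * D_L / (1.0 + z)
```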
Having obtained the dataset of \(d(z)\), we adopt the ANN to reconstruct the distance function \(d(z)\); the results are shown in Figure 1. The black line represents the reconstructed function \(d(z)\), and the shaded region is the corresponding \(1\sigma\) confidence level. ### Strong-lensing Data Recently, Ref. [15] compiled a galaxy-scale SGL sample including 161 systems with stellar velocity dispersion measurements, assembled with strict selection criteria to meet the assumption of spherical symmetry in the lens mass model. The observational information for each SGL system is listed in the Appendix of Ref. [15], including the lens redshift \(z_{l}\), the source redshift \(z_{s}\), the Einstein angle \(\theta_{E}\), the central velocity dispersion of the lensing galaxy \(\sigma_{\rm ap}\), the spectroscopic aperture angular radius \(\theta_{\rm ap}\), and the half-light angular radius of the lensing galaxy \(\theta_{\rm eff}\). By fitting the two-dimensional power-law luminosity profile convolved with the instrumental point spread function to the high-resolution Hubble Space Telescope imaging data over a circle of radius \(\theta_{\rm eff}/2\) centered on the lensing galaxies, Ref. [15] measured the slopes of the luminosity density profile \(\delta\) for the 130 lensing galaxies in the full sample. They showed that \(\delta\) should be treated as an observable for each lens in order to get an unbiased estimate of the cosmological parameter \(\Omega_{\rm m}\). Therefore, the SGL sample we adopt here is the truncated sample of 130 SGL systems with \(\delta\) measurements, for which the redshift ranges of lenses and sources are \(0.0624\leq z_{l}\leq 0.7224\) and \(0.1970\leq z_{s}\leq 2.8324\), respectively. In this work, we use the reconstructed distance function \(d(z)\) from Pantheon+ SNe Ia to calibrate the distances \(d_{l}\) and \(d_{s}\) of the SGL systems. However, the SN Ia catalog extends only to \(z=2.3\), so we employ only the subset of the SGL sample that overlaps with the SN Ia data for the calibration. Thus, only 120 SGL systems with \(z_{s}\leq 2.3\) are available in our analysis. ### The Likelihood Function By using the Python Markov Chain Monte Carlo module EMCEE [45] to maximize the likelihood function \(\mathcal{L}\), we simultaneously place limits on the PPN parameter \(\gamma_{\rm PPN}\), the curvature parameter \(\Omega_{k}\), and the lens model parameters (\(\alpha_{0}\), \(\alpha_{z}\), and \(\alpha_{s}\)). The likelihood function is defined as \[\mathcal{L}=\prod_{i=1}^{120}\frac{1}{\sqrt{2\pi}\Delta\sigma_{0,i}^{\rm tot}} \exp\left[-\frac{1}{2}\left(\frac{\sigma_{0,i}^{\rm th}-\sigma_{0,i}^{\rm obs} }{\Delta\sigma_{0,i}^{\rm tot}}\right)^{2}\right]\, \tag{22}\] where the variance \[\left(\Delta\sigma_{0}^{\rm tot}\right)^{2}=\left(\Delta\sigma_{0}^{\rm SGL} \right)^{2}+\left(\Delta\sigma_{0}^{\rm SN}\right)^{2} \tag{23}\] is given in terms of the total uncertainty \(\Delta\sigma_{0}^{\rm SGL}\) derived from the SGL observation (Equation (15)) and the propagated uncertainty \(\Delta\sigma_{0}^{\rm SN}\) derived from the distance calibration by SNe Ia. With Equation (13), the propagated uncertainty \(\Delta\sigma_{0}^{\rm SN}\) can be estimated as \[\Delta\sigma_{0}^{\rm SN}=\sigma_{0}^{\rm th}\frac{\Delta D_{r}}{2D_{r}}\, \tag{24}\] where \(D_{r}\) is a convenient notation for the distance ratio in Equation (13), i.e., \(D_{r}\equiv D_{s}/D_{ls}=d_{s}/d_{ls}\), and its uncertainty is \(\Delta D_{r}\). 
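Schematically, the likelihood of Equation (22) combines the pieces sketched above (Equations (5), (13), (17), and (23)-(24)). In this sketch the per-system fields of `data` are illustrative, \(\beta\) is passed as a fixed value rather than marginalized over its Gaussian prior, and the distance-ratio uncertainty is taken as precomputed (in the full analysis it follows Equation (25) below):

```python
import numpy as np
import emcee

def log_likelihood(params, data):
    """Log of Equation (22), using the helper functions sketched earlier."""
    gamma_ppn, omega_k, a0, az, as_ = params
    logL = 0.0
    for lens in data:  # the 120 selected systems; attribute names are ours
        alpha = a0 + az * lens.zl + as_ * np.log10(lens.sigma_tilde)  # Eq. (5)
        Dr = 1.0 / dls_over_ds(lens.dl, lens.ds, omega_k)             # D_s/D_ls
        sig_th = sigma0_theory(gamma_ppn, Dr, lens.theta_E,
                               lens.theta_eff, alpha, lens.delta, lens.beta)
        var = lens.err_sgl**2 + (sig_th * lens.dDr / (2.0 * Dr))**2   # Eqs. (23)-(24)
        logL += -0.5 * (sig_th - lens.sigma_obs)**2 / var \
                - 0.5 * np.log(2.0 * np.pi * var)
    return logL

# sampler = emcee.EnsembleSampler(nwalkers=32, ndim=5,
#                                 log_prob_fn=log_likelihood, args=(data,))
```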
With the reconstructed distance function \(d(z)\), as well as its \(1\sigma\) uncertainty \(\Delta d(z)\), from the SN Ia data, we can calibrate the distances (\(d_{l}\) and \(d_{s}\)) and their corresponding uncertainties (\(\Delta d_{l}\) and \(\Delta d_{s}\)) for each SGL system. Thus, the uncertainty \(\Delta D_{r}\) of the distance ratio can be derived from Equation (17), i.e., \[\begin{split}(\Delta D_{r})^{2}=& D_{r}^{4}\left( \frac{\Omega_{k}d_{l}}{\sqrt{1+\Omega_{k}d_{l}^{2}}}-\frac{\sqrt{1+\Omega_{k} d_{s}^{2}}}{d_{s}}\right)^{2}(\Delta d_{l})^{2}\\ &+D_{r}^{4}\left(\frac{d_{l}}{d_{s}^{2}\sqrt{1+\Omega_{k}d_{s}^{ 2}}}\right)^{2}(\Delta d_{s})^{2}\.\end{split} \tag{25}\] ## III Results The 1D marginalized probability distributions and 2D plots of the \(1-2\sigma\) confidence regions for the PPN parameter \(\gamma_{\rm PPN}\), the cosmic curvature \(\Omega_{k}\), and the lens model parameters (\(\alpha_{0}\), \(\alpha_{z}\), and \(\alpha_{s}\)), constrained by 120 SGL systems, are presented in Figure 2, and the best-fitting results are listed in Table 1. These contours show that at the \(1\sigma\) confidence level, the inferred parameter values are \(\gamma_{\rm PPN}=1.16^{+0.15}_{-0.12}\), \(\Omega_{k}=0.89^{+1.97}_{-1.00}\), \(\alpha_{0}=1.20^{+0.15}_{-0.15}\), \(\alpha_{z}=-0.37^{+0.22}_{-0.26}\), and \(\alpha_{s}=0.70^{+0.10}_{-0.09}\). We find that the measured \(\gamma_{\rm PPN}\) is consistent with the prediction of \(\gamma_{\rm PPN}=1\) from GR, with a constraint accuracy of about 11.6%. While \(\Omega_{k}\) is weakly constrained, it is still compatible with zero spatial curvature at the \(1\sigma\) confidence level. We also find that the inferred \(\alpha_{z}\) and \(\alpha_{s}\) deviate from zero at the \(\sim 2\sigma\) and \(\sim 8\sigma\) levels, respectively, confirming the previous finding that the total mass density slope \(\alpha\) strongly depends on both the lens redshift and the surface mass density [15]. We further explore the scenario of adopting a prior of flatness, i.e., \(\Omega_{k}=0\). For this scenario, as shown in Figure 3 and Table 1, the marginalized distribution gives \(\gamma_{\rm PPN}=1.09^{+0.11}_{-0.10}\), representing a precision of 9.6%, in good agreement with the prediction of GR. If instead we adopt a prior of \(\gamma_{\rm PPN}=1\) (i.e., assuming GR holds) and allow \(\Omega_{k}\) to be a free parameter, the resulting constraints on \(\Omega_{k}\) and the lens model parameters are displayed in Figure 4 and Table 1. The marginalized constraint is \(\Omega_{k}=0.12^{+0.78}_{-0.47}\), consistent with a spatially flat universe. The comparison among lines 1-3 of Table 1 suggests that different choices of priors have little effect on the lens model parameters (\(\alpha_{0}\), \(\alpha_{z}\), and \(\alpha_{s}\)). Figure 1: Reconstruction of the dimensionless comoving distance \(d(z)\) from Pantheon+ SNe Ia using the ANN. The shaded area is the \(1\sigma\) confidence level of the reconstruction. The blue dots with error bars represent the observational data. ## IV Conclusion and Discussions Galaxy-scale SGL systems, combined with stellar velocity dispersion measurements of lensing galaxies, provide a powerful probe to test the validity of GR by constraining the PPN parameter \(\gamma_{\rm PPN}\) on kiloparsec scales. Testing GR in this manner, however, requires knowledge of the angular diameter distances between the observer, lens, and source. 
Conventionally, the required distances are calculated within the standard \(\Lambda\)CDM cosmological model. Such distance calculations would involve a circularity problem in testing GR, since \(\Lambda\)CDM itself is established on the framework of GR. In this paper, in order to address the circularity problem, we have employed the DSR in the FLRW metric to estimate not only \(\gamma_{\rm PPN}\) but also the spatial curvature \(\Omega_{k}\) independently of any specific cosmological model. To calibrate the distances of the SGL systems, we have introduced a new nonparametric approach for reconstructing the distance-redshift relation from the Pantheon+ SN Ia sample using an ANN, which makes no assumptions about the observational data and is a completely data-driven approach. \begin{table} \begin{tabular}{c c c c c c} \hline \hline Priors & \(\gamma_{\rm PPN}\) & \(\Omega_{k}\) & \(\alpha_{0}\) & \(\alpha_{z}\) & \(\alpha_{s}\) \\ \hline None & \(1.16^{+0.15}_{-0.12}\) & \(0.89^{+1.97}_{-1.00}\) & \(1.20^{+0.15}_{-0.15}\) & \(-0.37^{+0.22}_{-0.26}\) & \(0.70^{+0.10}_{-0.09}\) \\ \(\Omega_{k}=0\) & \(1.09^{+0.11}_{-0.10}\) & & \(1.22^{+0.14}_{-0.14}\) & \(-0.20^{+0.11}_{-0.11}\) & \(0.67^{+0.09}_{-0.09}\) \\ \(\gamma_{\rm PPN}=1\) & & \(0.12^{+0.78}_{-0.47}\) & \(1.10^{+0.11}_{-0.12}\) & \(-0.20^{+0.15}_{-0.16}\) & \(0.74^{+0.08}_{-0.08}\) \\ \hline \end{tabular} \end{table} Table 1: Constraint results for all parameters with different priors. Figure 2: 1D marginalized probability distributions and 2D \(1-2\sigma\) confidence contours for the PPN parameter \(\gamma_{\rm PPN}\), the cosmic curvature \(\Omega_{k}\), and the lens model parameters (\(\alpha_{0}\), \(\alpha_{z}\), and \(\alpha_{s}\)). The dashed lines represent \(\gamma_{\rm PPN}=1\) and \(\Omega_{k}=0\), corresponding to a flat universe with the validity of GR. By combining 120 well-selected SGL systems with the reconstructed distance function from 1701 data points of SNe Ia, we have obtained simultaneous estimates of \(\gamma_{\rm PPN}\) and \(\Omega_{k}\) without any specific assumptions about the contents of the universe or the theory of gravity. Our results show that \(\gamma_{\rm PPN}=1.16^{+0.15}_{-0.12}\) and \(\Omega_{k}=0.89^{+1.97}_{-1.00}\). The measured \(\gamma_{\rm PPN}\) is in good agreement with the prediction of GR with 11.6% accuracy. If we use flatness as a prior (i.e., \(\Omega_{k}=0\)), we infer that \(\gamma_{\rm PPN}=1.09^{+0.11}_{-0.10}\), representing a precision of 9.6%. If we instead assume the validity of GR (i.e., \(\gamma_{\rm PPN}=1\)) and allow \(\Omega_{k}\) to be a free parameter, we find \(\Omega_{k}=0.12^{+0.78}_{-0.47}\). The measured \(\Omega_{k}\) is consistent with zero spatial curvature, suggesting that there is no significant deviation from a flat universe. In the literature, based on a sample of 80 SGL systems, Ref. [10] constrained the PPN parameter \(\gamma_{\rm PPN}\) with an accuracy of 25% under the assumption of a flat \(\Lambda\)CDM model with parameters taken from Planck observations. Within the same context of \(\Lambda\)CDM, Ref. [11] concluded that \(\gamma_{\rm PPN}=0.97\pm 0.09\) (representing a precision of 9.3%) by analyzing the nearby lens ESO 325-G004. Through the reanalysis of four time-delay lenses, Ref. 
[12] obtained simultaneous constraints on \(\gamma_{\rm PPN}\) and the Hubble constant \(H_{0}\) for flat \(\Lambda\)CDM, yielding \(\gamma_{\rm PPN}=0.87^{+0.19}_{-0.17}\) (representing a precision of 21%) and \(H_{0}=73.65^{+1.95}_{-2.26}\) km s\({}^{-1}\) Mpc\({}^{-1}\). Within a flat FLRW metric, Ref. [13] used 120 lenses to achieve a model-independent estimate of \(\gamma_{\rm PPN}=1.065^{+0.064}_{-0.074}\) (representing a precision of 6.5%) by employing the GP method to reconstruct the SN distances. As a further refinement, Ref. [14] removed the flatness assumption and implemented the DSR to obtain model-independent constraints of \(\gamma_{\rm PPN}=1.11^{+0.11}_{-0.09}\) (representing a precision of 9.0%) and \(\Omega_{k}=0.48^{+1.09}_{-0.71}\). Note that in Ref. [14] the distances of the SGL systems were determined by fitting a third-order polynomial to the SN Ia data. Unlike the polynomial fit, which relies on an assumed parameterization, the ANN used in this work is a completely data-driven approach that can reconstruct a function from various data without assuming a parameterization of the function. Moreover, unlike the GP method, which relies on the assumption of Gaussian distributions for the observational random variables, the ANN method makes no such assumptions about the data. More importantly, our work yields constraints on \(\gamma_{\rm PPN}\) comparable to previous results, which indicates the effectiveness of data-driven modeling based on the ANN. Looking forward, the forthcoming Large Synoptic Survey Telescope (LSST) survey, with its excellent operation performance, holds great promise for detecting a large number of lenses, potentially reaching up to 120,000 in the most optimistic scenario [46]. By setting a prior on the curvature parameter \(-0.007<\Omega_{k}<0.006\), Ref. [10] showed that 53,000 simulated LSST strong lensing systems would set a stringent constraint of \(\gamma_{\rm PPN}=1.000^{+0.0009}_{-0.0011}\), reaching a precision of \(10^{-3}\sim 10^{-4}\). Similarly, Ref. [47] performed a robust extragalactic test of GR using a well-defined sample of 5,000 simulated strong lenses from LSST, yielding an accuracy of 0.5%. In brief, much tighter constraints on both \(\gamma_{\rm PPN}\) and \(\Omega_{k}\) than those discussed in this work can be expected with the help of future lens surveys. Figure 3: Same as Figure 2, except now for the scenario with a prior of \(\Omega_{k}=0\). The dashed line represents \(\gamma_{\rm PPN}=1\) predicted by GR. ###### Acknowledgements. This work is partially supported by the National Natural Science Foundation of China (grant Nos. 12373053 and 12041306), the Key Research Program of Frontier Sciences (grant No. ZDBS-LY-7014) of the Chinese Academy of Sciences, the Natural Science Foundation of Jiangsu Province (grant No. BK20221562), and the Young Elite Scientists Sponsorship Program of Jiangsu Association for Science and Technology.
2309.16158
FireFly v2: Advancing Hardware Support for High-Performance Spiking Neural Network with a Spatiotemporal FPGA Accelerator
Spiking Neural Networks (SNNs) are expected to be a promising alternative to Artificial Neural Networks (ANNs) due to their strong biological interpretability and high energy efficiency. Specialized SNN hardware offers clear advantages over general-purpose devices in terms of power and performance. However, there's still room to advance hardware support for state-of-the-art (SOTA) SNN algorithms and improve computation and memory efficiency. As a further step in supporting high-performance SNNs on specialized hardware, we introduce FireFly v2, an FPGA SNN accelerator that can address the issue of non-spike operation in current SOTA SNN algorithms, which presents an obstacle in the end-to-end deployment onto existing SNN hardware. To more effectively align with the SNN characteristics, we design a spatiotemporal dataflow that allows four dimensions of parallelism and eliminates the need for membrane potential storage, enabling on-the-fly spike processing and spike generation. To further improve hardware acceleration performance, we develop a high-performance spike computing engine as a backend based on a systolic array operating at 500-600MHz. To the best of our knowledge, FireFly v2 achieves the highest clock frequency among all FPGA-based implementations. Furthermore, it stands as the first SNN accelerator capable of supporting non-spike operations, which are commonly used in advanced SNN algorithms. FireFly v2 has doubled the throughput and DSP efficiency when compared to our previous version of FireFly and it exhibits 1.33 times the DSP efficiency and 1.42 times the power efficiency compared to the current most advanced FPGA accelerators.
Jindong Li, Guobin Shen, Dongcheng Zhao, Qian Zhang, Yi Zeng
2023-09-28T04:17:02Z
http://arxiv.org/abs/2309.16158v1
FireFly v2: Advancing Hardware Support for High-Performance Spiking Neural Network with a Spatiotemporal FPGA Accelerator ###### Abstract Spiking Neural Networks (SNNs) are expected to be a promising alternative to Artificial Neural Networks (ANNs) due to their strong biological interpretability and high energy efficiency. Specialized SNN hardware offers clear advantages over general-purpose devices in terms of power and performance. However, there's still room to advance hardware support for state-of-the-art (SOTA) SNN algorithms and improve computation and memory efficiency. As a further step in supporting high-performance SNNs on specialized hardware, we introduce FireFly v2, an FPGA SNN accelerator that can address the issue of non-spike operation in current SOTA SNN algorithms, which presents an obstacle in the end-to-end deployment onto existing SNN hardware. To more effectively align with the SNN characteristics, we design a spatiotemporal dataflow that allows four dimensions of parallelism and eliminates the need for membrane potential storage, enabling on-the-fly spike processing and spike generation. To further improve hardware acceleration performance, we develop a high-performance spike computing engine as a backend based on a systolic array operating at 500-600MHz. To the best of our knowledge, FireFly v2 achieves the highest clock frequency among all FPGA-based implementations. Furthermore, it stands as the first SNN accelerator capable of supporting non-spike operations, which are commonly used in advanced SNN algorithms. FireFly v2 has doubled the throughput and DSP efficiency when compared to our previous version of FireFly, and it exhibits \(\times 1.33\) the DSP efficiency and \(\times 1.42\) the power efficiency compared to the current most advanced FPGA accelerators. Spiking Neural Networks, Field-programmable gate array, Hardware Accelerator, Non-Spike Operation, Spatiotemporal Dataflow ## I Introduction Manuscript created 20 September 2023. This work was supported by the Chinese Academy of Sciences Foundation Frontier Scientific Research Program (ZDBS-LY-JSC013). (_Corresponding authors: Qian Zhang; Yi Zeng._) Jindong Li and Qian Zhang are with the School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China, and also with the Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China (e-mail: lijindong2022@ia.ac.cn; q.zhang@ia.ac.cn). Guobin Shen is with the School of Future Technology, University of Chinese Academy of Sciences, Beijing 100049, China, and also with the Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China (e-mail: shenguobin2021@ia.ac.cn). Dongcheng Zhao is with the Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China (e-mail: zhaodongcheng2016@ia.ac.cn). Yi Zeng is with the Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China, and the University of Chinese Academy of Sciences, Beijing 100049, China, and the Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China (e-mail: yi.zeng@ia.ac.cn). Spiking neural networks (SNNs) are considered a promising alternative to artificial neural networks (ANNs) due to their high biological plausibility, event-driven nature, and low power consumption [1]. 
Recent advancements in SNN algorithms have drawn inspiration from both biological evidence and deep learning insights, narrowing the performance gap with ANNs [2][3]. However, current neuromorphic hardware [4][5][6] cannot match the performance of ANN accelerator counterparts and, worse still, cannot support state-of-the-art SNN algorithms. Ongoing research aimed at developing high-performance SNN algorithms has made significant strides in narrowing the benchmark accuracy gap with ANNs. However, it also presents challenges for specialized SNN hardware due to the introduction of hardware-unfriendly computation, as shown in Fig.1. Current SNN algorithms typically use direct input encoding [7], with analog pixel values applied to the initial convolutional layer followed by spiking neurons, enabling end-to-end backpropagation, improved benchmark accuracy, and reduced time steps. However, the direct encoding convolutional layer poses compatibility challenges for existing specialized SNN hardware designed for spike-based computation. Current deep SNN models incorporate a residual connection by applying spike-element-wise summation between the residual path and the shortcut path [8]. However, the sum-of-spikes operation introduces non-spike operands into the next convolutional layer. Furthermore, the commonly employed Average Pooling function in SNN models introduces fractional-spike convolution, which is not supported by current SNN hardware equipped with spike-based computing engines. Existing SNN software frameworks for deployment, such as Lava [9] for Loihi [4], are unable to address these issues since neuromorphic hardware inherently lacks support for non-spike operations. Advancements in current specialized SNN hardware continue to pursue low power and high performance through architectural designs. However, there is still room for improvement in terms of spatiotemporal dataflow, parallelism schemes and computing engine design, particularly in field-programmable gate array (FPGA) implementations. Current FPGA SNN accelerators either have no parallelism in the temporal domain [10] or sacrifice spatial parallelism for temporal parallelism [11]. Moreover, systolic-array-based SNN accelerators still have naive implementations of spiking computing engines [12] that run at low frequencies and have limited parallelism. Our previous version of FireFly [13] employed a high-performance systolic array running at 300MHz with DSP optimizations, but it still had limited parallelism dimensions and ran at a frequency far from the theoretical extreme frequency of Xilinx Ultrascale FPGAs. FireFly adopted a weight-stationary dataflow and designed a synaptic weight delivery hierarchy to enable efficient weight data reuse, but it still required large on-chip membrane potential storage and did not support temporal parallelism. As agile development methodologies for customized hardware evolve, the gap between the development cycle of customized hardware and the iteration speed of algorithms is gradually closing. We acknowledge the importance of aligning research on SNN hardware accelerators with the advancements in SNN algorithms. In this work, we introduce FireFly v2 as another step to advance specialized hardware support for SOTA SNN algorithms while further enhancing hardware performance. FireFly v2 brings several significant improvements: 1) FireFly v2 is a general FPGA SNN accelerator that supports A) non-spike operations in direct input encoding [7] and spike-element-wise ResNet [8]. 
B) multiple neurodynamics, such as IF [14], LIF [15] and RMP [16] neurons. C) arbitrary convolutional configurations, such as different kernel sizes, strides, and pads. These integrations cover many recent SNN advancements. 2) FireFly v2 utilizes a spatiotemporal dataflow scheme for SNNs that enables four dimensions of parallelism. FireFly v2 not only processes firing neurons on the fly but also generates spikes on the fly, eliminating the need for expensive membrane potential storage, thus greatly reducing on-chip memory consumption and inference latency compared to the serial processing of spikes at each time step. 3) FireFly v2 integrates a high-performance spiking computing engine with a systolic array that supports four dimensions of parallelism and runs at 500-600MHz on different FPGA devices, which is closer to the extreme clock frequency of Xilinx Ultrascale FPGAs and thus achieves \(\times 1.67-\times 2\) the throughput and DSP efficiency compared to the original FireFly [13]. Compared with the existing most advanced SNN accelerator DeepFire2, FireFly v2 achieves \(\times 1.33\) the DSP efficiency and \(\times 1.42\) the power efficiency on a much smaller FPGA edge device. The remaining sections of the paper are organized as follows: Section II presents related work that shares motivation with our research. Section III introduces how we address the non-spike operation challenge. Section IV describes our proposed spatiotemporal dataflow. Section V outlines the hardware architecture of FireFly v2, including the design of the 500-600MHz spike computing engine. Section VI provides details on experiments regarding hardware specifications and benchmark evaluations. Finally, Section VII concludes the paper. ## II Related Work Rather than attempting to cover all neuromorphic hardware or SNN accelerators relevant to our research, we will focus on studies that share a similar motivation to our own and explore potential improvements to these works. ### _Supporting Versatile SNNs with a Single Hardware Engine_ Using a single computation engine has emerged as a favored design option for FPGA-based neural network accelerators, allowing for the deployment of a variety of models without requiring fabric reconfiguration. While ANN variants primarily differ in convolutional configurations and structural designs, SNN variants are much more varied and complex, as they also differ in input encoding schemes and neuron types. However, only a limited amount of research has explored the design of a unified SNN accelerator. Cerebron [10] designed a reconfigurable compute engine compatible with a variety of spiking convolutional layers, including depthwise and pointwise convolutions. Ye et al. [17] designed a neuromorphic platform that supports SNNs with MLP and CNN topologies. Zhang et al. [18] proposed an architecture that supports multiple coding schemes, including rate coding and temporal coding. However, these studies combined only address a small fraction of the many SNN variants and do not cover the recent SNN advancements. In this paper, our objective is to narrow the gap between modern SNN algorithms and hardware accelerators. We achieve this by supporting general non-spike operations, including direct encoding, spike-element-wise residual connections, and the frequently used Average Pooling operation. Additionally, our approach supports various convolution configurations in a dynamically reconfigurable fashion and offers versatility in neuron types through static reconfiguration. 
### _Dataflow and Parallelism Schemes for SNNs_ Existing SNN accelerators that explore dataflow and parallelism schemes have limited parallelism dimensions. Lee et al. [11] proposed a Psum-friendly dataflow and a 2D systolic array that enables spatiotemporal parallelism. However, they only support output channel parallelism in the spatial domain, which sacrifices spatial parallelism for temporal parallelism. Lien et al. [19] investigated the convolution dataflow and identified a specific loop order that suits their spatial parallelism scheme, i.e., a pixel-level parallelism scheme using block convolution. However, the proposed method does not support temporal parallelism. Fig. 1: A) An SNN backbone with full-spike operation. B) The non-spike convolution in the direct coding layer. C) The sum of spikes in SEW-ResNet introduces non-spike convolution. D) The average pooling layer introduces fractional-spike convolution. Spinalflow [20] presented a specialized architecture and dataflow for SNNs that exclusively supports temporal coding, where neurons only fire once across all time steps. It sorts the synaptic input spikes chronologically and can handle a maximum of 2048 non-zero spikes within the receptive field, processing spikes sequentially. It updates 128 neurons from different channels one spike packet at a time, which means that it only supports output channel parallelism. SATO [21] extends neuron-level parallelism with additional temporal-level parallelism by parallelizing the integration of received spikes at each time step. SATO also achieved impressive sparsity acceleration and supported workload balancing. However, similar to Spinalflow, SATO only supports temporal-coded SNNs. In this paper, we propose a spatiotemporal dataflow with four dimensions of parallelism, namely input channel parallelism, output channel parallelism, pixel-level parallelism and time-step parallelism. ### _FPGA Accelerator with DSP Optimization Techniques_ DSP optimization techniques are commonly used in FPGA-based computation-intensive accelerator designs. Here, we focus on DSP48E2 optimization techniques in Xilinx Ultrascale FPGAs. Previous ANN accelerators have demonstrated the ability to fully utilize the capabilities that DSP slices provide. Xilinx's white paper [22] documents that the \(27\times 18\) multiplier in the DSP48E2 can be split into two \(8\times 8\) multipliers with a shared input operand. Recent studies [23] in ANN accelerators have proposed the concept of DSP supertiles by storing operands in distributed RAM around the DSP48E2, which allows for reaching the extreme clock frequency of Xilinx Ultrascale FPGAs. Vitis AI has adopted the DSP double data clock technique in its DPU design. This allows the systolic array to run at double the clock frequency while the rest of the system runs at the base frequency, achieving high inference performance for FPGA platforms. Nonetheless, these optimization techniques are tailored for multiplication-intensive ANN accelerator designs and may not be directly applicable to the SNN computation workload. The effectiveness of DSP48E2 in accelerating SNN computation is not immediately apparent. DeepFire [24] was the first work to utilize the SIMD feature of DSP48E2s in the SNN accelerator domain. They implemented the AND operations between spikes and synaptic weights using the LUT fabric and used one DSP48E2 and three fabric adders to build an 8-input synaptic current integration circuit. 
DeepFire2 [25] recognized the inefficiency of implementing a 2-input AND operation using a 6-input LUT and avoided AND operations by treating the spike as the synchronous reset of the flip-flop register. This further improved the system frequency but was still not ideal. Our previous version of FireFly [13] proposed the most efficient implementation of synaptic operation with a fabric-free approach. We utilized the wide-bus multiplexer, SIMD adder, and dedicated cascade path inside the DSP48E2 to construct a \(16\times 4\) synaptic crossbar using eight cascaded DSP48E2 slices. This cascaded chain was used in FireFly to build a large systolic-array-based spiking computing engine. However, fabric circuits of other system components limit the frequency in a single-clock system. In this paper, we continue to adopt the fabric-free implementation of synaptic operation proposed in our previous version of FireFly, but with a different dataflow. We further incorporate the DSP double data rate technique adopted by the Vitis AI DPU. As a result, FireFly v2 achieves a significant speedup compared to its previous version. ## III Addressing the Non-Spike Operation Challenge In recent SNN advancements, the inclusion of non-spike operations has proven to be a challenging task for specialized SNN hardware. The primary challenge lies in achieving concurrent support for both spike and non-spike operations while maintaining an acceptable level of hardware overhead. Remarkably, none of the existing specialized SNN hardware has undertaken this challenge or even considered its feasibility. In FireFly v2, we present a solution for tackling the challenge posed by non-spike operations. Without loss of generality, we consider three typical non-spike operation scenarios: pixel operation in direct encoding, multi-bit spike operation in SEW ResNet, and fractional spike operation introduced by average pooling. Instead of designing separate processing units for these different non-spike operation cases, FireFly v2 draws inspiration from the field of bit-serial accelerators to address the non-spike operation challenge. In bit-serial accelerators [26][27], operands are split into individual bits, computations are carried out using low-bit arithmetic logic in the processing unit, and the partial sums are shifted and merged to reconstruct full-precision results. In FireFly v2, we utilize a single spike computing engine to perform spike-weight multiplications, decompose the non-spike operands, and flexibly merge the partial sums to support non-spike operations. We focus on the three aforementioned common scenarios of non-spike operations. _Pixel Convolution in the Direct Encoding Layer:_ In the case of 8-bit pixel convolution using direct input encoding, we treat the 8-bit pixel as spikes occurring over 8 equivalent time steps. We then perform spike-weight convolution for each time step and combine the resulting 8 partial sums using shift-add logic. _Multi-bit Spike Convolution in SEW-ResNet:_ In a certain SEW ResNet layer, a \(B\)-bit spike in \(T\) simulation time steps can be deconstructed into a sequence of binary spikes spanning \(B\times T\) equivalent time steps. Partial sums are then shifted and merged in groups of \(B\) to reconstruct the actual partial sums of the \(T\) actual time steps. As the sum of spikes accumulates, SEW-ResNet will produce \(\log(N+1)\)-bit spikes after \(N\) spike-element-wise residual connections. (A minimal sketch of this decompose-and-merge computation follows.) 
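The following NumPy sketch is our illustration of the principle, not the hardware implementation: multi-bit spikes are split into binary planes, each plane reuses the 1-bit spike-weight engine, and the partial sums are shift-added back together, reproducing the direct multi-bit result exactly (array shapes and names are ours).

```python
import numpy as np

def multibit_spike_matmul(spikes, weights, bits=4):
    """Bit-serial handling of B-bit spikes: binary planes plus shift-add merge."""
    acc = np.zeros((spikes.shape[0], weights.shape[1]), dtype=np.int64)
    for b in range(bits):
        plane = (spikes >> b) & 1      # one binary spike plane (1-bit engine input)
        acc += (plane @ weights) << b  # spike-weight product, shifted and merged
    return acc

# Sanity check against the direct multi-bit computation:
rng = np.random.default_rng(0)
s = rng.integers(0, 16, size=(4, 8))   # T=4 time steps, 8 synapses, 4-bit spikes
w = rng.integers(-8, 8, size=(8, 16))  # 8 inputs -> 16 output channels
assert np.array_equal(multibit_spike_matmul(s, w), s @ w)
```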
With a 4-bit spike representation, this accommodates up to 16 residual connections, aligning with the SEW-ResNet34 architecture. _Fractional Spike Convolution Introduced by Average Pooling:_ The fractional values can be left-shifted to integers, treated as multi-bit spikes, and the partial sums can be right-shifted by the same amount to obtain the accurate partial sum. For example, consider a \(2\times 2\) average pooling operation that yields values such as \(0,0.25,0.5,0.75,1\). These values can be left-shifted by 2 positions, resulting in \(0,1,2,3,4\), and treated as 3-bit spikes. Fig. 2 illustrates the computation flow for 4-bit spikes with \(T=4\). In this scenario, the spiking neuron receives two 4-bit spike inputs from synapses and generates binary spikes. The 4-bit spike sequence is initially decomposed bit by bit, creating a \(4\times 4\) bit matrix. Each bit is then multiplied with the corresponding synaptic weight, and the partial sums of each bit are further scaled by factors of 8, 4, 2, and 1 (or left-shifted by 3, 2, 1, and 0), respectively. These scaled values are then summed together, resulting in the actual synaptic current at each time step. Subsequently, these synaptic currents are accumulated and reset when they reach a certain threshold, leading to the generation of spikes according to the specific neurodynamics. Although the bit-serial decomposition of multi-bit spikes can fully support the non-spike operations existing in state-of-the-art SNN algorithms without any accuracy drop, it inevitably leads to an increased computational workload. After three cascaded residual connections in SEW ResNet, the maximum value of spikes reaches \(4\), necessitating a 3-bit representation. Similarly, spikes with fractional values yielded by the most commonly seen \(2\times 2\) average pooling also need a minimum 3-bit representation. To prevent the potential escalation of spike bit-width as the SNN network increases in depth, we introduce a saturate-or-shift approach to confine the bit-width of spikes within 2 bits. In our approach, an analysis of data distribution is conducted in advance on a batch of representative samples in software, similar to post-training quantization. This analysis guides the decision of whether to saturate spike values exceeding \(3\) to \(3\), or to perform a right shift on all spike values, resulting in the range \(\{0,1,2\}\), accompanied by a corresponding left shift of the partial sum to recover the results. This method ensures that the bit-width of spikes remains confined within 2 bits, effectively mitigating the escalation of the computational workload. ## IV Optimized Spatiotemporal Dataflow Although the dataflow of ANNs has undergone extensive study, achieving a balance between the spatial and temporal dimensions in the dataflow of SNNs proves to be a challenging task. In FireFly v2, we propose a spatiotemporal dataflow that builds upon the output-stationary dataflow while incorporating variable tiling and parallelism schemes designed specifically for SNNs. In contrast to the previous version of FireFly [13], our approach significantly reduces memory consumption and enables a higher degree of parallelism and reconfigurability. We focus on the dataflow of a single convolution layer. Although FireFly v2 does support non-spike convolution, we focus on the 1-bit spike case in this section to simplify the dataflow illustration. We first explain the variables used in this paper, which are also listed in Table I. 
\(T\) represents the total number of time steps. \(H_{o}\) and \(W_{o}\) represent the height and width of the output feature map, respectively, while \(H_{i}\) and \(W_{i}\) represent the height and width of the input feature map, respectively. \(K_{h}\) and \(K_{w}\) denote the height and width of the kernel, while \(C_{i}\) and \(C_{o}\) represent the input and output channels, respectively. We leverage four dimensions of parallelism and perform variable tiling on the output channel \(C_{o}\), input channel \(C_{i}\), the width of the output feature map \(W_{o}\), and the equivalent time step \(T_{c}\). We denote the output channel parallelism as \(M\), input channel parallelism as \(V\), pixel parallelism as \(N\) and time-step parallelism as \(S\). Similar to convolution in ANNs, convolution in SNNs can be expressed using nested for-loops, accompanied by an additional time-step loop. The permutation of loop order does not alter the computation results, yet it does influence data locality and data reuse opportunities. To simplify the illustration, we employ an ordered tuple to represent the permutation of folded or unrolled for-loops, with the unrolled for-loops enclosed in square brackets, denoted as \([]\). The fully folded computation flow of an SNN convolution layer can be represented as \((T,C_{o},H_{o},W_{o},C_{i},K_{h},K_{w})\). Here, we employ a similar loop notation as presented in SATO [21]. The dimensions \((C_{o},H_{o},W_{o})\) correspond to the neuron loops, representing independent neurons within a convolutional layer, indicated by the color blue in Fig.3. On the other hand, the dimensions \((C_{i},K_{h},K_{w})\) represent the spatial loops, denoting spatially fan-in neurons, and are marked with the color orange. Fig. 2: Addressing the computation of multi-bit spikes through bit decomposition, bit-weighted GeMM, and partial sums merging. The figure illustrates a case of 4-bit spikes computation. _Inefficient Dataflow in FireFly:_ In the previous version of FireFly, we parallelized the computation for both the input-output channel dimension and the kernel dimension. This resulted in a dataflow represented as \((\frac{C_{o}}{M},T,\frac{C_{i}}{V},H_{o},W_{o},[M,K_{h},K_{w},V])\), as depicted in Fig. 3A. The spike computing engine within FireFly conducts matrix-vector multiplication between the \(K_{h}K_{w}\times V\) binary spikes and the \(M\times K_{h}K_{w}\times V\) synaptic weight matrix, yielding a \(1\times M\) partial sum. Subsequently, when the last fragment of spikes from the tiled input channel passes through, the neurodynamics calculation is performed to generate a \(1\times M\) output spike vector. To achieve weight data reuse across \(T\) time steps, an on-chip cache is necessary to store \(M\times K_{h}K_{w}\times C_{i}\) synaptic weights. To avoid the need for off-chip storage and loading of multi-bit membrane potentials, it becomes imperative to temporarily store \(M\times H_{o}W_{o}\) membrane potential values on-chip. However, this poses potential issues, particularly when dealing with large feature maps. The aforementioned tensor shapes and buffer sizes in FireFly's architecture are denoted in Fig.3B. _Spatiotemporal Dataflow in FireFly v2:_ In FireFly v2, we tackle the following limitations of the FireFly architecture's dataflow: 1) the need for large on-chip storage of membrane potentials; 2) the constraint of a fixed convolution configuration resulting from parallelism on the kernel dimension; 3) the absence of parallelism at both the temporal and pixel levels. How the new dataflow removes these limitations is sketched below and detailed in the following paragraphs. 
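As a preview of the dataflow described next, here is a behavioral NumPy model of the \((\frac{C_{o}}{M},H_{o},\frac{W_{o}}{N},\frac{T}{S},K_{h},K_{w},\frac{C_{i}}{V},[M,V,N,S])\) loop nest. This is our illustrative sketch, not the RTL: it is restricted to a \(1\times 1\) convolution, assumes all dimensions divide evenly by their tile sizes, and uses IF neurons with soft reset as one example of the supported neurodynamics.

```python
import numpy as np

def firefly_v2_conv1x1(spikes, weights, v_th=1.0, M=4, V=4, N=4, S=2):
    """Behavioral model of FireFly v2's spatiotemporal loop nest (1x1 conv).

    spikes: (T, Ci, Ho, Wo) binary input; weights: (Co, Ci) integers.
    Tile sizes M, V, N, S must divide Co, Ci, Wo, T respectively.
    """
    T, Ci, Ho, Wo = spikes.shape
    Co = weights.shape[0]
    out = np.zeros((T, Co, Ho, Wo), dtype=np.uint8)
    for co in range(0, Co, M):                  # output-channel tiles
        for ho in range(Ho):                    # output rows
            for wo in range(0, Wo, N):          # pixel tiles
                v = np.zeros((M, N))            # only M x N residual potentials
                for t in range(0, T, S):        # time-step tiles
                    psum = np.zeros((S, M, N))
                    for ci in range(0, Ci, V):  # spatial loop (Kh = Kw = 1 here)
                        w = weights[co:co+M, ci:ci+V]            # M x V tile
                        s = spikes[t:t+S, ci:ci+V, ho, wo:wo+N]  # S x V x N tile
                        psum += np.einsum('mv,svn->smn', w, s)   # unrolled [M,V,N,S]
                    for ts in range(S):         # on-the-fly spike generation
                        v += psum[ts]
                        fired = v >= v_th
                        out[t+ts, co:co+M, ho, wo:wo+N] = fired
                        v[fired] -= v_th        # soft reset, one example dynamic
    return out
```

Note that the membrane-potential buffer `v` holds only \(M\times N\) values and carries across time-step batches, so spikes are generated on the fly without storing full membrane-potential maps.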
The adopted spatiotemporal dataflow scheme is depicted in Fig.3C. To address the long data dependencies spanning \(C_{o}\times H_{o}\times W_{o}\) neurons across \(T\) time steps, we have rearranged the loops by placing the temporal loop after the neuron loops and before the spatial loops. We have also tiled both the width of the output feature map \(W_{o}\) and the time step \(T\) to introduce two additional dimensions of parallelism. After the reordering and tiling, the resulting dataflow takes the form of \((\frac{C_{o}}{M},H_{o},\frac{W_{o}}{N},\frac{T}{S},K_{h},K_{w},\frac{C_{i}}{V},[M,V,N,S])\), shown in Fig.3C. The spike computing engine performs matrix multiplication between the \(V\times N\times S\) binary spike matrix and the \(M\times V\) weight matrix, yielding an \(M\times N\times S\) partial sum matrix. After accumulating the fan-in pre-synaptic currents across the \(K_{h},K_{w},\frac{C_{i}}{V}\) spatial loops, neurodynamic calculations are carried out by incorporating the \(M\times N\times S\) partial sums along with the residual \(M\times N\) membrane potentials \(V_{Pre}\) from the previous time-step batch. This process results in the generation of \(M\times N\times S\) output spikes and the next residual membrane potentials \(V_{Next}\). The aforementioned tensor shapes and buffer sizes in FireFly v2's architecture are denoted in Fig.3D. This spatiotemporal dataflow framework enables four dimensions of parallelism: output channel parallelism, input channel parallelism, pixel-level parallelism, and time-step parallelism. Through the separation of the parallelism scheme from the kernel dimension, we enable support for various convolution configurations, accommodating differences in kernel size and stride. Because the temporal loop is positioned as the innermost loop before the spatial loops, only \(M\times N\) residual membrane potentials need to be cached on-chip, an inconsequential cost. Without explicit storage of the membrane potentials, spikes are not only processed but also generated on the fly. The only additional overhead is the need for increased storage space for input spikes. Given that input spikes require much less memory storage than membrane potentials, this added requirement is of minimal concern. ## V Hardware Architecture The hardware architecture of FireFly v2 is illustrated in Fig.4. Fig.4A provides an overview of FireFly v2's system block diagram. In the customized PL system of the Zynq Ultrascale SoC, a clock wizard IP is instantiated to generate two synchronous clocks, with one clock operating at twice the frequency of the other. One master M-AXI-HPM port of the Zynq Ultrascale SoC is utilized for command, configuration, and status control, directly connected to the FireFly v2 IP. Two 128-bit S-AXI-HP ports are enabled and connected to two read-only AXI DataMovers respectively, facilitating high-speed PS-to-PL data transfer. Additionally, another 128-bit S-AXI-HP port is employed and connected to a write-only AXI DataMover, enabling PL-to-PS data transfer. The FireFly v2 IP interfaces with all three AXI DataMovers. Please note that we have not utilized all of the S-AXI-HP ports available on the Zynq SoC, and the peak memory bandwidth of the three instantiated S-AXI-HP ports is far from the Zynq Ultrascale device's limit of 19.2 GB/s. We have designed an efficient data loader and data server module to fully harness the bandwidth capacity of the three S-AXI-HP ports, as illustrated in Fig.4B. 
We have instantiated two CmdGen-DataCache units to generate Address-Length data transfer commands for input or residual spikes, as well as for parameters such as weights, biases, and thresholds, respectively.

Fig. 3: Comparison of the dataflow in FireFly v2 to its previous version FireFly. A) FireFly's dataflow design. B) Tensor shapes and buffer sizes referenced in FireFly's dataflow. C) FireFly v2's dataflow design. D) Tensor shapes and buffer sizes referenced in FireFly v2's dataflow.

Additionally, another CmdGen-DataCache is dedicated to handling output spike or partial sum transfers, utilizing the write-only AXI DataMover interface. Given the availability of two read-only AXI DataMover interfaces, the simplest approach would be to assign each CmdGen-DataCache unit to occupy one AXI DataMover interface. However, in various convolutional configurations, the required data bandwidth for input spikes and synaptic weights differs. For instance, in a \(1\times 1\) convolution, the synaptic weights demand relatively less bandwidth, while input spikes require more. Conversely, in a \(3\times 3\) convolution with small \(4\times 4\) images, the synaptic weights require greater bandwidth, while the input spikes need less. To efficiently utilize the total available bandwidth and prevent one AXI DataMover from being fully utilized while the other remains idle, we have introduced an arbiter. This arbiter arbitrates incoming commands from the CmdGen-DataCache units and the data flow from the AXI DataMover interfaces, enabling flexible bandwidth balancing for varying spike and weight workloads.

The key system components of FireFly v2 are arranged as illustrated in Fig. 4C. The blue blocks in Fig. 4C represent a series of input spike preprocessing modules, whereas the orange blocks signify a sequence of synaptic weight preprocessing modules. The purple blocks represent the post-processing modules for the partial sums and output spikes. The input spike stream and synaptic weight stream pass through the preprocessing modules before reaching the spike computing engine. Subsequently, the spike computing engine generates a partial sum stream, which is then further processed by the post-processing modules to produce output spikes. We will provide a brief overview of these key components below.

_Input Data Preprocessing._ The input spike preprocessing modules initially buffer and adjust the input spike stream's width via a FIFO and a stream width adapter. Subsequently, two sub-modules handle stream padding. One handles zero-padding based on the current convolution padding configuration, while the other focuses on memory coalescing to prevent bank conflicts. After padding, the data stream enters the im2col unit sequentially but is then read out in a strided fashion to perform the im2col transformation. The im2col unit comprises \(N\) memory banks, and a strided address generator generates \(N\) conflict-free addresses for these banks to read the spike data. An \(N\)-port crossbar is responsible for routing the data output from the memory banks to their respective ports, thereby delivering the data stream to the spike computing engine. The input weight stream undergoes a similar buffering and width adjustment procedure and is then pushed to the partial reuse FIFO, a component introduced in our prior version of FireFly [13].
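Functionally, the im2col unit rearranges each kernel window of spikes into one matrix column so that convolution reduces to a plain matrix multiplication. The numpy sketch below models only this behavior; the banked memories, conflict-free address generation, and crossbar routing are omitted, and the function name and sizes are our own illustrative choices.

```python
import numpy as np

def im2col_spikes(spk, kh, kw, stride=1):
    """Rearrange a (Ci, H, W) binary spike map into a (Ci*kh*kw, Ho*Wo)
    matrix; convolution then becomes a matrix multiplication against a
    (Co, Ci*kh*kw) weight matrix. Only the functional behavior of the
    strided read-out is modeled here."""
    ci, h, w = spk.shape
    ho = (h - kh) // stride + 1
    wo = (w - kw) // stride + 1
    cols = np.empty((ci * kh * kw, ho * wo), dtype=spk.dtype)
    for y in range(ho):
        for x in range(wo):
            patch = spk[:, y*stride:y*stride+kh, x*stride:x*stride+kw]
            cols[:, y * wo + x] = patch.reshape(-1)
    return cols

spk = (np.random.rand(3, 6, 6) < 0.3).astype(np.int8)
cols = im2col_spikes(spk, kh=3, kw=3)   # shape (27, 16)
```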
_Clock Crossing in the Computing Engine._ The spike computing engine joins the spike stream from the im2col unit with the weight stream from the partial reuse FIFO and sends them to the fast clock region. The spike computing engine contains two slow-to-fast gearboxes, functioning as parallel-to-serial converters. These gearboxes halve the number of data elements while doubling the clock rate, maintaining the data bandwidth. The PEs (processing elements) of the spike computing engine operate in the fast clock region, performing matrix multiplications between the \(V\times N\times S\) spike stream and the \(M\times V\) weight stream and generating \(M\times N\times S\) partial sums every \(K_{h}\times K_{w}\times\frac{C_{i}}{V}\) cycles. The partial sums are then gathered, aligned, and sent back to the slow clock region.

Fig. 4: Hardware architecture of FireFly v2. A) The system block design of FireFly v2. B) The efficient data loader and data saver of FireFly v2. C) The key system components of FireFly v2. The blue blocks represent a series of input spike preprocessing modules, whereas the orange blocks signify a sequence of synaptic weight preprocessing modules. The purple blocks represent the post-processing modules for the partial sums and output spikes.

_Partial Sums Postprocessing._ The post-processing modules first flexibly process the partial sums, dealing with spike and non-spike cases, and then generate spikes through the neurodynamics unit. The following pooling unit performs optional MaxPooling and AvgPooling, or simply bypasses the spike stream if pooling is not needed. The residual connection module of FireFly v2 performs the optional spike-element-wise residual connection between the shortcut spikes from the data loader and the calculated spike stream. The spike accumulation module optionally counts spikes from the spike stream to record firing rates, a common operation in the last classification layer of SNN models. The output spike stream flows back to the external memory map through the data saver and serves as the input spike stream for the subsequent layer. In the following subsections, we will first fully elaborate on the design of the spike computing engine. Next, we will provide thorough details of how the spike computing engine and the partial sum merging unit cooperate to perform non-spike operations, which is essential for supporting direct encoding and multi-bit spike convolution. Additionally, we will delve into the two-phase design of the neurodynamics unit, which can generate spikes across multiple time steps. Lastly, we will present the residual connection unit, which supports spike-element-wise functions in various cases.

### _Spatiotemporal Spiking Computing Engine_

The spike computing engine acts as the core of FireFly v2, since it is responsible for the heavy computing workload. We adopt the same systolic architecture as FireFly [13], shown in Fig. 5A, but with several distinctions: 1) FireFly employs a weight-stationary systolic array, whereas FireFly v2 implements an output-stationary systolic array. This choice is better aligned with our spatiotemporal dataflow requirements. 2) The systolic array in FireFly enables spatial parallelism across the input channel, kernel size, and output channel dimensions. However, the parallelism in the kernel dimension imposes constraints on the convolution scheme, as FireFly exclusively supports \(3\times 3\) convolutions.
In contrast, FireFly v2 leverages spatiotemporal parallelism across the input channel, output channel, pixel-level, and time-step dimensions. We support various kinds of convolution configurations, enabled by the flexible im2col unit. 3) In FireFly, the systolic array operates at a frequency of 300MHz, identical to the overall system clock frequency. FireFly v2 successfully decouples the slower fabric logic from the faster DSP unit. This method enables the spike computing engine to operate at 500-600MHz, doubling its performance capabilities compared to FireFly.

Designing a high-performance systolic array is non-trivial. To bridge the gap between the operating frequency of the DSP48E2 and its theoretical maximum frequency, we adopt the DSP double data rate technique, as the Vitis AI DPU does. We follow three key principles. First, the circuits in the doubled-frequency domain should be well decoupled from the circuits in the low-frequency domain. Second, we avoid the use of LUTs in the doubled-frequency domain and instead use only DSP48E2s and flip-flops. Third, we avoid high-fanout nets and instead use simple, local connections between circuit components. This helps minimize net delays and reduce congestion.

Before sending the spike sub-tensor and weight sub-tensor to the spike computing engine, a slow-to-fast converter, or gearbox, is utilized to facilitate communication between circuits operating at different frequencies. This gearbox, which operates at twice the frequency of the low-frequency domain, essentially functions as a multiplexer, selecting the data being transmitted from the low-frequency domain. The data arriving from the low-frequency domain has twice the data elements, but the data being transmitted to the doubled-frequency domain has twice the clock rate. As a result, the bandwidth on each side of the gearbox remains the same.

The spike computing array is organized as a systolic array, with processing elements (PEs) arranged in a 2D fashion. To simplify the depiction, Fig. 5B only illustrates a systolic array with \(4\times 4\) PEs. The array employs an output-stationary dataflow, with weight inputs flowing vertically from top to bottom and spike inputs flowing horizontally from left to right.

Fig. 5: Comparison of the spike computing engine in FireFly v2 to its previous version in FireFly. A) Spike computing engine in FireFly supporting only spatial parallelism. B) Spatiotemporal spike computing engine in FireFly v2.

Partial sums are stored in each PE and are collected once the accumulation process is complete. Each processing element (PE) comprises several columns of DSP48E2s. To simplify the illustration, Fig. 5B only depicts a single PE with four columns of DSP48E2s, where each column contains four DSP48E2s cascaded in a chain. As in the previous version of FireFly [13], a single DSP48E2 slice functions as a \(2\times 4\) synaptic crossbar, receiving two 1-bit spike inputs and eight 8-bit weight inputs. The dedicated cascade path of the DSP48E2s in the same column behaves like a dendrite, integrating the synaptic current through the DSP48E2 adder chain. Each column of the processing element produces four 12-bit partial sums, utilizing the single instruction, multiple data (SIMD) feature of the DSP48E2. Within the same PE, DSP48E2s on the same row share the same weight inputs, while each DSP48E2 has its own spike inputs. In the illustrated example of a single PE with \(4\times 4\) DSP48E2s, the PE receives \(4\times 8\times 8=256\)-bit weight inputs and \(4\times 4\times 2=32\)-bit spike inputs, generating \(4\times 4\times 12=192\)-bit partial sums. The weights and spikes are staged and then fed to the adjacent PEs, and the partial sums are collected after the accumulation process is completed.
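Ignoring the cycle-by-cycle movement of data through the array, the arithmetic performed in one accumulation step can be summarized by the small numpy sketch below; the parallelism factors are illustrative, and the DSP48E2-level packing is abstracted into a single matrix product.

```python
import numpy as np

# Illustrative parallelism factors; real values depend on the target device.
M, V, N, S = 8, 8, 4, 4
weights = np.random.randint(-128, 128, size=(M, V), dtype=np.int8)  # weight tile
spikes = (np.random.rand(V, N, S) < 0.5).astype(np.int8)            # spike tile

# One step of the output-stationary array: every PE keeps its partial sum
# in place while weights flow down and spikes flow right; functionally this
# is an M x V by V x (N*S) matrix multiplication.
psum = np.einsum('mv,vns->mns', weights.astype(np.int32), spikes)   # M x N x S

# Over the Kh * Kw * (Ci / V) spatial iterations, these M x N x S partial
# sums are accumulated in place before the neurodynamics unit consumes them.
```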
The parallelism factors \(M,V,N,S\) play a vital role in determining the dimensions of the systolic array. Specifically, Table II outlines the relationship between these factors and the corresponding dimensions of the systolic array and PEs. The height of the systolic array \(SA_{h}\) is determined by \(\frac{M}{4}\), as each column of DSP48E2s in one PE computes 4 channels. The width of the systolic array \(SA_{w}\) is directly equal to \(N\). The height of the PE, \(PE_{h}\), is determined by \(\frac{V}{4}\), owing to the fact that the computing engine operates at a doubled frequency and each DSP48E2 in one PE can integrate two synaptic currents. The width of the PE is determined by \(S\). It is worth noting that within a single processing element (PE), synaptic weights are broadcast to \(S\) columns of DSP48E2 units. A critical consideration here is the choice of \(S\): a larger value would lead to a larger fan-out for synaptic weights, potentially failing setup-timing requirements, whereas a smaller value of \(S\) would elevate the consumption of flip-flops. Based on experimental insights, we have determined the optimal value for \(S\) to be 4. This empirical setting strikes a balance between managing fan-out effects and optimizing flip-flop usage. We use different \(SA_{h}\) and \(SA_{w}\) on different FPGA devices with different amounts of resources.

The spike computing engine generates \(M\times N\times S\) partial sums every \(K_{h}\times K_{w}\times\frac{C_{i}}{V}\) cycles. Given that \(K_{h}\times K_{w}\times\frac{C_{i}}{V}\) is larger than \(N\) in most scenarios, we aggregate the partial sums in groups of \(N\), align the partial sums from the \(\frac{M}{4}\) PE columns, and use a cross-clock-region FIFO to transfer the \(M\times N\times S\) partial sums back to the slow clock region, as shown in Fig. 4.

### _Flexible Partial Sum Processing Unit_

Multi-bit spikes are decomposed into equivalent time steps so that the same spike computing engine can compute the decomposed partial sums. The flexible partial sum processing unit shown in Fig. 6 reconstructs the partial sums using shift-add logic. The processing unit consists of \(M\) identical sub-modules to process \(M\) channels of partial sums. As stated in the previous subsection, \(S\) is set to an empirical value of \(4\), so each sub-module receives partial sums of \(4\) equivalent time steps, namely \(P_{0},P_{1},P_{2},P_{3}\), as shown in Fig. 6. The processing unit handles 4 cases: 1) In cases where the input spike is binary, the four partial sums are bypassed and sent directly to the next stage. 2) When dealing with a 2-bit input spike, two adjacent partial sums are shift-merged, yielding \(Q_{0}=P_{0}+(P_{1}<<1)\) and \(Q_{1}=P_{2}+(P_{3}<<1)\). The processing unit waits for another round of the shift-merge process to collect 4 partial sums and send them to the next stage. 3) When dealing with a 4-bit input spike, all four partial sums are shift-merged, yielding \(R_{0}=Q_{0}+(Q_{1}<<2)\). The processing unit must wait for three additional rounds of the shift-merge process to gather four partial sums before transmitting them to the next stage. 4) When dealing with direct input coding where the input pixel is 8-bit, eight partial sums are shift-merged, yielding \(R_{1}=R_{0}+(R_{0}^{\prime}<<4)\), in which \(R_{0}^{\prime}\) is the previous round of \(R_{0}\), temporarily stored in registers. In direct encoding, the convolution results of the static images are replicated \(T\) times and sent to the neurodynamics unit. Therefore, we directly replicate \(R_{1}\) four times and send the copies to the next stage.

After the partial sums are shift-merged, the bias is added, and the partial sums can optionally be left-shifted by 1 if the input spikes from the preceding layer were right-shifted by 1. The input precision of the partial sums is 12 bits. After being shifted and merged by the processing unit, the output precision of the partial sums is extended to 18 bits.

Fig. 6: Flexible partial sum processing unit dealing with four spike or non-spike cases.
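The shift-merge recipe above is exactly a bit-plane reconstruction of an integer matrix product. The following sketch (ours) checks the 4-bit case numerically: four binary-spike passes merged as \(Q_{0}\), \(Q_{1}\), \(R_{0}\) reproduce the direct multi-bit result.

```python
import numpy as np

# A 4-bit activation x equals sum_b (bit_b(x) << b), so its partial sum can
# be rebuilt from four binary-spike passes through the same compute engine.
V, M = 16, 8
x = np.random.randint(0, 16, size=V)              # 4-bit "spikes"
w = np.random.randint(-128, 128, size=(M, V))     # 8-bit weights

planes = [(x >> b) & 1 for b in range(4)]         # bit decomposition
P = [w @ p for p in planes]                       # four binary GeMM passes

Q0 = P[0] + (P[1] << 1)                           # merge 2-bit pairs
Q1 = P[2] + (P[3] << 1)
R0 = Q0 + (Q1 << 2)                               # merge up to 4 bits

assert np.array_equal(R0, w @ x)                  # matches the direct GeMM
```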
### _Two-Phase Neurodynamics Unit_

After the partial sums are flexibly merged (or simply bypassed in the 1-bit spike case), they flow into the neurodynamics unit. In the extreme case where the incoming partial sums are valid on consecutive cycles, the neurodynamics unit needs to process the partial sums, generate spikes for all \(S\) time steps, and calculate the membrane potential for the next batch of time steps in just a single clock cycle. A naive implementation of the neurodynamics unit resembles a ripple-carry adder, where each bit's sum depends on the carry generated by the previous bit, causing a serial carry propagation through the circuit. In the spike generation process, each spike and membrane potential likewise depends on the previous spike and membrane potential, causing a serial data propagation through the circuit. It is impossible to achieve timing closure at 250-300MHz with such high logic levels.

To address this problem, we design a two-phase neurodynamics unit, taking inspiration from the carry look-ahead adder, where the carry signals are precomputed for each bit, enabling parallel carry calculation and faster addition. We decompose the spike generation process into two phases. Fig. 7 shows the integrate-and-fire neurodynamics generation process. In phase 1, we precompute the postfix sums of the partial sums of the \(S\) time steps and compare them to the threshold, yielding the spike candidates for each time step. In this phase, calculations are pipelined, since they have no data dependency on previous time steps. In phase 2, we select the spikes from these precomputed candidates based on the spikes selected in previous time steps, as shown by the blue arrows in Fig. 7. In this phase, timing closure can be satisfied, since the accumulation of logic levels is determined only by the number of cascaded multiplexers and comparators. The neurodynamics unit is statically reconfigurable to support integrate-and-fire [14], leaky integrate-and-fire [15], and residual membrane potential [16] neurons, since most SNN models adopt a single neuron type across all layers. The neurodynamics unit designs for the different neuron types share the same methodology.
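As a behavioral illustration of the two-phase idea, the sketch below assumes a reset-to-zero integrate-and-fire neuron (one of the supported types; the leaky and residual variants follow the same pattern). Phase 1 precomputes threshold comparisons of running sums with no cross-time-step dependency; phase 2 walks the short select chain. This is our reading of the scheme, not RTL.

```python
import numpy as np

def two_phase_if(psums, v_pre, theta):
    """Generate S spikes for a reset-to-zero IF neuron. Phase 1 precomputes,
    for every candidate 'last reset' position j, the running sum over steps
    j..t compared to the threshold; phase 2 then selects along the actual
    reset history (a mux chain in hardware)."""
    S = len(psums)
    x = np.concatenate(([v_pre], psums))       # fold v_pre into step 0
    # Phase 1: cand[j, t] == 1 iff integrating from position j fires at t.
    cand = np.zeros((S + 1, S + 1), dtype=np.int8)
    for j in range(S + 1):
        acc = 0.0
        for t in range(j, S + 1):
            acc += x[t]
            cand[j, t] = acc >= theta
    # Phase 2: follow the selected reset chain across the S time steps.
    spikes, j, v = [], 0, float(v_pre)
    for t in range(1, S + 1):
        if cand[j, t]:
            spikes.append(1)
            j, v = t + 1, 0.0                  # reset to zero after a spike
        else:
            spikes.append(0)
            v = x[j:t + 1].sum()               # residual membrane potential
    return spikes, v                           # output spikes and V_Next

# Example: matches a step-by-step IF simulation.
print(two_phase_if([3.0, -1.0, 2.0, 0.5], v_pre=1.0, theta=3.0))
```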
### _Residual Connection Unit_

The residual connection unit receives spikes of the current backbone from the flow-to-stream unit and spikes of the shortcut branch from the data loader. The residual connection unit applies the IAND or ADD spike-element-wise function to the two spike streams. To align with the 8-bit byte standard, the bit-widths of spikes from the shortcut branch are restricted to one, two, or four. The bit-width of spikes from the current backbone is one in most cases, unless the optional average pooling datapath is selected, in which case the bit-width increases to two. FireFly v2 does not support residual connections after the backbone is downsampled by average pooling, since such situations are not typical in most SNN models. Therefore, when performing residual connections, the bit-width of spikes from the current backbone is always one. The residual connection unit contains dedicated IAND logic for binary shortcut spikes and low-bit spike-element-wise ADD logic for the different shortcut spike bit-widths. The IAND function always produces binary spikes, making it the most hardware-friendly spike-element-wise function. When performing the spike-element-wise ADD function, if the added results exceed the representation range of a two-bit integer, we can either extend the results to four bits or adopt the saturate-or-shift method to constrain the results back to two bits. If the added results exceed the representation range of a four-bit integer, we directly saturate the results to four bits.

## VI Implementation and Experiments

### _Experiment Setup_

Similar to its previous version, FireFly v2 targets FPGA edge devices to cut down the deployment budget in real-world applications. FireFly v2 is mapped onto Ultra96v2, KV260, and ZCU104 evaluation boards with different \((M,V,N,S)\) configurations. FireFly v2 is designed using SpinalHDL. The Verilog code generated by the SpinalHDL compiler is synthesized and implemented in Xilinx Vivado 2022.2. Power consumption estimates are provided by the reports of the Vivado Design Suite. FireFly v2 is based on the Brain-Inspired Cognitive Engine (BrainCog) [28] and is another step toward software-hardware codesign for the BrainCog project ([http://www.brain-cog.network/](http://www.brain-cog.network/)) [29]. All the evaluated SNN models are trained using BrainCog's infrastructure.

### _Comparison with the Previous Version of FireFly_

Table III shows the comparison between FireFly v2 and its previous version, FireFly, in terms of hardware specifications. In terms of LUT consumption, FireFly v2 mapped on Ultra96 consumes slightly more LUTs than FireFly. This difference arises from the increased complexity of the overall architecture of FireFly v2. However, FireFly v2 mapped on ZCU104 consumes roughly the same number of LUTs as FireFly, since the proportion of resource consumption taken up by the computing array becomes more significant as parallelism increases, and FireFly v2 adopts a DSP-only, fabric-free spike computing engine. In terms of DSP48E2 consumption, FireFly's DSP48E2 count aligns with multiples of 9, since FireFly seeks parallelism in the kernel dimension by flattening the \(3\times 3\) kernel window computation, while FireFly v2's DSP48E2 count aligns with multiples of 8, with each dimension of FireFly v2's parallelism being a power of two. Consequently, FireFly v2's DSP48E2 consumption is equivalent to \(\frac{8}{9}\) of FireFly's consumption on the same device.

Fig. 7: Neurodynamics unit. The blue-shaded logic performs independent calculations of spike candidates for each time step. The orange-shaded logic, on the other hand, handles the calculation of postfix sums for each time step. The purple multiplexer selects the spike candidates and postfix sums, generating both the output spikes and the residual membrane potential.
In terms of DSP efficiency, power efficiency, and throughput performance, FireFly v2 mapped on Ultra96 achieves the highest clock frequency of 600MHz and the highest peak DSP efficiency of \(9.6\) GOP/s/DSP, double that of FireFly. The DSP efficiency improvement of FireFly v2 over its previous version is primarily attributed to the increased clock frequency. FireFly v2 mapped on KV260 achieves the highest peak power efficiency of \(835.9\) GOP/s/W, maintaining a low power draw of 4.9W. FireFly v2 mapped on ZCU104 achieves the highest peak throughput of \(8192\) GOP/s.

FireFly v2 mapped on Ultra96 can reach 600MHz with the PerformanceWithRemap implementation strategy set in the Vivado Design Suite. This strategy induces higher power consumption, but even so, FireFly v2 achieves power efficiency similar to FireFly on the same Ultra96 device. FireFly v2 mapped on KV260 cannot reach 600MHz even with the PerformanceWithRemap strategy enabled. This limitation arises from KV260's inherently smaller CLB:DSP ratio of 93:1, compared with Ultra96's ratio of 196:1, which translates to a higher likelihood of routing congestion and a corresponding degradation in achievable frequency. Nevertheless, FireFly v2 mapped on KV260 can reach timing closure at 500MHz and achieve excellent power efficiency when using the PowerDefaultOpt implementation strategy. Since power consumption is tightly coupled to the clock frequency, we also ran multiple experiments at lower frequencies using the same implementation strategy and found that FireFly v2 running at 500MHz achieves the best power efficiency, as shown in Table V. We also tried a higher level of parallelism on KV260, since a \(16\times 16\times 8\times 4\) configuration utilizes only 40% of the DSP48E2s. A \(32\times 16\times 8\times 4\) configuration can meet timing closure at 400MHz. Note that in this configuration we halve the depth of the local weight cache and the FIFO size of the AXI DataMover, reducing BRAM consumption to relieve the tight setup requirements. FireFly v2 mapped on ZCU104 adopts a greater degree of parallelism to fully utilize the on-chip resources, since ZCU104 is the largest FPGA device among those mentioned. FireFly v2 achieves \(2\times\) the peak power efficiency and \(1.67\times\) the peak throughput of FireFly on the same ZCU104 device.

We then compare FireFly v2 with our previous work on the same four SNN models, as initially reported in FireFly and displayed in Table IV. FireFly v2 mapped on xczu5ev shows \(1.54\times\), \(1.53\times\), \(1.27\times\), and \(1.76\times\) FPS/W improvements on the MNIST, CIFAR10, CIFAR100, and DVS-Gesture classification tasks, respectively. While FireFly v2 mapped on xczu3eg may not excel in terms of the FPS/W metric, due to the power inefficiencies brought by the complex routing required to operate at a 600MHz clock frequency, it still exhibits a substantial improvement in inference latency and actual GOP/s performance on the same device compared with FireFly. It is worth noting that the inference latency of our previous work, FireFly, does not include the direct coding layer, as FireFly does not support non-spike convolution. In contrast, the inference latency of FireFly v2, as presented in Table IV, is measured end-to-end.
The actual improvements of FireFly v2 in these metrics should therefore be even higher. One might also notice that the actual performance improvement is not directly proportional to the peak performance improvement shown in Table III. This discrepancy arises primarily because FireFly v2 adopts a coarser parallelism granularity, which can be fully leveraged only when processing input feature maps from larger datasets, such as ImageNet. In FireFly, we specifically selected these four models, with \(3\times 3\) convolutional layers and max-pooling layers only, as FireFly is particularly well-suited for optimizing these types of layers. FireFly adopts a fixed convolution configuration and a fully flattened parallelism scheme in the kernel dimension. The spike pixels in the same feature map are processed sequentially in an on-the-fly manner. This allows FireFly to handle small feature maps more effectively. FireFly v2, on the other hand, supports general torch.nn.Conv2d operations but operates with a coarser granularity at the pixel level, as it supports pixel-level parallelism and can process \(N\) spike pixels at a time. As a result, FireFly v2 may not fully leverage its advantages when handling small feature maps, especially when \(N\geq W_{o}\). Taking the CIFAR-10 or CIFAR-100 dataset as an example, the size of the feature map is initially only \(32\times 32\). After three \(2\times 2\) pooling operations, the size becomes \(4\times 4\). When dealing with feature maps with a width or height smaller than \(4\), several inefficiencies become apparent: 1) The explicit same-padding processing time becomes noticeable, as only \(\frac{4\times 4}{6\times 6}=\frac{16}{36}\) of the spike pixels are valid. 2) When \(N>4\), the redundant processing elements allocated for pixel parallelism remain idle. 3) Dealing with small feature maps reduces the opportunities for reusing kernel weights within the same set of feature maps, making parameter bandwidth a bottleneck. These inefficiencies do not occur when dealing with large datasets such as ImageNet. Despite the listed inefficiencies, FireFly v2 still achieves improvements in latency and efficiency on the same benchmarks compared with our previous work.

### _Comparison with DeepFire2_

In FireFly, we evaluated various systolic-array-based SNN accelerators. We will not repeat these comparisons, as FireFly has already shown superior performance compared to those prior studies [30][31][32][33][34][17][10][35][36]. In this paper, we compare FireFly v2 with DeepFire [24] and DeepFire2 [25], two recently published high-performance SNN accelerators that also feature DSP optimizations and operate at high clock frequencies of 450-600MHz. The DeepFire series targets large multi-die FPGA devices and adopts layer-wise mapping of entire SNN models. DeepFire2 achieves the highest clock frequency of 600MHz and the highest throughput among all FPGA-based SNN implementations, through deep pipelining. It is important to clarify the experimental setup of DeepFire2 to ensure a fair comparison. Despite adopting distinct SNN model mapping schemes (folded for FireFly, unrolled for DeepFire), both series of accelerators utilize the same GOP/s metrics. The FLOP count for the SNN models is determined by calculating the FLOP count of their equivalent ANN models using established tools such as ptflops. This shared experimental setup enables a fair and meaningful comparison.
However, DeepFire2 did not provide information about the time step configuration in their experiments, a critical parameter that significantly impacts inference latency. Furthermore, it is important to note that DeepFire2 does not support any form of time-step aggregation or sparsity acceleration. Consequently, inference performance relies solely on the following factors: the total FLOPs of the model, the simulation time step, the clock frequency, and the DSP usage, with the simulation time step being the only unknown variable. Based on the metrics reported in their research, it can be inferred that DeepFire2 adopts a simulation time step of one, which explains the exceptionally low reported inference latency. As the computation workload scales linearly with the time step, we quantify the computation workload as the product of FLOPs and the time step (FLOPs-T). Moreover, in the DeepFire series of accelerators, the SNN models for CIFAR-10, CIFAR-100, and ImageNet classification are meticulously crafted to ensure that the network parameters align seamlessly with the storage granularity of BRAM and URAM. Although the performance of FireFly v2 does not strongly correlate with specific SNN models, we choose SNN models that align with the parallelism granularity of FireFly v2's architecture to ensure a fair comparison.

We make several key observations from Table VI. 1) Both FireFly v2 and DeepFire2 achieve significantly high clock frequencies, exceeding 400MHz. FireFly v2 exhibits consistent frequency performance as a standalone engine, while DeepFire2 experiences a sharp drop in frequency, down to 450MHz, when deploying deep SNN models on large datasets. 2) DeepFire2 prioritizes inference latency over benchmark accuracy by adopting a \(T=1\) SNN setup. In contrast, FireFly v2 targets SNN models capable of delivering high classification accuracy, particularly on more complex datasets such as CIFAR100 and ImageNet. The accuracies of 93.6%, 74.7%, and 67.3% achieved on CIFAR10, CIFAR100, and ImageNet are closely aligned with the state of the art in SNN algorithms. The remaining performance gap is primarily attributed to the quantization process, which could potentially be mitigated through the adoption of a quantization-aware training approach in the future. 3) FireFly v2 falls short of achieving the same level of GOP/s performance and inference latency as DeepFire2, since xcvu9p, the FPGA used by DeepFire2, is considerably larger than the edge devices we use. However, it is noteworthy that FireFly v2 achieves \(1.32\times\), \(1.25\times\), and \(1.33\times\) average GOP/s/DSP efficiency improvements and \(1.35\times\), \(1.42\times\), and \(1.42\times\) power efficiency improvements on the CIFAR10, CIFAR100, and ImageNet classification tasks, respectively, compared to DeepFire2. 4) The SNN models benchmarked by FireFly v2, compared to DeepFire2, are not only larger in terms of FLOPs (2.58 vs. 0.45 on the CIFAR10 task and 4.08 vs. 1.34 on the CIFAR100 task) but also more complex (ResNet compared to VGGNet on the ImageNet task). FireFly v2 also exhibits scalability, enabling support for even larger and deeper SNN models. In contrast, DeepFire2's solution is not scalable, as it is constrained by the FPGA on-chip resources, restricting the supported model complexity. 5) DeepFire2 relies on costly, large FPGA devices that may not be practical for deployment in embedded systems in edge scenarios. In contrast, the KV260 device we employ is a commercially available and relatively affordable FPGA device.
It ensures that the budget for constructing customized edge systems for real-world applications remains manageable. It is worth noting that FireFly v2 exhibits higher performance when the convolutional configuration aligns with its computation granularity. For instance, in the case of SEW-ResNet34 with a \(224\times 224\) image input, the resulting feature map widths of \(14\) and \(7\) do not align with the 8-pixel parallelism granularity of FireFly v2. However, when SEW-ResNet34 uses a \(256\times 256\) image input, there is a notable improvement in efficiency, as the feature map sizes align with that granularity. It is also worth mentioning that relying on FLOP calculations of equivalent ANN models may not offer a fair measure of model capacity when running SNN models with multi-bit spikes. This is because the FLOP count of equivalent ANN models does not account for the bit-width of operands, resulting in an efficiency drop when benchmarking the SEW-ResNet34 network with the ADD function. FireFly v2 and DeepFire2 both utilize similar DSP optimization techniques and operate at similar clock frequencies, resulting in comparable normalized inference efficiency. FireFly v2's higher efficiency compared to DeepFire2 is primarily attributed to its systolic array consistently operating without idle states. At the same time, the remarkably low inference latency of DeepFire2 is primarily achieved through its use of single-time-step inference and extensive utilization of DSP48E2 resources.

### _Discussion_

In our experiments, our primary focus is on comparing the FireFly v2 accelerator with DeepFire2, since both of their previous versions have already outperformed most SNN FPGA accelerators in terms of latency and efficiency. The excellent performance of our previous work, FireFly, is mainly attributed to utilizing the DSP48E2s to build a large synaptic crossbar circuit. The improvements shown in FireFly v2 are attributed to the optimized spatiotemporal dataflow and the doubled clock frequency. The remarkably low inference latency achieved by the DeepFire series primarily stems from their extensive utilization of DSP48E2 resources on a large FPGA device, combined with a relatively simplified SNN setup featuring a time step of one (though not explicitly reported in their paper). Both the FireFly and DeepFire series show that building large-scale, high-speed, parallel dedicated computing units can achieve significant inference acceleration even though the sparse nature of SNNs is not exploited. However, we believe that further improving inference efficiency on FPGA devices without sparsity acceleration is becoming more and more challenging. The clock frequencies of FireFly v2 and DeepFire2 are already close to the maximum frequency supported by Ultrascale+ FPGA devices. We also recognize that it is impractical to develop an FPGA-based SNN accelerator that can match the power efficiency of ASIC designs, as FPGAs inherently trade some efficiency for reconfigurability. While ASIC designers primarily prioritize power efficiency, FPGA designers concentrate on fully utilizing on-chip resources and tailoring their designs to match the characteristics of FPGA devices. We have put this principle into practice by utilizing the DSP48E2 cells for synaptic operations and approaching the peak DSP48E2 frequency by decoupling the high-speed hard blocks from the low-speed fabric.
Another significant aspect of FireFly v2 is its advancement toward a general SNN-DPU solution, akin to the Vitis AI DPU, its ANN counterpart. The support for non-spike operations in FireFly v2 is crucial for end-to-end deployment without requiring algorithm modifications. This represents a milestone where SNN algorithmic research, such as DIET-SNN [7] and SEW-ResNet [8], can be directly deployed onto FireFly v2 with minimal impact on accuracy. It is worth noting that SNN algorithmic research is still rapidly evolving in terms of encoding schemes [37], neuron types [38], and connection topologies [39]. While the development cycle for hardware used to be significantly longer than that for algorithm software, the current trend of FPGA-based agile hardware development can now expedite the process and provide timely support for the latest algorithmic advancements.

## VII Conclusions

FireFly v2 exhibits significant improvements over our initial version of FireFly. It takes a significant step forward in advancing hardware support for current SNN algorithm developments by supporting non-spike operations, which have been an obstacle to end-to-end deployment onto existing specialized SNN hardware. The spatiotemporal dataflow enables the processing of incoming spikes and the generation of output spikes on the fly. Additionally, the double data rate technique enables the DSP48E2 systolic array to operate at a clock frequency of 500-600MHz, twice as fast as our previous version of FireFly. In this work, our focus remains on targeting commercially available and affordable embedded FPGA edge devices for use in edge scenarios. In the future, we will continue to develop SNN hardware infrastructures that not only operate at higher speeds but also offer timely support for advancements in SNN algorithms, enabling higher-performance SNN software and hardware co-design.
2309.13752
Improving Robustness of Deep Convolutional Neural Networks via Multiresolution Learning
The current learning process of deep learning, regardless of any deep neural network (DNN) architecture and/or learning algorithm used, is essentially a single resolution training. We explore multiresolution learning and show that multiresolution learning can significantly improve robustness of DNN models for both 1D signal and 2D signal (image) prediction problems. We demonstrate this improvement in terms of both noise and adversarial robustness as well as with small training dataset size. Our results also suggest that it may not be necessary to trade standard accuracy for robustness with multiresolution learning, which is, interestingly, contrary to the observation obtained from the traditional single resolution learning setting.
Hongyan Zhou, Yao Liang
2023-09-24T21:04:56Z
http://arxiv.org/abs/2309.13752v2
# Improving Robustness of Deep Convolutional Neural Networks via Multiresolution Learning

###### Abstract

The current learning process of deep learning, regardless of any deep neural network (DNN) architecture and/or learning algorithm used, is essentially a single resolution training. We explore multiresolution learning and show that multiresolution learning can significantly improve robustness of DNN models for both 1D signal and 2D signal (image) prediction problems. We demonstrate this improvement in terms of both noise and adversarial robustness as well as with small training dataset size. Our results also suggest that it may not be necessary to trade standard accuracy for robustness with multiresolution learning, which is, interestingly, contrary to the observation obtained from the traditional single resolution learning setting.

Multiresolution deep learning, Robustness, Wavelet, 1D/2D signal classification

## I Introduction

Deep neural network (DNN) models have achieved breakthrough performance on a number of challenging problems in machine learning, being widely adopted for increasingly complex and large-scale problems including image classification, speech recognition, language translation, drug discovery, and self-driving vehicles (e.g., [1, 2, 3, 4]). However, it has been revealed that these deep learning models suffer from brittleness, being highly vulnerable to even small and imperceptible perturbations of the input data known as adversarial examples (see the survey [5] and the references therein). Remarkably, a recent study has shown that perturbing even one pixel of an input image can completely fool DNN models [6]. Robustness in machine learning is of paramount importance from both theoretical and practical perspectives. This intrinsic vulnerability of DNNs presents one of the most critical challenges in DNN modeling, and it has gained significant attention. Existing efforts on improving DNN robustness can be classified into three categories: (1) brute-force adversarial training, i.e., adding newly discovered or generated adversarial inputs into the training data set (e.g., [7, 8]), which dramatically increases the training data size; (2) modifying the DNN architecture, by adding more layers/subnetworks or designing robust DNN architectures (e.g., [9, 10]); and (3) modifying the training algorithm or introducing new regularizations (e.g., [11, 12]).

Unlike those existing works, we take a different direction. We focus on how to improve the learning process of neural networks in general. The observation is that the current learning process, regardless of the DNN architecture and/or learning algorithm (e.g., backpropagation) used, is essentially a single resolution training. It has been shown that for shallow neural networks (SNN) this traditional single resolution learning process is limited in its efficacy, whereas multiresolution learning can significantly improve models' robustness and generalization [13, 14]. We are interested in investigating whether such a multiresolution learning paradigm can also be effective for DNN models, addressing their demonstrated vulnerability. In particular, the original multiresolution learning was mainly focused on 1-dimensional (1D) signal prediction, and needs to be extended for 2-dimensional (2D) signal prediction tasks such as image classification. In this work, we show that the multiresolution learning paradigm is effective for deep convolutional neural networks (CNN) as well, particularly in improving the robustness of DNN models.
We further extend the original multiresolution learning to 2D signal (i.e., image) prediction problems and demonstrate its efficacy. Our exploration also shows that with multiresolution learning it may not be necessary to have the so-called tradeoff between accuracy and robustness, which seems contrary to the recent findings of [15] in the traditional single resolution learning setting.

## II Multiresolution Learning

### _The main idea_

The multiresolution learning paradigm for neural network models was originally proposed in [13, 14], and was demonstrated in applications to VBR video traffic prediction [16, 17, 18]. It is based on the observation that the traditional learning process, regardless of the neural network architectures and learning algorithms used, is basically a _single-resolution learning_ paradigm. From the multiresolution analysis framework [19] in wavelet theory, any signal can have a multiresolution decomposition in a tree-like hierarchical way. This systematic wavelet representation of signals directly uncovers underlying structures and patterns of individual signals at their different resolution levels, which could be hidden or much more complicated at the finest resolution level of the original signals. Namely, any complex signal can be decomposed into a simplified approximation (i.e., a coarser resolution) signal plus a "detail" signal, which is the difference between the original and the approximation signals. A low pass filter \(\mathbf{L}\) is used to generate the coarse approximation signal (i.e., the low frequency component) while a high pass filter \(\mathbf{H}\) is used to generate the detail signal (i.e., the high frequency component), as shown in Equations (1) and (2), respectively, where \(f^{i}\) denotes the signal approximation at resolution level \(i\) (\(i\in\mathbf{Z}\) and \(i\geq 0\)), and \(d^{i}\) denotes the detail signal at resolution level \(i\). This decomposition process can be iterated on approximation signals until an appropriate level of approximation is achieved. For the sake of notational convenience, \(L^{0}\) (Layer 0) denotes the original signal \(f^{m}\) at the finest resolution level \(m\), whereas \(L^{i}\) and \(H^{i}\) denote the approximation \(f^{m-i}\) and detail \(d^{m-i}\) signals, respectively, at Layer \(i\) of the signal decomposition.

\[f^{i-1}=\mathbf{L}f^{i},\;\;i=1,2,...,m. \tag{1}\]

\[d^{i-1}=\mathbf{H}f^{i},\;\;i=1,2,...,m. \tag{2}\]

This signal decomposition is reversible, meaning that a coarser resolution approximation plus its corresponding detail signal can reconstruct a finer resolution version of the signal recursively without information loss. One level of signal reconstruction is as follows:

\[f^{i}=f^{i-1}(+)d^{i-1},\;\;i=1,2,...,m. \tag{3}\]

where the operator \((+)\) denotes the reconstruction operation. Fig. 1 shows the decomposition hierarchy of signal \(f^{m}\) from the original finest resolution level \(m\) to resolution level \(m-4\). As we can see, by applying the low pass filter \(\mathbf{L}\) and high pass filter \(\mathbf{H}\) to signal \(L^{0}\) (\(f^{m}\)), we obtain \(L^{1}\) and \(H^{1}\). Recursively, \(L^{1}\) is further decomposed into \(L^{2}\) and \(H^{2}\), and so on.
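For readers who want to experiment, the sketch below reproduces one level of Equations (1)-(3) with the PyWavelets package; zeroing the detail before reconstruction yields the same-length smoothed signal used later to build the coarser training-data versions.

```python
import numpy as np
import pywt

f = np.random.randn(1024)                 # f^m: a signal at the finest resolution
fa, fd = pywt.dwt(f, 'haar')              # Eq. (1)-(2): approximation and detail
f_rec = pywt.idwt(fa, fd, 'haar')         # Eq. (3): lossless reconstruction
assert np.allclose(f, f_rec)

# Replacing the detail with zeros gives a smoothed, same-length version of f,
# the building block of the coarser training-data versions in Eq. (4) below.
f_smooth = pywt.idwt(fa, np.zeros_like(fd), 'haar')
```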
Utilizing the mathematical framework of multiresolution analysis [19], multiresolution learning [13, 14] explores and exploits the different signal resolutions in neural network learning, where the entire training process of neural network models is divided into multiple learning phases associated with different resolution approximation signals, from the coarsest version to the finest version. This is in contrast with the traditional learning process widely employed in today's deep learning, where only the original finest resolution signal is used in the entire training process. The main idea of multiresolution learning is that for models to learn any complex signal better, it is more effective to start training with a significantly simplified approximation version of the signal with many details removed, then progressively add more and more details, refining the neural network models as the training proceeds. This paradigm embodies the principle of divide and conquer applied to information science. In multiresolution learning, the discrete wavelet transform (DWT) can be employed for signal decomposition, whereas the inverse DWT (IDWT) can be used to form simplified approximation signals via the replacement of the detail signals by zero signals in the signal reconstruction.

To build a neural network model for a given task on sampled signal \(f^{m}\), multiresolution learning can be outlined as follows [13, 14]. First, decompose the original signal \(f^{m}\) to obtain \(f^{m-1}\), \(f^{m-2}\),..., etc., based on which one reconstructs multiresolution versions of the training data, where each different resolution approximation should maintain the dimension of the original signal to match the input dimension of the neural network model. Namely, reconstruct \(k\) multiresolution versions \(\tau_{i}\) (\(i=1,2,...,k\)) of the original signal training data, where the representation of the training data \(\tau_{i}\) at resolution version \(i\) is formed as follows:

\[\tau_{i}=\begin{cases}f^{m},&i=1\\ (...((f^{m-i+1}(+)0^{m-i+1})(+)0^{m-i+2})(+)...(+)0^{m-1}),&i>1.\end{cases} \tag{4}\]

where \(0^{j}\) indicates a zero signal at resolution level \(j\) in multiresolution analysis.

Fig. 1: Illustration of the decomposition structure of 1D signal \(f^{m}\) from the original finest resolution to lower resolution levels.

Fig. 2 illustrates how the information content varies in the \(k\) multiresolution signal training data versions generated by (4) from the original (finest resolution) training data, for \(k=5\). Second, the training process of a neural network model is divided into a sequence of learning phases. Let \(A_{i}(r_{i})\) be a training phase conducted on training data version \(r_{i}\) of coarser resolution \(i\) with any given learning algorithm. Let A\(\rightarrow\)B indicate that activity A is carried out in advance of activity B. Thus, \(k\)-level multiresolution learning (\(k>1\)) is formulated as an ordered sequence of \(k\) learning phases associated with the sequence of approximation subspaces in multiresolution analysis, which satisfies \(A_{i+1}(r_{i+1})\to A_{i}(r_{i})\) (\(i=1,2,...,k-1\)) during the entire training process of a model. It can be seen that the multiresolution learning paradigm forms an ordered sequence of \(k\) learning phases, starting with an appropriately coarse resolution version of the training data \(r_{k}\) and proceeding to the finer resolution versions of the training data.
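A compact sketch of this procedure, using PyWavelets, is given below; the wavelet choice, the number of levels \(k\), and the commented-out training call are user choices or hypothetical placeholders rather than prescriptions from the paper.

```python
import numpy as np
import pywt

def make_versions(f, k, wavelet='haar'):
    """Build tau_1..tau_k of Eq. (4): tau_1 is the original signal; tau_i
    (i > 1) is reconstructed from the level-(i-1) approximation with all
    detail signals replaced by zeros, so every version keeps the original
    length and matches the model's input dimension."""
    versions = [np.asarray(f, dtype=float)]
    for i in range(2, k + 1):
        coeffs = pywt.wavedec(f, wavelet, level=i - 1)
        coeffs = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
        versions.append(pywt.waverec(coeffs, wavelet))
    return versions

# k-level multiresolution learning: train on tau_k first, then tau_{k-1},
# ..., finishing with the original tau_1, carrying the model weights from
# one phase to the next.
taus = make_versions(np.random.randn(1024), k=5)
for tau in reversed(taus):
    pass  # model.fit(tau, labels, ...)  # hypothetical training call
```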
The finest resolution training data originally given will be used in the last learning phase \(A_{1}(r_{1})\). The total number of multiresolution levels \(k\) is to be chosen by the user for the given task at hand. Also, the traditional (single resolution) learning process is just a special case of multiresolution learning where the total number of multiresolution levels is \(k=1\).

### _New Extension_

While the original multiresolution learning [13, 14] was proposed for 1D signal prediction problems, it can be naturally extended to 2D signal prediction problems, such as image recognition. In this work, the original multiresolution learning paradigm is extended for 2D signal problems as follows. First, 2D multiresolution analysis is employed. Basically, 2D wavelet decomposition and its reconstruction are applied to construct coarser resolution approximation versions of the 2D signal (training data). For example, the 2D DWT can be performed by iterating two orthogonal (horizontal and vertical) 1D DWTs. For any 2D signal (image) at a given resolution level, four sub-band components are obtained by the 2D DWT: the low-resolution approximation (sub-band _LL_) of the input image and the other three sub-bands containing the details in three orientations (horizontal _LH_, vertical _HL_, and diagonal _HH_) for this image decomposition. Let \(f^{m}\), a sampled 2D signal at a given resolution level \(m\), be denoted as \(LL^{0}\) (Layer 0) in the 2D signal decomposition hierarchy. The coarser approximation \(f^{m-1}\) obtained by the 2D DWT is then denoted as \(LL^{1}\), with one-quarter the size of the original \(f^{m}\); the other three sub-bands \(d_{LH}^{m-1}\), \(d_{HL}^{m-1}\), \(d_{HH}^{m-1}\) are denoted as \(LH^{1}\), \(HL^{1}\) and \(HH^{1}\), generated through two orthogonal 1D DWTs in the order (L, H), (H, L) and (H, H), respectively. Similarly, this 2D signal decomposition process of \(f^{i}\) (\(i=m,m-1,...,m-k+1\)) can be iterated until an appropriate coarser approximation \(LL^{k}\) (i.e., \(f^{m-k}\)) is achieved, as illustrated in Fig. 3. Again, this 2D signal decomposition is reversible. One level of 2D signal reconstruction is as follows:

\[f^{i}=f^{i-1}(+)d_{LH}^{i-1}(+)d_{HL}^{i-1}(+)d_{HH}^{i-1},\;\;i=1,2,...,m. \tag{5}\]

We can rewrite (5) as:

\[LL^{j}=LL^{j+1}(+)LH^{j+1}(+)HL^{j+1}(+)HH^{j+1},\;\;j=0,1,...,m-1. \tag{6}\]

To construct the multiresolution versions of training data, we create one additional intermediate resolution level for each level of 2D decomposition to reduce the information difference between resolutions, as shown in the second row in Fig. 4. This is one significant difference from the multiresolution learning procedure for the 1D signal problem. Hence, the representation of multiresolution training data \(r_{i}^{2D}\) at resolution level \(i\) is constructed as follows:

\[r_{i}^{2D}=\begin{cases}f^{m},&i=1\\ (...((LL^{k}(+)0^{k})(+)0^{k-1})...(+)0^{1}),&i=2k+1\\ (...((LL^{k}(+)LH^{k}(+)HL^{k}(+)0_{HH}^{k})(+)0^{k-1})...(+)0^{1}),&i=2k\end{cases} \tag{7}\]

where \(0^{j}=0_{LH}^{j}(+)0_{HL}^{j}(+)0_{HH}^{j}\), and integer \(k>0\). Fig. 4 illustrates the information content of 2D signal training data at five different resolutions. The coarser the resolution version of training data, the less detailed information it contains.
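The construction in (7) can be sketched with PyWavelets' 2D transforms as below; we map PyWavelets' \((cH,cV,cD)\) sub-bands onto \(LH\), \(HL\), \(HH\) and assume power-of-two image sizes, so treat the function as an illustrative reading of (7) rather than the authors' exact code.

```python
import numpy as np
import pywt

def make_2d_versions(img, k, wavelet='haar'):
    """Build the r_i^2D versions of Eq. (7) for one image. For each
    decomposition level, the even version keeps the LH and HL sub-bands
    and zeros only HH; the odd version zeros all details of that level."""
    versions = [np.asarray(img, dtype=float)]            # r_1 = original
    for level in range(1, k + 1):
        coeffs = pywt.wavedec2(img, wavelet, level=level)
        lh, hl, hh = coeffs[1]                           # coarsest details
        z = np.zeros_like(hh)
        zero_rest = [tuple(np.zeros_like(c) for c in d) for d in coeffs[2:]]
        # r_{2*level}: keep LH, HL of this level, zero HH (and finer levels)
        versions.append(pywt.waverec2([coeffs[0], (lh, hl, z)] + zero_rest,
                                      wavelet))
        # r_{2*level+1}: zero all details of this level as well
        versions.append(pywt.waverec2([coeffs[0], (z, z, z)] + zero_rest,
                                      wavelet))
    return versions

r = make_2d_versions(np.random.randn(32, 32), k=2)       # r_1 .. r_5
```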
Fig. 5 gives an example of four different resolution training data versions constructed from (7).

Fig. 2: Illustration of the information content of 1D signal training data at five different resolution levels, which can be organized into five information layers. The top layer (Layer 0) corresponds to the original signal at the finest resolution level \(m\), whereas the bottom layer (Layer 4) indicates the information content contained in the generated training data at the coarsest resolution level \(m-4\). The coarser the resolution, the less detailed information it contains.

Fig. 3: Illustration of three-level wavelet decomposition of 2D signal \(f^{m}\) (\(LL^{0}\)) from the given resolution level \(m\).

### _Multiresolution Learning Process_

All the multiresolution training data versions are of the same dimension but associated with different detail levels of the given signal. For 1D signals, each level of wavelet decomposition creates one new version of coarser resolution training data, while for 2D signals, each level of wavelet decomposition creates two new versions of coarser resolution training data. As the version index \(i\) of training data \(r_{i}/r_{i}^{2D}\) increases, the \(r_{i}/r_{i}^{2D}\) contain fewer details, which makes it possible to start model training with a significantly simplified signal version even though the original signal could be extremely complex. Furthermore, one has the flexibility to select an appropriate wavelet transform and basis for signal decomposition and the total number of multiresolution levels \(k\) in multiresolution learning for a given task. After each learning phase in multiresolution learning, the resulting intermediate weights of the DNN model are saved and used as the initial weights of the model in the next learning phase of this dynamic training process, until the completion of the final learning phase with the original finest resolution training data \(r_{1}/r_{1}^{2D}\) (\(f^{m}\)).

## III Experiments and Analyses

We conduct thorough experiments with three open audio/image datasets, FSDD [20], ESC-10 [21] and CIFAR-10 [22], to systematically study and compare multiresolution learning with traditional single resolution learning on deep CNN models. In particular, the problem of speech recognition is cast, respectively, as 1D signal classification on the FSDD dataset and 2D image classification on the ESC-10 dataset, to illustrate our extended multiresolution learning. The following subsections describe the datasets used, our experiment setup, and the results and evaluation.

### _Datasets and Preprocessing_

The FSDD [20] is a simple audio/speech dataset consisting of sample recordings of spoken digits in wav files at 8kHz. FSDD is an open dataset, which means it will grow over time as new data are contributed. In this work, the version used consists of six speakers, 3,000 recordings (50 of each digit per speaker), and English pronunciations. Even though the recordings are trimmed so that they have near minimal silence at the beginning and end, there are still a lot of zero values in the raw waveforms of the recordings. To apply DNN models to speech recognition, some preprocessing steps are necessary for the raw speech samples: cropping, time scaling, and amplitude scaling. First, in the cropping step, all contiguous zero values at the beginning and the end of each recording are removed, to preserve the significant signal part. However, this means the recordings can have different lengths after cropping.
Second, in the time scaling step, a fixed length \(L\), a number near the mean length of all sample recordings, is chosen as the dimension of the input layer of the DNN models; each sample recording of variable length is then either extended or contracted to the same length \(L\). Third, in the amplitude scaling step, the amplitude of each recording is normalized to the range [-1,1]. Fig. 6 shows six examples of the three preprocessing steps. The leftmost column shows the original signals before processing. The middle column shows the signals after cropping all the zero values at the beginning and end. The rightmost column gives the final preprocessing results after scaling the time steps and amplitude. Scaling time helps obtain a unified signal size matching the model input size, while scaling amplitude reduces the influence of different loudness levels, since loudness is not our focus.

Fig. 6: Illustration of preprocessing steps. The left-most column shows the original signals. The middle column shows the obtained signals after cropping all the zero values at the beginning and end of recordings. The rightmost column shows the results after both scaling time and scaling amplitude.
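The three preprocessing steps can be condensed into a few lines of numpy, as sketched below; the paper does not specify the resampling method, so linear interpolation is used here as one simple choice.

```python
import numpy as np

def preprocess(wave, L=16000):
    """Crop leading/trailing zeros, rescale time to a fixed length L, and
    normalize amplitude to [-1, 1]. L = 16000 matches the model input size
    used for FSDD; the interpolation choice is ours."""
    w = np.trim_zeros(np.asarray(wave, dtype=float))        # 1) cropping
    t_old = np.linspace(0.0, 1.0, num=len(w))
    t_new = np.linspace(0.0, 1.0, num=L)
    w = np.interp(t_new, t_old, w)                          # 2) time scaling
    return w / np.max(np.abs(w))                            # 3) amplitude scaling
```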
The ESC-10 dataset [21] contains sounds recorded in numerous outdoor and indoor environments. Each sound sample recording is of 5 second duration. The sampling rate of each recording is 44100 Hz. The dataset consists of a total of 400 recordings, which are divided into 10 major categories: sneezing, dog barking, clock ticking, crying baby, crowing rooster, rain, sea waves, fire crackling, helicopter, and chainsaw. Each category contains 40 samples. The dataset comes with 5 folds for cross-validation. Fig. 4: Illustration of the information content of 2D signal training data at five different resolution levels, indicated by arrows from the coarsest resolution to the finest resolution. The coarser the resolution, the less detailed information it contains. To apply multiresolution learning on the 2D CNN for speech recognition, log-scaled mel-spectrograms (i.e., images) were first extracted from all sound recordings with a window size of 1024, hop length of 512, and 60 mel-bands. Fig. 7 illustrates the generated spectrograms of two examples. Some environmental sound clips have periodic features. For example, we can hear a bark or a tick several times in a single sample recording. Also, considering the very limited number of samples available for training, the spectrograms were split into 50% overlapping segments of 41 frames (short variant, segments of approx. 950 ms) [21], but without discarding silent segments. One example is shown in Fig. 8. The top image shows the original spectrogram extracted from a clip (dog bark), while the second, third, and fourth images illustrate the first three short segments split from the original spectrogram, where the overlaps of about 50% with neighboring short segments can be clearly seen. Fig. 6: Illustration of preprocessing steps. The leftmost column shows the original signals. The middle column shows the obtained signals after cropping all the zero values at the beginning and end of recordings. The rightmost column shows the results after both scaling time and scaling amplitude. Fig. 7: Spectrograms extracted from a clip (dog bark) and a clip (rain). Fig. 8: Spectrogram clipping. The first image shows the original spectrogram extracted from a clip (dog bark). The second, third, and fourth images illustrate the first 3 short segments split from the original spectrogram. Fig. 9: Examples of a few sample representations of CIFAR-10 at three different resolution levels.
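The log-mel extraction and segmentation described above can be sketched as follows, assuming the librosa library; the parameter values match those stated in the text, and silent segments are kept:

```python
import librosa

def logmel_segments(path, n_fft=1024, hop_length=512, n_mels=60, frames=41):
    """Log-scaled mel-spectrogram split into ~50% overlapping 41-frame segments."""
    y, sr = librosa.load(path, sr=44100)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=n_fft,
                                         hop_length=hop_length, n_mels=n_mels)
    logmel = librosa.power_to_db(mel)               # shape: (60, total_frames)
    step = frames // 2                              # ~50% overlap between segments
    return [logmel[:, s:s + frames]
            for s in range(0, logmel.shape[1] - frames + 1, step)]
```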
### _Experiment Setup_ For CIFAR-10 and ESC-10, we always train models with 100% of the training data. For the FSDD dataset, two groups of experiments are designed. The first group consists of the normal experiments using 100% of the available training data set aside to construct CNN models. In the second group, we evaluate the CNN models' performance using a reduced amount of training data on FSDD. To do so, instead of using 100% of the training data, a much smaller portion of the training data is applied in model learning. Validation and testing data are kept the same as in the normal experiments. For example, for FSDD, 2,000 records of the total 3,000 records are used as training data in the first group of normal experiments, whereas in the second group (i.e., reduced training data) of experiments, only 20% of those 2,000 records are applied as training data for constructing CNNs. DNN architecture and setup for FSDD. With the FSDD dataset, sound waveforms are directly taken as input to a deep CNN model. As reported in the recent work on SampleCNN [23], a CNN model that takes raw waveforms as input can achieve classification performance on a par with models using log-mel spectrograms as input, provided that a very small filter size (such as two or three) is used in all convolutional layers. The CNN architecture given in SampleCNN is adopted in our experiments with FSDD. Table I illustrates the CNN architecture and its number of parameters for each layer. The portion of training/validation/testing data is 64%, 16% and 20%, respectively, for the group of normal experiments. In our multiresolution learning for DNN models on both the FSDD and ESC-10 datasets, the Haar wavelet transform is adopted to build the multiresolution training data. Three groups of multiresolution learning DNN models are constructed on FSDD with four, five and six levels of multiresolution learning processes separately. To make sure the DNN models get sufficient training and at the same time prevent overfitting, 500 total epochs and an early stopping scheme are applied. To have fair comparisons of multiresolution learning (ML) versus traditional learning (TL), for experiments with ML, the maximum epoch number for each individual resolution training phase is divided evenly from the total 500 epochs. The input size of the data is (16000, 1). The stochastic gradient descent (SGD) optimizer is adopted with a learning rate of 0.02, momentum of 0.2, and batch size of 23. The weight initializer is set as a normal distribution with a standard deviation of 0.02 and a mean of 0. In addition, a 0.001 L2-norm weight regularizer is applied. The ReLU (Rectified Linear Unit) activation function is used for each convolutional layer and dense layer except the output layer, which uses the softmax function. It should be noted that a different slope value of the ReLU function is used for each resolution level \(i\) of the training data, as shown in (8). The general rule is that the finer the resolution of the training data, the smaller the slope value [14]; in the finest resolution learning phase, the slope value is 1 (i.e., the normal ReLU). The normal ReLU is also used for TL. \[slope=0.85+0.15i \tag{8}\]
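Putting this schedule together, a Keras-style sketch of one multiresolution learning run follows; `build_model`, `fit`, and the weight get/set calls are hypothetical stand-ins for the actual framework's API, and `coarse_version` refers to the data-construction sketch given earlier:

```python
import numpy as np

def relu_slope(i):
    """Slope of the scaled ReLU from (8); the finest level i = 1 gives slope 1."""
    return 0.85 + 0.15 * i

def multiresolution_learning(build_model, X, y, k=4, total_epochs=500):
    """Train from the coarsest version r_k down to the finest r_1, reusing weights."""
    weights = None
    for i in range(k, 0, -1):                     # i = k (coarsest), ..., 1 (finest)
        Xi = np.array([coarse_version(x, i - 1) for x in X])
        model = build_model(slope=relu_slope(i))  # hypothetical model constructor
        if weights is not None:
            model.set_weights(weights)            # initialize from the previous phase
        model.fit(Xi, y, epochs=total_epochs // k)
        weights = model.get_weights()
    return model
```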
DNN architecture and setup for ESC-10. We design our new CNN model architecture for ESC-10 but adopt the same segmentation and voting scheme as in Piczak [24]. Our CNN model contains 13 layers in total, as shown in Table II. The first convolutional ReLU layer consists of 80 filters of rectangular shape (3\(\times\)3 size). A max-pooling layer is applied after every 2 convolutional layers, with a default pool shape of 2\(\times\)2 and stride of 1\(\times\)1. The second to the sixth convolutional ReLU layers consist of 80 filters of rectangular shape (2\(\times\)2 size). Further processing is applied through 3 dense (fully connected hidden) layers of 1000 ReLUs, 500 ReLUs and 500 ReLUs, respectively, with a softmax output layer. One dropout layer with dropout rate 0.5 is added after the third dense layer. With the input size (60, 41, 1) for ESC-10, training is performed using SGD with a learning rate of 0.02, momentum of 0.2, and batch size of 64. As in the setup for the FSDD experiments, 500 total epochs and an early stopping scheme are applied. Again, for experiments with multiresolution learning, the maximum epoch number for each individual resolution training phase is obtained by dividing the total 500 epochs evenly. Also, a 0.001 L2-norm weight regularizer is applied for two convolutional layers and the dense layers. Different slope values of the ReLU function are also applied following (8) for the different resolution levels \(i\) of the training data. The model is evaluated in a 5-fold cross-validation regime with a single fold used as an intermittent validation set. That is, the training/validation/test data split ratio is 0.6/0.2/0.2 in the group of normal experiments. This cross-validation scheme leads to 20 different combinations of model construction and testing. Since each clip is segmented into 20 short pieces, the absolute number of training/validation/test short segments is 4800/1600/1600, respectively. Final predictions for clips are generated using a probability-voting scheme. DNN architecture and setup for CIFAR-10. We use Wide Residual Networks [25] as our main network. Experiments are implemented on the popular variant WRN-28-10, which has a depth of 28 and a width multiplier of 10, containing 36.5M parameters. An SGD optimizer with Nesterov momentum and a global weight decay of 5\(\times\)10\({}^{-4}\) is used. The 200 training epochs are divided into 4 phases (60, 60, 40, 40 epochs, respectively) to apply different learning rates (0.1, 0.02, 0.004, 0.0008), while the batch size of 128 is kept the same. There is no validation set for training, and data augmentation is not used. Instead, we evaluate top-1 error rates based on 5 runs with different seeds. While training ML2 models, \(\mathbf{r_{2}^{2D}}\) is the input in phase 1 and phase 2, while \(\mathbf{r_{1}^{2D}}\) is the input in phase 3 and phase 4. When we train ML3 models, \(\mathbf{r_{3}^{2D}}\) is the input in phase 1, \(\mathbf{r_{2}^{2D}}\) is the input in phase 2, while \(\mathbf{r_{1}^{2D}}\) is the input in phase 3 and phase 4. ### _Results and Evaluation_ With FSDD. The deep CNN models constructed by traditional single resolution learning are evaluated against those constructed by multiresolution learning, using 15 different seeds for the generation of the initial random weights of the CNN models. To evaluate the noise robustness of the trained deep CNN models on FSDD, random noise is then added to the test data. The noise set is generated using a normal probability distribution with zero mean and a scale of 0.75, with a 5% noise density over the original sample length of 16000 (a sketch of this noise model is given below). In the second group of experiments with small training data, we construct new CNN models by reducing the training data to only 20% of the original training data set. The evaluation results are shown in Table III, in which the accuracy for each model category is the average recognition accuracy over the 15 constructed models using arbitrary seeds for the models' random weight initialization.
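As referenced above, our reading of this test-time noise model can be sketched as follows; the per-position scale and the 5% density follow the description in the text, while the seed handling is illustrative:

```python
import numpy as np

def add_test_noise(x, density=0.05, scale=0.75, seed=0):
    """Add zero-mean Gaussian noise (scale 0.75) at a random 5% of the positions."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(x), size=int(density * len(x)), replace=False)
    noisy = x.copy()
    noisy[idx] += rng.normal(0.0, scale, size=len(idx))
    return noisy
```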
In Table III and Fig. 10, as we can see, ML models always outperform TL models no matter how many total levels of multiresolution learning phases are employed in the learning process. To evaluate these constructed DNN models' performance on noisy testing data, random noise is added to the test data. The performance of TL models degrades much more: the improvement ratios of ML models (with either 4, 5, or 6 levels of the multiresolution learning process) over TL models are all over 10%, which is significantly higher than that on clean test data. This indicates that CNN models constructed by traditional learning are not as robust as CNN models constructed by multiresolution learning with respect to noise. When the amount of training data is reduced in the second group of experiments, the accuracy of TL models decreases more than that of ML models, indicating that CNN models constructed by TL are more vulnerable compared to CNN models constructed by ML in the small-training-data setting. The highest improvement over TL models, 22.69%, is obtained by ML models in the category of 4 levels of multiresolution learning. The last row in Table III shows that, when reduced training data size and noise attack are combined, ML models demonstrate even more substantial performance improvements over TL models, illustrating the benefits of multiresolution learning for CNN models in robustness enhancement with respect to noise and/or small training data. With ESC-10. As described in the experiment setup, the model is evaluated in a 5-fold cross-validation regime with a single fold used as an intermittent validation set; since each clip is segmented into 20 short pieces, the absolute number of training/validation/test short segments is 4800/1600/1600, respectively, and final predictions for clips are generated using a probability-voting scheme. Two sets of experiments are implemented, with unnormalized training data and normalized training data, respectively. We use Deepfool [8], an effective tool, to systematically evaluate the adversarial robustness of the CNN models on ESC-10. Deepfool is an algorithm to efficiently compute the minimum perturbations that are needed to fool DNN models on image classification tasks, and to reliably quantify the robustness of these DNN classifiers using the robustness metric \(\rho\), where \(\rho\) of a classifier \(M\) is defined as follows [9]: \[\hat{\rho}_{adv}(M)=\frac{1}{|\mathcal{D}|}\sum_{\mathbf{x}\in\mathcal{D}}\frac{\|\mathbf{\hat{p}}(\mathbf{x})\|_{2}}{\|\mathbf{x}\|_{2}} \tag{9}\] where \(\mathbf{\hat{p}}(\mathbf{x})\) is the estimated minimal perturbation obtained using Deepfool, and \(\mathcal{D}\) denotes the test set. Basically, Deepfool is applied to the test data for each CNN model to generate adversarial perturbations. An average robustness value can then be computed over the generated perturbations and the original test data. The Adversarial Robustness Toolbox [25] is used to implement Deepfool.
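Once Deepfool's minimal perturbations \(\hat{\mathbf{p}}(\mathbf{x})\) have been computed (e.g., via the Adversarial Robustness Toolbox), the metric (9) is a simple average of per-sample norm ratios; a minimal sketch:

```python
import numpy as np

def deepfool_robustness(perturbations, images):
    """Average robustness rho-hat of (9): mean of ||p(x)||_2 / ||x||_2 over the test set."""
    return float(np.mean([np.linalg.norm(p) / np.linalg.norm(x)
                          for p, x in zip(perturbations, images)]))
```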
With ESC-10 (unnormalized training data). As shown in Table IV, under the normal accuracy experiments, ML models of 2 and 3 levels perform only slightly better in the probability voting (PV) setting compared to TL models. In addition, lower average test accuracy is obtained with ML models of 5 levels than with TL models, which shows a degradation of 2.3%. \begin{table} \begin{tabular}{c|c|c|c|c|c|c|c|c} \hline \multirow{2}{*}{TL} & \multicolumn{8}{c}{ML} \\ \cline{2-9} & \multicolumn{2}{c|}{2 Levels} & \multicolumn{2}{c|}{3 Levels} & \multicolumn{2}{c|}{4 Levels} & \multicolumn{2}{c}{5 Levels} \\ \hline \(\rho\) & \(\rho\) & Imp. & \(\rho\) & Imp. & \(\rho\) & Imp. & \(\rho\) & Imp. \\ \hline 0.277 & 0.297 & **7.22\%** & 0.295 & **6.50\%** & 0.303 & **9.39\%** & 0.296 & **6.86\%** \\ \hline \end{tabular} \end{table} TABLE V: Average Deepfool robustness value \(\rho\) of ML (2, 3, 4, 5 levels) and TL under each group of experiments on ESC-10 with unnormalized training data Fig. 10: Accuracy result comparison of traditional learning (single resolution) versus 4, 5, and 6 levels of multiresolution learning on FSDD. The mean accuracy for each category is given by a triangle. (a) 100% training data; (b) 100% training data + noise; (c) 20% training data; (d) 20% training data + noise. With ESC-10 (normalized training data). Experiments were also conducted with normalized input data, which fall in the range [0,1] based on the original spectrogram values, to explore a new robustness attack tool, AutoAttack, in addition to Deepfool. AutoAttack consists of four attacks: APGD-CE (auto projected gradient descent with cross-entropy loss), APGD-DLR (auto projected gradient descent with difference of logits ratio loss), FAB (Fast Adaptive Boundary Attack), and Square Attack. APGD is a white-box attack aiming at any adversarial example within an Lp-ball, FAB minimizes the norm of the perturbation necessary to achieve a misclassification, while Square Attack is a score-based black-box attack for norm-bounded perturbations which uses random search and does not exploit any gradient approximation [1]. In our experiments, an attack batch size of 1000, an epsilon of 8/255, and the L2 norm are applied. One special requirement of AutoAttack is that the softmax function in the last dense layer should be removed, which results in different sets of standard test accuracy, as shown in Table VI and Table VIII (before attack). As shown in Table VI, ML (4) and ML (5) models perform slightly better compared to TL models. In addition, lower average test accuracy is obtained with ML (2) models than with TL models, which shows a small degradation of 0.54%. The overall performance shows that normalizing the data does not help improve classification accuracy. We further examine the average Deepfool robustness value \(\rho\) for each TL and ML model on normalized training data. As illustrated in Table VII, TL models get the lowest average \(\rho\) value, 2.782, compared to all the different ML models. In general, a higher robustness value \(\rho\) corresponds to ML with more resolution levels. Specifically, the average \(\rho\) value of the ML (5) models is about 79.5% higher than the average \(\rho\) value of the TL models, and ML (2) models achieve a 35.0% higher \(\rho\) value compared to TL models. Fig. 11: Accuracy (left plot) and Deepfool robustness (right plot) result comparison of traditional learning (W1) versus 2, 3, 4 and 5 levels of multiresolution learning (W2, W3, W4, W5) on ESC-10 with unnormalized training data. Fig. 12: Accuracy (left plot) and Deepfool robustness (right plot) result comparison of traditional learning (W1) versus 2, 3, 4 and 5 levels of multiresolution learning (W2, W3, W4, W5) on ESC-10 with normalized training data. In general, the robustness values \(\rho\) obtained in the experiments on normalized training data are significantly higher than those on unnormalized training data. The results also show that the CNN models constructed by multiresolution learning demonstrate significant improvement in robustness against adversarial examples compared to TL models with normalized training data. Fig. 12 shows the results from Table VI and Table VII. Table VIII and Fig. 13 illustrate the AutoAttack experiment results. As we can see, removing the softmax function causes some general performance degradation in standard test accuracy compared to the results in Table IV. In other words, this illustrates the effectiveness of the softmax function. Different from Table IV, all ML models outperform TL models in the AutoAttack standard test accuracy setting (i.e., before AutoAttack). ML (4) models achieve the best improvement, 4.1%, in comparison to TL models. After applying AutoAttack, ML (4) models still achieve the best performance compared to all other models. Hence, we can draw the same overall trend as in the Deepfool experiments: the ML models with more resolution levels tend to achieve better robustness performance. With CIFAR-10. The average classification accuracy results on CIFAR-10 with WRN-28-10 models are shown in Table IX. The results indicate that TL models slightly outperform ML2 models, while ML3 models show a degradation of 1.31% compared to TL models. To evaluate the robustness of ML versus TL models, we also apply Deepfool [8] as described in Section III.C. The average robustness values are reported in Table X. Under the same setting, a higher robustness value indicates better resistance to Deepfool perturbations. We find that both ML2 and ML3 models outperform TL models significantly, with improvement ratios of 14.06% and 20.48%, respectively. This trend can also be seen in the experiments on the ESC-10 dataset. The multiresolution learning models with more resolution levels, in general, achieve better Deepfool robustness. To thoroughly investigate the robustness improvement by multiresolution learning, another black-box attack tool, the one-pixel attack [26], is further adopted in our evaluation. The one-pixel attack is a method for generating one-pixel adversarial perturbations based on differential evolution (DE). We apply the untargeted attack to our models. 1000 samples out of all the successfully predicted images are chosen randomly to test robustness. Once a perturbed sample is predicted as a wrong class, we count a successful attack. The overall average attack success rate for the models is obtained with 5 runs of different seeds. The results are presented in Table XI. A lower attack success rate always implies that the trained model is more robust. From Table XI, it can be seen that TL models have the highest attack success rate, 26.0%, within 1000 attacked samples. At the same time, ML2 models achieve the lowest attack success rate, 24.1%, a reduction of 7.3% in attack success rate compared to TL models. ML3 models also obtain a similar result as ML2 models. To sum up, both the Deepfool attack and one-pixel attack results consistently indicate that ML models are more robust than TL models under the same setting. Fig. 13: Accuracy boxplot before applying AutoAttack (left plot) and accuracy boxplot after applying AutoAttack (right plot): result comparison of traditional learning (W1) versus 2, 3, 4 and 5 levels of multiresolution learning (W2, W3, W4, W5) on ESC-10 with normalized input data.
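A rough sketch of such a one-pixel search, using SciPy's differential evolution; the classifier interface `predict_fn` is a hypothetical placeholder, and the attack hyperparameters are illustrative rather than those of [26]:

```python
import numpy as np
from scipy.optimize import differential_evolution

def one_pixel_attack(predict_fn, img, label):
    """Untargeted one-pixel attack: search (row, col, r, g, b) by differential
    evolution so the perturbed image is no longer predicted as `label`.
    `predict_fn` maps an image in [0, 1]^(h, w, 3) to class probabilities."""
    h, w, _ = img.shape

    def objective(p):
        adv = img.copy()
        adv[int(p[0]), int(p[1])] = p[2:5]      # overwrite a single RGB pixel
        return predict_fn(adv)[label]           # minimize the true-class probability

    bounds = [(0, h - 1), (0, w - 1), (0, 1), (0, 1), (0, 1)]
    res = differential_evolution(objective, bounds, maxiter=100, popsize=10, seed=0)
    adv = img.copy()
    adv[int(res.x[0]), int(res.x[1])] = res.x[2:5]
    return adv, int(np.argmax(predict_fn(adv))) != label  # (image, attack success)
```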
\begin{table} \begin{tabular}{c|c|c|c|c|c} \hline \multirow{2}{*}{Training data} & \multirow{2}{*}{TL} & \multicolumn{4}{c}{ML} \\ \cline{3-6} & & \multicolumn{2}{c|}{2 Levels} & \multicolumn{2}{c}{3 Levels} \\ \cline{3-6} & Acc. & Acc. & Imp. & Acc. & Imp. \\ \hline 100\% & 94.86\% & 94.78\% & **-0.08\%** & 93.62\% & **-1.31\%** \\ \hline \end{tabular} \end{table} TABLE X: Average Deepfool robustness value \(\rho\) and relative improvement ratio of ML models (2, 3 levels) over TL models on CIFAR-10 dataset \begin{table} \begin{tabular}{c|c|c|c|c|c} \hline \multirow{2}{*}{Training data} & \multirow{2}{*}{TL} & \multicolumn{4}{c}{ML} \\ \cline{3-6} & & \multicolumn{2}{c|}{2 Levels} & \multicolumn{2}{c}{3 Levels} \\ \cline{3-6} & AS rate & AS rate & Imp. & AS rate & Imp. \\ \hline 100\% & 26.00\% & 24.10\% & **7.30\%** & 24.40\% & **6.15\%** \\ \hline \end{tabular} \end{table} TABLE XI: Average one-pixel attack success rate (AS rate) and relative improvement ratio of ML models (2, 3 levels) over TL models on CIFAR-10 dataset \begin{table} \begin{tabular}{c|c|c|c|c|c} \hline \multirow{2}{*}{Coefficient} & \multirow{2}{*}{TL} & \multicolumn{4}{c}{ML} \\ \cline{3-6} & & \multicolumn{2}{c|}{2 Levels} & \multicolumn{2}{c}{3 Levels} \\ \cline{3-6} & AS rate & AS rate & Imp. & AS rate & Imp. \\ \hline LH & 7.80\% & 6.41\% & **17.83\%** & 3.99\% & **48.85\%** \\ \hline HL & 9.43\% & 7.81\% & **17.18\%** & 4.34\% & **53.98\%** \\ \hline HH & 3.41\% & 2.48\% & **27.28\%** & 0.81\% & **76.25\%** \\ \hline Total & 20.63\% & 16.70\% & **19.05\%** & 9.14\% & **55.70\%** \\ \hline \end{tabular} \end{table} TABLE XII: Average multi-resolution robustness attack success rate (AS rate) on different coefficients and relative improvement ratio of ML models (2, 3 levels) over TL models on CIFAR-10 dataset Fig. 14: Multi-resolution robustness attack examples. For each original image, one detail coefficient subband among LH, HL or HH is replaced by that of another random image of a different category to generate adversarial images. Last, we propose our multi-resolution robustness attack method. To implement this attack, we apply the 2D Haar wavelet transform to correctly predicted test images and obtain the approximation (LL), horizontal detail (LH), vertical detail (HL) and diagonal detail (HH) coefficients, respectively. Then, one component among LH, HL and HH of each image is randomly replaced with the same detail component from another image of a different category to generate adversarial images. The comparison between the generated adversarial images and the original images is shown in Fig. 14. As we can see, replacing a detail coefficient subband does not affect how humans identify the different categories. Through 5 runs of the attack, we obtain the robustness results illustrated in Table XII. Overall, ML2 (AS rate 16.7%) and ML3 (AS rate 9.14%) both significantly outperform TL (AS rate 20.63%) in identifying the perturbed images. The results show that replacing the HL or LH coefficients makes it harder for models to identify the perturbed images, while replacing the HH coefficients is less harmful. This makes sense since the HH coefficients, in general, contain detailed information of the images which is less critical. In summary, TL models show the least robustness when test images are perturbed by the multi-resolution robustness attack, compared to ML2 and ML3 models.
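A minimal sketch of this attack, assuming PyWavelets and single-channel images (color images would be processed per channel); the LH/HL/HH naming follows the convention above, and `donor` stands for the random image of a different category:

```python
import pywt

def wavelet_swap_attack(img, donor, band="HL"):
    """Replace one detail subband of `img` with the donor's to build an adversarial image."""
    LL, (LH, HL, HH) = pywt.dwt2(img, "haar")      # one-level 2D Haar decomposition
    _, (dLH, dHL, dHH) = pywt.dwt2(donor, "haar")  # donor image of a different category
    details = {"LH": (dLH, HL, HH), "HL": (LH, dHL, HH), "HH": (LH, HL, dHH)}
    return pywt.idwt2((LL, details[band]), "haar") # reconstruct the perturbed image
```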
## IV Discussions ### _Wavelet CNNs._ Several works on wavelet-based CNNs have been reported to incorporate the DWT in CNNs, and they can basically be classified into two categories. The first category uses wavelet decomposition as a _preprocessing_ of the data to obtain a fixed level of lower resolution training data, which is then employed to construct a CNN model using the traditional learning process. For example, Wavelet-SRNet [27] is such a wavelet-based CNN for face super resolution. Zhao [28] focuses on an ECG (1D signal) classification task using a deep CNN with the wavelet transform, where, again, the wavelet transform is only used as a data preprocessing tool to obtain the filtered ECG signal. The second category includes schemes that embed the DWT into CNN models, mainly for image restoration tasks (e.g., deep convolutional framelets [29], MWCNN [30]). Our approach of multiresolution learning for deep CNNs is different from those wavelet-based CNN schemes in the literature in the following key aspects: First, the overall training of the CNN model is a progressive process involving multiple different resolution training data from coarser versions to finer versions. In contrast, when the DWT is only used for data preprocessing, the CNN training is still traditional learning, a repeated process over single resolution training data. Second, our approach is very flexible, where users can easily select a different wavelet basis and a total number of levels of wavelet decomposition for a given task. In contrast, embedding the DWT into a CNN means that the wavelet transformation and basis used, as well as the total levels of decomposition, are 'hard coded' in the model, making any change difficult and error prone. ### _Accuracy performance comparison with other schemes on ESC-10._ ESC-10 is a popular sound classification dataset which has been used by many researchers. The performance of our proposed multiresolution learning CNN is compared with various other existing schemes on ESC-10 using traditional learning, as shown in Table XIII, where our result of 5-level multiresolution learning is used. While our CNN model adopts the same segmentation and voting scheme as in the baseline work of PiczakCNN [24], our network scale is greatly reduced. PiczakCNN has about 25M trainable parameters, whereas our CNN only has about 2M parameters. That means that our CNN model drastically saves computational resources and training time. At the same time, a better recognition result is achieved by our approach with far fewer model parameters. We note that we do not apply data augmentation (DA) in our study, while \(\times\)10 data augmentation for PiczakCNN [24] and \(\times\)5 data augmentation for SB-CNN [31] were used, respectively. As we can see, our multiresolution learning CNN under the short segment/probability voting setting (SP) outperforms all the other schemes/models listed in Table XIII, even though our ML CNN only used about 9.1% and 16.7% of the amount of training data used by the augmented PiczakCNN and SB-CNN, respectively. ### _Accuracy performance comparison with other schemes on CIFAR-10._ In recent years, the state-of-the-art performance on CIFAR-10 has continued to improve, with new models and techniques that achieve increasingly lower error rates. In this work, we do not apply complex data augmentation methods except crop/flip on the training data, to focus on the effect of multiresolution learning. Table XIV presents the comparison of some classification results for this setting. As we can see, our multiresolution learning approach has achieved higher accuracy than the other schemes except for the work of [37], which improved SGD by ensembling a subset of the weights in late stages of learning to augment the standard models. ### _Tradeoff between accuracy and robustness_ Recent studies show that robustness may be at odds with accuracy in the traditional learning setting (e.g., [15, 39]). Namely, getting robust models may lead to a reduction of standard accuracy.
It is also stated in [15] that although adversarial training is effective, it comes with certain drawbacks, such as an increase in training time and the potential need for more training data. Multiresolution deep learning, however, can overcome these drawbacks easily because it demands only the same number of training epochs and the same dataset as traditional learning, as demonstrated in our experiments. Both [15] and [39] claim that adversarial vulnerability is not necessarily tied to the standard training framework but is rather a property of the dataset. This may need further study and evidence, since under robustness attacks different training frameworks (e.g., traditional learning vs. multiresolution learning) can bring totally distinct results. A hypothesis raised in [15, 39] is that the biggest price of being adversarially robust is the reduction of standard accuracy. However, our results seem to suggest that multiresolution learning can significantly improve robustness with little or no tradeoff in standard accuracy. ## V Conclusions and Future Work In this paper, we present a new multiresolution deep learning approach for CNNs. To the best of our knowledge, this work represents the first study of its kind in exploring multiresolution learning in deep learning technology. We showed that multiresolution learning can significantly improve the robustness of deep CNN models for both 1D and 2D signal classification problems, even without applying data augmentation. We demonstrated this robustness improvement in terms of random noise, reduced training data, and, in particular, well-designed adversarial attacks using multiple systematic tools including Deepfool, AutoAttack, and the one-pixel attack. In addition, we have also proposed our systematic multi-resolution attack method for the evaluation of robustness. Our multiresolution deep learning approach is very general and can be readily applied to other DNN models. Our multi-resolution attack method can also be applied in general for robustness attacks and adversarial training. Contrary to the recent observation in the traditional single resolution learning setting, our results seem to suggest that it may not be necessary to trade standard accuracy for robustness with multiresolution learning, a definitely interesting and important topic for future research. We also plan to further investigate our approach on large-scale problems including ImageNet, and to explore various wavelet bases beyond the Haar wavelet in multiresolution learning.
2309.12128
Convergence and Recovery Guarantees of Unsupervised Neural Networks for Inverse Problems
Neural networks have become a prominent approach to solve inverse problems in recent years. While a plethora of such methods was developed to solve inverse problems empirically, we are still lacking clear theoretical guarantees for these methods. On the other hand, many works proved convergence to optimal solutions of neural networks in a more general setting using overparametrization as a way to control the Neural Tangent Kernel. In this work we investigate how to bridge these two worlds and we provide deterministic convergence and recovery guarantees for the class of unsupervised feedforward multilayer neural networks trained to solve inverse problems. We also derive overparametrization bounds under which a two-layers Deep Inverse Prior network with smooth activation function will benefit from our guarantees.
Nathan Buskulic, Jalal Fadili, Yvain Quéau
2023-09-21T14:48:02Z
http://arxiv.org/abs/2309.12128v3
# Convergence and Recovery Guarantees of Unsupervised Neural Networks for Inverse Problems ###### Abstract Neural networks have become a prominent approach to solve inverse problems in recent years. While a plethora of such methods was developed to solve inverse problems empirically, we are still lacking clear theoretical guarantees for these methods. On the other hand, many works proved convergence to optimal solutions of neural networks in a more general setting using overparametrization as a way to control the Neural Tangent Kernel. In this work we investigate how to bridge these two worlds and we provide deterministic convergence and recovery guarantees for the class of unsupervised feedforward multilayer neural networks trained to solve inverse problems. We also derive overparametrization bounds under which a two-layers Deep Inverse Prior network with smooth activation function will benefit from our guarantees. **Keywords: Inverse problems, Deep Image/Inverse Prior, Overparametrization, Gradient flow, Unsupervised learning** ## 1 Introduction ### Problem Statement An inverse problem consists in reliably recovering a signal \(\overline{\mathbf{x}}\in\mathbb{R}^{n}\) from noisy indirect observations \[\mathbf{y}=\mathbf{F}(\overline{\mathbf{x}})+\mathbf{\varepsilon}, \tag{1}\] where \(\mathbf{y}\in\mathbb{R}^{m}\) is the observation, \(\mathbf{F}:\mathbb{R}^{n}\rightarrow\mathbb{R}^{m}\) is a forward operator, and \(\varepsilon\) stands for some additive noise. We will denote by \(\overline{\mathbf{y}}=\mathbf{F}(\overline{\mathbf{x}})\) the ideal observations, i.e., those obtained in the absence of noise. In recent years, the use of sophisticated machine learning algorithms, including deep learning, to solve inverse problems has gained a lot of momentum and provides promising results; see e.g., the reviews [1; 2]. The general framework of these methods is to optimize a generator network \(\mathbf{g}:(\mathbf{u},\boldsymbol{\theta})\in\mathbb{R}^{d}\times\mathbb{R}^{p}\mapsto\mathbf{x}\in\mathbb{R}^{n}\), with some activation function \(\phi\), to transform a given input \(\mathbf{u}\in\mathbb{R}^{d}\) into a vector \(\mathbf{x}\in\mathbb{R}^{n}\). The parameters \(\boldsymbol{\theta}\) of the network are optimized via (possibly stochastic) gradient descent to minimize a loss function \(\mathcal{L}_{\mathbf{y}}:\mathbb{R}^{m}\rightarrow\mathbb{R}_{+},\mathbf{y}(t)\mapsto\mathcal{L}_{\mathbf{y}}(\mathbf{y}(t))\) which measures the discrepancy between the observation \(\mathbf{y}\) and the solution \(\mathbf{y}(t)=\mathbf{F}(\mathbf{g}(\mathbf{u},\boldsymbol{\theta}(t)))\) generated by the network at time \(t\geq 0\). Theoretical understanding of recovery and convergence guarantees for deep learning-based methods is of paramount importance to make their routine use in critical applications reliable [3]. While there is a considerable amount of work on the understanding of optimization dynamics of neural network training, especially through the lens of overparametrization, recovery guarantees when using neural networks for inverse problems remain elusive. Some attempts have been made in that direction, but they are usually restricted to very specific settings. One kind of result that has been obtained [4; 5; 6] is convergence towards the optimal points of a regularized problem, typically with a learned regularizer. However, this does not give guarantees about the true sought-after vector.
Another approach is used in Plug-and-Play [7] to show that, under strong assumptions on the pre-trained denoiser, one can prove convergence to the true vector. This work is, however, limited by the constraints on the denoiser, which are not met in many settings. Our aim in this paper is to help close this gap by explaining when gradient descent consistently and provably finds global minima of \(\mathcal{L}\), and how this translates into recovery guarantees for both \(\overline{\mathbf{y}}\) and \(\overline{\mathbf{x}}\), i.e., in both the observation and the signal spaces. For this, we focus on a continuous-time gradient flow applied to \(\mathcal{L}\): \[\begin{cases}\boldsymbol{\dot{\theta}}(t)=-\nabla_{\boldsymbol{\theta}} \mathcal{L}_{\mathbf{y}}(\mathbf{F}(\mathbf{g}(\mathbf{u},\boldsymbol{\theta} (t))))\\ \boldsymbol{\theta}(0)=\boldsymbol{\theta}_{0}.\end{cases} \tag{2}\] This is an idealistic setting which makes the presentation simpler, and it is expected to reflect the behavior of practical and common first-order descent algorithms, as they are known to approximate gradient flows. In this work, our focus is on an unsupervised method known as Deep Image Prior [8], that we also coin Deep Inverse Prior (DIP) as it is not confined to images. A chief advantage of this method is that it does not need any training data, while the latter is mandatory in most supervised deep learning-based methods used in the literature. In the DIP method, \(\mathbf{u}\) is fixed throughout the optimization/training process, usually a realization of a random variable. By removing the need for training data, this method focuses on the generation capabilities of the network trained through gradient descent. In turn, this will allow us to get insight into the effect of network architecture on the reconstruction quality. ### Contributions We deliver a theoretical analysis of gradient flow optimization of neural networks, i.e. (2), in the context of inverse problems and provide various recovery guarantees for general loss functions verifying the Kurdyka-Lojasiewicz (KL) property. We first prove that the trained network with a properly initialized gradient flow will converge to an optimal solution in the observation space with a rate characterized by the desingularizing function appearing in the KL property of the loss function. This result is then converted to a prediction error on \(\overline{\mathbf{y}}\) through an early stopping strategy. More importantly, we present a recovery result in the signal space with an upper bound on the reconstruction error of \(\overline{\mathbf{x}}\). The latter result involves for instance a restricted injectivity condition on the forward operator. We then turn to showing how these results can be applied to the case of a two-layer neural network in the DIP setting where \[\mathbf{g}(\mathbf{u},\boldsymbol{\theta})=\frac{1}{\sqrt{k}}\mathbf{V}\phi( \mathbf{W}\mathbf{u}),\quad\boldsymbol{\theta}\stackrel{{\mathrm{ def}}}{{=}}(\mathbf{V},\mathbf{W}), \tag{3}\] with \(\mathbf{V}\in\mathbb{R}^{n\times k}\), \(\mathbf{W}\in\mathbb{R}^{k\times d}\), and \(\phi\) an element-wise nonlinear activation function. The scaling by \(\sqrt{k}\) will become clearer later. We show that for a proper random initialization \(\mathbf{W}(0)\), \(\mathbf{V}(0)\) and sufficient overparametrization, all our conditions are in force to control the eigenspace of the Jacobian of the network as required to obtain the aforementioned convergence properties. We provide a characterization of the overparametrization needed in terms of \((k,d,n)\) and the conditioning of \(\mathbf{F}\).
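To make the setting concrete, the following minimal numerical sketch instantiates (2) for the two-layer model (3) with \(\phi=\tanh\), a linear forward operator, and the squared loss \(\mathcal{L}_{\mathbf{y}}(\mathbf{v})=\frac{1}{2}\|\mathbf{v}-\mathbf{y}\|^{2}\); the gradient flow is discretized by plain gradient descent (explicit Euler), and all sizes and the step size are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, d, k = 64, 32, 16, 512               # signal, observation, input, hidden widths
A = rng.normal(size=(m, n)) / np.sqrt(m)   # linear forward operator F(x) = A x
x_bar = rng.normal(size=n)                 # unknown signal
y = A @ x_bar + 0.01 * rng.normal(size=m)  # noisy observations, per (1)

u = rng.normal(size=d)                     # fixed random input of the DIP
W = rng.normal(size=(k, d))                # theta = (V, W), random initialization
V = rng.normal(size=(n, k))

lr = 0.1
for t in range(2000):                      # explicit Euler discretization of (2)
    h = np.tanh(W @ u)                     # smooth activation phi = tanh
    x = V @ h / np.sqrt(k)                 # g(u, theta) as in (3)
    r = A.T @ (A @ x - y)                  # gradient of the loss w.r.t. x
    gV = np.outer(r, h) / np.sqrt(k)       # dL/dV
    gW = np.outer((V.T @ r) * (1 - h**2) / np.sqrt(k), u)  # dL/dW (chain rule)
    V -= lr * gV
    W -= lr * gW
print(np.linalg.norm(x - x_bar))           # reconstruction error; early stopping advisable
```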
### Relation to Prior Work _Data-Driven Methods to Solve Inverse Problems_ Data-driven approaches to solve inverse problems come in various forms; see the comprehensive reviews in [1, 2]. The first type trains an end-to-end network to directly map the observations to the signals for a specific problem. While they can provide impressive results, these methods can prove very unstable, as they do not use the physics of the problem, which can be severely ill-posed. To cope with these problems, several hybrid models that mix model- and data-driven algorithms were developed in various ways. One can learn the regularizer of a variational problem [9] or use Plug-and-Play methods [10], for example. Another family of approaches, which takes inspiration from classical iterative optimization algorithms, is based on unrolling (see [11] for a review of these methods). Still, all these methods require an extensive amount of training data, which may not always be available. _Deep Inverse Prior_ The DIP model [8] (and its extensions that mitigate some of its empirical issues [12, 13, 14, 15]) is an unsupervised alternative to the supervised approaches briefly reviewed above. The empirical idea is that the architecture of the network acts as an implicit regularizer and will learn a more meaningful transformation before overfitting to artefacts or noise. With an early stopping strategy, one can hope for the network to generate a vector close to the sought-after signal. However, this remains purely empirical and there is no guarantee that a network trained in such a manner converges in the observation space (and even less in the signal space). The theoretical recovery guarantees of these methods are not well understood [3], and our work aims at reducing this theoretical gap by analyzing the behaviour of such networks in both the observation and the signal space under some overparametrization condition. #### Theory of Overparametrized Networks To construct our analysis, we build upon previous theoretical work on overparametrized networks and their optimization trajectories [16, 17]. The first works that proved convergence to an optimal solution were based on a strong convexity assumption on the loss, which is typically not satisfied when the loss is composed with a neural network. A more recent approach is based on a gradient dominated inequality, from which one can deduce by simple integration an exponential convergence of the gradient flow to a zero-loss solution. This allows us to obtain convergence guarantees for networks trained to minimize a mean square error by gradient flow [18] or its discrete counterpart (i.e., gradient descent with fixed step) [19, 20, 21, 22]. The work that we present here is inspired by these works but it goes far beyond them. Amongst other differences, we are interested in the challenging situation of inverse problems (presence of a forward operator), and we deal with more general loss functions that obey the Kurdyka-Lojasiewicz inequality (e.g., any semi-algebraic function, or even any function definable on an o-minimal structure) [23, 24, 25]. Recently, it has been found that some kernels play a very important role in the analysis of convergence of the gradient flow when used to train neural networks.
In particular, this is the case for the positive semi-definite kernel given by \(\mathcal{J}_{\mathbf{g}}(t)\mathcal{J}_{\mathbf{g}}(t)^{\top}\), where \(\mathcal{J}_{\mathbf{g}}(t)\) is the Jacobian of the network at time \(t\). When all the layers of a network are trained, this kernel is a combination of the _Neural Tangent Kernel_ (NTK) [26] and the Random Features Kernel (RF) [27]. If one decides to fix the last layer of the network, then this amounts to just looking at the NTK, which is what most of the previously cited works do. The goal is then to control the eigenvalues of the kernel to ensure that it stays positive definite during training, which entails convergence to a zero-loss solution at an exponential rate. The control of the eigenvalues of the kernel is done through a random initialization and the overparametrization of the network. Indeed, for a sufficiently wide network, the parameters \(\boldsymbol{\theta}(t)\) will stay near their initialization and they will be well approximated by their linearization (so-called "lazy" regime [18]). The overparametrization bounds that were obtained are mostly for two-layer networks, as the control of deep networks is much more complex. However, even if there are theoretical works on the gradient flow-based optimization of neural networks as reviewed above, a similar analysis that would account for the forward operator, as in inverse problems, remains challenging and open. Our aim is to participate in this endeavour by providing theoretical understanding of recovery guarantees with neural network-based methods. This paper is an extension of our previous one in [28]. There are, however, several distinctive and new results in the present work. For instance, the work [28] only dealt with linear inverse problems while our results here apply to non-linear ones. Moreover, we here provide a much more general analysis under which we obtain convergence guarantees for a wider class of models than just the DIP one and for a general class of loss functions, not just the MSE. More importantly, we now show convergence not only in the observation space but also in the signal space. When particularized to the DIP case, we also provide overparametrization bounds for the case when the linear layer of the network is not fixed, which is also an additional novelty. #### Paper organization The rest of this work is organized as follows. In Section 2 we give the necessary notations and definitions useful for this work. In Section 3 we present our main result with the associated assumptions and proof. In Section 4 we present the overparametrization bound on the DIP model. Finally, in Section 5, we show some numerical experiments that validate our findings, before drawing our conclusions in Section 6. ## 2 Preliminaries ### General Notations For a matrix \(\mathbf{M}\in\mathbb{R}^{a\times b}\) we denote by \(\sigma_{\min}(\mathbf{M})\) and \(\sigma_{\max}(\mathbf{M})\) its smallest and largest non-zero singular values, and by \(\kappa(\mathbf{M})=\frac{\sigma_{\max}(\mathbf{M})}{\sigma_{\min}(\mathbf{M})}\) its condition number. We also denote by \(\langle\cdot,\cdot\rangle\) the Euclidean scalar product, \(\left\|\cdot\right\|\) the associated norm (the dimension is implicit from the context), and \(\left\|\cdot\right\|_{F}\) the Frobenius norm of a matrix. With a slight abuse of notation \(\left\|\cdot\right\|\) will also denote the spectral norm of a matrix. We use \(\mathbf{M}^{i}\) (resp. \(\mathbf{M}_{i}\)) as the \(i\)-th row (resp. column) of \(\mathbf{M}\).
For two vectors \(\mathbf{x},\mathbf{z}\), \([\mathbf{x},\mathbf{z}]=\{(1-\rho)\mathbf{x}+\rho\mathbf{z}:\ \rho\in[0,1]\}\) is the closed segment joining them. We use the notation \(a\gtrsim b\) if there exists a constant \(C>0\) such that \(a\geq Cb\). We also define \(\mathbf{y}(t)=\mathbf{F}(\mathbf{g}(\mathbf{u},\boldsymbol{\theta}(t)))\) and \(\mathbf{x}(t)=\mathbf{g}(\mathbf{u},\boldsymbol{\theta}(t))\) and we recall \(\overline{\mathbf{y}}=\mathbf{F}(\overline{\mathbf{x}})\). The Jacobian of the network is denoted \(\mathcal{J}_{\mathbf{g}}\). \(\mathcal{J}_{\mathbf{g}}(t)\) is a shorthand notation of \(\mathcal{J}_{\mathbf{g}}\) evaluated at \(\boldsymbol{\theta}(t)\). \(\mathcal{J}_{\mathbf{F}}(t)\) is the Jacobian of the forward operator \(\mathbf{F}\) evaluated at \(\mathbf{x}(t)\). The local Lipschitz constant of a mapping on a ball of radius \(R>0\) around a point \(\mathbf{z}\) is denoted \(\operatorname{Lip}_{\mathbb{B}(\mathbf{z},R)}(\cdot)\). We omit \(R\) in the notation when the Lipschitz constant is global. For a function \(f:\mathbb{R}^{n}\to\mathbb{R}\), we use the notation for the sublevel set \([f<c]=\{\mathbf{z}\in\mathbb{R}^{n}:\ f(\mathbf{z})<c\}\) and \([c_{1}<f<c_{2}]=\{\mathbf{z}\in\mathbb{R}^{n}:\ c_{1}<f(\mathbf{z})<c_{2}\}\). Given \(\mathbf{z}\in\mathcal{C}^{0}(]0,+\infty[;\mathbb{R}^{a})\), the set of cluster points of \(\mathbf{z}\) is defined as \[\mathfrak{W}(\mathbf{z}(\cdot))=\left\{\widetilde{\mathbf{z}}\in\mathbb{R}^{ a}:\ \exists(t_{k})_{k\in\mathbb{N}}\to+\infty\ \text{s.t.}\ \lim_{k\to\infty}\mathbf{z}(t_{k})=\widetilde{\mathbf{z}}\right\}.\] For some \(\Theta\subset\mathbb{R}^{p}\), we define \(\Sigma_{\Theta}=\{\mathbf{g}(\mathbf{u},\boldsymbol{\theta}):\ \boldsymbol{\theta}\in\Theta\}\) the set of signals that the network \(\mathbf{g}\) can generate for all \(\boldsymbol{\theta}\) in the parameter set \(\Theta\). \(\Sigma_{\Theta}\) can thus be viewed as a parametric manifold. If \(\Theta\) is closed (resp. compact), so is \(\Sigma_{\Theta}\). We denote \(\operatorname{dist}(\cdot,\Sigma_{\Theta})\) the distance to \(\Sigma_{\Theta}\), which is well defined if \(\Theta\) is closed and non-empty. For a vector \(\mathbf{x}\), \(\mathbf{x}_{\Sigma_{\Theta}}\) is its projection on \(\Sigma_{\Theta}\), i.e. \(\mathbf{x}_{\Sigma_{\Theta}}\in\operatorname{Argmin}_{\mathbf{z}\in\Sigma_{ \Theta}}\left\|\mathbf{x}-\mathbf{z}\right\|\). Observe that \(\mathbf{x}_{\Sigma_{\Theta}}\) always exists but might not be unique. We also define \(T_{\Sigma_{\Theta}}(\mathbf{x})=\overline{\operatorname{conv}}\left(\mathbb{ R}_{+}(\Sigma_{\Theta}-\mathbf{x})\right)\) the tangent cone of \(\Sigma_{\Theta}\) at \(\mathbf{x}\in\Sigma_{\Theta}\). The minimal (conic) singular value of a matrix \(\mathbf{A}\in\mathbb{R}^{m\times n}\) w.r.t. the cone \(T_{\Sigma_{\Theta}}(\mathbf{x})\) is then defined as \[\lambda_{\min}(\mathbf{A};T_{\Sigma_{\Theta}}(\mathbf{x}))=\inf\{\left\| \mathbf{A}\mathbf{z}\right\|/\left\|\mathbf{z}\right\|:\mathbf{z}\in T_{\Sigma_ {\Theta}}(\mathbf{x})\}.\] ### Multilayer Neural Networks Neural networks produce structured parametric families of functions that have been studied and used for almost 70 years, going back to the late 1950s [29]. **Definition 2.1**.: Let \(d,L\in\mathbb{N}\) and \(\phi:\mathbb{R}\to\mathbb{R}\) an activation map which acts componentwise on the entries of a vector.
A fully connected multilayer neural network with input dimension \(d\), \(L\) layers and activation \(\phi\), is a collection of weight matrices \(\big{(}\mathbf{W}^{(l)}\big{)}_{l\in[L]}\) and bias vectors \(\big{(}\mathbf{b}^{(l)}\big{)}_{l\in[L]}\), where \(\mathbf{W}^{(l)}\in\mathbb{R}^{N_{l}\times N_{l-1}}\) and \(\mathbf{b}^{(l)}\in\mathbb{R}^{N_{l}}\), with \(N_{0}=d\), and \(N_{l}\in\mathbb{N}\) is the number of neurons for layer \(l\in[L]\). Let us gather these parameters as \[\boldsymbol{\theta}=\Big{(}(\mathbf{W}^{(1)},\mathbf{b}^{(1)}),\ldots,( \mathbf{W}^{(L)},\mathbf{b}^{(L)})\Big{)}\in\bigtimes_{l=1}^{L}\big{(}\big{(} \mathbb{R}^{N_{l}\times N_{l-1}}\big{)}\times\mathbb{R}^{N_{l}}\big{)}.\] Then, a neural network parametrized by \(\boldsymbol{\theta}\) produces a function \[\mathbf{g}:(\mathbf{u},\boldsymbol{\theta})\in\mathbb{R}^{d}\times\bigtimes_{ l=1}^{L}\big{(}\big{(}\mathbb{R}^{N_{l}\times N_{l-1}}\big{)}\times\mathbb{R}^{N_{l }}\big{)}\mapsto\mathbf{g}(\mathbf{u},\boldsymbol{\theta})\in\mathbb{R}^{N_{ L}},\quad\text{with}\quad N_{L}=n,\] which can be defined recursively as \[\begin{cases}\mathbf{g}^{(0)}(\mathbf{u},\boldsymbol{\theta})&=\mathbf{u},\\ \mathbf{g}^{(l)}(\mathbf{u},\boldsymbol{\theta})&=\phi\left(\mathbf{W}^{(l)} \mathbf{g}^{(l-1)}(\mathbf{u},\boldsymbol{\theta})+\mathbf{b}^{(l)}\right), \quad\text{ for }l=1,\ldots,L-1,\\ \mathbf{g}(\mathbf{u},\boldsymbol{\theta})&=\mathbf{W}^{(L)}\mathbf{g}^{(L-1 )}(\mathbf{u},\boldsymbol{\theta})+\mathbf{b}^{(L)}.\end{cases}\] The total number of parameters is then \(p=\sum_{l=1}^{L}(N_{l-1}+1)N_{l}\). In the rest of this work, \(\mathbf{g}(\mathbf{u},\boldsymbol{\theta})\) is always defined as just described. We will start by studying the general case before turning in Section 4 to a two-layer network, i.e. with \(L=2\). ### KL Functions We will work under a general condition on the loss function \(\mathcal{L}\), which includes non-convex ones. More precisely, we will suppose that \(\mathcal{L}\) verifies a Kurdyka-Lojasiewicz-type (KL for short) inequality [25, Theorem 1]. **Definition 2.2** (KL inequality).: A continuously differentiable function \(f:\mathbb{R}^{n}\to\mathbb{R}\) satisfies the KL inequality if there exists \(r_{0}>0\) and a strictly increasing function \(\psi\in\mathcal{C}^{0}([0,r_{0}[)\cap\mathcal{C}^{1}(]0,r_{0}[)\) with \(\psi(0)=0\) such that \[\psi^{\prime}(f(\mathbf{z})-\min f)\,\|\nabla f(\mathbf{z})\|\geq 1,\quad \text{for all}\quad\mathbf{z}\in[\min f<f<\min f+r_{0}]. \tag{4}\] We use the shorthand notation \(f\in\text{KL}_{\psi}(r_{0})\) for a function satisfying this inequality. The KL property basically expresses the fact that the function \(f\) is sharp under a reparameterization of its values. Functions satisfying the KL inequality are also sometimes called gradient dominated functions [30]. The function \(\psi\) is known as the desingularizing function for \(f\). The Lojasiewicz inequality [23, 24] corresponds to the case where the desingularizing function takes the form \(\psi(s)\,=\,cs^{\alpha}\) with \(\alpha\,\in\,[0,1]\). The KL inequality plays a fundamental role in several fields of applied mathematics, among which are the convergence behaviour of (sub-)gradient-like systems and minimization algorithms [31, 32, 33, 34, 35, 36], neural networks [37], and partial differential equations [38, 39, 40], to cite a few. The KL inequality is closely related to error bounds, which also play a key role in deriving complexity bounds of gradient descent-like algorithms [41].
Let us give some examples of functions satisfying (4); see also [35]. **Example 2.3** (Convex functions with sufficient growth).: Let \(f\) be a differentiable convex function on \(\mathbb{R}^{n}\) such that \(\operatorname{Argmin}(f)\neq\emptyset\). Assume that \(f\) verifies the growth condition \[f(\mathbf{z})\geq\min f+\varphi(\operatorname{dist}(\mathbf{z},\operatorname {Argmin}(f))),\quad\text{for all}\quad\mathbf{z}\in[\min f<f<\min f+r], \tag{5}\] where \(\varphi:\mathbb{R}_{+}\to\mathbb{R}_{+}\) is continuous, increasing, \(\varphi(0)=0\) and \(\int_{0}^{r}\frac{\varphi^{-1}(s)}{s}ds<+\infty\). Then by [36, Theorem 30], \(f\in\operatorname{KL}_{\psi}(r)\) with \(\psi(r)=\int_{0}^{r}\frac{\varphi^{-1}(s)}{s}ds\). **Example 2.4** (Uniformly convex functions).: Suppose that \(f\) is a differentiable uniformly convex function, i.e., \(\forall\mathbf{z},\mathbf{x}\in\mathbb{R}^{n}\), \[f(\mathbf{x})\geq f(\mathbf{z})+\left\langle\nabla f(\mathbf{z}),\mathbf{x}- \mathbf{z}\right\rangle+\varphi\left(\left\|\mathbf{x}-\mathbf{z}\right\|\right) \tag{6}\] for an increasing function \(\varphi:\mathbb{R}_{+}\to\mathbb{R}_{+}\) that vanishes only at \(0\). Thus \(f\) has a unique minimizer, say \(\mathbf{z}^{*}\), see [42, Proposition 17.26]. This example can then be deduced from the previous one since a uniformly convex function obviously obeys (5). However, we here provide an alternative and sharper characterization. We may assume without loss of generality that \(\min f=0\). Applying inequality (6) at \(\mathbf{x}=\mathbf{z}^{*}\) and any \(\mathbf{z}\in[0<f]\), we get \[f(\mathbf{z}) \leq\left\langle\nabla f(\mathbf{z}),\mathbf{z}-\mathbf{x} \right\rangle-\varphi\left(\left\|\mathbf{x}-\mathbf{z}\right\|\right)\] \[\leq\left\|\nabla f(\mathbf{z})\right\|\left\|\mathbf{x}- \mathbf{z}\right\|-\varphi\left(\left\|\mathbf{x}-\mathbf{z}\right\|\right)\] \[\leq\varphi_{+}(\left\|\nabla f(\mathbf{z})\right\|),\] where \(\varphi_{+}:a\in\mathbb{R}_{+}\mapsto\varphi_{+}(a)=\sup_{x\geq 0}ax- \varphi(x)\) is known as the monotone conjugate of \(\varphi\). \(\varphi_{+}\) is a proper closed convex and non-decreasing function on \(\mathbb{R}_{+}\) that vanishes at \(0\). When \(\varphi\) is strictly convex and supercoercive, so is \(\varphi_{+}\), which implies that \(\varphi_{+}\) is also strictly increasing on \(\mathbb{R}_{+}\). Thus \(f\) verifies Definition 2.2 at any \(\mathbf{z}\in[0<f]\) with \(\psi\) a primitive of \(\frac{1}{\varphi_{+}^{-1}}\), and \(\psi\) is indeed strictly increasing, vanishes at \(0\) and is even concave. A prominent example is the case where \(\varphi:s\in\mathbb{R}_{+}\mapsto\frac{1}{p}s^{p}\), for \(p\in]1,+\infty[\), in which case \(\psi:s\in\mathbb{R}_{+}\mapsto q^{-1/q}s^{1/p}\), where \(1/p+1/q=1\). **Example 2.5**.: In finite-dimensional spaces, deep results from algebraic geometry have shown that the KL inequality is satisfied by a large class of functions, namely, real semi-algebraic functions and more generally, functions definable on an o-minimal structure or even functions belonging to analytic-geometric categories [23, 24, 43, 25, 44]. Many popular losses used in machine learning and signal processing turn out to be KL functions (MSE, Kullback-Leibler divergence and cross-entropy to cite a few).
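As a quick worked check of the last point (our addition, not from the cited references): for the MSE \(f(\mathbf{z})=\frac{1}{2}\|\mathbf{z}-\mathbf{y}\|^{2}\), one has \(\min f=0\) and \(\nabla f(\mathbf{z})=\mathbf{z}-\mathbf{y}\), and taking \(\psi(s)=\sqrt{2s}\) (the Lojasiewicz case \(\alpha=1/2\)) gives, for every \(\mathbf{z}\in[0<f]\), \[\psi^{\prime}(f(\mathbf{z}))\,\|\nabla f(\mathbf{z})\|=\frac{\|\mathbf{z}-\mathbf{y}\|}{\sqrt{2\cdot\tfrac{1}{2}\|\mathbf{z}-\mathbf{y}\|^{2}}}=1\geq 1,\] so the MSE satisfies (4), in fact with equality.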
## 3 Recovery Guarantees ### Main Assumptions Throughout this paper, we will work under the following standing assumptions. **Assumptions on the loss** **A-1.** \(\mathcal{L}_{\mathbf{y}}(\cdot)\in\mathcal{C}^{1}(\mathbb{R}^{m})\) whose gradient is Lipschitz continuous on the bounded sets of \(\mathbb{R}^{m}\). **A-2.** \(\mathcal{L}_{\mathbf{y}}(\cdot)\in\operatorname{KL}_{\psi}(\mathcal{L}_{ \mathbf{y}}(\mathbf{y}(0))+\eta)\) for some \(\eta>0\). **A-3.** \(\min\mathcal{L}_{\mathbf{y}}(\cdot)=0\). **A-4.** \(\exists\Theta\subset\mathbb{R}^{p}\) large enough such that \(\nabla_{\mathbf{v}}\mathcal{L}_{\mathbf{y}}(\mathbf{v})\in\operatorname{Im} \left(\mathcal{J}_{\mathbf{F}}(\mathbf{x})\right)\) for any \(\mathbf{v}=\mathbf{F}(\mathbf{x})\) with \(\mathbf{x}\in\Sigma_{\Theta}\). **Assumption on the activation** **A-5.** \(\phi\in\mathcal{C}^{1}(\mathbb{R})\) and \(\exists B>0\) such that \(\sup_{x\in\mathbb{R}}|\phi^{\prime}(x)|\leq B\) and \(\phi^{\prime}\) is \(B\)-Lipschitz continuous. **Assumption on the forward operator** **A-6.** \(\mathbf{F}\in\mathcal{C}^{1}(\mathbb{R}^{n};\mathbb{R}^{m})\) whose Jacobian \(\mathcal{J}_{\mathbf{F}}\) is Lipschitz continuous on the bounded sets of \(\mathbb{R}^{n}\). Let us now discuss the meaning and effects of these assumptions. First, A-1 is made for simplicity to ensure existence and uniqueness of a strong maximal solution (in fact even global thanks to our estimates) of (2) thanks to the Cauchy-Lipschitz theorem (see hereafter). We think this could be relaxed to cover non-smooth losses if we assume path differentiability, hence existence of an absolutely continuous trajectory. This is left to a future work. A notable point in A-2 is that convexity is not always needed for the loss (see the statements of the theorem). Regarding A-3, it is natural, yet it would be straightforward to relax it. Assumption A-4 allows us to leverage the fact that \[\sigma_{\mathbf{F}}\stackrel{{\mathrm{def}}}{{=}}\inf_{\mathbf{x }\in\Sigma_{\Theta},\mathbf{z}\in\operatorname{Im}\left(\mathcal{J}_{ \mathbf{F}}(\mathbf{x})\right)}\frac{\left\|\mathcal{J}_{\mathbf{F}}(\mathbf{ x})^{\top}\mathbf{z}\right\|}{\left\|\mathbf{z}\right\|}>0, \tag{7}\] with \(\Theta\) a sufficiently large subset of parameters. Clearly, we will show later that the parameter trajectory \(\boldsymbol{\theta}(t)\) is contained in a ball around \(\boldsymbol{\theta}_{0}\). Thus a natural choice of \(\Theta\) is that ball (or an enlargement of it). There are several scenarios of interest where assumption A-4 is verified. This is the case when \(\mathbf{F}\) is an immersion, which implies that \(\mathcal{J}_{\mathbf{F}}(\mathbf{x})\) is surjective for all \(\mathbf{x}\). Other interesting cases are when \(\mathcal{L}_{\mathbf{y}}(\mathbf{v})=\eta\left(\left\|\mathbf{v}-\mathbf{y} \right\|^{2}\right)\), \(\mathbf{F}=\Phi\circ\mathbf{A}\), where \(\eta:\mathbb{R}_{+}\rightarrow\mathbb{R}_{+}\) is differentiable and vanishes only at \(0\), and \(\Phi:\mathbb{R}^{m}\rightarrow\mathbb{R}^{m}\) is an immersion. One easily sees in this case that \(\nabla_{\mathbf{v}}\mathcal{L}_{\mathbf{y}}(\mathbf{v})=2\eta^{\prime}\left( \left\|\mathbf{v}-\mathbf{y}\right\|^{2}\right)(\mathbf{v}-\mathbf{y})\) with \(\mathbf{v}=\Phi(\mathbf{A}\mathbf{x})\), and \(\mathcal{J}_{\mathbf{F}}(\mathbf{x})=\mathcal{J}_{\Phi}(\mathbf{A}\mathbf{x})\mathbf {A}\). It is then sufficient to require that \(\mathbf{A}\) is surjective. This can be weakened for the linear case, i.e. \(\Phi\) is the identity, in which case it is sufficient that \(\mathbf{y}\in\operatorname{Im}\left(\mathbf{A}\right)\) for A-4 to hold.
Assumption A-5 is key for well-posedness as it ensures, by Definition 2.1 which \(\mathbf{g}(\mathbf{u},\boldsymbol{\theta})\) follows, that \(\mathbf{g}(\mathbf{u},\cdot)\) is \(\mathcal{C}^{1}(\mathbb{R}^{p};\mathbb{R}^{n})\) with a Jacobian that is Lipschitz continuous on bounded sets, which is necessary for the Cauchy-Lipschitz theorem. This constraint on \(\phi\) is met by many activations such as the softmax, sigmoid or hyperbolic tangent. Including the ReLU requires more technicalities that will be avoided here. Finally, Assumption A-6 on local Lipschitz continuity of \(\mathbf{F}\) is not only important for well-posedness of (2), but it turns out to be instrumental when deriving recovery rates (as a function of the noise) in the literature of regularized nonlinear inverse problems; see [45] and references therein. ### Well-posedness In order for our analysis to hold, the Cauchy problem (2) needs to be well-posed. We start by showing that (2) has a unique maximal solution. **Proposition 3.1**.: _Assume that A-1, A-5 and A-6 hold. Then there exists \(T(\boldsymbol{\theta}_{0})\in]0,+\infty]\) and a unique maximal solution \(\boldsymbol{\theta}(\cdot)\in\mathcal{C}^{0}([0,T(\boldsymbol{\theta}_{0})[)\) of (2), and \(\boldsymbol{\theta}(\cdot)\) is \(\mathcal{C}^{1}\) on every compact set of the interior of \([0,T(\boldsymbol{\theta}_{0})[\)._ Proof.: Thanks to A-5, one can verify with standard differential calculus applied to \(\mathbf{g}(\mathbf{u},\cdot)\), as given in Definition 2.1, that \(\mathcal{J}_{\mathbf{g}}\) is Lipschitz continuous on the bounded sets of \(\mathbb{R}^{p}\). This together with A-1 and A-6 entails that \(\nabla_{\boldsymbol{\theta}}\mathcal{L}_{\mathbf{y}}(\mathbf{F}(\mathbf{g}( \mathbf{u},\cdot)))\) is also Lipschitz continuous on the bounded sets of \(\mathbb{R}^{p}\). The claim is then a consequence of the Cauchy-Lipschitz theorem [46, Theorem 0.4.1]. \(T(\boldsymbol{\theta}_{0})\) is known as the maximal existence time of the solution and verifies the alternative: either \(T(\boldsymbol{\theta}_{0})=+\infty\) and the solution is called _global_; or \(T(\boldsymbol{\theta}_{0})<+\infty\) and the solution blows up in finite time, i.e., \(\|\boldsymbol{\theta}(t)\|\to+\infty\) as \(t\to T(\boldsymbol{\theta}_{0})\). We will show later that the maximal solution of (2) is indeed global; see Section 3.4.4. ### Main Results We are now in a position to state our recovery results. **Theorem 3.2**.: _Recall \(\sigma_{\mathbf{F}}\) from (7). Consider a network \(\mathbf{g}(\mathbf{u},\cdot)\), a forward operator \(\mathbf{F}\) and a loss \(\mathcal{L}\), such that A-1 to A-6 hold. Let \(\boldsymbol{\theta}(\cdot)\) be a solution trajectory of (2) where the initialization \(\boldsymbol{\theta}_{0}\) is such that_ \[\sigma_{\min}(\mathcal{J}_{\mathbf{g}}(0))>0\;\;\text{and}\;\;R^{\prime}<R \tag{8}\] _where \(R^{\prime}\) and \(R\) obey_ \[R^{\prime}=\frac{2}{\sigma_{\mathbf{F}}\sigma_{\min}(\mathcal{J}_{\mathbf{g}}( 0))}\psi(\mathcal{L}_{\mathbf{y}}(\mathbf{y}(0)))\;\;\text{and}\;\;R=\frac{ \sigma_{\min}(\mathcal{J}_{\mathbf{g}}(0))}{2\mathrm{Lip}_{\mathbb{B}( \boldsymbol{\theta}_{0},R)}(\mathcal{J}_{\mathbf{g}})}. \tag{9}\] _Then the following holds:_ 1.
1. _The loss converges to \(0\) at the rate_

\[\mathcal{L}_{\mathbf{y}}(\mathbf{y}(t))\leq\Psi^{-1}(\gamma(t)) \tag{10}\]

_with \(\Psi\) a primitive of \(-\psi^{\prime 2}\) and \(\gamma(t)=\frac{\sigma_{\mathbf{F}}^{2}\sigma_{\min}(\mathcal{J}_{\mathbf{g}}(0))^{2}}{4}t+\Psi\left(\mathcal{L}_{\mathbf{y}}(\mathbf{y}(0))\right)\). Moreover, \(\boldsymbol{\theta}(t)\) converges to a global minimizer \(\boldsymbol{\theta}_{\infty}\) of \(\mathcal{L}_{\mathbf{y}}(\mathbf{F}(\mathbf{g}(\mathbf{u},\cdot)))\), at the rate_

\[\left\|\boldsymbol{\theta}(t)-\boldsymbol{\theta}_{\infty}\right\|\leq\frac{2}{\sigma_{\min}(\mathcal{J}_{\mathbf{g}}(0))\sigma_{\mathbf{F}}}\psi\left(\Psi^{-1}\left(\gamma(t)\right)\right). \tag{11}\]

2. _If \(\operatorname{Argmin}(\mathcal{L}_{\mathbf{y}}(\cdot))=\{\mathbf{y}\}\), then \(\lim_{t\to+\infty}\mathbf{y}(t)=\mathbf{y}\). In addition, if \(\mathcal{L}\) is convex then_

\[\left\|\mathbf{y}(t)-\overline{\mathbf{y}}\right\|\leq 2\left\|\boldsymbol{\varepsilon}\right\|\quad\text{when}\quad t\geq\frac{4\left(\Psi(\psi^{-1}(\left\|\boldsymbol{\varepsilon}\right\|))-\Psi(\mathcal{L}_{\mathbf{y}}(\mathbf{y}(0)))\right)}{\sigma_{\mathbf{F}}^{2}\sigma_{\min}(\mathcal{J}_{\mathbf{g}}(0))^{2}}. \tag{12}\]

3. _Assume that \(\operatorname{Argmin}(\mathcal{L}_{\mathbf{y}}(\cdot))=\{\mathbf{y}\}\), that \(\mathcal{L}\) is convex, and that_\({}^{2}\)

Footnote 2: We suppose here that \(\operatorname{Argmin}_{\mathbf{x}\in\Sigma^{\prime}}\left\|\mathbf{x}-\overline{\mathbf{x}}\right\|=\{\overline{\mathbf{x}}_{\Sigma^{\prime}}\}\) is a singleton. In fact, we only need that there exists at least one \(\overline{\mathbf{x}}_{\Sigma^{\prime}}\in\operatorname{Argmin}_{\mathbf{x}\in\Sigma^{\prime}}\left\|\mathbf{x}-\overline{\mathbf{x}}\right\|\) such that \(\mu_{\mathbf{F},\Sigma^{\prime}}>0\).

**A-7**.: \(\mu_{\mathbf{F},\Sigma^{\prime}}>0\) _where_ \(\mu_{\mathbf{F},\Sigma^{\prime}}\stackrel{{\mathrm{def}}}{{=}}\inf\limits_{\mathbf{x}\in\Sigma^{\prime}}\frac{\left\|\mathbf{F}(\mathbf{x})-\mathbf{F}(\overline{\mathbf{x}}_{\Sigma^{\prime}})\right\|}{\left\|\mathbf{x}-\overline{\mathbf{x}}_{\Sigma^{\prime}}\right\|}\) _with_ \(\Sigma^{\prime}\stackrel{{\mathrm{def}}}{{=}}\Sigma_{\mathbb{B}_{R^{\prime}+\left\|\boldsymbol{\theta}_{0}\right\|}(0)}\)_._

_Let \(L_{\mathbf{F}}\stackrel{{\mathrm{def}}}{{=}}\max_{\mathbf{x}\in\mathbb{B}(0,2\left\|\overline{\mathbf{x}}\right\|)}\|\mathcal{J}_{\mathbf{F}}(\mathbf{x})\|<+\infty\). Then_

\[\left\|\mathbf{x}(t)-\overline{\mathbf{x}}\right\|\leq\frac{2\psi\left(\Psi^{-1}\left(\gamma(t)\right)\right)}{\mu_{\mathbf{F},\Sigma^{\prime}}\sigma_{\min}(\mathcal{J}_{\mathbf{g}}(0))\sigma_{\mathbf{F}}}+\left(1+\frac{L_{\mathbf{F}}}{\mu_{\mathbf{F},\Sigma^{\prime}}}\right)\operatorname{dist}(\overline{\mathbf{x}},\Sigma^{\prime})+\frac{\left\|\boldsymbol{\varepsilon}\right\|}{\mu_{\mathbf{F},\Sigma^{\prime}}}. \tag{13}\]

### Discussion and Consequences

We first discuss the meaning of the initialization condition \(R^{\prime}<R\). It dictates that \(\psi(\mathcal{L}_{\mathbf{y}}(\mathbf{y}(0)))\) must be smaller than a constant that depends on the operator \(\mathbf{F}\) and on the Jacobian of the network at initialization. Intuitively, this requires the initialization of the network to lie in an appropriate convergence basin, i.e., we start close enough to an optimal solution.
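As a concrete illustration of the quantities at play (a worked example of ours, not part of the original statement), consider the squared loss \(\mathcal{L}_{\mathbf{y}}(\mathbf{v})=\frac{1}{2}\left\|\mathbf{v}-\mathbf{y}\right\|^{2}\). Then \(\operatorname{dist}(\mathbf{v},\operatorname{Argmin}(\mathcal{L}_{\mathbf{y}}))=\left\|\mathbf{v}-\mathbf{y}\right\|=\sqrt{2\mathcal{L}_{\mathbf{y}}(\mathbf{v})}\), so A-2 holds with \(\psi(s)=\sqrt{2s}\). A primitive of \(-\psi^{\prime 2}=-\frac{1}{2s}\) is \(\Psi(s)=-\frac{1}{2}\log s\), hence \(\Psi^{-1}(u)=e^{-2u}\), and (10) specializes to the exponential decay

\[\mathcal{L}_{\mathbf{y}}(\mathbf{y}(t))\leq\mathcal{L}_{\mathbf{y}}(\mathbf{y}(0))\,e^{-\frac{\sigma_{\mathbf{F}}^{2}\sigma_{\min}(\mathcal{J}_{\mathbf{g}}(0))^{2}}{2}t}.\]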
#### 3.4.1 Convergence Rate

The first result ensures that, under the conditions of the theorem, the network converges towards a zero-loss solution. The convergence speed is given by the application of \(\Psi^{-1}\), which is (strictly) decreasing by definition, to an affine function of time. The function \(\Psi\) only depends on the chosen loss function and its associated Kurdyka-Lojasiewicz inequality. This inequality is verified for a wide class of functions, including all the semi-algebraic ones [25], but it is not always obvious to know the exact formulation of \(\psi\) (see Section 2.3). In the case where the KL inequality holds with \(\psi=cs^{\alpha}\) (the Lojasiewicz case), we obtain by direct computation the following decay rate of the loss and convergence rate for the parameters:

**Corollary 3.3**.: _If \(\mathcal{L}\) satisfies the Lojasiewicz inequality, that is, A-2 holds with \(\psi(s)=cs^{\alpha}\) and \(\alpha\in]0,1[\), then \(\exists t_{0}\in\mathbb{R}_{+}\) such that \(\forall t>t_{0},\gamma(t)>0\) and the loss and the parameters converge with rate:_

\[\mathcal{L}_{\mathbf{y}}(\mathbf{y}(t))\leq\left\{\begin{array}{ll}\left(\frac{1-2\alpha}{\alpha^{2}c^{2}}\gamma(t)\right)^{-\frac{1}{1-2\alpha}}&\text{if }0<\alpha<\frac{1}{2},\\ \left(\frac{2\alpha-1}{\alpha^{2}c^{2}}\gamma(t)\right)^{-\frac{1}{2\alpha-1}}&\text{if }\frac{1}{2}<\alpha<1,\\ \exp\left(-\frac{4}{c^{2}}\gamma(t)\right)&\text{if }\alpha=\frac{1}{2},\end{array}\right.\]

\[\|\boldsymbol{\theta}(t)-\boldsymbol{\theta}_{\infty}\|\leq\left\{\begin{array}{ll}\left(\frac{1-2\alpha}{\alpha^{2}c^{2}}\gamma(t)\right)^{-\frac{\alpha}{1-2\alpha}}&\text{if }0<\alpha<\frac{1}{2},\\ \left(\frac{2\alpha-1}{\alpha^{2}c^{2}}\gamma(t)\right)^{-\frac{\alpha}{2\alpha-1}}&\text{if }\frac{1}{2}<\alpha<1,\\ \exp\left(-\frac{2}{c^{2}}\gamma(t)\right)&\text{if }\alpha=\frac{1}{2}.\end{array}\right.\]

These results give precise convergence rates of the loss for a wide variety of functions. Let us first observe the particular case \(\alpha=1/2\), which gives exponential convergence to the solution. In practice, a loss that satisfies such a Lojasiewicz inequality is the Mean Squared Error (MSE). For other values of \(\alpha\), we obtain convergence rates in \(O(t^{-\frac{1}{1-2\alpha}})\) or \(O(t^{-\frac{1}{2\alpha-1}})\), depending on the interval in which \(\alpha\) lies. Furthermore, in theory, the parameters of the model converge slightly more slowly than the loss, with their convergence speed modulated by \(\alpha\).

#### 3.4.2 Early stopping strategy

While the first result provides convergence rates to a zero-loss solution, it does so by overfitting the noise inherent to the problem. A classical way to prevent this is to use an early stopping strategy ensuring that the solution lies in a ball around the desired one. The bound on the time given in (12) guarantees that all the solutions found past that time are no more than \(2\left\|\boldsymbol{\varepsilon}\right\|\) away from the noiseless solution. This bound is obtained by balancing the convergence rate offered by the KL property of the loss, the loss of the model at initialization, and the level of noise in the problem.
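To see these rates and the early-stopping bound at work, here is a small numerical sketch, entirely our illustration: the frozen linear "network Jacobian", the sizes and the noise level are assumptions rather than the paper's setup. It integrates the flow (2) by explicit Euler for the MSE loss (\(\psi(s)=\sqrt{2s}\), \(\Psi(s)=-\frac{1}{2}\log s\)), checks the exponential decay predicted by Corollary 3.3, and evaluates the stopping time of (12):

```python
import numpy as np

# Gradient flow for the MSE loss on a linearized model x = J @ theta with a
# linear forward operator A. All sizes and values are illustrative.
rng = np.random.default_rng(0)
n, m, p = 20, 10, 200
A = rng.normal(size=(m, n)) / np.sqrt(n)    # forward operator F = A
J = rng.normal(size=(n, p)) / np.sqrt(p)    # frozen "network Jacobian"
eps = 0.01 * rng.normal(size=m)             # noise vector
y = A @ (J @ rng.normal(size=p)) + eps      # noisy observations

theta = np.zeros(p)
dt, T = 0.05, 12000
loss = []
for _ in range(T):
    r = A @ (J @ theta) - y
    loss.append(0.5 * (r @ r))
    theta -= dt * J.T @ (A.T @ r)           # explicit Euler step on the flow (2)

# Corollary 3.3 (alpha = 1/2): log-loss should be roughly affine in t.
slope = np.polyfit(np.arange(500, 1500), np.log(loss[500:1500]), 1)[0]

# Early-stopping time of (12): psi^{-1}(r) = r^2/2, Psi(s) = -log(s)/2, with
# sigma_F * sigma_min(J_g(0)) replaced by sigma_min(A @ J) in this toy model.
Psi = lambda s: -0.5 * np.log(s)
sig = np.linalg.svd(A @ J, compute_uv=False)[-1]
t_stop = 4.0 * (Psi(0.5 * (eps @ eps)) - Psi(loss[0])) / sig**2
it_stop = min(int(np.ceil(t_stop / dt)), T - 1)
# Past t_stop, psi(L(t)) <= ||eps||, i.e. the residual is at noise level.
print(slope, t_stop, np.sqrt(2 * loss[it_stop]) <= np.linalg.norm(eps))
```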
#### 3.4.3 Signal Recovery Guarantees

Our third result provides a bound on the distance between the solution found at time \(t\) and the true solution \(\overline{\mathbf{x}}\). This bound is a sum of three terms representing three kinds of errors. The first term is an "optimization error", which quantifies how far \(\mathbf{x}(t)\) is from the solution found at the end of the optimization process. Of course, this error decreases to \(0\) as \(t\) goes to infinity. The second term is a "modeling error", which captures the expressivity of the optimized network, i.e. its ability to generate solutions close to \(\overline{\mathbf{x}}\). Finally, the third term is a "noise error" that depends on \(\left\|\boldsymbol{\varepsilon}\right\|\), which is inherent to the problem at hand.

Obviously, the operator \(\mathbf{F}\) also plays a key role in this bound, where its influence is reflected by three quantities of interest: \(\sigma_{\mathbf{F}}\), \(L_{\mathbf{F}}\) and \(\mu_{\mathbf{F},\Sigma^{\prime}}\). First, \(L_{\mathbf{F}}\) bounds the Jacobian of \(\mathbf{F}\) on \(\mathbb{B}(0,2\left\|\overline{\mathbf{x}}\right\|)\), i.e., it is a Lipschitz constant of \(\mathbf{F}\) there. Moreover, we always have \(\sigma_{\mathbf{F}}>0\), and the dependence of the bound on \(\sigma_{\mathbf{F}}\) (or on the ratio \(L_{\mathbf{F}}/\sigma_{\mathbf{F}}\)) reflects the fact that the bound degrades as the Jacobian of \(\mathbf{F}\) over \(\Sigma_{\Theta}\) becomes badly conditioned. Second, \(\mu_{\mathbf{F},\Sigma^{\prime}}\) corresponds to a restricted injectivity condition, which is a classical and natural assumption if one hopes to recover \(\overline{\mathbf{x}}\) (up to a well-controlled error). In particular, in the case where \(\mathbf{F}\) is a linear operator \(\mathbf{A}\in\mathbb{R}^{m\times n}\), \(\mu_{\mathbf{F},\Sigma^{\prime}}\) becomes the minimal conic singular value \(\lambda_{\min}(\mathbf{A};T_{\Sigma^{\prime}}(\overline{\mathbf{x}}_{\Sigma^{\prime}}))\) and \(L_{\mathbf{F}}\) is replaced by \(\|\mathbf{A}\|\). (A-7) then amounts to assuming that

\[\ker(\mathbf{A})\cap T_{\Sigma^{\prime}}(\overline{\mathbf{x}}_{\Sigma^{\prime}})=\{0\}\,. \tag{14}\]

Assuming the rows of \(\mathbf{A}\) are linearly independent, one easily checks that (14) imposes \(m\geq\dim(T_{\Sigma^{\prime}}(\overline{\mathbf{x}}_{\Sigma^{\prime}}))\). We will give a precise sample complexity bound for the case of compressed sensing in Example 3.4. It is worth mentioning that condition (14) (and (A-7) in some sense) is not uniform, as it only requires a control at \(\overline{\mathbf{x}}_{\Sigma^{\prime}}\) and not over the whole set \(\Sigma^{\prime}\).

Observe that the restricted injectivity condition (A-7) depends on \(\Sigma^{\prime}\), which itself depends on \(R^{\prime}\), that is, the radius of the ball around \(\boldsymbol{\theta}_{0}\) containing the whole trajectory \(\boldsymbol{\theta}(t)\) during the network training (see the proof of Lemma 3.10). On the other hand, \(R^{\prime}\) depends on the loss at initialization, which means that the higher the initial error of the network, the larger the set of parameters it might reach during optimization, and thus the larger the set \(\Sigma^{\prime}\). This discussion clearly reveals an expected phenomenon: there is a trade-off between the restricted injectivity condition on \(\mathbf{F}\) and the expressivity of the network. If the model is highly expressive, then \(\operatorname{dist}(\overline{\mathbf{x}},\Sigma^{\prime})\) is smaller. But this is likely to come at the cost of decreasing \(\mu_{\mathbf{F},\Sigma^{\prime}}\), as restricted injectivity is then required to hold on a larger subset (cone). This discussion relates to the work on the instability phenomenon observed in learned reconstruction methods, as discussed in [47, 48].
For instance, when \(\mathbf{F}\) is a linear operator \(\mathbf{A}\), the fundamental problem that creates these instabilities and/or hallucinations in the reconstruction is that the kernel of \(\mathbf{A}\) is non-trivial. Thus a method that can correctly learn to reconstruct signals whose difference lies in or close to the kernel of \(\mathbf{A}\) will necessarily be unstable or hallucinate. In our setting, this is manifested through the restricted injectivity condition, which imposes that the smallest conic singular value is bounded away from \(0\), i.e. \(\mu_{\mathbf{F},\Sigma^{\prime}}=\lambda_{\min}(\mathbf{A};T_{\Sigma^{\prime}}(\overline{\mathbf{x}}_{\Sigma^{\prime}}))>0\). This is a natural (and minimal) condition in the context of inverse problems to have stable reconstruction guarantees. Note that our condition is non-uniform, as it is only required to hold at \(\overline{\mathbf{x}}_{\Sigma^{\prime}}\) and not at all points of \(\Sigma^{\prime}\).

In A-11, we generalize the restricted injectivity condition (14) beyond the linear case, provided that \(\mathcal{J}_{\mathbf{F}}\) is Lipschitz continuous. This covers many practical cases, for instance that of phase retrieval. Observe that whereas assumption A-7 requires a uniform control of the injectivity of \(\mathbf{F}\) on the whole signal class \(\Sigma^{\prime}\), A-11 is less demanding and only requires injectivity of the Jacobian of \(\mathbf{F}\) at \(\overline{\mathbf{x}}_{\Sigma^{\prime}}\) on the tangent space of \(\Sigma^{\prime}\) at \(\overline{\mathbf{x}}_{\Sigma^{\prime}}\). The price to pay is that the recovery bound in Theorem A.1 is only valid in the high signal-to-noise regime and when \(\operatorname{dist}(\overline{\mathbf{x}},\Sigma^{\prime})\) is small enough. Moreover, the convergence rate in the noise becomes \(O(\sqrt{\|\boldsymbol{\varepsilon}\|})\), which is worse than the \(O(\|\boldsymbol{\varepsilon}\|)\) rate of Theorem 3.2.

**Example 3.4** (Compressed sensing with sub-Gaussian measurements).: Controlling the minimum conic singular value is not easy in general. Amongst the cases where results are available, we look at the compressed sensing framework with linear random measurements. In this setting, the forward operator \(\mathbf{A}\in\mathbb{R}^{m\times n}\) is a random sensing matrix. Exploiting the randomness of \(\mathbf{A}\), a natural question is then how many measurements are sufficient to ensure that \(\lambda_{\min}(\mathbf{A};T_{\Sigma^{\prime}}(\overline{\mathbf{x}}_{\Sigma^{\prime}}))>0\) with high probability. In the case of Gaussian and sub-Gaussian measurements, we can exploit the non-uniform results of [49, 50] to derive sample complexity bounds, i.e. lower bounds on \(m\), for this to hold. By using [50, Theorem 6.3], we have the following proposition:

**Proposition 3.5**.: _Assume that each row \(\mathbf{A}^{i}\) is an independent sub-Gaussian vector, that is:_ 1. \(\mathbb{E}[\mathbf{A}^{i}]=0\)_;_ 2. \(\alpha\leq\mathbb{E}[|\langle\mathbf{A}^{i},\mathbf{w}\rangle|]\) _for each_ \(\mathbf{w}\in\mathbb{S}^{n-1}\)_, with_ \(\alpha>0\)_;_ 3.
\(\mathbb{P}\left(|\langle\mathbf{A}^{i},\mathbf{w}\rangle|\geq\tau\right)\leq 2e^{-\tau^{2}/(2\sigma^{2})}\) _for each_ \(\mathbf{w}\in\mathbb{S}^{n-1}\)_, with_ \(\sigma>0\)_._

_Let \(C\) and \(C^{\prime}\) be positive constants and \(w(K)\) the Gaussian width of the cone \(K\), defined as:_

\[w(K)=\mathbb{E}_{\mathbf{z}\sim\mathcal{N}(0,\mathbf{I})}\left[\sup_{\mathbf{w}\in K\cap\mathbb{S}^{n-1}}\langle\mathbf{z},\mathbf{w}\rangle\right].\]

_If_

\[m\geq C^{\prime}\left(\frac{\sigma}{\alpha}\right)^{6}w(T_{\Sigma^{\prime}}(\overline{\mathbf{x}}_{\Sigma^{\prime}}))^{2}+2C^{-2}\frac{\sigma^{2}}{\alpha^{4}}\tau^{2},\]

_then \(\lambda_{\min}(\mathbf{A},T_{\Sigma^{\prime}}(\overline{\mathbf{x}}_{\Sigma^{\prime}}))>0\) with probability at least \(1-\exp(-C\tau^{2})\)._

The Gaussian width is an important tool in high-dimensional convex geometry and can be interpreted as a measure of the "dimension" of a cone. Except in some specific settings (such as when \(K\) is a descent cone of a convex function, and a few other special cases), it is notoriously difficult to compute this quantity; see the discussion in [49]. Another "generic" tool for computing Gaussian widths is based on Dudley's inequality, which bounds the width of a set in terms of the covering number of the set at all scales. Estimating the covering number is not easy either in general. This shows the difficulty of computing \(w(T_{\Sigma^{\prime}}(\overline{\mathbf{x}}_{\Sigma^{\prime}}))\), which we leave to a future work.

Analyzing recovery guarantees in the compressed sensing framework using unsupervised neural networks such as DIP was proposed in [51, 52]. In [51], the authors restricted their analysis to the case of networks with neither non-linear activations nor training/optimization. The authors of [52] studied the case of the DIP method, but their optimization algorithm is prohibitively intensive, necessitating a retraining of the DIP network at each iteration. Another distinctive difference with our work is that these existing results are uniform, relying on RIP-type arguments and their specialization to Gaussian measurements.
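While exact computation is hard, the Gaussian width of simple cones can be estimated by Monte Carlo whenever the supremum over the cone is computable. The sketch below is our toy illustration: the nonnegative orthant stands in for \(T_{\Sigma^{\prime}}(\overline{\mathbf{x}}_{\Sigma^{\prime}})\) only because its support function has a closed form.

```python
import numpy as np

# Monte Carlo estimate of w(K) for K = nonnegative orthant, for which
# sup_{w in K, ||w|| <= 1} <z, w> = ||max(z, 0)||. Dimension and sample
# size are illustrative.
rng = np.random.default_rng(1)
n, trials = 100, 20000
z = rng.normal(size=(trials, n))
width_est = np.linalg.norm(np.maximum(z, 0.0), axis=1).mean()
print(width_est, np.sqrt(n / 2.0))   # theory: w(orthant) is about sqrt(n/2)
```

The same recipe applies to any closed convex cone whose projection \(P_{K}\) is available, since \(\sup_{\mathbf{w}\in K,\,\|\mathbf{w}\|\leq 1}\langle\mathbf{z},\mathbf{w}\rangle=\|P_{K}(\mathbf{z})\|\).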
#### 3.4.4 Existence and Uniqueness of a Global Strong Solution

We have already stated in Section 3.2 that (2) admits a unique maximal solution. Condition (8) allows us to further specify this solution as strong and global. Indeed, (11) ensures that the trajectory \(\boldsymbol{\theta}(t)\) is uniformly bounded. Let us start by recalling the notion of a strong solution.

**Definition 3.6**.: Denote \(\boldsymbol{\theta}:t\in[0,+\infty[\mapsto\boldsymbol{\theta}(t)\in\mathbb{R}^{p}\). The function \(\boldsymbol{\theta}(\cdot)\) is a strong global solution of (2) if it satisfies the following properties:

* \(\boldsymbol{\theta}\) is in \(\mathcal{C}^{1}([0,+\infty[;\mathbb{R}^{p})\);
* for almost all \(t\in[0,+\infty[\), (2) holds with \(\boldsymbol{\theta}(0)=\boldsymbol{\theta}_{0}\).

**Proposition 3.7**.: _Assume that A-1-A-6 and (8) are satisfied. Then, for any initial condition \(\boldsymbol{\theta}_{0}\), the evolution system (2) has a unique strong global solution._

Proof.: Proposition 3.1 ensures the existence and uniqueness of a maximal solution. Following the discussion after the proof of Proposition 3.1, if \(\boldsymbol{\theta}(t)\) is bounded, then we are done. This is precisely what is ensured by Theorem 3.2 under our conditions.

### Proofs

We start with the following lemmas that will be instrumental in the proof of Theorem 3.2.

**Lemma 3.8**.: _Assume that A-1, A-3, A-5 and A-6 hold. Let \(\boldsymbol{\theta}(\cdot)\) be a solution trajectory of (2). Then:_

1. \(\mathcal{L}_{\mathbf{y}}(\mathbf{y}(\cdot))\) _is nonincreasing, and thus converges._
2. _If_ \(\boldsymbol{\theta}(\cdot)\) _is bounded,_ \(\mathcal{L}_{\mathbf{y}}(\mathbf{F}(\mathbf{g}(\mathbf{u},\cdot)))\) _is constant on_ \(\mathfrak{W}(\boldsymbol{\theta}(\cdot))\)_._

Proof.: Let \(V(t)=\mathcal{L}_{\mathbf{y}}(\mathbf{y}(t))\).

1. Differentiating \(V(\cdot)\), we have for \(t>0\): \[\dot{V}(t) =\langle\dot{\mathbf{y}}(t),\nabla_{\mathbf{y}(t)}\mathcal{L}_{\mathbf{y}}(\mathbf{y}(t))\rangle\] \[=\langle\mathcal{J}_{\mathbf{F}}(t)\mathcal{J}_{\mathbf{g}}(t)\dot{\boldsymbol{\theta}}(t),\nabla_{\mathbf{y}(t)}\mathcal{L}_{\mathbf{y}}(\mathbf{y}(t))\rangle\] \[=-\left\langle\mathcal{J}_{\mathbf{F}}(t)\mathcal{J}_{\mathbf{g}}(t)\mathcal{J}_{\mathbf{g}}(t)^{\top}\mathcal{J}_{\mathbf{F}}(t)^{\top}\nabla_{\mathbf{y}(t)}\mathcal{L}_{\mathbf{y}}(\mathbf{y}(t)),\nabla_{\mathbf{y}(t)}\mathcal{L}_{\mathbf{y}}(\mathbf{y}(t))\right\rangle\] \[=-\left\|\mathcal{J}_{\mathbf{g}}(t)^{\top}\mathcal{J}_{\mathbf{F}}(t)^{\top}\nabla_{\mathbf{y}(t)}\mathcal{L}_{\mathbf{y}}(\mathbf{y}(t))\right\|^{2}=-\left\|\dot{\boldsymbol{\theta}}(t)\right\|^{2},\] (15) and thus \(V(\cdot)\) is nonincreasing. Since it is bounded from below by \(0\) thanks to A-3, it converges to some limit \(\mathcal{L}_{\infty}\) (which will be shown to be \(0\) under the conditions of Theorem 3.2).
2. Since \(\boldsymbol{\theta}(\cdot)\) is bounded, \(\mathfrak{W}(\boldsymbol{\theta}(\cdot))\) is non-empty. Let \(\boldsymbol{\theta}_{\infty}\in\mathfrak{W}(\boldsymbol{\theta}(\cdot))\). Then \(\exists t_{k}\to+\infty\) such that \(\boldsymbol{\theta}(t_{k})\to\boldsymbol{\theta}_{\infty}\) as \(k\to+\infty\). Combining claim 1 with the continuity of \(\mathcal{L}\), \(\mathbf{F}\) and \(\mathbf{g}(\mathbf{u},\cdot)\), we have \[\mathcal{L}_{\infty}=\lim_{k\to+\infty}\mathcal{L}_{\mathbf{y}}(\mathbf{F}(\mathbf{g}(\mathbf{u},\boldsymbol{\theta}(t_{k}))))=\mathcal{L}_{\mathbf{y}}(\mathbf{F}(\mathbf{g}(\mathbf{u},\boldsymbol{\theta}_{\infty}))).\] Since this is true for any cluster point, the claim is proved.

**Lemma 3.9**.: _Assume that A-1 to A-6 hold. Let \(\boldsymbol{\theta}(\cdot)\) be a solution trajectory of (2). If for all \(t\geq 0\), \(\sigma_{\min}(\mathcal{J}_{\mathbf{g}}(t))\geq\frac{\sigma_{\min}(\mathcal{J}_{\mathbf{g}}(0))}{2}>0\), then \(\|\dot{\boldsymbol{\theta}}(\cdot)\|\in L^{1}([0,+\infty[)\). In turn, \(\lim_{t\to+\infty}\boldsymbol{\theta}(t)\) exists._

Proof.: From Lemma 3.8(i), we have for \(t\geq 0\):

\[\mathbf{y}(t)\in[0\leq\mathcal{L}_{\mathbf{y}}(\cdot)\leq\mathcal{L}_{\mathbf{y}}(\mathbf{y}(0))].\]

We may assume without loss of generality that \(\mathbf{y}(t)\in[0<\mathcal{L}_{\mathbf{y}}(\cdot)\leq\mathcal{L}_{\mathbf{y}}(\mathbf{y}(0))]\), since otherwise \(\mathcal{L}_{\mathbf{y}}(\mathbf{y}(\cdot))\) is eventually zero, which implies, by Lemma 3.8, that \(\dot{\boldsymbol{\theta}}\) is eventually zero, in which case there is nothing to prove. We are now in position to use the KL property on \(\mathbf{y}(\cdot)\).
We have for \(t>0\): \[\frac{\mathrm{d}\psi(\mathcal{L}_{\mathbf{y}}(\mathbf{y}(t)))}{\mathrm{d}t}= \psi^{\prime}(\mathcal{L}_{\mathbf{y}}(\mathbf{y}(t)))\frac{\mathrm{d} \mathcal{L}_{\mathbf{y}}(\mathbf{y}(t))}{\mathrm{d}t}\] \[=-\psi^{\prime}(\mathcal{L}_{\mathbf{y}}(\mathbf{y}(t)))\left\| \mathcal{J}_{\mathbf{g}}(t)^{\top}\mathcal{J}_{\mathbf{F}}(t)^{\top}\nabla_{ \mathbf{y}(t)}\mathcal{L}_{\mathbf{y}}(\mathbf{y}(t))\right\|^{2}\] \[\leq-\frac{\left\|\mathcal{J}_{\mathbf{g}}(t)^{\top}\mathcal{J}_{ \mathbf{F}}(t)^{\top}\nabla_{\mathbf{y}(t)}\mathcal{L}_{\mathbf{y}}(\mathbf{y} (t))\right\|^{2}}{\left\|\nabla_{\mathbf{y}(t)}\mathcal{L}_{\mathbf{y}}( \mathbf{y}(t))\right\|}\] \[\leq-\sigma_{\min}(\mathcal{J}_{\mathbf{g}}(t))\sigma_{\mathbf{F }}\left\|\mathcal{J}_{\mathbf{g}}(t)^{\top}\mathcal{J}_{\mathbf{F}}(t)^{\top} \nabla_{\mathbf{y}(t)}\mathcal{L}_{\mathbf{y}}(\mathbf{y}(t))\right\|\] \[\leq-\frac{\sigma_{\min}(\mathcal{J}_{\mathbf{g}}(0))\sigma_{ \mathbf{F}}}{2}\left\|\dot{\boldsymbol{\theta}}(t)\right\|. \tag{16}\] where we used A-4 and that \(\sigma_{\min}(\mathcal{J}_{\mathbf{g}}(t))\geq\frac{\sigma_{\min}(\mathcal{J }_{\mathbf{g}}(0))}{2}>0\). Integrating, we get \[\int_{0}^{t}\left\|\dot{\boldsymbol{\theta}}(s)\right\|\mathrm{d}s\leq\frac{2 }{\sigma_{\min}(\mathcal{J}_{\mathbf{g}}(0))\sigma_{\mathbf{F}}}\left(\psi( \mathcal{L}_{\mathbf{y}}(\mathbf{y}(0)))-\psi(\mathcal{L}_{\mathbf{y}}( \mathbf{y}(t)))\right). \tag{17}\] Since \(\mathcal{L}_{\mathbf{y}}(\mathbf{y}(t))\) converges thanks to Lemma 3.8(i) and \(\psi\) is continuous and increasing, the right hand side in (17) has a limit. Thus passing to the limit as \(t\to+\infty\), we get that \(\dot{\boldsymbol{\theta}}\in L^{1}([0,+\infty[)\). This in turn implies that \(\lim_{t\to+\infty}\boldsymbol{\theta}(t)\) exists, say \(\boldsymbol{\theta}_{\infty}\), by applying Cauchy's criterion to \[\boldsymbol{\theta}(t)=\boldsymbol{\theta}_{0}+\int_{0}^{t}\dot{\boldsymbol{ \theta}}(s)\mathrm{d}s.\] **Lemma 3.10**.: _Assume that A-1 to A-6 hold. Recall \(R\) and \(R^{\prime}\) from (9). Let \(\boldsymbol{\theta}(\cdot)\) be a solution trajectory of (2)._ 1. _If_ \(\boldsymbol{\theta}\in\mathbb{B}(\boldsymbol{\theta}_{0},R)\) _then_ \[\sigma_{\min}(\mathcal{J}_{\mathbf{g}}(\boldsymbol{\theta}))\geq\sigma_{\min} (\mathcal{J}_{\mathbf{g}}(0))/2.\] 2. _If for all_ \(s\in[0,t]\)_,_ \(\sigma_{\min}(\mathcal{J}_{\mathbf{g}}(s))\geq\frac{\sigma_{\min}(\mathcal{J }_{\mathbf{g}}(0))}{2}\) _then_ \[\boldsymbol{\theta}(t)\in\mathbb{B}(\boldsymbol{\theta}_{0},R^{\prime}).\] 3. _If_ \(R^{\prime}<R\)_, then for all_ \(t\geq 0\)_,_ \(\sigma_{\min}(\mathcal{J}_{\mathbf{g}}(t))\geq\sigma_{\min}(\mathcal{J}_{ \mathbf{g}}(0))/2\)_._ Proof.: 1. Since \(\boldsymbol{\theta}\in\mathbb{B}(\boldsymbol{\theta}_{0},R)\), we have \[\left\|\mathcal{J}_{\mathbf{g}}(\boldsymbol{\theta})-\mathcal{J}_{\mathbf{g}} (\boldsymbol{\theta}_{0})\right\|\leq\mathrm{Lip}_{\mathbb{B}(\boldsymbol{ \theta}_{0},R)}(\mathcal{J}_{\mathbf{g}})\left\|\boldsymbol{\theta}- \boldsymbol{\theta}_{0}\right\|\leq\mathrm{Lip}_{\mathbb{B}(\boldsymbol{\theta} _{0},R)}(\mathcal{J}_{\mathbf{g}})R\leq\frac{\sigma_{\min}(\mathcal{J}_{ \mathbf{g}}(0))}{2}.\] By using that \(\sigma_{\min}(\cdot)\) is 1-Lipschitz, we obtain \[\sigma_{\min}(\mathcal{J}_{\mathbf{g}}(\boldsymbol{\theta}))\geq\sigma_{\min}( \mathcal{J}_{\mathbf{g}}(0))-\left\|\mathcal{J}_{\mathbf{g}}(\boldsymbol{ \theta})-\mathcal{J}_{\mathbf{g}}(\boldsymbol{\theta}_{0})\right\|\geq\frac{ \sigma_{\min}(\mathcal{J}_{\mathbf{g}}(0))}{2}.\] 2. 
We have for \(t>0\)

\[\frac{1}{2}\frac{\mathrm{d}\left\|\boldsymbol{\theta}(t)-\boldsymbol{\theta}_{0}\right\|^{2}}{\mathrm{d}t}=\left\|\boldsymbol{\theta}(t)-\boldsymbol{\theta}_{0}\right\|\frac{\mathrm{d}\left\|\boldsymbol{\theta}(t)-\boldsymbol{\theta}_{0}\right\|}{\mathrm{d}t}=\left\langle\dot{\boldsymbol{\theta}}(t),\boldsymbol{\theta}(t)-\boldsymbol{\theta}_{0}\right\rangle,\]

and the Cauchy-Schwarz inequality then implies

\[\frac{\mathrm{d}\left\|\boldsymbol{\theta}(t)-\boldsymbol{\theta}_{0}\right\|}{\mathrm{d}t}\leq\left\|\dot{\boldsymbol{\theta}}(t)\right\|.\]

Combining this with (17) yields

\[\left\|\boldsymbol{\theta}(t)-\boldsymbol{\theta}_{0}\right\|\leq\int_{0}^{t}\left\|\dot{\boldsymbol{\theta}}(s)\right\|\mathrm{d}s\leq\frac{2}{\sigma_{\min}(\mathcal{J}_{\mathbf{g}}(0))\sigma_{\mathbf{F}}}\psi(\mathcal{L}_{\mathbf{y}}(\mathbf{y}(0))),\]

where we used that \(\psi\) is nonnegative and increasing, so that the term \(-\psi(\mathcal{L}_{\mathbf{y}}(\mathbf{y}(t)))\) in (17) can be dropped.

3. We actually prove the stronger statement that \(\boldsymbol{\theta}(t)\in\mathbb{B}(\boldsymbol{\theta}_{0},R^{\prime})\) for all \(t\geq 0\), whence our claim will follow thanks to (i). Assume for contradiction that \(R^{\prime}<R\) and \(\exists\;t<+\infty\) such that \(\boldsymbol{\theta}(t)\notin\mathbb{B}(\boldsymbol{\theta}_{0},R^{\prime})\). By (ii), this means that \(\exists\;s\leq t\) such that \(\sigma_{\min}(\mathcal{J}_{\mathbf{g}}(s))<\sigma_{\min}(\mathcal{J}_{\mathbf{g}}(0))/2\). In turn, (i) implies that \(\boldsymbol{\theta}(s)\notin\mathbb{B}(\boldsymbol{\theta}_{0},R)\). Let us define

\[t_{0}=\inf\{\tau\geq 0:\boldsymbol{\theta}(\tau)\notin\mathbb{B}(\boldsymbol{\theta}_{0},R)\},\]

which is well-defined as it is at most \(s\). Thus, for any small \(\varepsilon>0\) and for all \(t^{\prime}\leq t_{0}-\varepsilon\), \(\boldsymbol{\theta}(t^{\prime})\in\mathbb{B}(\boldsymbol{\theta}_{0},R)\), which, in view of (i), entails that \(\sigma_{\min}(\mathcal{J}_{\mathbf{g}}(\boldsymbol{\theta}(t^{\prime})))\geq\sigma_{\min}(\mathcal{J}_{\mathbf{g}}(0))/2\). In turn, we get from (ii) that \(\boldsymbol{\theta}(t_{0}-\varepsilon)\in\mathbb{B}(\boldsymbol{\theta}_{0},R^{\prime})\). Since \(\varepsilon\) is arbitrary and \(\boldsymbol{\theta}\) is continuous, we pass to the limit as \(\varepsilon\to 0\) to deduce that \(\boldsymbol{\theta}(t_{0})\in\mathbb{B}(\boldsymbol{\theta}_{0},R^{\prime})\subsetneq\mathbb{B}(\boldsymbol{\theta}_{0},R)\), hence contradicting the definition of \(t_{0}\).

Proof of Theorem 3.2.: 1. We use a standard Lyapunov analysis with several energy functions. Starting from (15), we have for \(t>0\)

\[\dot{V}(t) =-\left\|\mathcal{J}_{\mathbf{g}}(t)^{\top}\mathcal{J}_{\mathbf{F}}(t)^{\top}\nabla_{\mathbf{y}(t)}\mathcal{L}_{\mathbf{y}}(\mathbf{y}(t))\right\|^{2}\] \[\leq-\sigma_{\min}(\mathcal{J}_{\mathbf{g}}(t))^{2}\sigma_{\mathbf{F}}^{2}\left\|\nabla_{\mathbf{y}(t)}\mathcal{L}_{\mathbf{y}}(\mathbf{y}(t))\right\|^{2},\]

where we used A-4. In view of Lemma 3.10(iii), we have \(\sigma_{\min}(\mathcal{J}_{\mathbf{g}}(t))\geq\sigma_{\min}(\mathcal{J}_{\mathbf{g}}(0))/2>0\) for all \(t\geq 0\) whenever the initialization verifies (8).
Using once again A-2, we get \[\dot{V}(t) \leq-\frac{\sigma_{\min}(\mathcal{J}_{\mathbf{g}}(0))^{2}\sigma_{ \mathbf{F}}^{2}}{4}\left\|\nabla_{\mathbf{y}(t)}\mathcal{L}_{\mathbf{y}}( \mathbf{y}(t))\right\|^{2}\] \[\leq-\frac{\sigma_{\min}(\mathcal{J}_{\mathbf{g}}(0))^{2}\sigma_ {\mathbf{F}}^{2}}{4\psi^{\prime}(\mathcal{L}_{\mathbf{y}}(\mathbf{y}(t)))^{2}}.\] Let \(\Psi\) be a primitive of \(-\psi^{\prime 2}\). Then, the last inequality gives \[\dot{\Psi}(V(t)) =\Psi^{\prime}(V(t))\dot{V}(t)\] \[\geq\frac{\sigma_{\mathbf{F}}^{2}\sigma_{\min}(\mathcal{J}_{ \mathbf{g}}(0))^{2}}{4}.\] By integration on \(s\in[0,t]\) alongside the fact that \(\Psi\) and \(\Psi^{-1}\) are (strictly) decreasing functions, we get \[\Psi(V(t))-\Psi(V(0)) \geq\frac{\sigma_{\mathbf{F}}^{2}\sigma_{\min}(\mathcal{J}_{ \mathbf{g}}(0))^{2}}{4}t\] \[V(t) \leq\Psi^{-1}\left(\frac{\sigma_{\mathbf{F}}^{2}\sigma_{\min}( \mathcal{J}_{\mathbf{g}}(0))^{2}}{4}t+\Psi(V(0))\right),\] which gives (10). By Lemma 3.9, \(\boldsymbol{\theta}(t)\) converges to some \(\boldsymbol{\theta}_{\infty}\). Continuity of \(\mathcal{L}_{\mathbf{y}}(\cdot)\), \(\mathbf{F}\) and \(\mathbf{g}(\mathbf{u},\cdot)\) implies that \[0=\lim_{t\to+\infty}\mathcal{L}_{\mathbf{y}}(\mathbf{y}(t))=\lim_{t\to+\infty }\mathcal{L}_{\mathbf{y}}(\mathbf{F}(\mathbf{g}(\mathbf{u},\boldsymbol{ \theta}(t))))=\mathcal{L}_{\mathbf{y}}(\mathbf{F}(\mathbf{g}(\mathbf{u}, \boldsymbol{\theta}_{\infty}))),\] and thus \(\boldsymbol{\theta}_{\infty}\in\operatorname{Argmin}(\mathcal{L}_{\mathbf{y}} (\mathbf{F}(\mathbf{g}(\mathbf{u},\cdot))))\). To get the rate, we argue as in the proof of Lemma 3.10 (ii), replacing \(\boldsymbol{\theta}_{0}\) by \(\boldsymbol{\theta}_{\infty}\), to obtain \[\|\boldsymbol{\theta}(t)-\boldsymbol{\theta}_{\infty}\|\leq\int_{t}^{+\infty} \left\|\dot{\boldsymbol{\theta}}(s)\right\|\mathrm{d}s.\] We then get by integrating (16) that \[\|\boldsymbol{\theta}(t)-\boldsymbol{\theta}_{\infty}\| \leq-\frac{2}{\sigma_{\min}(\mathcal{J}_{\mathbf{g}}(0))\sigma_{ \mathbf{F}}}\int_{t}^{+\infty}\frac{\mathrm{d}\psi(\mathcal{L}_{\mathbf{y}}( \mathbf{y}(s)))}{\mathrm{d}s}\mathrm{d}s\] \[\leq\frac{2}{\sigma_{\min}(\mathcal{J}_{\mathbf{g}}(0))\sigma_{ \mathbf{F}}}\psi(\mathcal{L}_{\mathbf{y}}(\mathbf{y}(t))).\] Thanks to (10), and using that \(\psi\) is increasing, we arrive at (11). 2. By Lemma 3.9 and continuity of \(\mathbf{F}\) and \(\mathbf{g}(\mathbf{u},\cdot)\), we can infer that \(\mathbf{y}(\cdot)\) also converges to \(\mathbf{y}_{\infty}=\mathbf{F}(\mathbf{g}(\mathbf{u},\boldsymbol{\theta}_{ \infty}))\), where \(\boldsymbol{\theta}_{\infty}=\lim_{t\to+\infty}\boldsymbol{\theta}(t)\). Thus using also continuity of \(\mathcal{L}_{\mathbf{y}}(\cdot)\), we have \[0=\lim_{t\to+\infty}\mathcal{L}_{\mathbf{y}}(\mathbf{y}(t))=\mathcal{L}_{ \mathbf{y}}(\mathbf{y}_{\infty}),\] and thus \(\mathbf{y}_{\infty}\in\operatorname{Argmin}(\mathcal{L}_{\mathbf{y}})\). Since the latter is the singleton \(\{\mathbf{y}\}\) by assumption, we conclude. In order to obtain the early stopping bound, we use [41, Theorem 5] that links the KL property of \(\mathcal{L}_{\mathbf{y}}(\cdot)\) with an error bound. 
In our case, this reads

\[\operatorname{dist}(\mathbf{y}(t),\operatorname{Argmin}(\mathcal{L}_{\mathbf{y}}))=\|\mathbf{y}(t)-\mathbf{y}\|\leq\psi(\mathcal{L}_{\mathbf{y}}(\mathbf{y}(t))). \tag{18}\]

It then follows that

\[\|\mathbf{y}(t)-\overline{\mathbf{y}}\|\leq\|\mathbf{y}(t)-\mathbf{y}\|+\|\mathbf{y}-\overline{\mathbf{y}}\|\leq\psi(\mathcal{L}_{\mathbf{y}}(\mathbf{y}(t)))+\|\boldsymbol{\varepsilon}\|.\]

In view of (10) and since \(\psi\) is increasing while \(\Psi^{-1}\) is decreasing, we have \(\psi(\mathcal{L}_{\mathbf{y}}(\mathbf{y}(t)))\leq\psi(\Psi^{-1}(\gamma(t)))\leq\|\boldsymbol{\varepsilon}\|\) as soon as \(\gamma(t)\geq\Psi(\psi^{-1}(\|\boldsymbol{\varepsilon}\|))\), which is precisely the condition on \(t\) in (12).

3. For the last claim, we first bound the modeling error: the mean value theorem together with A-6 gives

\[\|\mathbf{F}(\overline{\mathbf{x}})-\mathbf{F}(\overline{\mathbf{x}}_{\Sigma^{\prime}})\|\leq\max_{\mathbf{z}\in\mathbb{B}(0,2\|\overline{\mathbf{x}}\|)}\|\mathcal{J}_{\mathbf{F}}(\mathbf{z})\|\operatorname{dist}(\overline{\mathbf{x}},\Sigma^{\prime}). \tag{19}\]

The claimed bound (13) then follows by combining (19) with the restricted injectivity assumption A-7 and the estimates established above.

## 4 Case of The Two-Layer DIP Network

This section is devoted to studying under which conditions on the neural network architecture the key condition (8) is fulfilled. Towards this goal, we consider the case of a two-layer DIP network. Therein, \(\mathbf{u}\) is set randomly and kept fixed during training, and the network is trained to transform this input into a signal that matches the observation \(\mathbf{y}\). In particular, we will provide bounds on the level of overparametrization ensuring that (8) holds, which in turn provides the recovery guarantees of Theorem 3.2.

### The Two-Layer Neural Network

We take \(L=2\) in Definition 2.1 and thus consider the network defined in (3):

\[\mathbf{g}(\mathbf{u},\boldsymbol{\theta})=\frac{1}{\sqrt{k}}\mathbf{V}\phi(\mathbf{W}\mathbf{u})\]

with \(\mathbf{V}\in\mathbb{R}^{n\times k}\) and \(\mathbf{W}\in\mathbb{R}^{k\times d}\), and \(\phi\) an element-wise nonlinear activation function. Observe that it is immediate to account for a bias vector in the hidden layer by considering the bias as a column of the weight matrix \(\mathbf{W}\), augmenting \(\mathbf{u}\) by \(1\) and then normalizing to unit norm. The normalization is required to comply with A-8 hereafter. The role of the scaling by \(\sqrt{k}\) will become apparent shortly: it is instrumental in concentrating the kernel stemming from the Jacobian of the network. In the sequel, we set \(C_{\phi}=\sqrt{\mathbb{E}_{X\sim\mathcal{N}(0,1)}\left[\phi(X)^{2}\right]}\) and \(C_{\phi^{\prime}}=\sqrt{\mathbb{E}_{X\sim\mathcal{N}(0,1)}\left[\phi^{\prime}(X)^{2}\right]}\). We will assume without loss of generality that \(\mathbf{F}(0)=0\). This is a very mild assumption that is natural in the context of inverse problems, but it can easily be removed if needed. We will also need the following assumptions:

**Assumptions on the network input and initialization**

8. \(\mathbf{u}\) _is a uniform vector on_ \(\mathbb{S}^{d-1}\)_;_
9. \(\mathbf{W}(0)\) _has iid entries from_ \(\mathcal{N}(0,1)\) _and_ \(C_{\phi},C_{\phi^{\prime}}<+\infty\)_;_
10. \(\mathbf{V}(0)\) _is independent from_ \(\mathbf{W}(0)\) _and_ \(\mathbf{u}\)_, and has iid columns with identity covariance and_ \(D\)_-bounded centered entries._

### Recovery Guarantees in the Overparametrized Regime

Our main result gives a bound on the level of overparametrization which is sufficient for (8) to hold.

**Theorem 4.1**.: _Suppose that assumptions A-1, A-3, A-5 and A-6 hold. Let \(C\) and \(C^{\prime}\) be two positive constants that depend only on the activation function and \(D\)._
Let:_

\[L_{\mathbf{F},0}=\max_{\mathbf{x}\in\mathbb{B}\left(0,C\sqrt{n\log(d)}\right)}\left\|\mathcal{J}_{\mathbf{F}}(\mathbf{x})\right\|\]

_and_

\[L_{\mathcal{L},0}=\max_{\mathbf{v}\in\mathbb{B}\left(0,CL_{\mathbf{F},0}\sqrt{n\log(d)}+\sqrt{m}\left(\left\|\mathbf{F}(\overline{\mathbf{x}})\right\|_{\infty}+\left\|\boldsymbol{\varepsilon}\right\|_{\infty}\right)\right)}\frac{\left\|\nabla_{\mathbf{v}}\mathcal{L}_{\mathbf{y}}(\mathbf{v})\right\|}{\left\|\mathbf{v}-\mathbf{y}\right\|}.\]

_Consider the one-hidden layer network (3) where both layers are trained, with the initialization satisfying A-8 to A-10 and the architecture parameters obeying_

\[k\geq C^{\prime}\sigma_{\mathbf{F}}^{-4}n\psi\left(\frac{L_{\mathcal{L},0}}{2}\left(CL_{\mathbf{F},0}\sqrt{n\log(d)}+\sqrt{m}\left(\left\|\mathbf{F}(\overline{\mathbf{x}})\right\|_{\infty}+\left\|\boldsymbol{\varepsilon}\right\|_{\infty}\right)\right)^{2}\right)^{4}.\]

_Then (8) holds with probability at least \(1-2n^{-1}-d^{-1}\)._

Before proving Theorem 4.1, a few remarks are in order.

_Remark 4.2_ (Randomness of \(\Sigma^{\prime}\)).: It is worth observing that since the initialization is random, so is the set of signals \(\Sigma^{\prime}=\Sigma_{\mathbb{B}_{R^{\prime}+\left\|\boldsymbol{\theta}_{0}\right\|}(0)}\) by definition, where \(\boldsymbol{\theta}_{0}=(\mathbf{V}(0),\mathbf{W}(0))\). This set is contained in a larger deterministic set with high probability. Indeed, Gaussian concentration gives us, for any \(\delta>0\),

\[\left\|\mathbf{W}(0)\right\|_{F}\leq(1+\delta)\sqrt{kd}\]

with probability larger than \(1-e^{-\delta^{2}kd/2}\). Moreover, since by A-10 \(\mathbf{V}(0)\) has independent columns with bounded entries and \(\mathbb{E}\left[\left\|\mathbf{V}_{i}(0)\right\|^{2}\right]=n\), we can apply Hoeffding's inequality to \(\left\|\mathbf{V}(0)\right\|_{F}^{2}=\sum_{i=1}^{k}\left\|\mathbf{V}_{i}(0)\right\|^{2}\) to infer that

\[\left\|\mathbf{V}(0)\right\|_{F}\leq(1+\delta)\sqrt{kn}\]

with probability at least \(1-e^{-\delta^{2}kd/(2D^{2})}\). Collecting the above, we have

\[\left\|\boldsymbol{\theta}_{0}\right\|\leq(1+\delta)\sqrt{k}\left(\sqrt{n}+\sqrt{d}\right),\]

with probability at least \(1-e^{-\delta^{2}kd/2}-e^{-\delta^{2}kd/(2D^{2})}\). In view of the bound on \(R^{\prime}\) (see (22)), this yields that, with probability at least \(1-e^{-\delta^{2}kd/2}-e^{-\delta^{2}kd/(2D^{2})}-2n^{-1}-d^{-1}\), we have \(\Sigma^{\prime}\subset\Sigma_{\mathbb{B}_{\rho}(0)}\), where

\[\rho=\frac{4}{\sigma_{\mathbf{F}}\sqrt{C_{\phi}^{2}+C_{\phi^{\prime}}^{2}}}\psi\left(\frac{L_{\mathcal{L},0}}{2}\left(CL_{\mathbf{F},0}\sqrt{n\log(d)}+\sqrt{m}\left(\left\|\mathbf{F}(\overline{\mathbf{x}})\right\|_{\infty}+\left\|\boldsymbol{\varepsilon}\right\|_{\infty}\right)\right)^{2}\right)+(1+\delta)\sqrt{k}\left(\sqrt{n}+\sqrt{d}\right).\]

This confirms the expected behaviour that the expressivity of \(\Sigma^{\prime}\) increases with the level of overparametrization.

_Remark 4.3_ (Distribution of \(\mathbf{u}\)).: The generator \(\mathbf{g}(\cdot,\boldsymbol{\theta})\) synthesizes data by transforming the input (latent) random variable \(\mathbf{u}\). As such, it generates signals \(\mathbf{x}\in\Sigma^{\prime}\) that lie in the support of the measure \(\mathbf{g}(\cdot,\boldsymbol{\theta})\#\mu_{\mathbf{u}}\), where \(\mu_{\mathbf{u}}\) is the distribution of \(\mathbf{u}\) and \(\#\) is the push-forward operator.
Expressivity of these generative models, also coined push-forward models, in particular GANs, has recently been studied both empirically and theoretically [53, 54, 55, 56, 57]. In particular, this literature highlights the known fact that, since \(\mathbf{g}(\cdot,\boldsymbol{\theta})\) is continuous by construction, the support of \(\mathbf{g}(\cdot,\boldsymbol{\theta})\#\mu_{\mathbf{u}}\) is connected if that of \(\mu_{\mathbf{u}}\) is connected (as in our case). On the other hand, a common assumption in the imaging literature, validated empirically by [58], is that distributions of natural images are supported on low-dimensional manifolds. It is also conjectured that the distribution of natural images may in fact lie on a union of disjoint manifolds rather than on one globally connected manifold; the union of subspaces or manifolds model is indeed a common assumption in signal/image processing. In the latter case, a generator \(\mathbf{g}(\cdot,\boldsymbol{\theta})\) that attempts to cover the different modes (manifolds) of the target distribution from one unimodal latent variable \(\mathbf{u}\) will generate samples out of the real data manifold. There are two main ways to avoid this: either making the support of \(\mu_{\mathbf{u}}\) disconnected (e.g. using a mixture of distributions [54, 59]), or making \(\mathbf{g}(\cdot,\boldsymbol{\theta})\) discontinuous [53]. The former strategy appears natural in our context, and it will be interesting to investigate this generalization in a future work.

_Remark 4.4_ (Restricted injectivity).: As argued above, if \(\Sigma^{\prime}\) belongs to a target manifold \(\mathcal{M}\), then the restricted injectivity condition (14) tells us that \(\mathbf{A}\) has to be injective on the tangent space of the target manifold \(\mathcal{M}\) at the closest point to \(\overline{\mathbf{x}}\) in \(\mathcal{M}\).

_Remark 4.5_ (Dependence on \(L_{\mathcal{L},0}\) and \(L_{\mathbf{F},0}\)).: The overparametrization bound on \(k\) depends on \(L_{\mathcal{L},0}\) and \(L_{\mathbf{F},0}\), which in turn may depend on \((n,m,d)\). Estimating them is therefore important. For instance, if \(\mathbf{F}\) is globally Lipschitz, as is the case when it is linear, then \(L_{\mathbf{F},0}\) is independent of \((n,m,d)\). As far as \(L_{\mathcal{L},0}\) is concerned, it is of course independent of \((n,m,d)\) if the loss gradient is globally Lipschitz continuous. Another situation of interest is when \(\nabla_{\mathbf{v}}\mathcal{L}_{\mathbf{y}}(\mathbf{v})\) verifies

\[\left\|\nabla_{\mathbf{v}}\mathcal{L}_{\mathbf{y}}(\mathbf{v})-\nabla_{\mathbf{z}}\mathcal{L}_{\mathbf{y}}(\mathbf{z})\right\|\leq\varphi\left(\left\|\mathbf{v}-\mathbf{z}\right\|\right),\quad\forall\mathbf{v},\mathbf{z}\in\mathbb{R}^{m},\]

where \(\varphi:\mathbb{R}_{+}\rightarrow\mathbb{R}_{+}\) is increasing and vanishes at \(0\). This is clearly weaker than global Lipschitz continuity, which it covers as a special case. It also encompasses many important situations, such as losses with Hölderian gradients. It then easily follows, see e.g.
[42, Theorem 18.13], that for all \(\mathbf{v}\in\mathbb{R}^{m}\):

\[\mathcal{L}_{\mathbf{y}}(\mathbf{v})\leq\Phi\left(\left\|\mathbf{v}-\mathbf{y}\right\|\right)\quad\text{where}\quad\Phi(s)=\int_{0}^{1}\frac{\varphi(st)}{t}\mathrm{d}t.\]

In this situation, and if \(\mathbf{F}\) is also globally \(L_{\mathbf{F}}\)-Lipschitz, following our line of proof, the overparametrization bound of Theorem 4.1 reads

\[k\geq C^{\prime}\sigma_{\mathbf{F}}^{-4}n\psi\left(\Phi\left(CL_{\mathbf{F}}\sqrt{n\log(d)}+\sqrt{m}\left(\left\|\mathbf{F}(\overline{\mathbf{x}})\right\|_{\infty}+\left\|\boldsymbol{\varepsilon}\right\|_{\infty}\right)\right)\right)^{4}.\]

_Remark 4.6_ (Dependence on the loss function).: Turning to the scaling of the overparametrization bound on \(k\) with respect to \((n,m,d)\) in the general case, we obtain \(k\gtrsim\sigma_{\mathbf{F}}^{-4}n\psi(L_{\mathcal{L},0}(L_{\mathbf{F},0}^{2}n+m))^{4}\). Aside from the possible dependence of \(L_{\mathcal{L},0}\) and \(L_{\mathbf{F},0}\) on \((n,m,d)\) discussed before, we observe that this bound depends strongly on the desingularizing function \(\psi\) given by the loss. In the Lojasiewicz case where \(\psi=cs^{\alpha}\) with \(\alpha\in]0,1[\), one can choose a sufficiently small \(\alpha\) to reduce the scaling in the parameters, but this slows down the convergence rate as described in Corollary 3.3; there is thus a tradeoff between the convergence rate and the number of parameters needed to ensure this convergence. In the special case \(\alpha=\frac{1}{2}\), which corresponds to the MSE loss, and when \(L_{\mathbf{F},0}\) is of constant order, i.e. independent of \((n,m,d)\), the level of overparametrization necessary to ensure convergence to a zero-loss solution is \(k\gtrsim n^{3}m^{2}\). Another interesting case is when \(\mathbf{F}\) is linear. In that setting, the overparametrization bound becomes \(k\gtrsim\sigma_{\mathbf{F}}^{-4}n\psi(L_{\mathcal{L},0}(\left\|\mathbf{F}\right\|^{2}n+m))^{4}\). Choosing the MSE loss, for which \(\psi\) is a square root, we obtain that \(k\gtrsim\kappa(\mathbf{F})^{4}n^{3}m^{2}\) is needed, where \(\kappa(\mathbf{F})\) denotes the condition number of \(\mathbf{F}\). The bound thus becomes more demanding as \(\mathbf{F}\) becomes more ill-conditioned. This dependency can be interpreted as follows: the more ill-conditioned the original problem is, the larger the network needs to be.

_Remark 4.7_ (Scaling when \(\mathbf{V}\) is fixed).: When the linear layer \(\mathbf{V}\) is fixed and only \(\mathbf{W}\) is trained, the overparametrization bound guaranteeing convergence can be improved (see Appendix B and the results in [28]). In this case, one only needs \(k\gtrsim\sigma_{\mathbf{F}}^{-2}n\psi(L_{\mathcal{L},0}(L_{\mathbf{F},0}^{2}n+m))^{2}\). In particular, for the MSE loss and an operator such that \(L_{\mathbf{F},0}\) is of constant order (as is the case when \(\mathbf{F}\) is linear), we only need \(k\gtrsim n^{2}m\). The main reason underlying this improvement is that there is no need, in this case, to control the deviation of \(\mathbf{V}\) from its initialization when computing the local Lipschitz constant of the Jacobian of the network. This yields a far better Lipschitz constant estimate, which turns out to be even global in this case.

_Remark 4.8_ (Effect of the input dimension \(d\)).: Finally, the dependence on \(d\) is far smaller (by a log factor) than the one on \(n\) and \(m\).
As stated, \(d\) also affects the probability of success, but it is possible to write the same result with a probability that does not involve \(d\), at the price of a stronger impact of \(n\). This indicates that \(d\) plays a very minor role in the overparametrization level, whereas \(k\) is the key to reaching the overparametrized regime we are looking for. In fact, this is demonstrated by our numerical experiments, where we obtained the same results using very small \(d\in[1,10]\) or larger values up to 500, for all our experiments with potentially large \(n\).

### Proofs

We start with the following lemmas that will be instrumental in the proof of Theorem 4.1.

**Lemma 4.9** (Bound on \(\sigma_{\min}(\mathcal{J}_{\mathbf{g}}(0))\) with both layers trained).: _Consider the one-hidden layer network (3) with both layers trained under assumptions A-5 and A-8-A-10. We have_

\[\sigma_{\min}(\mathcal{J}_{\mathbf{g}}(0))\geq\sqrt{C_{\phi}^{2}+C_{\phi^{\prime}}^{2}}/2\]

_with probability at least \(1-2n^{-1}\), provided that \(k/\log(k)\geq Cn\log(n)\) for \(C>0\) large enough that depends only on \(B\), \(C_{\phi}\), \(C_{\phi^{\prime}}\) and \(D\)._

Proof.: Define the matrix \(\mathbf{H}=\mathcal{J}_{\mathbf{g}}(\boldsymbol{\theta}_{0})\mathcal{J}_{\mathbf{g}}(\boldsymbol{\theta}_{0})^{\top}\). Since \(\mathbf{u}\) is on the unit sphere, \(\mathbf{H}\) reads

\[\mathbf{H}=\frac{1}{k}\sum_{i=1}^{k}\mathbf{H}_{i},\quad\text{where}\quad\mathbf{H}_{i}\stackrel{{\mathrm{def}}}{{=}}\phi^{\prime}(\mathbf{W}^{i}(0)\mathbf{u})^{2}\mathbf{V}_{i}(0)\mathbf{V}_{i}(0)^{\top}+\phi(\mathbf{W}^{i}(0)\mathbf{u})^{2}\mathbf{I}_{n}.\]

It then follows that

\[\mathbb{E}\left[\mathbf{H}\right]=\frac{1}{k}\mathbb{E}_{X\sim\mathcal{N}(0,1)}\left[\phi^{\prime}(X)^{2}\right]\sum_{i=1}^{k}\mathbb{E}\left[\mathbf{V}_{i}(0)\mathbf{V}_{i}(0)^{\top}\right]+\mathbb{E}_{X\sim\mathcal{N}(0,1)}\left[\phi(X)^{2}\right]\mathbf{I}_{n}=(C_{\phi^{\prime}}^{2}+C_{\phi}^{2})\mathbf{I}_{n},\]

where we used A-8, A-9 and the orthogonal invariance of the Gaussian distribution, whence the \(\mathbf{W}^{i}(0)\mathbf{u}\) are iid \(\mathcal{N}(0,1)\), as well as A-10 and the independence between \(\mathbf{V}(0)\) and \(\mathbf{W}(0)\). Moreover, \(\mathbb{E}\left[\phi(X)\right]\leq C_{\phi}\), and since \(X\sim\mathcal{N}(0,1)\) and in view of A-5, we can upper-bound \(\phi(X)\) using the Gaussian concentration inequality to get

\[\mathbb{P}\left(\phi(X)\geq C_{\phi}\sqrt{\log(nk)}+\tau\right)\leq\mathbb{P}\left(\phi(X)\geq\mathbb{E}\left[\phi(X)\right]+\tau\right)\leq\exp\left(-\frac{\tau^{2}}{2B^{2}}\right). \tag{20}\]

By choosing \(\tau=\sqrt{2}B\sqrt{\log(nk)}\), and taking \(c_{1}=C_{\phi}+\sqrt{2}B\), we get

\[\mathbb{P}\left(\phi(X)\geq c_{1}\sqrt{\log(nk)}\right)\leq(nk)^{-1}. \tag{21}\]

Using a union bound over the \(k\) neurons, we obtain

\[\mathbb{P}\left(\max_{i\in[k]}\phi(\mathbf{W}^{i}(0)\mathbf{u})^{2}>c_{1}^{2}\log(nk)\right)\leq k(nk)^{-1}=n^{-1}.\]

Thus, with probability at least \(1-n^{-1}\), we get

\[\max_{i\in[k]}\lambda_{\max}\left(\mathbf{H}_{i}\right)\leq B^{2}D^{2}n+c_{1}^{2}\log(nk)\leq c_{2}n\log(k),\]

where \(c_{2}=B^{2}D^{2}+2c_{1}^{2}\). We can then apply the matrix Chernoff inequality [60, Theorem 5.1.1] to get

\[\mathbb{P}\left(\sigma_{\min}(\mathcal{J}_{\mathbf{g}}(0))\leq\delta\sqrt{C_{\phi^{\prime}}^{2}+C_{\phi}^{2}}\right)\leq ne^{-\frac{(1-\delta)^{2}k(C_{\phi^{\prime}}^{2}+C_{\phi}^{2})}{c_{2}n\log(k)}}+n^{-1}.\]

Taking \(\delta=1/2\) and \(k\) as prescribed with a sufficiently large constant \(C\), we conclude.
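The lower bound of Lemma 4.9 is easy to probe numerically. The sketch below is our illustration, with \(\phi=\tanh\) and arbitrary sizes; the Rademacher columns for \(\mathbf{V}(0)\) are just one admissible choice under A-10. It builds \(\mathbf{H}\) as in the proof and compares \(\sigma_{\min}(\mathcal{J}_{\mathbf{g}}(0))=\sqrt{\lambda_{\min}(\mathbf{H})}\) with \(\sqrt{C_{\phi}^{2}+C_{\phi^{\prime}}^{2}}/2\):

```python
import numpy as np

# Empirical check of Lemma 4.9 for phi = tanh, with an initialization
# satisfying A-8 to A-10. Sizes are illustrative.
rng = np.random.default_rng(2)
n, d, k = 20, 50, 4000
u = rng.normal(size=d); u /= np.linalg.norm(u)    # A-8: uniform on the sphere
W0 = rng.normal(size=(k, d))                      # A-9: iid N(0, 1) entries
V0 = rng.choice([-1.0, 1.0], size=(n, k))         # A-10: centered, bounded, identity covariance

s = W0 @ u                                        # iid N(0, 1) by rotational invariance
phi, dphi = np.tanh(s), 1.0 / np.cosh(s) ** 2
# H = J_g(0) J_g(0)^T = (1/k) sum_i [ phi'(s_i)^2 V_i V_i^T + phi(s_i)^2 I_n ]
H = (V0 * dphi**2) @ V0.T / k + np.mean(phi**2) * np.eye(n)
sigma_min = np.sqrt(np.linalg.eigvalsh(H)[0])

# Monte Carlo estimates of C_phi and C_phi' for the comparison.
X = rng.normal(size=10**6)
bound = 0.5 * np.sqrt(np.mean(np.tanh(X) ** 2) + np.mean(1.0 / np.cosh(X) ** 4))
print(sigma_min, bound)    # sigma_min should exceed the bound for large k
```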
**Lemma 4.10** (Local Lipschitz constant of \(\mathcal{J}_{\mathbf{g}}\) with both layers trained).: _Suppose that assumptions A-5, A-8 and A-10 are satisfied. For the one-hidden layer network (3) with both layers trained, we have for \(n\geq 2\) and any \(\rho>0\):_

\[\mathrm{Lip}_{\mathbb{B}(\boldsymbol{\theta}_{0},\rho)}(\mathcal{J}_{\mathbf{g}})\leq B(1+2(D+\rho))\sqrt{\frac{n}{k}}.\]

Proof.: Let \(\boldsymbol{\theta}\in\mathbb{R}^{k(d+n)}\) (resp. \(\widetilde{\boldsymbol{\theta}}\)) be the vectorized form of the parameters of the network \((\mathbf{W},\mathbf{V})\) (resp. \((\widetilde{\mathbf{W}},\widetilde{\mathbf{V}})\)). For \(\boldsymbol{\theta},\widetilde{\boldsymbol{\theta}}\in\mathbb{B}(\boldsymbol{\theta}_{0},\rho)\), we have

\[\left\|\mathcal{J}_{\mathbf{g}}(\boldsymbol{\theta})-\mathcal{J}_{\mathbf{g}}(\widetilde{\boldsymbol{\theta}})\right\|^{2} \leq\frac{1}{k}\left(\sum_{i=1}^{k}\left\|\phi^{\prime}(\mathbf{W}^{i}\mathbf{u})\mathbf{V}_{i}\mathbf{u}^{\top}-\phi^{\prime}(\widetilde{\mathbf{W}}^{i}\mathbf{u})\widetilde{\mathbf{V}}_{i}\mathbf{u}^{\top}\right\|_{F}^{2}+\left\|\mathrm{diag}_{n}\left(\phi(\mathbf{W}\mathbf{u})-\phi(\widetilde{\mathbf{W}}\mathbf{u})\right)\right\|_{F}^{2}\right)\] \[\leq\frac{1}{k}\Bigg{(}2\sum_{i=1}^{k}\left(\left\|\phi^{\prime}(\mathbf{W}^{i}\mathbf{u})\left(\mathbf{V}_{i}-\widetilde{\mathbf{V}}_{i}\right)\mathbf{u}^{\top}\right\|_{F}^{2}+\left\|\left(\phi^{\prime}(\mathbf{W}^{i}\mathbf{u})-\phi^{\prime}(\widetilde{\mathbf{W}}^{i}\mathbf{u})\right)\widetilde{\mathbf{V}}_{i}\mathbf{u}^{\top}\right\|_{F}^{2}\right)+\left\|\mathrm{diag}_{n}\left(\phi(\mathbf{W}\mathbf{u})-\phi(\widetilde{\mathbf{W}}\mathbf{u})\right)\right\|_{F}^{2}\Bigg{)}\] \[\leq\frac{1}{k}\left(2B^{2}\sum_{i=1}^{k}\left(\left\|\mathbf{V}_{i}-\widetilde{\mathbf{V}}_{i}\right\|^{2}+\left\|\mathbf{W}^{i}-\widetilde{\mathbf{W}}^{i}\right\|^{2}\left\|\widetilde{\mathbf{V}}_{i}\right\|^{2}\right)+n\left\|\phi(\mathbf{W}\mathbf{u})-\phi(\widetilde{\mathbf{W}}\mathbf{u})\right\|^{2}\right)\] \[\leq\frac{1}{k}\left(2B^{2}\left\|\mathbf{V}-\widetilde{\mathbf{V}}\right\|_{F}^{2}+2B^{2}\sum_{i=1}^{k}\left\|\mathbf{W}^{i}-\widetilde{\mathbf{W}}^{i}\right\|^{2}\left\|\widetilde{\mathbf{V}}_{i}\right\|^{2}+B^{2}n\left\|(\mathbf{W}-\widetilde{\mathbf{W}})\mathbf{u}\right\|^{2}\right)\] \[\leq\frac{1}{k}\left(2B^{2}\left\|\mathbf{V}-\widetilde{\mathbf{V}}\right\|_{F}^{2}+2B^{2}\max_{i}\left\|\widetilde{\mathbf{V}}_{i}\right\|^{2}\left\|\mathbf{W}-\widetilde{\mathbf{W}}\right\|_{F}^{2}+B^{2}n\left\|\mathbf{W}-\widetilde{\mathbf{W}}\right\|_{F}^{2}\right)\] \[\leq\frac{n}{k}B^{2}\left(\left\|\mathbf{V}-\widetilde{\mathbf{V}}\right\|_{F}^{2}+\left\|\mathbf{W}-\widetilde{\mathbf{W}}\right\|_{F}^{2}\right)+\frac{2}{k}B^{2}\max_{i}\left\|\widetilde{\mathbf{V}}_{i}\right\|^{2}\left\|\mathbf{W}-\widetilde{\mathbf{W}}\right\|_{F}^{2}\] \[=\frac{n}{k}B^{2}\left\|\boldsymbol{\theta}-\widetilde{\boldsymbol{\theta}}\right\|^{2}+\frac{2}{k}B^{2}\max_{i}\left\|\widetilde{\mathbf{V}}_{i}\right\|^{2}\left\|\mathbf{W}-\widetilde{\mathbf{W}}\right\|_{F}^{2}.\]

Moreover, for any \(i\in[k]\):

\[\left\|\widetilde{\mathbf{V}}_{i}\right\|^{2}\leq 2\left\|\mathbf{V}_{i}(0)\right\|^{2}+2\left\|\widetilde{\mathbf{V}}_{i}-\mathbf{V}_{i}(0)\right\|^{2}\leq 2\left\|\mathbf{V}_{i}(0)\right\|^{2}+2\left\|\widetilde{\boldsymbol{\theta}}-\boldsymbol{\theta}_{0}\right\|^{2}\leq 2nD^{2}+2\rho^{2},\]

where we used A-10.
Thus

\[\left\|\mathcal{J}_{\mathbf{g}}(\boldsymbol{\theta})-\mathcal{J}_{\mathbf{g}}(\widetilde{\boldsymbol{\theta}})\right\|^{2}\leq\frac{n}{k}B^{2}\left(1+4D^{2}+2\rho^{2}\right)\left\|\boldsymbol{\theta}-\widetilde{\boldsymbol{\theta}}\right\|^{2}.\]

**Lemma 4.11** (Bound on the initial error).: _Under assumptions A-5, A-6 and A-8 to A-10, the initial error of the network satisfies_

\[\left\|\mathbf{y}(0)-\mathbf{y}\right\|\leq CL_{\mathbf{F},0}\sqrt{n\log(d)}+\sqrt{m}\left(\left\|\mathbf{F}(\overline{\mathbf{x}})\right\|_{\infty}+\left\|\boldsymbol{\varepsilon}\right\|_{\infty}\right),\]

_with probability at least \(1-d^{-1}\), where \(C\) is a constant that depends only on \(B\), \(C_{\phi}\), and \(D\)._

Proof.: By A-6 and the mean value theorem, we have

\[\left\|\mathbf{y}(0)-\mathbf{y}\right\|\leq\max_{\mathbf{x}\in\mathbb{B}(0,\left\|\mathbf{x}(0)\right\|)}\left\|\mathcal{J}_{\mathbf{F}}(\mathbf{x})\right\|\left\|\mathbf{x}(0)\right\|+\sqrt{m}\left(\left\|\mathbf{F}(\overline{\mathbf{x}})\right\|_{\infty}+\left\|\boldsymbol{\varepsilon}\right\|_{\infty}\right),\]

where \(\mathbf{x}(0)=\mathbf{g}(\mathbf{u},\boldsymbol{\theta}(0))=\frac{1}{\sqrt{k}}\sum_{i=1}^{k}\phi(\mathbf{W}^{i}(0)\mathbf{u})\mathbf{V}_{i}(0)\). Moreover, by A-10:

\[\left\|\mathbf{g}(\mathbf{u},\boldsymbol{\theta}(0))\right\|\leq\max_{i}\left\|\mathbf{V}_{i}(0)\right\|\frac{1}{\sqrt{k}}\sum_{i=1}^{k}\left|\phi(\mathbf{W}^{i}(0)\mathbf{u})\right|\leq D\sqrt{n}\frac{1}{\sqrt{k}}\sum_{i=1}^{k}\left|\phi(\mathbf{W}^{i}(0)\mathbf{u})\right|.\]

We now prove that the last term concentrates around its expectation. First, owing to A-8 and A-9, we can argue using the orthogonal invariance of the Gaussian distribution and independence to infer that

\[\mathbb{E}\left[\frac{1}{\sqrt{k}}\sum_{i=1}^{k}\left|\phi(\mathbf{W}^{i}(0)\mathbf{u})\right|\right]^{2}\leq\frac{1}{k}\mathbb{E}\left[\left(\sum_{i=1}^{k}\left|\phi(\mathbf{W}^{i}(0)\mathbf{u})\right|\right)^{2}\right]=\mathbb{E}\left[\phi(\mathbf{W}^{1}(0)\mathbf{u})^{2}\right]=C_{\phi}^{2}.\]

In addition, the triangle inequality and the Lipschitz continuity of \(\phi\) (see A-5) yield

\[\frac{1}{\sqrt{k}}\left|\sum_{i=1}^{k}\left(\left|\phi(\mathbf{W}^{i}\mathbf{u})\right|-\left|\phi(\widetilde{\mathbf{W}}^{i}\mathbf{u})\right|\right)\right| \leq\frac{1}{\sqrt{k}}\sum_{i=1}^{k}\left|\phi(\mathbf{W}^{i}\mathbf{u})-\phi(\widetilde{\mathbf{W}}^{i}\mathbf{u})\right|\] \[\leq B\left(\frac{1}{\sqrt{k}}\sum_{i=1}^{k}\left\|\mathbf{W}^{i}-\widetilde{\mathbf{W}}^{i}\right\|\right)\leq BD\left\|\mathbf{W}-\widetilde{\mathbf{W}}\right\|_{F}.\]

We then get, using the Gaussian concentration inequality, that

\[\mathbb{P}\left(\frac{1}{\sqrt{k}}\sum_{i=1}^{k}\left|\phi(\mathbf{W}^{i}(0)\mathbf{u})\right|\geq C_{\phi}\sqrt{\log(d)}+\tau\right)\leq\mathbb{P}\left(\frac{1}{\sqrt{k}}\sum_{i=1}^{k}\left|\phi(\mathbf{W}^{i}(0)\mathbf{u})\right|\geq\mathbb{E}\left[\frac{1}{\sqrt{k}}\sum_{i=1}^{k}\left|\phi(\mathbf{W}^{i}(0)\mathbf{u})\right|\right]+\tau\right)\leq e^{-\frac{\tau^{2}}{2B^{2}D^{2}}}.\]

Taking \(\tau=\sqrt{2}BD\sqrt{\log(d)}\), we get

\[\|\mathbf{x}(0)\|\leq C\sqrt{n\log(d)}\]

with probability at least \(1-d^{-1}\). Since the event above implies \(\mathbb{B}(0,\|\mathbf{x}(0)\|)\subset\mathbb{B}\left(0,C\sqrt{n\log(d)}\right)\), we conclude.

Proof of Theorem 4.1.: Proving Theorem 4.1 amounts to showing that (8) holds with high probability under our scaling. This will be achieved by combining Lemma 4.9, Lemma 4.10 and Lemma 4.11 together with the union bound.
From Lemma 4.9, we have

\[\sigma_{\min}(\mathcal{J}_{\mathbf{g}}(0))\geq\sqrt{C_{\phi}^{2}+C_{\phi^{\prime}}^{2}}/2\]

with probability at least \(1-2n^{-1}\), provided \(k\geq C_{0}n\log(n)\log(k)\) for some \(C_{0}>0\). On the other hand, from Lemma 4.10, and recalling \(R\) from (9), we have that \(R\) must obey

\[R\geq\frac{\sigma_{\min}(\mathcal{J}_{\mathbf{g}}(0))}{2B(1+2D+2R)}\sqrt{\frac{k}{n}}\geq\frac{\sqrt{C_{\phi}^{2}+C_{\phi^{\prime}}^{2}}}{8B(1/2+D+R)}\sqrt{\frac{k}{n}}.\]

Solving for \(R\), we arrive at

\[R\geq\frac{\sqrt{(1/2+D)^{2}+\frac{\sqrt{(C_{\phi}^{2}+C_{\phi^{\prime}}^{2})\frac{k}{n}}}{2B}}-(1/2+D)}{2}.\]

Simple algebraic computations and standard bounds on \(\sqrt{1+a}\) for \(a\in[0,1]\) show that

\[R\geq C_{1}\left(\frac{k}{n}\right)^{1/4}\]

whenever \(k\gtrsim n\), \(C_{1}\) being a positive constant that depends only on \(B\), \(C_{\phi}\), \(C_{\phi^{\prime}}\) and \(D\). Thanks to A-1 and A-3, we have by the descent lemma, see e.g. [42, Lemma 2.64], that

\[\mathcal{L}_{\mathbf{y}}(\mathbf{y}(0))\leq\max_{\mathbf{v}\in[\mathbf{y},\mathbf{y}(0)]}\frac{\|\nabla\mathcal{L}_{\mathbf{y}}(\mathbf{v})\|}{\|\mathbf{v}-\mathbf{y}\|}\frac{\left\|\mathbf{y}(0)-\mathbf{y}\right\|^{2}}{2}.\]

Combining Lemma 4.11 and the fact that

\[[\mathbf{y},\mathbf{y}(0)]\subset\mathbb{B}(0,\left\|\mathbf{y}\right\|+\left\|\mathbf{y}(0)\right\|)\]

then allows to deduce that, with probability at least \(1-d^{-1}\), we have

\[\mathcal{L}_{\mathbf{y}}(\mathbf{y}(0))\leq\frac{L_{\mathcal{L},0}}{2}\left(CL_{\mathbf{F},0}\sqrt{n\log(d)}+\sqrt{m}\left(\left\|\mathbf{F}(\overline{\mathbf{x}})\right\|_{\infty}+\left\|\boldsymbol{\varepsilon}\right\|_{\infty}\right)\right)^{2}.\]

Therefore, using the union bound and the fact that \(\psi\) is increasing, it is sufficient for (8) to be fulfilled with probability at least \(1-2n^{-1}-d^{-1}\) that

\[\frac{4}{\sigma_{\mathbf{F}}\sqrt{C_{\phi}^{2}+C_{\phi^{\prime}}^{2}}}\psi\left(\frac{L_{\mathcal{L},0}}{2}\left(CL_{\mathbf{F},0}\sqrt{n\log(d)}+\sqrt{m}\left(\left\|\mathbf{F}(\overline{\mathbf{x}})\right\|_{\infty}+\left\|\boldsymbol{\varepsilon}\right\|_{\infty}\right)\right)^{2}\right)<C_{1}\left(\frac{k}{n}\right)^{1/4}, \tag{22}\]

whence we deduce the claimed scaling.

## 5 Numerical Experiments

To validate our theoretical findings, we carried out a series of experiments on two-layer neural networks in the DIP setting. Therein, 25000 gradient descent iterations with a fixed step-size were performed. If the loss reached a value smaller than \(10^{-7}\), we stopped the training and considered that it had converged. For these networks, we only trained the first layer, \(\mathbf{W}\), and fixed the second layer, \(\mathbf{V}\), as this allows for better theoretical scalings, as discussed in Remark 4.7. Every network was initialized in accordance with the assumptions of this work, and we used the sigmoid activation function. The entries of \(\overline{\mathbf{x}}\) are drawn from \(\mathcal{N}(0,1)\), while the entries of the linear forward operator \(\mathbf{F}\) are drawn from \(\mathcal{N}(0,1/\sqrt{n})\) to ensure that \(L_{\mathbf{F},0}\) is of constant order.
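For concreteness, here is a minimal sketch of the training loop used in this kind of experiment (our rendition under the stated assumptions: first layer trained, second layer fixed, sigmoid activation, MSE loss; the sizes, step size and the Rademacher choice for \(\mathbf{V}\) are illustrative, not the exact values of our runs):

```python
import numpy as np

# Two-layer DIP network g(u, theta) = V @ sigmoid(W @ u) / sqrt(k), with only
# W trained, fitted to y = A x_true by gradient descent on the MSE loss.
rng = np.random.default_rng(3)
n, m, d, k = 30, 10, 10, 400
u = rng.normal(size=d); u /= np.linalg.norm(u)
A = rng.normal(size=(m, n)) / np.sqrt(n)    # linear forward operator
x_true = rng.normal(size=n)
y = A @ x_true                              # noise-free observations

W = rng.normal(size=(k, d))                 # trained layer
V = rng.choice([-1.0, 1.0], size=(n, k))    # fixed second layer
sigmoid = lambda s: 1.0 / (1.0 + np.exp(-s))

lr = 1.0
for it in range(25000):
    h = sigmoid(W @ u)
    x = V @ h / np.sqrt(k)                  # g(u, theta)
    r = A @ x - y
    loss = 0.5 * (r @ r)
    if loss < 1e-7:                         # convergence criterion of Section 5
        break
    grad_h = V.T @ (A.T @ r) / np.sqrt(k)   # backprop through the fixed layer
    grad_s = grad_h * h * (1.0 - h)         # sigmoid derivative
    W -= lr * np.outer(grad_s, u)           # gradient step on W only
print(it, loss)
```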
We used \(\mathcal{L}_{\mathbf{y}}(\mathbf{y}(t))=\frac{1}{2}\left\|\mathbf{y}(t)-\mathbf{y}\right\|^{2}\) as it should give good exponential decay. For each set of architecture parameters, we did 50 runs and calculated the frequency at which the network reached the error threshold of \(10^{-7}\). We present two experiments: in the first one we fix \(m=10\) and \(d=500\) and let \(k\) and \(n\) vary, while in the second we fix \(n=60\), \(d=500\) and let \(k\) and \(m\) vary. Based on Remark 4.7 concerning Theorem B.1, which is a specialisation of Theorem 4.1, for our experimental setting (MSE loss with \(L_{\mathbf{F},0}\) of constant order) one should expect to observe convergence to zero-loss solutions when \(k\gtrsim n^{2}m\). We observe in Figure 1(a) the relationship between \(k\) and \(n\) for a fixed \(m\). In this setup where \(n\gg m\) and \(\mathbf{A}\) is Gaussian, we expect a quadratic relationship, which seems to be the case in the plot. It is however surprising that, with values of \(k\) restricted to the range \([20,1000]\), the network converges to a zero-loss solution with high probability in situations where \(n>k\), which goes against our intuition for these underparametrized cases. Additionally, the observation of Figure 1(b) provides a very different picture when the ratio \(m/n\) moves away from 0. We first clearly see the expected linear relationship between \(k\) and \(m\). However, we used \(n=60\) in this experiment, and we can see that for the same range of values of \(k\), the method has much more difficulty converging even for small \(m\). This indicates that the ratio \(m/n\) plays an important role in the level of overparametrization necessary for the network to converge. It is clear from these results that our bounds are not tight, as we observe convergence for lower values of \(k\) than expected. In our second experiment, presented in Figure 2(a), we look at the signal evolution under different noise levels when the restricted injectivity constraint A-7 is met, to verify our theoretical bound on the signal loss. Because our networks can span the entirety of the space \(\mathbb{R}^{n}\), this injectivity constraint becomes a global one, which forces us to use a square matrix as our forward operator; we thus chose \(n=m=10\). Following the discussion about assumption A-4, we choose to use \(\mathcal{L}_{\mathbf{y}}(\mathbf{y}(t))=\eta(\left\|\mathbf{y}(t)-\mathbf{y}\right\|^{2})\) with \(\eta(s)=s^{p+1}/\big{(}2(p+1)\big{)}\) where \(p\in[0,1]\), with \(p=0.2\) for this specific experiment. We generated once a forward operator with singular values in \(\{\frac{1}{z^{2}+1}\mid z\in\{0,\ldots,9\}\}\) and kept the same one for all the runs. To better see the convergence of the signal, we ran these experiments for 200000 iterations. Furthermore, \(\epsilon\) is a noise vector with entries drawn from a uniform distribution \(U(-\beta,\beta)\), with \(\beta\) representing the level of noise. In this figure, we plot the mean and the standard deviation of 50 runs for each noise level. For comparison we also show with the dashed line the expectation of the theoretical upper bound, corresponding to \(\mathbb{E}\left[\left\|\varepsilon\right\|/\mu_{\mathbf{F},\Sigma^{\prime}}\right]\geq\frac{\sqrt{m}\beta}{\sqrt{6}\mu_{\mathbf{F},\Sigma^{\prime}}}\). We observe that the gap between this theoretical bound and the mean of the signal loss grows as the noise level grows. This indicates that the more noise there is, the less tight our bound becomes.
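As a small companion to this second experiment, the following NumPy sketch constructs a forward operator with the prescribed singular values and the loss \(\eta(\left\|\mathbf{y}(t)-\mathbf{y}\right\|^{2})\). The orthogonal factors of the operator are a sampling assumption, since only its singular values are specified in the text.

```python
import numpy as np

rng = np.random.default_rng(0)
n = m = 10
p, beta = 0.2, 0.1                      # loss exponent and noise level from the text

# Forward operator with singular values {1/(z^2 + 1), z = 0, ..., 9};
# the orthogonal factors U and Vt are an arbitrary (assumed) choice.
U, _ = np.linalg.qr(rng.standard_normal((m, m)))
Vt, _ = np.linalg.qr(rng.standard_normal((n, n)))
S = np.diag(1.0 / (np.arange(n) ** 2 + 1.0))
F = U @ S @ Vt

x_bar = rng.standard_normal(n)
eps = rng.uniform(-beta, beta, size=m)  # uniform noise U(-beta, beta)
y = F @ x_bar + eps

def loss(v):
    # eta(s) = s^(p+1) / (2 (p+1)) applied to the squared residual.
    s = np.sum((v - y) ** 2)
    return s ** (p + 1) / (2 * (p + 1))
```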
We also see different convergence profiles of the signal depending on the noise level, which is to be expected as the network will fit this noise to optimize its loss. Of course, when there is no noise, the signal tends to the ground truth thanks to the injectivity of the forward operator. We continue the study of the effect of the noise on the convergence of the networks in Figure 2(b), which shows the convergence profile of the loss depending on the noise level and \(k\).

Figure 1: Probability of converging to a zero-loss solution for networks with different architecture parameters, confirming our theoretical predictions: linear dependency between \(k\) and \(m\) and at least quadratic dependency between \(k\) and \(n\). The blue line is a quadratic function representing the phase transition fitted on the data.

For this experiment we fixed \(n=1000\), \(m=10\), \(d=10\), \(p=0.1\), ran the optimization of networks with different \(k\) and \(\beta\) values, and took the loss value obtained at the end of the optimization. The results are averaged over 50 runs and show that even if a network with insufficient overparametrization does not converge to a zero-loss solution, the more neurons it has, the better on average the solution is in terms of loss value. Moreover, this effect seems to remain true even with noise. It is interesting to see the behavior of the loss in such cases that are not treated by our theoretical framework. For our fourth experiment, we are interested in the effect of the parameter \(p\) of the previously described loss on the convergence speed. We fixed \(n=1000\), \(m=10\) and \(k=800\) and varied \(p\) between 0 and 1. For each choice of \(p\), we trained 50 networks and show the mean value of the loss at each iteration in Figure 3. We chose to use \(10^{6}\) iteration steps and let the optimization reach a limit of \(10^{-14}\). As expected from Corollary 3.3, smaller \(p\) values lead to faster convergence rates in general. Indeed, smaller \(p\) values are closer to the case where \(\alpha=1/2\) in the corollary, and higher \(p\) values mean that \(\alpha\) will grow away from \(1/2\), which worsens the theoretical rate of convergence.

## 6 Conclusion and Future Work

This paper studied the optimization trajectories of neural networks in the inverse problem setting and provided both convergence guarantees for the network and recovery guarantees of the solution. Our results hold for a broad class of loss functions thanks to the Kurdyka-Łojasiewicz inequality. We also demonstrate that for a two-layer DIP network with smooth activation and sufficient overparametrization, our theoretical guarantees hold with high probability.

Figure 2: Effect of the noise on both the signal and the loss convergence in different contexts.

Our proof relies on bounding the minimum singular value of the Jacobian of the network through an overparametrization that ensures a good initialization of the network. Then the recovery guarantees are obtained by decomposing the distance to the signal into different error terms explained by the noise, the optimization and the architecture. Although our bounds are not tight, as demonstrated by the numerical experiments, they provide a step towards the theoretical understanding of neural networks for inverse problem resolution. In the future, we would like to study the multilayer case more thoroughly and adapt our results to take into account the ReLU function.
Another future direction is to adapt our analysis to the supervised setting and to provide a similar analysis with accelerated optimization methods.
2309.09483
An Accurate and Efficient Neural Network for OCTA Vessel Segmentation and a New Dataset
Optical coherence tomography angiography (OCTA) is a noninvasive imaging technique that can reveal high-resolution retinal vessels. In this work, we propose an accurate and efficient neural network for retinal vessel segmentation in OCTA images. The proposed network achieves accuracy comparable to other SOTA methods, while having fewer parameters and faster inference speed (e.g. 110x lighter and 1.3x faster than U-Net), which is very friendly for industrial applications. This is achieved by applying the modified Recurrent ConvNeXt Block to a full resolution convolutional network. In addition, we create a new dataset containing 918 OCTA images and their corresponding vessel annotations. The data set is semi-automatically annotated with the help of Segment Anything Model (SAM), which greatly improves the annotation speed. For the benefit of the community, our code and dataset can be obtained from https://github.com/nhjydywd/OCTA-FRNet.
Haojian Ning, Chengliang Wang, Xinrun Chen, Shiying Li
2023-09-18T04:47:12Z
http://arxiv.org/abs/2309.09483v1
# An Accurate and Efficient Neural Network for OCTA Vessel Segmentation and a New Dataset

###### Abstract

Optical coherence tomography angiography (OCTA) is a noninvasive imaging technique that can reveal high-resolution retinal vessels. In this work, we propose an accurate and efficient neural network for retinal vessel segmentation in OCTA images. The proposed network achieves accuracy comparable to other SOTA methods, while having fewer parameters and faster inference speed (e.g. 110x lighter and 1.3x faster than U-Net), which is very friendly for industrial applications. This is achieved by applying the modified Recurrent ConvNeXt Block to a full resolution convolutional network. In addition, we create a new dataset containing 918 OCTA images and their corresponding vessel annotations. The dataset is semi-automatically annotated with the help of the Segment Anything Model (SAM), which greatly improves the annotation speed. For the benefit of the community, our code and dataset can be obtained from [https://github.com/nhjdywd/OCTA-FRNet](https://github.com/nhjdywd/OCTA-FRNet).

Haojian Ning, Chengliang Wang\({}^{*}\), Xinrun Chen (Chongqing University, College of Computer Science, Chongqing, China); Shiying Li (Xiang'an Hospital of Xiamen University, Department of Ophthalmology, Xiamen, China)

Keywords: OCTA, Vessel Segmentation, Neural Network, ConvNeXt, Dataset, Segment Anything Model

## 1 Introduction

Optical coherence tomography angiography (OCTA) is a rapid and non-invasive imaging technology for retinal micro-vasculature[1]. It can help diagnose various retinal diseases such as age-related macular degeneration (AMD)[2], diabetic retinopathy (DR)[3], glaucoma[4], etc. Automatic OCTA vessel segmentation based on neural networks has been a research hotspot because it helps to improve the efficiency of diagnosis. On-device inference is important for industrial applications of neural networks. Since the computing power and storage space of end-side devices are often very limited, it is necessary for the model to have a small number of parameters and high computing efficiency. Previous work [5, 6, 7, 8, 9, 10] mainly adopted the encoder-decoder architecture and achieved very high accuracy in OCTA vessel segmentation. However, these networks usually have a large number of parameters and slow inference speed, which is unfriendly to on-device applications. There is still a lack of models for OCTA images that are friendly to industrial applications. In this paper, we make the following contributions:

* We propose a full-resolution convolutional network (FRNet), which consists of several modified Recurrent ConvNeXt Blocks. The network has comparable accuracy to other SOTA methods while having significantly fewer parameters and faster inference speed, making it very friendly for industrial applications.
* We create a new dataset containing 918 OCTA images and their corresponding vessel annotations. The dataset is semi-automatically annotated with the help of the Segment Anything Model (SAM), which greatly improves the annotation speed. To the best of our knowledge, our dataset is the largest (with 918 images) OCTA vessel segmentation dataset, and we believe it can help the community alleviate the problem of insufficient datasets.
* We conduct various experiments to demonstrate the effectiveness of the proposed model. We also make the code and dataset of this work publicly available, which helps improve the reproducibility of the work.

Figure 1: (a) An OCTA image from the ROSSA dataset that will be released in this paper. (b) The corresponding vessel annotation.

## 2 Related Work

### Neural Networks for OCTA Vessel Segmentation

In recent years, convolutional neural networks have been widely applied in medical image processing, including OCTA images. Previous work on OCTA vessel segmentation usually adopts the encoder-decoder architecture, more specifically, the U-Net[11] architecture. For example, Peng et al.[5] applied an iterative encoder-decoder network to correct erroneous outputs. Hu et al.[12] proposed a joint encoding and separate decoding network to handle multiple tasks. Ma et al.[10] designed a two-stage split attention residual UNet to refine the segmentation results. Ziping et al.[13] added a contrastive learning module after an encoder-decoder network to improve the performance. The above methods achieve quite high accuracy, but they are heavyweight, having a large number of parameters and slow inference speed, which brings inconvenience to industrial applications. There is a trend in the 2020s for transformers[14] to replace convolutional networks in various fields of computer vision. However, the work of Liu et al. on ConvNeXt[15] demonstrated that properly designed convolutional networks can achieve better results than transformers. So in this work we adopt convolutional networks on the basis of ConvNeXt. Details will be discussed in Section 3.1.

### Datasets for OCTA Vessel Segmentation

OCTA is a relatively new imaging modality with a late start. There are only two main datasets used in vessel segmentation studies: ROSE[10] and OCTA-500[7]. ROSE contains 229 OCTA projection images with \(3mm\times 3mm\) FOV and manual labels of vessels and capillaries. OCTA-500 contains 300 OCTA projection images with \(6mm\times 6mm\) FOV and 200 OCTA projection images with \(3mm\times 3mm\) FOV and their corresponding annotations. To further increase the amount of publicly available data in this field, we release the ROSSA dataset, which contains 918 images. For more information, please refer to Section 3.2.

## 3 Methods

### Proposed Model

As shown in Figure 1(a), OCTA images contain many tiny vessels, which may consist of only a few pixels. For the widely used encoder-decoder architecture, the image is downsampled and then upsampled, making it difficult to fully restore the pixels of these small vessels, which affects the segmentation accuracy. Furthermore, each downsampling typically results in doubling the number of channels in the convolutional layers, which results in a model with a large number of parameters. This is why we believe that the encoder-decoder model is not the most suitable for OCTA vessel segmentation. To address these problems, we start with a simple full-resolution network (FRNet-base). As shown in Figure 2(a), FRNet-base consists of convolutional blocks without any downsampling or upsampling modules. The structure of these convolutional blocks is the same as the BasicBlock in ResNet[16], that is, a residual link is added after two convolutions. FRNet-base contains 6 such convolution blocks, and their number of channels is 32. The benefits of this design are: First, the total number of convolution channels is greatly reduced, which leads to a significant decrease in the number of parameters. Second, it avoids the loss of information of small vessels during downsampling, so that high accuracy can be achieved. We take inspiration from ConvNeXt[15] to further improve FRNet. Based on the ConvNeXt Block, we set the number of convolutional channels to 32.
In order to increase the receptive field, we replace the 1x1 pixelwise convolution with a 3x3 convolution. Furthermore, we apply the idea of recurrent convolution in this module. Recurrent convolution means that the input is looped through the convolutional layer R times; it has been shown to be quite effective for vessel segmentation in [9]. In our model we set R=2. The structure of the Recurrent ConvNeXt Block is shown in Figure 3. It is used to build FRNet, which improves the Dice score by 0.24%-0.32% compared to FRNet-base (see the code sketch below).

Figure 2: (a) FRNet-base. (b) The final designed FRNet, mainly composed of 7x7 depthwise separable convolutions, each followed by two 3x3 recurrent convolutions.

### Dataset

Vessel annotation is a labor-intensive task. Unlike regular object datasets (e.g. COCO), vessel annotations cannot be drawn in the form of polygons but must be drawn pixel by pixel. From our experience, pixel-by-pixel annotation of vessels in an OCTA image usually takes a researcher 10-30 minutes and consumes a lot of mental energy. This is an important reason why there are so few OCTA vessel segmentation datasets. In 2023, Facebook released their Segment Anything Model (SAM)[17], which can output accurate pixel-level masks given a small number of point prompts. This can greatly save annotation time. The original weights of SAM cannot be directly applied to OCTA vessel annotation because the model was not trained on similar datasets. In order to apply SAM to OCTA vessel annotation, we first manually annotated 300 OCTA vessel images (NO.1-NO.300), and then used these data to fine-tune SAM. Finally, using the fine-tuned SAM, we were able to annotate a vessel with just a simple click of a prompt, as shown in Figure 4. Although images annotated by SAM sometimes require a small amount of manual correction, overall the time to annotate an image has dropped to about 2 minutes. In this study, we used SAM to annotate 618 images (NO.301-NO.918). Since a large amount of data is semi-automatically annotated using SAM, we named the dataset ROSSA (Retinal OCTA Segmentation dataset with Semi-automatic Annotations). We divide ROSSA into a training set (NO.1-NO.100 & NO.301-NO.918), a validation set (NO.101-NO.200) and a test set (NO.201-NO.300).

## 4 Experiments

### Datasets and Experimental Settings

Experiments are conducted on the OCTA-500 and ROSSA datasets. OCTA-500 has two subsets: OCTA_6M and OCTA_3M, which contain 300 and 200 images respectively. For OCTA-500, the division of training set, validation set, and test set is the same as [7]. For ROSSA, please refer to Section 3.2 for the division of the training set, validation set, and test set. We use the training set to train network parameters, use the validation set to select the best model and use the test set for evaluation. Our methods are implemented with PyTorch using an NVIDIA A100 Tensor Core GPU. We use the Adam optimizer with a learning rate of \(1\times 10^{-4}\) and train the network for a total of 300 epochs. The batch size is set to 2. The loss function is the Dice loss, which is defined as follows: \[DiceLoss=1-\frac{2|X\cap Y|}{|X|+|Y|} \tag{1}\] The Dice score is used as the evaluation metric to choose the best model, which is defined as follows: \[Dice=\frac{2|X\cap Y|}{|X|+|Y|} \tag{2}\] where X is the predicted vessel mask and Y is the ground truth vessel mask.

### Results

All methods are evaluated using Dice(%) and Acc(%). We take the mean and standard deviation (\(mean\pm std\)) of 5 experiments.
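To make the design concrete, here is a hedged PyTorch sketch of the Recurrent ConvNeXt Block and a full-resolution FRNet assembled from it. The recurrence form, the normalization layer, the block count, and the stem and head convolutions are our illustrative assumptions; the authors' exact implementation is available in their repository.

```python
import torch.nn as nn

class RecurrentConv(nn.Module):
    """3x3 convolution applied recurrently: the input is looped through
    the same convolutional layer R times (R=2 in the paper)."""
    def __init__(self, dim, R=2):
        super().__init__()
        self.conv = nn.Conv2d(dim, dim, kernel_size=3, padding=1)
        self.R = R

    def forward(self, x):
        out = self.conv(x)
        for _ in range(self.R - 1):
            out = self.conv(out + x)  # assumed recurrence form (as in R2U-Net-style blocks)
        return out

class RecurrentConvNeXtBlock(nn.Module):
    """7x7 depthwise convolution followed by two recurrent 3x3 convolutions
    (replacing the 1x1 pointwise convolutions of the original ConvNeXt Block),
    with a residual connection."""
    def __init__(self, dim=32):
        super().__init__()
        self.dwconv = nn.Conv2d(dim, dim, kernel_size=7, padding=3, groups=dim)
        self.norm = nn.GroupNorm(1, dim)  # stand-in for channel-wise LayerNorm
        self.act = nn.GELU()
        self.rconv1 = RecurrentConv(dim)
        self.rconv2 = RecurrentConv(dim)

    def forward(self, x):
        h = self.norm(self.dwconv(x))
        h = self.act(self.rconv1(h))
        h = self.rconv2(h)
        return x + h  # residual link

# Full-resolution network: no downsampling or upsampling anywhere.
# The six-block depth mirrors FRNet-base; stem/head convs are assumptions.
frnet = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=3, padding=1),
    *[RecurrentConvNeXtBlock(32) for _ in range(6)],
    nn.Conv2d(32, 1, kernel_size=1),
)
```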
We compare our proposed network with UNet[11], UNet++[18], ResUNet[19], OVS-Net[8] and FARGO[5]. The results are tabulated in Table 1. As can be seen from the table, our proposed method achieves higher mean values on Dice and Acc scores for most of the datasets. Although FARGO[5] sometimes achieves better Dice scores, its fluctuations (standard deviation) are significantly larger, which means that its results are unstable. In contrast, the standard deviation of our proposed method is very small, which makes it stable in training and easy to reproduce. We believe this benefit is due to the simplicity of our proposed architecture.

Figure 3: The original ConvNeXt Block and our designed Recurrent ConvNeXt Block.

Figure 4: An example of using SAM to annotate vessels. Green represents positive prompts, red represents negative prompts, and yellow represents the output vessel mask. (a) A prompt. (b) Added another prompt. (c) The final annotation with all prompts.

We can see that the number of parameters of FRNet and FRNet-base is more than two orders of magnitude lower than that of their opponents (e.g. FRNet's 0.13M vs. FARGO's 17.52M). And their inference speed is also faster than that of their opponents. This proves that our proposed method is more efficient. We expect that they will be more suitable for industrial applications.

### Ablation Study

#### 4.3.1 Contribution of Components in FRNet

In Table 2 we investigate the contributions of different components to FRNet's performance. Experiments are conducted on the ROSSA dataset. The first line represents only using the Residual Block[16], which is the same as FRNet-base. The second line represents using the ConvNeXt Block[15] to replace the Residual Block. The reduction in the number of parameters is due to the use of depthwise separable convolutions in ConvNeXt. The third line represents the use of 3x3 convolution to replace the 1x1 convolution in the ConvNeXt Block. As the number of parameters increases, the accuracy also increases. The fourth line represents the application of recurrent convolution, which achieves the best accuracy. Recurrent convolution does not increase the number of parameters, but it does increase the inference time.

## 5 Conclusion

In this work, we propose an accurate and efficient neural network (FRNet) for retinal vessel segmentation in OCTA images. By applying the modified Recurrent ConvNeXt Block to a full resolution convolutional network, the proposed model, with a very tiny model size, can run faster and more accurately than its opponents. Besides, we create a new dataset (ROSSA) containing 918 OCTA images and their corresponding vessel annotations. We use the Segment Anything Model (SAM) to semi-automatically annotate images, which greatly speeds up the annotation work. The datasets and new annotation pipelines we provide can help solve the problem of the lack of data in the medical field.

## Acknowledgement

This work is supported by the Chongqing Technology Innovation \(\&\) Application Development Key Project (cstc2020jscx; dkwtBX0055; cstb2022tiad-kpx0148).
2309.15096
Fixing the NTK: From Neural Network Linearizations to Exact Convex Programs
Recently, theoretical analyses of deep neural networks have broadly focused on two directions: 1) Providing insight into neural network training by SGD in the limit of infinite hidden-layer width and infinitesimally small learning rate (also known as gradient flow) via the Neural Tangent Kernel (NTK), and 2) Globally optimizing the regularized training objective via cone-constrained convex reformulations of ReLU networks. The latter research direction also yielded an alternative formulation of the ReLU network, called a gated ReLU network, that is globally optimizable via efficient unconstrained convex programs. In this work, we interpret the convex program for this gated ReLU network as a Multiple Kernel Learning (MKL) model with a weighted data masking feature map and establish a connection to the NTK. Specifically, we show that for a particular choice of mask weights that do not depend on the learning targets, this kernel is equivalent to the NTK of the gated ReLU network on the training data. A consequence of this lack of dependence on the targets is that the NTK cannot perform better than the optimal MKL kernel on the training set. By using iterative reweighting, we improve the weights induced by the NTK to obtain the optimal MKL kernel which is equivalent to the solution of the exact convex reformulation of the gated ReLU network. We also provide several numerical simulations corroborating our theory. Additionally, we provide an analysis of the prediction error of the resulting optimal kernel via consistency results for the group lasso.
Rajat Vadiraj Dwaraknath, Tolga Ergen, Mert Pilanci
2023-09-26T17:42:52Z
http://arxiv.org/abs/2309.15096v1
# Fixing the NTK: From Neural Network Linearizations to Exact Convex Programs ###### Abstract Recently, theoretical analyses of deep neural networks have broadly focused on two directions: 1) Providing insight into neural network training by SGD in the limit of infinite hidden-layer width and infinitesimally small learning rate (also known as gradient flow) via the Neural Tangent Kernel (NTK), and 2) Globally optimizing the regularized training objective via cone-constrained convex reformulations of ReLU networks. The latter research direction also yielded an alternative formulation of the ReLU network, called a gated ReLU network, that is globally optimizable via efficient unconstrained convex programs. In this work, we interpret the convex program for this gated ReLU network as a Multiple Kernel Learning (MKL) model with a weighted data masking feature map and establish a connection to the NTK. Specifically, we show that for a particular choice of mask weights that do not depend on the learning targets, this kernel is equivalent to the NTK of the gated ReLU network on the training data. A consequence of this lack of dependence on the targets is that the NTK cannot perform better than the optimal MKL kernel on the training set. By using iterative reweighting, we improve the weights induced by the NTK to obtain the optimal MKL kernel which is equivalent to the solution of the exact convex reformulation of the gated ReLU network. We also provide several numerical simulations corroborating our theory. Additionally, we provide an analysis of the prediction error of the resulting optimal kernel via consistency results for the group lasso. ## 1 Introduction Neural Networks (NNs) have become popular in various machine learning applications due to their remarkable modeling capabilities and generalization performance. However, their highly nonlinear and non-convex structure precludes an effective theoretical analysis. Therefore, developing theoretical tools to understand the fundamental mechanisms behind neural networks is still an active research topic. To tackle this problem, [1] studied the training dynamics of neural networks trained with Stochastic Gradient Descent (SGD) in a regime where each layer has infinitely many neurons and SGD uses an infinitesimally small learning rate, i.e., gradient flow. Thus, they related the training dynamics of neural networks to the training dynamics of a fixed kernel called the Neural Tangent Kernel (NTK). However, [2] showed that neurons barely move from their initial values in this regime so that neural networks fail to learn useful features from the training data. This is in contrast to their finite width counterparts, which are able to learn predictive features in practice [3]. Moreover, [4; 5] provided further theoretical and empirical evidence to show that existing kernel approaches are not able to explain the remarkable performance of finite width networks. Therefore, although NTK and similar kernel based approaches enable theoretical analysis unlike standard finite width networks, they fail to explain the effectiveness of finite width neural networks that are employed in practice. Recently a series of papers [6; 7; 8; 9; 10; 11; 12; 13; 14; 15] introduced an analytic framework to analyze finite width neural networks by leveraging certain convex duality arguments. Particularly, they showed that the standard regularized non-convex training problem can be equivalently cast as a finite dimensional convex program. 
This convex approach has two major advantages over standard non-convex training: **(1)** Since the training objective is convex, one can find globally optimal parameters of the network efficiently and reliably, unlike standard nonconvex training, which can get stuck at a local minimum, and **(2)** As we show in this work, a class of convex reformulations can be interpreted as an instance of Multiple Kernel Learning (MKL) [16], which allows us to characterize the corresponding finite width networks by a learned data-dependent kernel that can be iteratively computed. This is in contrast to the infinite-width kernel characterization in which the NTK stays constant throughout training. **Notation and Preliminaries.** In the paper, we use lowercase and uppercase bold letters to denote vectors and matrices, respectively. We also use subscripts to denote a certain column or element. We denote the identity matrix of size \(k\times k\) as \(\mathbf{I}_{k}\). To denote the set \(\{1,2,\ldots,n\}\), we use \([n]\). We also use \(\left\lVert\cdot\right\rVert_{p}\) and \(\left\lVert\cdot\right\rVert_{F}\) to represent the standard \(\ell_{p}\) and Frobenius norms. Additionally, we denote the 0-1 valued indicator function and the ReLU activation as \(\mathbb{1}\)\(\{x\geq 0\}\) and \(\left(x\right)_{+}:=\max\{x,0\}\), respectively. In this paper, we focus on analyzing the regularized training problem of ReLU networks. Particularly, we consider a two-layer ReLU network with \(m\) neurons whose output function is defined as follows \[f(\mathbf{x},\boldsymbol{\theta})\!:=\sum_{j=1}^{m}\left(\mathbf{x}^{T}\mathbf{w}_{j}^{(1)}\right)_{+}w_{j}^{(2)}=\sum_{j=1}^{m}\left(\mathbb{1}\left\{\mathbf{x}^{T}\mathbf{w}_{j}^{(1)}\geq 0\right\}\mathbf{x}^{T}\mathbf{w}_{j}^{(1)}\right)w_{j}^{(2)}. \tag{1}\] where \(\mathbf{w}_{j}^{(1)}\in\mathbb{R}^{d}\) and \(w_{j}^{(2)}\) are the \(j^{th}\) hidden and output layer weights, respectively and \(\boldsymbol{\theta}:=\{(\mathbf{w}_{j}^{(1)},w_{j}^{(2)})\}_{j=1}^{m}\) represents all trainable parameters. Given a training data matrix \(\mathbf{X}\in\mathbb{R}^{n\times d}\) and a target vector \(\mathbf{y}\in\mathbb{R}^{n}\), we minimize the following weight decay regularized training objective \[\min_{\mathbf{W}^{(1)},\mathbf{w}^{(2)}}\left\|\sum_{j=1}^{m}\left(\mathbf{X}\mathbf{w}_{j}^{(1)}\right)_{+}w_{j}^{(2)}-\mathbf{y}\right\|_{2}^{2}+\lambda\sum_{j=1}^{m}\left(\left\|\mathbf{w}_{j}^{(1)}\right\|_{2}^{2}+|w_{j}^{(2)}|^{2}\right), \tag{2}\] where \(\lambda>0\) is the regularization coefficient. We also use \(f(\mathbf{X},\boldsymbol{\theta})=\sum_{j=1}^{m}\left(\mathbf{X}\mathbf{w}_{j}^{(1)}\right)_{+}w_{j}^{(2)}\) for convenience. We discuss extensions to generic loss and deeper architectures in the appendix. **Our Contributions.**

* In Section 4, we show that the convex formulation of the _Gated ReLU_ network is equivalent to _Multiple Kernel Learning_ with a specific set of _Masking Kernels_ (Theorem 4.3).
* In Section 5, we connect this formulation to the Neural Tangent Kernel by showing that the NTK is a specific weighted combination of masking kernels (Theorem 5.1).
* In Corollary 5.2, we show that on the training set, the **NTK is suboptimal when compared to the optimal kernel learned by MKL**, which is equivalent to the model learnt by our convex Gated ReLU program.
* In Section 6, we also derive bounds on the prediction error of this optimal kernel and specify how to choose the regularization parameter \(\lambda\) (Theorem 6.1).
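As a concrete reference point, the model (1) and the objective (2) can be transcribed directly; this short Python sketch only restates the equations and is not the optimization approach studied below.

```python
import torch

def relu_net(X, W1, w2):
    # Two-layer ReLU network of eq. (1): f(X, theta) = sum_j (X w_j^(1))_+ w_j^(2).
    # Shapes: X is (n, d), W1 is (d, m), w2 is (m,).
    return torch.relu(X @ W1) @ w2

def objective(X, y, W1, w2, lam):
    # Weight-decay regularized training objective of eq. (2).
    fit = torch.sum((relu_net(X, W1, w2) - y) ** 2)
    reg = lam * (torch.sum(W1 ** 2) + torch.sum(w2 ** 2))
    return fit + reg
```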
## 2 Convex Optimization and the NTK

Here, we briefly review the literature on convex training and the NTK theory of neural networks.

### Convex Programs for ReLU Networks

Even though the network in (1) has only two layers, previous studies show that (2) is a challenging optimization problem due to the non-convexity of the objective function. Thus, local search heuristics such as SGD might fail to globally optimize the training objective [17; 18; 19; 20]. To eliminate the issues associated with the inherent non-convexity, [7] introduced an exact convex reformulation of (2) as the following constrained optimization problem \[\min_{\mathbf{w}_{i},\mathbf{w}_{i}^{\prime}}\left\|\sum_{i=1}^{p}\mathbf{D}_{i}\mathbf{X}(\mathbf{w}_{i}-\mathbf{w}_{i}^{\prime})-\mathbf{y}\right\|_{2}^{2}+\lambda\sum_{i=1}^{p}\left(\left\|\mathbf{w}_{i}\right\|_{2}+\left\|\mathbf{w}_{i}^{\prime}\right\|_{2}\right)\text{ s.t. }\begin{array}{ll}&(2\mathbf{D}_{i}-\mathbf{I})\mathbf{X}\mathbf{w}_{i}\geq\mathbf{0}\\ &(2\mathbf{D}_{i}-\mathbf{I})\mathbf{X}\mathbf{w}_{i}^{\prime}\geq\mathbf{0}\end{array},\forall i, \tag{3}\] where \(\mathbf{D}_{i}\in\mathcal{D}_{\mathbf{X}}\) are \(n\times n\) binary masking diagonal matrices, \(p\!:=\left|\mathcal{D}_{\mathbf{X}}\right|\), and \[\mathcal{D}_{\mathbf{X}}\!:=\left\{\mathrm{diag}\left(\mathbb{1}\left\{\mathbf{X}\mathbf{u}\geq\mathbf{0}\right\}\right):\mathbf{u}\in\mathbb{R}^{d}\right\}. \tag{4}\] The mask set \(\mathcal{D}_{\mathbf{X}}\) can be interpreted as the set of all possible ways to separate the training data \(\mathbf{X}\) by a hyperplane passing through the origin. With these masks, we can characterize a single ReLU activated neuron on the training data as follows: \(\left(\mathbf{X}\mathbf{w}_{i}\right)_{+}=\mathrm{diag}\left(\mathbb{1}\left\{\mathbf{X}\mathbf{w}_{i}\geq\mathbf{0}\right\}\right)\mathbf{X}\mathbf{w}_{i}=\mathbf{D}_{i}\mathbf{X}\mathbf{w}_{i}\) provided that \((2\mathbf{D}_{i}-\mathbf{I}_{n})\mathbf{X}\mathbf{w}_{i}\geq 0\). Therefore, by enforcing these cone constraints in (3), we maintain the masking property of the ReLU activation and parameterize the neural network as a linear function of the weights, thus making the learning problem convex. We refer the reader to [6] for more details. Although (3) is convex and therefore eliminates the drawbacks associated with the non-convexity of (2), it might still be computationally complex to solve. Precisely, a worst-case upper bound on the number of variables is \(\mathcal{O}(n^{r})\), where \(r=\mathrm{rank}(\mathbf{X})\leq\min\{n,d\}\). Although this is still significantly better than brute-force search over \(2^{mn}\) ReLU patterns, it could still be exponential in the dimension \(d\). To mitigate this issue, [8] proposed a relaxation of (1) called the _gated ReLU_ network \[f_{\mathcal{G}}\left(\mathbf{x},\boldsymbol{\theta}\right)\!:=\sum_{j=1}^{m}\left(\mathbb{1}\left\{\mathbf{x}^{T}\mathbf{g}_{j}\geq\mathbf{0}\right\}\mathbf{x}^{T}\mathbf{w}_{j}^{(1)}\right)w_{j}^{(2)}, \tag{5}\] where \(\mathcal{G}\!:=\{\mathbf{g}_{j}\}_{j=1}^{m}\) is a set of _gate_ vectors that are also optimized throughout training.
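In practice, the arrangement set \(\mathcal{D}_{\mathbf{X}}\) in (4) can be estimated by sampling random directions and collecting the unique sign patterns, which is how it is approximated in the experiments of Section 7. A minimal sketch follows; the sample count is an arbitrary choice.

```python
import numpy as np

def sample_arrangements(X, num_samples=10_000, seed=0):
    # Estimate D_X of eq. (4): sample u ~ N(0, I_d) and collect the unique
    # sign patterns 1{X u >= 0}. Returns a (p_hat, n) 0-1 array whose rows
    # are the diagonals of the sampled masks D_i; p_hat <= |D_X|.
    rng = np.random.default_rng(seed)
    U = rng.standard_normal((X.shape[1], num_samples))
    patterns = (X @ U >= 0).T.astype(np.int8)
    return np.unique(patterns, axis=0)
```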
Then, the corresponding non-convex learning problem is as follows \[\min_{\mathbf{W}^{(1)},\mathbf{w}^{(2)},\mathcal{G}}\left\|\sum_{j=1}^{m} \mathrm{diag}\left(\mathbb{1}\left\{\mathbf{X}\mathbf{g}_{j}\geq\mathbf{0} \right\}\right)\mathbf{X}\mathbf{w}_{j}^{(1)}w_{j}^{(2)}-\mathbf{y}\right\|_{2 }^{2}+\lambda\sum_{j=1}^{m}\left(\left\|\mathbf{w}_{j}^{(1)}\right\|_{2}^{2}+ |w_{j}^{(2)}|^{2}\right). \tag{6}\] By performing this relaxation, we decouple the dependence between the indicator function and the linear term in the exact ReLU network (1). To express the equivalent convex optimization problem corresponding to the gated ReLU network, we introduce the notion of complete gate sets. **Definition 2.1**.: _A gate set \(\mathcal{G}\) is complete with respect to a dataset \(\mathbf{X}\) if the corresponding set of hyperplane arrangements covers all possible arrangement patterns for \(\mathbf{X}\) defined in (4), i.e.,_ \[\left\{\mathrm{diag}\left(\mathbb{1}\left\{\mathbf{X}\mathbf{g}_{j}\geq \mathbf{0}\right\}\right):\mathbf{g}_{j}\in\mathcal{G}\right\}=\mathcal{D}_{ \mathbf{X}}.\] _Additionally, \(\mathcal{G}\) is minimally complete if \(\left|\mathcal{G}\right|=\left|\mathcal{D}_{\mathbf{X}}\right|=p\)._ Now, with this relaxation, [8] showed that the optimal value for (6) can always be achieved by choosing \(\mathcal{G}\) to be a complete gate set. Therefore, by working only with complete gate sets, we can modify (6) to only optimize over the network parameters \(\mathbf{W}^{(1)}\) and \(\mathbf{w}^{(2)}\) \[\min_{\mathbf{W}^{(1)},\mathbf{w}^{(2)}}\left\|\sum_{j=1}^{m}\mathrm{diag} \left(\mathbb{1}\left\{\mathbf{X}\mathbf{g}_{j}\geq\mathbf{0}\right\}\right) \mathbf{X}\mathbf{w}_{j}^{(1)}w_{j}^{(2)}-\mathbf{y}\right\|_{2}^{2}+\lambda \sum_{j=1}^{m}\left(\left\|\mathbf{w}_{j}^{(1)}\right\|_{2}^{2}+|w_{j}^{(2)}|^ {2}\right). \tag{7}\] Additionally, [21] also showed that we can set \(m=p\) without loss of generality. Then, [8] showed that the equivalent convex optimization problem for (6) and also (7) in the complete gate set setting is \[\min_{\mathbf{w}_{i}}\left\|\sum_{i=1}^{p}\mathbf{D}_{i}\mathbf{X}\mathbf{w}_{i }-\mathbf{y}\right\|_{2}^{2}+\lambda\sum_{i=1}^{p}\left\|\mathbf{w}_{i}\right\| _{2}. \tag{8}\] Notice that (8) is a least squares problem with _group lasso_ regularization [22]. Therefore, this relaxation for (3) can be efficiently optimized via convex optimization solvers. Furthermore, [8] proved that after solving the relaxed problem (8), one can construct an equivalent ReLU network from a gated ReLU network via a convex optimization based procedure called _cone decomposition_. We discuss the computational complexity of this approach in Section E of the supplementary material. ### The Neural Tangent Kernel Previous works [1; 2; 5; 23; 24] characterized the training dynamics of SGD with infinitesimally small learning rates on neural networks in the _infinite-width_ limit, i.e., as \(m\rightarrow\infty\), via the NTK. [25; 26] also present analyses in this regime via a mean-field approach. In this section, we provide a brief overview of this theory and refer the reader to [1; 27; 28] for more details. 
The main idea behind the NTK theory is to approximate the neural network model function \(f\left(\mathbf{x},\boldsymbol{\theta}\right)\) by _linearizing it_ with respect to the parameters \(\boldsymbol{\theta}\) around its initialization \(\boldsymbol{\theta}_{0}\): \[f\left(\mathbf{x},\boldsymbol{\theta}\right)\approx\hat{f}\left(\mathbf{x},\boldsymbol{\theta}\right):=f\left(\mathbf{x},\boldsymbol{\theta}_{0}\right)+\nabla_{\boldsymbol{\theta}}f\left(\mathbf{x},\boldsymbol{\theta}_{0}\right)^{T}\left(\boldsymbol{\theta}-\boldsymbol{\theta}_{0}\right).\] The authors of [1] show that if \(f\) is a neural network (with appropriately scaled output) with parameters \(\boldsymbol{\theta}\) initialized as i.i.d. standard Gaussians, the linearization \(\hat{f}\) better approximates \(f\) in the infinite-width limit. We can interpret the linearized model \(\hat{f}\) as a kernel method with a feature map given by \(\boldsymbol{\phi}\left(\mathbf{x}\right)=\nabla_{\boldsymbol{\theta}}f\left(\mathbf{x},\boldsymbol{\theta}_{0}\right)\). The corresponding kernel induced by this feature map is termed the NTK. Note that this is a random kernel since it depends on the random initialization of the parameters denoted as \(\boldsymbol{\theta}_{0}\). The main result of [1] in the simplified case of two-layer neural networks is that, in the infinite-width limit, this kernel approaches a fixed deterministic limit given by \(H\big{(}\mathbf{x},\mathbf{x}^{\prime}\big{)}:=\mathbb{E}\big{[}\nabla_{\boldsymbol{\theta}}f\big{(}\mathbf{x},\hat{\boldsymbol{\theta}}_{0}\big{)}^{T}\nabla_{\boldsymbol{\theta}}f\big{(}\mathbf{x}^{\prime},\hat{\boldsymbol{\theta}}_{0}\big{)}\big{]}\), where \(\hat{\boldsymbol{\theta}}\) corresponds to the parameters of a single neuron. Furthermore, [1] show that in this infinite limit, SGD with an infinitesimally small learning rate is equivalent to performing kernel regression with the fixed NTK. To link the convex formulation (8) with NTK theory, we first present a scaled version of the gated ReLU network in (5) as follows \[\tilde{f}_{\psi}\left(\mathbf{x},\boldsymbol{\theta}\right):=\frac{1}{\sqrt{2m}}\sum_{j=1}^{m}\left(\mathbb{1}\left\{\mathbf{x}^{T}\mathbf{g}_{j}\geq\boldsymbol{0}\right\}\mathbf{x}^{T}\mathbf{w}_{j}^{(1)}\right)w_{j}^{(2)}. \tag{9}\] In the next lemma, we provide the infinite width NTK of the scaled gated ReLU network in (9). **Lemma 2.2**.: 2 _The infinite width NTK of the gated ReLU network (9) with i.i.d. gates sampled as \(\mathbf{g}_{j}\sim\mathcal{N}\left(\mathbf{0},\mathbf{I_{d}}\right)\) and randomly initialized parameters as \(\mathbf{w}_{j}^{(1)}\sim\mathcal{N}\left(\mathbf{0},\mathbf{I_{d}}\right)\) and \(w_{j}^{(2)}\sim\mathcal{N}\left(0,1\right)\) is_ Footnote 2: All the proofs and derivations are presented in the supplementary material. \[H\left(\mathbf{x},\mathbf{x}^{\prime}\right):=\frac{1}{2\pi}\left(\pi-\arccos\left(\frac{\mathbf{x}^{T}\mathbf{x}^{\prime}}{\left\|\mathbf{x}\right\|_{2}\left\|\mathbf{x}^{\prime}\right\|_{2}}\right)\right)\mathbf{x}^{T}\mathbf{x}^{\prime}. \tag{10}\] Additionally, we introduce a reparameterization of the standard ReLU network (1) with \(\boldsymbol{\theta}:=\left\{\left(\mathbf{w}_{j}^{(+)},\mathbf{w}_{j}^{(-)}\right)\right\}_{j=1}^{m}\), with \(\mathbf{w}_{j}^{(+)}\), \(\mathbf{w}_{j}^{(-)}\in\mathbb{R}^{d}\), which can still represent all the functions that (1) can: \[f_{r}(\mathbf{x},\boldsymbol{\theta}):=\frac{1}{\sqrt{2m}}\sum_{j=1}^{m}\left(\mathbf{x}^{T}\mathbf{w}_{j}^{(+)}\right)_{+}-\left(\mathbf{x}^{T}\mathbf{w}_{j}^{(-)}\right)_{+}.
\tag{11}\] **Lemma 2.3**.: _The gated ReLU network (9) and the reparameterized ReLU network (11) have the same infinite width NTK given by Lemma 2.2._ Next, we present an equivalence between the gated ReLU network and the MKL model [16; 29]. ## 3 Multiple Kernel Learning and Group Lasso The Multiple Kernel Learning (MKL) model [16; 29] is an extension of the standard kernel method that learns an optimal data-dependent kernel as a convex combination of a set of fixed kernels and then performs regression with this learned kernel. We provide a brief overview of the MKL setting based on the exposition in [30]. Consider a set of \(p\) kernels given by corresponding feature maps \(\boldsymbol{\phi}_{i}:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d_{i}}\). Given \(n\) training samples \(\mathbf{X}\in\mathbb{R}^{n\times d}\) with targets \(\mathbf{y}\in\mathbb{R}^{n}\), we define the feature matrices on this data by stacking the feature vectors as \(\boldsymbol{\Phi}_{i}:=[\boldsymbol{\phi}_{i}(\mathbf{x}_{1})^{T};\ldots; \boldsymbol{\phi}_{i}(\mathbf{x}_{n})^{T}]\in\mathbb{R}^{n\times d_{i}}\). Then, the corresponding \(n\times n\) kernel matrices are given by \(\mathbf{K}_{i}:=\boldsymbol{\Phi}_{i}\boldsymbol{\Phi}_{i}^{T}\). A convex combination of these kernels can be written as \(\mathbf{K}\left(\boldsymbol{\eta}\right):=\sum_{i=1}^{p}\eta_{i}\mathbf{K}_{i}\) where \(\boldsymbol{\eta}\in\Delta_{p}:=\{\boldsymbol{\eta}:\mathbf{1}^{T}\boldsymbol{ \eta}=1,\ \boldsymbol{\eta}\geq\boldsymbol{0}\}\) is a set of weights in the unit simplex. By noticing that the feature map corresponding to \(\mathbf{K}\left(\boldsymbol{\eta}\right)\) is obtained by taking a weighted concatenation of \(\mathbf{\phi}_{i}\) with weights \(\sqrt{\eta_{i}}\), we can write the MKL optimization problem in terms of the feature matrices as \[\min_{\mathbf{\eta}\in\Delta_{p},\mathbf{\mathbf{v}}_{i}\in\mathbb{R}^{d_{i}}}\left\| \sum_{i=1}^{p}\sqrt{\eta_{i}}\mathbf{\Phi}_{i}\mathbf{v}_{i}-\mathbf{y}\right\|_{2} ^{2}+\hat{\lambda}\sum_{i=1}^{p}\left\|\mathbf{v}_{i}\right\|_{2}^{2}, \tag{12}\] where \(\hat{\lambda}>0\) is a regularization coefficient. For a set of fixed weights \(\mathbf{\eta}\), the optimal objective value of the kernel regression problem over \(\mathbf{v}\) is proportional to \(\mathbf{y}^{T}(\sum_{i=1}^{p}\eta_{i}\mathbf{K}_{i}+\hat{\lambda}\mathbf{I}_{ \mathbf{n}})^{-1}\mathbf{y}\) up to constant factors [16]. Thus, the MKL problem can be equivalently written as the following problem \[\min_{\mathbf{\eta}\in\Delta_{p}}\quad\mathbf{y}^{T}\left(\mathbf{K}\left(\mathbf{ \eta}\right)+\hat{\lambda}\mathbf{I}_{\mathbf{n}}\right)^{-1}\mathbf{y}. \tag{13}\] In this formulation, we can interpret MKL as finding the optimal data-dependent kernel that can be expressed as a convex combination of the fixed kernels given by \(\mathbf{K}_{i}\). In the next section, we link this kernel learning formulation with the convex group lasso problem in (8). ### Equivalence to Group Lasso We first show that the MKL problem in (12) can be equivalently stated as a group lasso problem. 
**Lemma 3.1** ([16; 29]).: _The MKL problem (12) is equivalent to the following kernel regression problem using a uniform combination of the fixed kernels with squared group lasso regularization where the groups are given by parameters corresponding to each feature map_ \[\min_{\mathbf{w}_{i}\in\mathbb{R}^{d_{i}}}\quad\left\|\sum_{i=1}^{p}\mathbf{\Phi} _{i}\mathbf{w}_{i}-\mathbf{y}\right\|_{2}^{2}+\hat{\lambda}\left(\sum_{i=1}^{ p}\left\|\mathbf{w}_{i}\right\|_{2}\right)^{2}. \tag{14}\] We now present a short derivation of this equivalence. Using the variational formulation of the squared group \(\ell_{1}\)-norm [31] \[\left(\sum_{i=1}^{p}\left\|\mathbf{w}_{i}\right\|_{2}\right)^{2}=\min_{\mathbf{ \eta}\in\Delta_{p}}\sum_{i=1}^{p}\frac{\left\|\mathbf{w}_{i}\right\|_{2}^{2} }{\eta_{i}},\] we can rewrite the group lasso problem (14) as a joint minimization problem over both the parameters \(\mathbf{w}\) and regularization weights \(\mathbf{\eta}\) as follows \[\min_{\mathbf{\eta}\in\Delta_{p}}\quad\min_{\mathbf{w}_{i}\in\mathbb{R}^{d_{i}}} \quad\left\|\sum_{i=1}^{p}\mathbf{\Phi}_{i}\mathbf{w}_{i}-\mathbf{y}\right\|_{2} ^{2}+\hat{\lambda}\sum_{i=1}^{p}\frac{\left\|\mathbf{w}_{i}\right\|_{2}^{2}}{ \eta_{i}}.\] Finally, with a change of variables given by \(\mathbf{v}_{i}=\mathbf{w}_{i}/\sqrt{\eta_{i}}\), we recover the MKL problem (12). We note that the MKL problem (12) is also equivalent to the following standard group lasso problem \[\min_{\mathbf{w}_{i}\in\mathbb{R}^{d_{i}}}\quad\left\|\sum_{i=1}^{p}\mathbf{\Phi} _{i}\mathbf{w}_{i}-\mathbf{y}\right\|_{2}^{2}+\lambda\sum_{i=1}^{p}\left\| \mathbf{w}_{i}\right\|_{2}. \tag{15}\] This is due to the fact that squared and standard group lasso problems have the same regularization paths [31], so (14) and (15) are equivalent when \(\hat{\lambda}=\frac{\lambda}{\sum_{i=1}^{p}\left\|\mathbf{w}_{i}^{*}\right\|_{ 2}}\), where \(\mathbf{w}^{*}\) is the solution to (14). ### Solving Group Lasso by Iterative Reweighting Previously, we used a variational formulation of the squared group \(\ell_{1}\)-norm to show equivalences to MKL. Now, we present the Iteratively Reweighted Least Squares (IRLS) algorithm [32; 33; 34; 35] to solve the group lasso problem (15) using the following variational formulation of the group \(\ell_{1}\)-norm [34] \[\sum_{i=1}^{p}\left\|\mathbf{w}_{i}\right\|_{2}=\min_{\mathbf{\eta}\in\mathbb{R}_{ +}^{p}}\frac{1}{2}\sum_{i=1}^{p}\left(\frac{\left\|\mathbf{w}_{i}\right\|_{2}^{ 2}}{\eta_{i}}+\eta_{i}\right).\] Based on this, we rewrite the group lasso problem (15) as the following minimization problem \[\min_{\mathbf{\eta}\in\mathbb{R}_{+}^{p}}\min_{\mathbf{w}_{i}\in\mathbb{R}^{d_{i}}} \quad\left\|\sum_{i=1}^{p}\mathbf{\Phi}_{i}\mathbf{w}_{i}-\mathbf{y}\right\|_{2}^{ 2}+\frac{\lambda}{2}\sum_{i=1}^{p}\left(\frac{\left\|\mathbf{w}_{i}\right\|_{2} ^{2}}{\eta_{i}}+\eta_{i}\right).\] Since the objective is jointly convex in \((\mathbf{\eta},\mathbf{w})\), it can be solved using alternating minimization [33]. Particularly, note that the inner minimization problem in \(\mathbf{w}_{i}\)'s is simply a \(\ell_{2}\) regularized least squares problem with different regularization strengths for each group and this can be solved in closed form \[\min_{\mathbf{w}_{i}\in\mathbb{R}^{d_{i}}}\quad\left\|\sum_{i=1}^{p}\mathbf{\Phi} _{i}\mathbf{w}_{i}-\mathbf{y}\right\|_{2}^{2}+\lambda\sum_{i=1}^{p}\frac{ \left\|\mathbf{w}_{i}\right\|_{2}^{2}}{\eta_{i}}. 
\tag{16}\] The outer problem in \(\mathbf{\eta}\) is also directly solved by setting \(\eta_{i}=\left\|\mathbf{w}_{i}\right\|_{2}\)[34]. To avoid convergence issues and instability around \(\eta_{i}=0\), we approximate the reweighting by adding a small positive constant \(\epsilon\). We use this procedure to solve the group lasso formulation of the gated ReLU network (8) by setting \(\mathbf{\Phi}_{i}=\mathbf{D}_{i}\mathbf{X}\). A detailed description is provided in Algorithm 1. For further details regarding convergence, we refer the reader to [32, 33, 34].

**Algorithm 1** Iteratively Reweighted Least Squares (IRLS) for gated ReLU and ReLU networks

```
1: Set iteration count \(k\gets 0\)
2: Initialize weights \(\eta_{i}^{(0)}\)
3: Set \(\mathbf{\Phi}_{i}:=\mathbf{D}_{i}\mathbf{X},\;\forall\;\mathbf{D}_{i}\in\mathcal{D}_{\mathbf{X}}\)
4: while not converged and \(k\leq\) max iteration count do
5:    Solve the weighted \(\ell_{2}\) regularized least squares problem:
      \(\{\mathbf{w}_{i}^{(k)}\}_{i}=\operatorname*{argmin}_{\{\mathbf{w}_{i}\}_{i}}\left\|\sum_{i=1}^{p}\mathbf{\Phi}_{i}\mathbf{w}_{i}-\mathbf{y}\right\|_{2}^{2}+\lambda\sum_{i=1}^{p}\frac{\left\|\mathbf{w}_{i}\right\|_{2}^{2}}{\eta_{i}^{(k)}}\)
6:    Update the weights: \(\eta_{i}^{(k+1)}=\sqrt{\left\|\mathbf{w}_{i}^{(k)}\right\|_{2}^{2}+\epsilon}\)
7:    Increment iteration count: \(k\gets k+1\)
8: end while
9: Optional: Convert the gated ReLU network to a ReLU network (see Section E for details)
```

## 4 Gated ReLU as MKL with Masking Kernels

Motivated by the MKL interpretation of group lasso, we return to the convex reformulation (8) of the gated ReLU network. Notice that this problem has the same structure as the MKL equivalent group lasso problem (15) with a specific set of feature maps that we define below. **Definition 4.1**.: _The masking feature maps \(\mathbf{\phi}_{j}:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}\) generated by a fixed set of gates \(\mathcal{G}\) are defined as \(\mathbf{\phi}_{j}(\mathbf{x})=\mathbb{1}\left\{\mathbf{x}^{T}\mathbf{g}_{j}\geq 0\right\}\mathbf{x}\)._ These feature maps can be interpreted as simply passing the input unchanged if it lies in the positive halfspace of the corresponding gate vector \(\mathbf{g}_{j}\), i.e., \(\mathbf{x}^{T}\mathbf{g}_{j}\geq 0\), and returning zero if the input does not lie in this halfspace. Since \(\operatorname{diag}\left(\mathbb{1}\left\{\mathbf{X}\mathbf{g}_{j}\geq\mathbf{0}\right\}\right)\in\mathcal{D}_{\mathbf{X}},\;\forall\;\mathbf{g}_{j}\in\mathcal{G}\) holds for an arbitrary gate set \(\mathcal{G}\), we can conveniently express the corresponding feature matrices of these masking feature maps on the data \(\mathbf{X}\) in terms of fixed diagonal data masks as \(\mathbf{\Phi}_{j}=\mathbf{D}_{j}\mathbf{X}\), where \(\mathbf{D}_{j}=\operatorname{diag}\left(\mathbb{1}\left\{\mathbf{X}\mathbf{g}_{j}\geq\mathbf{0}\right\}\right)\). Similarly, the corresponding masking kernel matrices take the form \(\mathbf{K}_{j}=\mathbf{D}_{j}\mathbf{X}\mathbf{X}^{T}\mathbf{D}_{j}\). Note that for an arbitrary set of gates, the generated masking feature matrices on \(\mathbf{X}\) may not cover the entire set of possible masks \(\mathcal{D}_{\mathbf{X}}\).
Additionally, multiple gate vectors can result in identical masks if \(\operatorname{diag}\left(\mathbb{1}\left\{\mathbf{X}\mathbf{g}_{i}\geq \mathbf{0}\right\}\right)=\operatorname{diag}\left(\mathbb{1}\left\{\mathbf{X }\mathbf{g}_{j}\geq\mathbf{0}\right\}\right)\) for \(i\neq j\) leading to degenerate feature matrices. However, if we work with minimally complete gate sets as defined in Definition 2.1, we can rectify these issues. **Lemma 4.2**.: _For a minimally complete gate set \(\mathcal{G}\) defined in Definition 2.1, we can uniquely associate a gate vector \(\mathbf{g}_{i}\) to each data mask \(\mathbf{D}_{i}\in\mathcal{D}_{\mathbf{X}}\) such that \(\mathbf{D}_{i}=\operatorname{diag}\left(\mathbb{1}\left\{\mathbf{X}\mathbf{g} _{i}\geq\mathbf{0}\right\}\right),\forall i\in[p]\)._ Consequently, for minimally complete gate sets \(\mathcal{G}\), the generated masking feature matrices \(\forall i\in[p]\) can be expressed as \(\mathbf{\Phi}_{i}=\mathbf{D}_{i}\mathbf{X}\) and the masking kernel matrices take the form \(\mathbf{K}_{i}=\mathbf{D}_{i}\mathbf{X}\mathbf{X}^{T}\mathbf{D}_{i}\). In the context of the gated ReLU problem (7), since \(\mathcal{G}\) is complete, we can replace it with a minimally complete subset of \(\mathcal{G}\) without loss of generality since [21] showed that increasing \(m\) beyond \(p\) cannot reduce the value of the regularized training objective in (7). We are now ready to combine the MKL-group lasso equivalence with the convex reformulation of the gated ReLU network to present the following characterization of the nonconvex gated ReLU learning problem. **Theorem 4.3**.: _The non-convex gated ReLU problem (7) with a minimally complete gate set \(\mathcal{G}\) is equivalent to performing multiple kernel learning (12) with the masking feature maps generated by \(\mathcal{G}\)_ \[\min_{\boldsymbol{\eta}\in\Delta_{p},\mathbf{v}_{i}\in\mathbb{R}^{d}}\quad \left\|\sum_{i=1}^{p}\sqrt{\eta_{i}}\mathbf{D}_{i}\mathbf{X}\mathbf{v}_{i}- \mathbf{y}\right\|_{2}^{2}+\hat{\lambda}\sum_{i=1}^{p}\left\|\mathbf{v}_{i} \right\|_{2}^{2}.\] This theorem implies that the gated ReLU network finds the optimal combination of linear models restricted to the different masked datasets \(\mathbf{D}_{i}\mathbf{X}\) generated by the gates. By optimizing with all possible data maskings, we obtain the best possible gated ReLU network. From the kernel perspective, we have characterized the problem of finding an optimal finite width gated ReLU network as learning a data-dependent kernel and then performing kernel regression. This is in contrast to the NTK theory where the training of an infinite width network by gradient flow is characterized by regression with a constant kernel that is not learned from data. We further explore this connection below. ## 5 NTK as a Weighted Masking Kernel We now connect the NTK of a gated ReLU network with the masking kernels generated by its gates. 
**Theorem 5.1**.: _Let \(\mathbf{K}_{\mathcal{G}}\left(\boldsymbol{\tilde{\eta}}\right)\in\mathbb{R}^{n\times n}\) be the weighted masking kernel obtained by taking a convex combination of the masking feature maps generated by a minimally complete gate set \(\mathcal{G}\) with weights given by \(\tilde{\eta}_{i}=\mathbb{P}[\operatorname{diag}\left(\mathbb{1}\left\{\mathbf{X}\mathbf{h}\geq\mathbf{0}\right\}\right)=\mathbf{D}_{i}]\) where \(\mathbf{h}\sim\mathcal{N}(\mathbf{0},\mathbf{I}_{\mathbf{d}})\), and let \(\mathbf{H}\in\mathbb{R}^{n\times n}\) be the infinite width NTK (10) of the gated ReLU network (9), evaluated on the training data, i.e., the \(ij^{th}\) entry of \(\mathbf{H}\) is defined as \(\mathbf{H}_{ij}\!:=H(\mathbf{x}_{i},\mathbf{x}_{j})\). Then, \(\mathbf{K}_{\mathcal{G}}\left(\boldsymbol{\tilde{\eta}}\right)=\mathbf{H}\)._ A rough sketch of the proof of this theorem is to express the matrix \(\mathbf{H}\) as an expectation of indicator random variables using the definition of the NTK [1]. Then, by conditioning on the event that these indicators equal the masks \(\mathbf{D}_{i}\), we can express the NTK as a convex combination of the masking kernels \(\mathbf{K}_{i}\). The weights end up being precisely the probabilities that are described in Theorem 5.1. A detailed proof is provided in the supplementary material. This theorem implies that the outputs of the gated ReLU network obtained via (16) with regularization weights \(\boldsymbol{\tilde{\eta}}\) on the training data are identical to those of kernel ridge regression with the NTK. **Corollary 5.2**.: _Let \(\tilde{\mathbf{w}}\) be the solution to (16) with feature matrices \(\mathbf{\Phi}_{i}=\mathbf{D}_{i}\mathbf{X}\) and regularization weights \(\tilde{\eta}_{i}=\mathbb{P}[\operatorname{diag}\left(\mathbb{1}\left\{\mathbf{X}\mathbf{h}\geq\mathbf{0}\right\}\right)=\mathbf{D}_{i}]\) where \(\mathbf{h}\sim\mathcal{N}(\mathbf{0},\mathbf{I}_{\mathbf{d}})\) and let \(\tilde{\mathbf{y}}=\mathbf{H}(\mathbf{H}+\lambda\mathbf{I}_{\mathbf{n}})^{-1}\mathbf{y}\) be the outputs of kernel ridge regression with the NTK on the training data. Then, \(\sum_{i=1}^{p}\mathbf{D}_{i}\mathbf{X}\tilde{\mathbf{w}}_{i}=\tilde{\mathbf{y}}\)._ Since \(\mathcal{G}\) is minimally complete, the weights in Theorem 5.1 satisfy \(\sum_{i=1}^{p}\tilde{\eta}_{i}=1\). In other words, \(\boldsymbol{\tilde{\eta}}\in\Delta_{p}\). Therefore, we can interpret Theorem 5.1 as follows: the NTK evaluated on the training data lies in the convex hull of all possible masking kernels of the training data. So \(\mathbf{H}\) lies in the feasible set of kernels for MKL using these masking kernels. By Theorem 4.3, we can find the optimal kernel in this set by solving the group lasso problem (8) of the gated ReLU network. **Therefore, we can interpret solving** (8) **as fixing the NTK by learning an improved data-dependent kernel.** **Remark 5.3** (Suboptimality of NTK).: _Note that the weights \(\boldsymbol{\tilde{\eta}}\) do depend on the training data \(\mathbf{X}\), but do not depend on the target labels \(\mathbf{y}\). Since MKL learns the optimal kernel using both \(\mathbf{X}\) and \(\mathbf{y}\), the NTK still cannot perform better than the optimal MKL kernel on the training set. Thus, we fix the NTK._

## 6 Analysis of Prediction Error

In this section, we present an analysis of the in-sample prediction error for the gated ReLU network given in (5) along the lines of existing consistency results [36; 37] for the group lasso problem (8).
We assume that the data is generated by a noisy ReLU neural network model \(\mathbf{y}=f(\mathbf{X},\mathbf{\theta}^{*})+\epsilon\) where \(f\) is the ReLU network defined in (1) with true parameters \(\mathbf{\theta}^{*}\) and the noise is distributed as \(\epsilon\sim\mathcal{N}\big{(}0,\sigma^{2}\mathbf{I}_{n}\big{)}\). By the seminal universal approximation theorem of [38], this model is able to capture a broad class of ground-truth functions by using a ReLU network with enough neurons. We can transform \(\mathbf{\theta}^{*}\) to the weights \(\mathbf{w}^{*}\) in the convex ReLU model and write \(f(\mathbf{X},\mathbf{\theta}^{*})=\sum_{i=1}^{p}\mathbf{D}_{i}\mathbf{X}\mathbf{w} _{i}^{*}\). We denote by \(\hat{\mathbf{w}}\) the solution of the group lasso problem (8) for the gated ReLU network with an additional \(\frac{1}{n}\) factor on the loss to simplify derivations, \[\hat{\mathbf{w}}=\operatorname*{argmin}_{\mathbf{w}_{i}\in\mathbb{R}^{d}} \quad\frac{1}{n}\left\|\sum_{i=1}^{p}\mathbf{D}_{i}\mathbf{X}\mathbf{w}_{i}- \mathbf{y}\right\|_{2}^{2}+\lambda\sum_{i=1}^{p}\left\|\mathbf{w}_{i}\right\|_ {2}. \tag{17}\] We now present our main theorem which bounds the prediction error of the gated ReLU network obtained from the solution \(\hat{\mathbf{w}}\) below. **Theorem 6.1** (Prediction Error of Gated ReLU).: _For some \(t>0\), let the regularization parameter in (17) be \(\lambda=t\sigma\|\mathbf{X}\|_{F}/n\). Then, with probability at least \(1-2e^{-t^{2}/8}\), we have_ \[\frac{1}{n}\left\|f_{\mathcal{G}}\big{(}\mathbf{X},\hat{\mathbf{\theta}}\big{)}-f \left(\mathbf{X},\mathbf{\theta}^{*}\right)\right\|_{2}^{2}\leq 2\lambda\sum_{i=1}^{p} \left\|\mathbf{w}_{i}^{*}\right\|_{2}\] _where \(f_{\mathcal{G}}\big{(}\mathbf{X},\hat{\mathbf{\theta}}\big{)}=\sum_{i=1}^{p} \mathbf{D}_{i}\mathbf{X}\hat{\mathbf{w}}_{i}\) are the predictions of the gated ReLU network obtained from \(\hat{\mathbf{w}}\)._ The proof closely follows the analysis of the regular lasso problem presented in [37], but we extend it to the specific case of the group lasso problem corresponding to the gated ReLU network and leverage the masking structure of the lifted data matrix to obtain simplified bounds. ## 7 Experiments Here, we empirically corroborate our theoretical results via experiments on several datasets.3 Figure 1: Plot of objective value of problem (8) which is solved using IRLS (algorithm 1) for a toy 1D dataset with \(n=5\). The iterates are compared to the optimal value obtained by solving (8) using CVXPY (blue). Notice that the solution to (16) with regularization weights given by the NTK weights \(\mathbf{\tilde{\eta}}\) from Theorem 5.1 (green) is sub-optimal for problem (8), and running IRLS by initializing with these weights (red) converges to the optimal objective value. We also include plots of IRLS initialized with random weights (black). The right plot shows the corresponding learned functions. The blue curve shows the output of the solution to the group lasso problem (8) after performing a cone decomposition to obtain a ReLU network. The dashed curve shows the output of running gradient descent (GD) on a ReLU network with 100 neurons. The green curve is the result of kernel ridge regression (KRR) with the NTK. **We observe that our reweighted kernel method (Group Lasso) produces an output that matches the output of the NN trained via GD. 
In contrast, NTK produces an erroneous smooth function due to the infinite width approximation.** **1D datasets.** For the 1D experiments in Figure 1, we add a second data dimension with value equal to \(1\) for all data points to simulate a bias term in the first layer of the gated ReLU network. Also, in this case we can enumerate all \(2n\) data masks \(\mathbf{D}_{i}\) directly. We use the cone decomposition procedure described in [8] to obtain a ReLU network from the gated ReLU network obtained by solving (8). We also train a \(100\) neuron ReLU network using gradient descent (GD) and compare the learned output functions in the right plot in Figure 1. Exact details can be found in the supplementary material. **Student-teacher setting.** We generate the training data by sampling \(\mathbf{X}\sim\mathcal{N}(\mathbf{0},\mathbf{I}_{\mathbf{n}})\) and computing the targets \(\mathbf{y}\) using a fixed, randomly initialized gated ReLU teacher network. In Figure 2, \(n=10\), \(d=5\), and we use a teacher network with width \(m=10\), with gates and parameters randomly drawn from independent standard multivariate Gaussians. To solve the convex formulation (8), we estimate \(\mathcal{D}_{\mathbf{X}}\) by randomly sampling unique hyperplane arrangements. **Computing the NTK weights \(\boldsymbol{\tilde{\eta}}\).** The weights induced by the NTK are given as in Theorem 5.1 by \(\tilde{\eta}_{i}=\mathbb{P}[\operatorname{diag}\left(\mathbb{1}\left\{\mathbf{ X}\mathbf{h}\geq\mathbf{0}\right\}\right)=\mathbf{D}_{i}]\) where \(\mathbf{h}\sim\mathcal{N}(\mathbf{0},\mathbf{I}_{\mathbf{d}})\) is a standard multivariate Gaussian vector. These probabilities can be interpreted either as the orthant probabilities of the multivariate Gaussians given by \(\left(2\mathbf{D}_{i}-\mathbf{I}_{\mathbf{n}}\right)\mathbf{X}\mathbf{h}\) or as the solid angle of the cones given by \(\left\{\mathbf{u}\in\mathbb{R}^{d}:\left(2\mathbf{D}_{i}-\mathbf{I}_{\mathbf{n }}\right)\mathbf{X}\mathbf{u}\geq\mathbf{0}\right\}\). Closed form expressions exist for \(d=2,3\), and [39, 40] present approximating schemes for higher dimensions. We calculate these weights exactly for the 1D example presented in Figure 1, and estimate them using Monte Carlo sampling for the student-teacher example in Figure 2. 
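As a concrete illustration, the following is a minimal NumPy sketch of this Monte Carlo estimator; the function name and the encoding of each mask \(\mathbf{D}_{i}\) by its binary diagonal are our own choices, not from the paper:

```python
import numpy as np

def estimate_ntk_weights(X, masks, num_samples=100_000, seed=0):
    """Monte Carlo estimate of eta_i = P[diag(1{X h >= 0}) = D_i], h ~ N(0, I_d).
    `masks` is a list of length-n binary vectors, the diagonals of the D_i."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    index = {tuple(int(v) for v in m): i for i, m in enumerate(masks)}
    counts = np.zeros(len(masks))
    H = rng.standard_normal((d, num_samples))
    patterns = X @ H >= 0  # (n, num_samples) sign patterns of X h
    for s in range(num_samples):
        i = index.get(tuple(int(v) for v in patterns[:, s]))
        if i is not None:  # pattern matches one of the given masks
            counts[i] += 1
    return counts / num_samples
```

For a minimally complete gate set every sampled pattern matches some mask, so the estimates sum to one, consistent with \(\boldsymbol{\tilde{\eta}}\in\Delta_{p}\).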
\begin{table} \begin{tabular}{l c c|c c} \hline \hline **Dataset** & \(n\) & \(d\) & **NTK** & **Ours (Alg 1)** \\ \hline acute-inflammation & \(120\) & \(6\) & \(1.000\) & \(1.000\) \\ acute-nephritis & \(120\) & \(6\) & \(1.000\) & \(1.000\) \\ balloons & \(16\) & \(4\) & \(0.75\) & \(0.75\) \\ blood & \(748\) & \(4\) & \(0.524\) & **0.583** \\ breast-cancer & \(286\) & \(9\) & \(0.417\) & **0.625** \\ breast-cancer-wisc & \(699\) & \(9\) & \(0.906\) & **0.966** \\ breast-cancer-wisc-diag & \(569\) & \(30\) & **0.965** & \(0.915\) \\ breast-cancer-wisc-prog & \(198\) & \(33\) & **0.7** & \(0.66\) \\ congressional-voting & \(435\) & \(16\) & \(0.266\) & **0.266** \\ conn-bench-sonar-mines-rocks & \(208\) & \(60\) & \(0.635\) & **0.712** \\ credit-approval & \(690\) & \(15\) & \(0.838\) & **0.844** \\ cylinder-bands & \(512\) & \(35\) & \(0.773\) & **0.82** \\ echocardiogram & \(131\) & \(10\) & \(0.758\) & **0.788** \\ fertility & \(100\) & \(9\) & \(0.76\) & \(0.76\) \\ haberman-survival & \(306\) & \(3\) & \(0.481\) & **0.532** \\ heart-hungarian & \(294\) & \(12\) & \(0.743\) & **0.878** \\ hepatitis & \(155\) & \(19\) & **0.923** & \(0.897\) \\ ilpd-indian-liver & \(583\) & \(9\) & \(0.432\) & **0.555** \\ ionosphere & \(351\) & \(33\) & \(0.955\) & **0.966** \\ mammographic & \(961\) & \(5\) & \(0.783\) & **0.792** \\ molec-biol-promoter & \(106\) & \(57\) & \(0.63\) & **0.815** \\ musk-1 & \(476\) & \(166\) & \(0.782\) & **0.866** \\ oocytes\_trisopterus\_nucleus\_2f & \(912\) & \(25\) & **0.781** & \(0.759\) \\ parkinsons & \(195\) & \(22\) & \(0.939\) & **0.959** \\ pima & \(768\) & \(8\) & \(0.552\) & **0.599** \\ pittsburg-bridges-T-OR-D & \(102\) & \(7\) & \(0.731\) & **0.846** \\ planning & \(182\) & \(12\) & \(0.435\) & **0.543** \\ statlog-australian-credit & \(690\) & \(14\) & **1.0** & \(0.74\) \\ statlog-german-credit & \(1000\) & \(24\) & \(0.512\) & **0.576** \\ statlog-heart & \(270\) & \(13\) & **0.779** & \(0.765\) \\ tic-tac-toe & \(958\) & \(9\) & \(1.0\) & \(1.0\) \\ trains & \(10\) & \(29\) & \(0.667\) & \(0.667\) \\ vertebral-column-2classes & \(310\) & \(6\) & **0.821** & \(0.731\) \\ \hline \multicolumn{3}{c}{Higher (or same) accuracy} & 14/33 & 26/33 \\ \end{tabular} \end{table} Table 1: Test accuracies for UCI experiments with \(75\%-25\%\) training-test split. Our approach achieves either higher or the same accuracy for \(26\) out of \(33\) datasets. Figure 2: Plot of the **group lasso objective** in the student-teacher setting with \(d=5\). Training data is generated using a teacher network with width \(m=10\). The NTK weights \(\boldsymbol{\tilde{\eta}}\) are estimated using Monte Carlo sampling, and are again sub-optimal. IRLS initialized with these weights successfully fixes the NTK and converges to the optimal weights. **Fixing the NTK weights by IRLS.** We solve (16) with \(\mathbf{\Phi}_{i}=\mathbf{D}_{i}\mathbf{X}\) and regularization weights given by the NTK weights \(\mathbf{\tilde{\eta}}\) to obtain the solution \(\tilde{\mathbf{w}}\). We use efficient least squares solvers from [41; 42]. By Theorem 5.1, this corresponds to choosing the weights \(\mathbf{\eta}\) in the MKL problem (13) such that the resulting kernel matrix is equal to the NTK matrix \(\mathbf{H}\). We find the exact solution of the group lasso problem (8) using CVXPY.
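For reference, a generic iteratively reweighted least squares loop for the group lasso can be sketched as below. This is a standard majorize-minimize scheme and may differ in details from Algorithm 1; in practice, the dense solve would be replaced by the efficient least-squares solvers of [41; 42]:

```python
import numpy as np

def irls_group_lasso(phis, y, lam, eta0, num_iters=100, eps=1e-8):
    """Sketch of IRLS for min_w ||sum_i Phi_i w_i - y||_2^2 + lam * sum_i ||w_i||_2.
    phis: list of p feature matrices Phi_i = D_i X, each of shape (n, d);
    eta0: initial group weights, e.g. the NTK weights from Theorem 5.1."""
    p, d = len(phis), phis[0].shape[1]
    Phi = np.hstack(phis)  # lifted (n, p*d) feature matrix
    eta = np.asarray(eta0, dtype=float)
    for _ in range(num_iters):
        # Weighted ridge step: quadratic majorizer of the group-norm penalty,
        # using ||w_i||_2 <= (||w_i||_2^2 / eta_i + eta_i) / 2 at eta_i = ||w_i^(t)||_2.
        reg = np.repeat(0.5 * lam / np.maximum(eta, eps), d)
        w = np.linalg.solve(Phi.T @ Phi + np.diag(reg), Phi.T @ y)
        # Reweighting step: eta_i tracks the current group norms.
        eta = np.linalg.norm(w.reshape(p, d), axis=1)
    return w.reshape(p, d), eta
```

Loosely, initializing `eta0` with the NTK weights plays the role of the NTK-weighted solution, and the subsequent reweighting steps drive the objective toward the group lasso optimum, mirroring the red curves in Figures 1 and 2.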
Comparing the optimal value of the group lasso problem (8) with the objective value of \(\tilde{\mathbf{w}}\) (given by the green line) in Figures 1 and 2, we observe that the NTK weighted solution is sub-optimal. This means that \(\mathbf{H}\) is not the optimal kernel that would be learnt by MKL (which is expected since \(\mathbf{H}\) has no dependence on the targets \(\mathbf{y}\)). By applying IRLS initialized with the NTK weights, we _fix_ the NTK and find the weights of the optimal MKL kernel. In Figures 1 and 2, we observe that IRLS converges to the solution of the group lasso problem (8) and fixes the NTK. **UCI datasets.** We compare the regularised NTK with our IRLS algorithm (Algorithm 1) on the UCI ML Repository datasets. We follow the procedure described in [43] for \(n\leq 1000\) to extract and standardize the datasets. We observe that our method achieves higher (or the same) test accuracy for \(26\)**out of \(33\)** datasets (see Table 1 for details) while the NTK achieves higher (or the same) test accuracy for \(14\) datasets which empirically supports our main claim that the IRLS procedure fixes the NTK. Details of the experiments can be found in Section B of the supplementary material. ## 8 Discussion and Limitations In this work, we explored the connection between finite-width theories of neural networks given by convex reformulations and infinite-width theories of neural networks given by the NTK. To bridge these theories, we first interpreted the group lasso convex formulation of the gated ReLU network as a multiple kernel learning model using the masking kernels generated by its gates. Then, we linked this MKL model with the NTK of the gated ReLU network evaluated on the training data. Specifically, we showed that the NTK is equivalent to the weighted masking kernel with weights that depend only on the input data \(\mathbf{X}\) and not on the targets \(\mathbf{y}\). We contrast this with the MKL interpretation of the gated ReLU network which learns the optimal data-dependent kernel using both \(\mathbf{X}\) and \(\mathbf{y}\). Therefore, the NTK cannot perform better than the optimal MKL kernel. To fix the NTK, we improve the weights induced by it using the iteratively reweighted least squares (IRLS) scheme to obtain the optimal solution of the group lasso formulation of the gated ReLU network. We corroborated our theoretical results by empirically running IRLS on toy datasets. While our theory is able to link the optimization properties of the NTK with those of finite width networks via the MKL characterization of group lasso, we do not derive explicit generalization results on the test set. Applying existing generalization theory for kernel methods [44; 45] to the MKL interpretation of the convex reformulation could be a promising direction for future work. Finally, although we studied fully connected networks in this paper, our approach can be directly extended to various neural network architectures, e.g., threshold/binary networks [46], convolution networks [47], generative adversarial networks [48], NNs with batch normalization [49], autoregressive models [50], and Transformers [51; 52]. ## Acknowledgements This work was supported in part by the National Science Foundation (NSF) CAREER Award under Grant CCF-2236829, Grant DMS-2134248 and Grant ECCS-2037304; in part by the U.S. 
Army Research Office Early Career Award under Grant W911NF-21-1-0242; in part by the Stanford Precourt Institute; and in part by the ACCESS--AI Chip Center for Emerging Smart Systems through InnoHK, Hong Kong, SAR.
2305.19753
The Tunnel Effect: Building Data Representations in Deep Neural Networks
Deep neural networks are widely known for their remarkable effectiveness across various tasks, with the consensus that deeper networks implicitly learn more complex data representations. This paper shows that sufficiently deep networks trained for supervised image classification split into two distinct parts that contribute to the resulting data representations differently. The initial layers create linearly-separable representations, while the subsequent layers, which we refer to as \textit{the tunnel}, compress these representations and have a minimal impact on the overall performance. We explore the tunnel's behavior through comprehensive empirical studies, highlighting that it emerges early in the training process. Its depth depends on the relation between the network's capacity and task complexity. Furthermore, we show that the tunnel degrades out-of-distribution generalization and discuss its implications for continual learning.
Wojciech Masarczyk, Mateusz Ostaszewski, Ehsan Imani, Razvan Pascanu, Piotr Miłoś, Tomasz Trzciński
2023-05-31T11:38:24Z
http://arxiv.org/abs/2305.19753v2
# The Tunnel Effect: Building Data Representations in Deep Neural Networks ###### Abstract Deep neural networks are widely known for their remarkable effectiveness across various tasks, with the consensus that deeper networks implicitly learn more complex data representations. This paper shows that sufficiently deep networks trained for supervised image classification split into two distinct parts that contribute to the resulting data representations differently. The initial layers create linearly-separable representations, while the subsequent layers, which we refer to as _the tunnel_, compress these representations and have a minimal impact on the overall performance. We explore the tunnel's behavior through comprehensive empirical studies, highlighting that it emerges early in the training process. Its depth depends on the relation between the network's capacity and task complexity. Furthermore, we show that the tunnel degrades out-of-distribution generalization and discuss its implications for continual learning. ## 1 Introduction Neural networks have been the powerhouse of machine learning in the last decade. A significant effort has been put into understanding the mechanisms underlying their effectiveness. One example is the analysis of building representations in neural networks applied to image processing [19]. The consensus is that networks learn to use layers in the hierarchy by extracting more complex features than the layers before [22; 41], meaning that each layer contributes to the final network performance. Extensive research has shown that increasing network depth exponentially enhances capacity, measured as the number of linear regions [35; 46; 50]. However, practical scenarios reveal that deep and overparameterized neural networks tend to simplify representations with increasing depth [13; 53]. Figure 1: **The tunnel effect** for VGG19 trained on CIFAR-10. In the tunnel (shaded area), the performance of linear probes attached to each layer saturates (blue line), and the representations rank is steeply reduced (red dashed line). This paradox arises because, despite their large capacity, these networks strive to reduce dimensionality and focus on discriminative patterns during supervised training [13; 15; 42; 53]. Motivated by these contradictory findings, we aim to investigate this phenomenon further and formulate the following research question: _How do representations depend on the depth of a layer?_ Our investigation focuses on severely overparameterized neural networks through the prism of their representations as the core components for studying neural network behavior [20; 38]. We challenge the commonly held intuition that deeper layers are responsible for capturing more complex and task-specific features [41; 57]. Specifically, we demonstrate that deep neural networks split into two parts exhibiting distinct behavior. The first part, which we call the extractor, builds representations, while the other, dubbed _the tunnel_, propagates the representations further to the model's output, compressing them significantly. To investigate the tunnel effect, we conduct multiple experiments that support our findings and shed some light on the potential source of this behavior. Our findings can be summarized as follows: * We discover and extensively examine the tunnel effect, namely, deep networks naturally split into _the extractor_ responsible for building representations and _the compressing tunnel_, which minimally contributes to the final performance.
The extractor-tunnel split emerges early in training and persists later on. * We show that the tunnel deteriorates the generalization ability on out-of-distribution data. * We show that the tunnel exhibits task-agnostic behavior in a continual learning scenario. Simultaneously, it leads to higher catastrophic forgetting of the model. ## 2 The tunnel effect This paper introduces and studies _the tunnel effect_, a phenomenon in the dynamics of representation building in overparameterized deep neural networks. The following section validates the tunnel effect hypothesis in a number of settings. Through an in-depth examination in Section 3.1, we reveal that the tunnel effect is present from the initial stages and persists throughout the training process. Section 3.2 focuses on the out-of-distribution generalization and representations compression. Section 3.3 hints at important factors that impact the depth of the tunnel. Finally, in Section 4, we confront an auxiliary question: How does the tunnel's existence impact a model's adaptability to changing tasks and its vulnerability to catastrophic forgetting? To answer these questions, we formulate our main claim as: _The tunnel effect hypothesis: Sufficiently large * neural networks develop a configuration in which network layers split into two distinct groups. The first one, which we call the extractor, builds linearly-separable representations. The second one, the tunnel, compresses these representations, hindering the model's out-of-distribution generalization._ Footnote *: We note that ‘sufficiently large’ covers most modern neural architectures, which tend to be heavily overparameterized. ### Experimental setup To examine the phenomenon, we designed the setup to include the most common architectures and datasets, and use several different metrics to validate the observations. **Architectures** We use three different families of architectures: MLP, VGGs, and ResNets. We vary the number of layers and width of networks to test the generalizability of results. See details in Appendix A.1. **Tasks** We use three image classification tasks to study the tunnel effect: CIFAR-10, CIFAR-100, and CINIC-10. The datasets vary in the number of classes: \(10\) for CIFAR-10 and CINIC-10 and \(100\) for CIFAR-100, and the number of samples: \(50000\) for CIFAR-10 and CIFAR-100 and \(250000\) for CINIC-10. See details in Appendix A.2. We probe the effects using: _the average accuracy of linear probing, spectral analysis of representations, and the CKA similarity between representations_. Unless stated otherwise, we report the average of \(3\) runs. **Accuracy of linear probing:** a linear classification layer is attached to a given layer \(\ell\) of the neural network. We train this layer on the classification task and report the average accuracy. This metric measures to what extent \(\ell\)'s representations are linearly separable. **Numerical rank of representations:** we compute singular values of the sample covariance matrix for a given layer \(\ell\) of the neural network. Using the spectrum, we estimate the numerical rank of the given representations matrix as the number of singular values above a certain threshold \(\sigma\). The numerical rank of the representations matrix can be interpreted as a measure of the degeneracy of the matrix. **CKA similarity:** a metric computing similarity between two representations matrices. Using this normalized index, we can identify the blocks of similar representations within the network.
The definition and more details can be found in Appendix E. **Inter and Intra class variance:** inter-class variance refers to the measure of dispersion or dissimilarity between different classes or groups in a dataset, indicating how distinct they are from each other. Intra-class variance, on the other hand, measures the variability within a single class or group, reflecting the homogeneity or similarity of data points within that class. The exact formula for computing these values can be found in Appendix F. ### The main result Table 1 presents our main result. Namely, we report the network layer at which the tunnel begins, which we define as the point at which the network reaches \(95\%\) (or \(98\%\)) of its final accuracy. We found that all tested architectures exhibit the extractor-tunnel structure across all datasets used in the evaluation, but the relative length of the tunnel varies between architectures. \begin{table} \begin{tabular}{c c c|c c} \hline \hline Architecture & \# layers & Dataset & \(>0.95\) & \(>0.98\) \\ \hline \hline MLP & 13 & CIFAR-10 & 4 (31\%) & 5 (38\%) \\ \hline \multirow{3}{*}{VGG} & \multirow{3}{*}{19} & CIFAR-10 & 7 (36\%) & 7 (36\%) \\ & & CIFAR-100 & 8 (42\%) & 8 (42\%) \\ \cline{1-1} & & CINIC-10 & 7 (36\%) & 7 (36\%) \\ \hline \multirow{2}{*}{ResNet} & \multirow{2}{*}{34} & CIFAR-10 & 20 (58\%) & 29 (85\%) \\ \cline{1-1} & & CIFAR-100 & 29 (85\%) & 30 (88\%) \\ \end{tabular} \end{table} Table 1: The tunnel of various lengths is present in all tested configurations. For each architecture and dataset, we report the layer for which the _average linear probing accuracy is above \(0.95\) and \(0.98\) of the final performance_. The values in the brackets describe the part of the network utilized for building representations with the extractor. Figure 2: The tunnel effect for networks trained on CIFAR-10. The blue line depicts the linear probing accuracy, and the shaded area depicts the tunnel. The red dashed line is the numerical rank of representations. The spike in the ResNet-34 representations rank coincides with the end of the penultimate residual stage. We now discuss the tunnel effect using MLP-12, VGG-19, and ResNet-34 on CIFAR-10 as an example. The remaining experiments (for other architecture and dataset combinations) are available in Appendix B. As shown in Figure 1 and Figure 2, the early layers of the networks, around five for MLP and eight for VGG, are responsible for building linearly-separable representations. Linear probes attached to these layers achieve most of the network's final performance. These layers mark the transition between the extractor and the tunnel part (shaded area). In the case of ResNets, the transition takes place in deeper stages of the network, at the \(19^{th}\) layer. While the linear probe performance nearly saturates in the tunnel part, the representations are further refined. Figure 2 shows that the numerical rank of the representations (red dashed line) is reduced to approximately the number of CIFAR-10 classes, which is similar to the neural collapse phenomenon observed in [42]. For ResNets, the numerical rank is more dynamic, exhibiting a spike at the \(29^{th}\) layer, which coincides with the end of the penultimate residual block. Additionally, the rank is higher than in the case of MLPs and VGGs. Figure 3 reveals that for VGG-19 the intra-class representations variation decreases throughout the tunnel, meaning that representations clusters contract towards their centers.
At the same time, the average distance between the centers of the clusters grows (inter-class variance). This view aligns with the observation from Figure 2, where the rank of the representations drops to values close to the number of classes. Figure 3 (right) presents an intuitive explanation of the behavior with UMAP [31] plots of the representations before and after the tunnel. To complement this analysis, we studied the similarity of MLPs representations using the CKA index and the L1 norm of representations differences between the layers. Figure 4 shows that the representations change significantly in early layers and remain similar in the tunnel part when measured with the CKA index (left). The L1 norm of representations differences between the layers is computed on the right side of Figure 4. ## 3 Tunnel effect analysis This section provides empirical evidence contributing to our understanding of the tunnel effect. We hope that these observations will eventually lead to explanations of this phenomenon. In particular, we show that a) the tunnel develops early during training, b) it compresses the representations and hinders OOD generalization, and c) its size is correlated with network capacity and dataset complexity. ### Tunnel development **Motivation** In this section, we investigate tunnel development during training. Specifically, we try to understand whether the tunnel is a phenomenon exclusively related to the representations and which part of the training is crucial for tunnel formation. **Experiments** We train a VGG-19 on CIFAR-10 and save intermediate checkpoints every \(10\) epochs of training. We use these checkpoints to compute the layer-wise weight change during training (Figure 5) and the evolution of numerical rank throughout the training (Figure 6). **Results** Figure 5 shows that the split between the extractor and the tunnel is also visible in the parameter space. It can be perceived already at the early stages, and after that, its length stays roughly constant. Tunnel layers change significantly less than layers from the extractor. This result raises the question of whether the weight change affects the network's final output. Inspired by [59], we reset the weights of these layers to the state before optimization. However, the performance of the model deteriorated significantly. This suggests that although the change within the tunnel's parameters is relatively small, it plays an important role in the model's performance. Figure 6 shows that this apparent paradox can be better understood by looking at the evolution of representations' numerical rank during the very first gradient updates of the model. Throughout these steps, the rank collapses to values near the number of classes. It stays in this regime until the end of the training, meaning that the representations of the model evolve within a low-dimensional subspace. It remains to be understood if (and why) low-rank representations and changing weights coincide with forming linearly-separable representations. **Takeaway** Tunnel formation is observable in the representation and parameter space. It emerges early in training and persists throughout the whole optimization. The collapse in the numerical rank of deeper layers suggests that they preserve only the necessary information required for the task.
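For completeness, the numerical-rank metric tracked in these figures can be sketched in a few lines of NumPy; the cutoff value below is our illustrative choice, as the threshold \(\sigma\) is left to the appendix:

```python
import numpy as np

def numerical_rank(feats, threshold=1e-3):
    """Numerical rank of a (num_samples, dim) matrix of layer activations:
    the number of singular values of the sample covariance above `threshold`."""
    centered = feats - feats.mean(axis=0, keepdims=True)
    cov = centered.T @ centered / (feats.shape[0] - 1)
    svals = np.linalg.svd(cov, compute_uv=False)
    return int(np.count_nonzero(svals > threshold))
```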
Figure 6: The representations rank for deeper layers collapses early in training. The curves present the evolution of representations' numerical rank over the first \(75\) training steps for all layers of the VGG-19 trained on CIFAR-10. We present a more detailed tunnel development analysis in Appendix G. ### Compression and out-of-distribution generalization **Motivation** Practitioners observe intermediate layers to perform better than the penultimate ones for transfer learning [5; 23; 48]. However, the reason behind their effectiveness remains unclear [9]. In this section, we investigate whether the tunnel and, specifically, the collapse of numerical rank within the tunnel impacts the performance on out-of-distribution (OOD) data. **Experiments** We train neural networks (MLPs, VGG-19, ResNet-34) on a source task (CIFAR-10) and evaluate them with linear probes on the OOD task, in this case, a subset of 10 classes from CIFAR-100. We report the accuracy of linear probing and the numerical rank of the representations. **Results** Our results presented in Figure 7 reveal that _the tunnel is responsible for the degradation of out-of-distribution performance_. In most of our experiments, the last layer before the tunnel is the optimal choice for training a linear classifier on external data. Interestingly, we find that the OOD performance is tightly coupled with the numerical rank of the representations, which significantly decreases throughout the tunnel. To assess the generalization of our findings, we extend the proposed experimentation setup to an additional dataset. To that end, we train a model on different subsets of CIFAR-100 while evaluating it with linear probes on CIFAR-10. The results presented in Figure 8 are consistent with our initial findings. We include a detailed analysis with the reverse experiment (CIFAR-10 \(\rightarrow\) CIFAR-100), additional architectures, and datasets in Appendix C. In all tested scenarios, we observe a consistent relationship between the start of the tunnel and the drop in OOD performance. An increasing number of classes in the source task results in a shorter tunnel and a later drop in OOD performance. In the fixed source task experiment (Appendix C), the drop in performance occurs around the \(7^{th}\) layer of the network for all tested target tasks, which matches the start of the tunnel. This observation aligns with our earlier findings suggesting that the tunnel is a prevalent characteristic of the model rather than an artifact of a particular training or dataset setup. Moreover, we connect the coupling of the numerical rank of the representations with OOD performance to a potential tension between the objective of supervised learning and the generalization of the OOD setup. An analogous tension was observed in [52], where adversarial robustness is at odds with the model's accuracy. The results in Figure 7 align with the findings presented in Figure 3, demonstrating how the tunnel compresses clusters of class-wise representations. In work [54], the authors show that reducing the variation within each class leads to lower model transferability. Our experiments support this observation and identify the tunnel as the primary contributor to this effect. **Takeaway** Compression of representations happening in the tunnel severely degrades the OOD performance of the model, which is tightly coupled with the drop in the representations' rank. Figure 8: Fewer classes in the source task create a longer tunnel, resulting in worse OOD performance. The network is trained on subsets of CIFAR-100 with different numbers of classes, and linear probes are trained on CIFAR-10. Shaded areas depict respective tunnels.
Figure 7: The tunnel degrades the out-of-distribution performance, correlated with the representations' numerical rank. The linear probes (blue) were trained on out-of-distribution data, a subset of 10 classes from CIFAR-100. The backbone was trained on CIFAR-10. The shaded area depicts the tunnel, and the red dashed line depicts the numerical rank of representations. ### Network capacity and dataset complexity **Motivation** In this section, we explore what factors contribute to the tunnel's emergence. Based on the results from the previous section, we explore the impact of dataset complexity and the network's depth and width on tunnel emergence. **Experiments** First, we examine the impact of networks' depth and width on the tunnel using MLPs (Figure 9), VGGs, and ResNets (Table 2) trained on CIFAR-10. Next, we train VGG-19 and ResNet34 on CIFAR-{10,100} and the CINIC-10 dataset, investigating the role of dataset complexity on the tunnel. **Results** Figure 9 shows that the depth of the MLP network has no impact on the length of the extractor part. Therefore, increasing the network's depth contributes only to the tunnel's length. Both the extractor section and the numerical rank remain relatively consistent regardless of the network's depth, with the tunnel starting at the same layer. This finding suggests that overparameterized neural networks allocate a fixed capacity for a given task independent of the overall capacity of the model. Results in Table 2 indicate that the tunnel length increases as the width of the network grows, implying that representations are formed using fewer layers. However, this trend does not hold for ResNet34, as the longest tunnel is observed with the base width of the network. In the case of VGGs, the number of layers in the network does not affect the number of layers required to form representations. This aligns with the results in Figure 9. The results presented above were obtained from a dataset with a consistent level of complexity. The data in Table 3 demonstrates that the number of classes in the dataset directly affects the length of the tunnel. Specifically, even though the CINIC-10 training dataset is three times larger than CIFAR-10, the tunnel length remains the same for both datasets. This suggests that the number of samples in the dataset does not impact the length of the tunnel. In contrast, when examining CIFAR-100 subsets, the tunnel length for both VGGs and ResNets increases. This indicates a clear relationship between the dataset's number of classes and the tunnel's length. **Takeaway** Deeper or wider networks result in longer tunnels. Networks trained on datasets with fewer classes have longer tunnels. \begin{table} \begin{tabular}{c|c c c} \hline \hline & \(1/4\) & \(1\) & \(2\) \\ \hline \hline VGG-16 & 8 (50\%) & 7 (44\%) & 7 (44\%) \\ \hline VGG-19 & 8 (42\%) & 7 (37\%) & 7 (37\%) \\ \hline ResNet18 & 15 (83\%) & 13 (72\%) & 13 (72\%) \\ \hline ResNet34 & 24 (68\%) & 20 (59\%) & 24 (68\%) \\ \hline \end{tabular} \end{table} Table 2: Widening network layers results in a longer tunnel and a shorter extractor. Column headings describe the factor by which we scale each model's base number of channels. The models were trained on CIFAR-10 to full convergence. We use the \(95\%\) threshold of probing accuracy to estimate the tunnel beginning.
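The tunnel-start estimate used in Tables 1, 2, and 3 is simple enough to state in code; a short sketch (the function name is ours):

```python
import numpy as np

def tunnel_start(probe_accs, threshold=0.95):
    """First layer (1-indexed) whose linear-probe accuracy reaches
    `threshold` times the final layer's accuracy: the estimated tunnel start."""
    accs = np.asarray(probe_accs, dtype=float)
    return int(np.argmax(accs >= threshold * accs[-1])) + 1
```

With `threshold=0.98`, the same routine yields the right-hand column of Table 1.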
\begin{table} \begin{tabular}{c|c|c c c} \hline \hline model & dataset & 30\% & 50\% & 100\% \\ \hline \hline \multirow{3}{*}{VGG-19} & CIFAR-10 & 6 (32\%) & 7 (37\%) & 7 (37\%) \\ & CIFAR-100 & 8 (42\%) & 8 (42\%) & 9 (47\%) \\ & CINIC-10 & 6 (32\%) & 7 (37\%) & 7 (37\%) \\ \hline \multirow{2}{*}{ResNet34} & CIFAR-10 & 19 (56\%) & 19 (56\%) & 21 (61\%) \\ & CIFAR-100 & 30 (88\%) & 30 (88\%) & 31 (91\%) \\ \hline \end{tabular} \end{table} Table 3: Networks trained on tasks with fewer classes utilize fewer resources for building representations and exhibit longer tunnels. Column headings describe the size of the class subset used in training. Within each (architecture, dataset) pair, the number of gradient steps during training was the same in all cases. We use the \(95\%\) threshold of probing accuracy to estimate the tunnel beginning. Figure 9: Networks allocate a fixed capacity for the task, leading to longer tunnels in deeper networks. The extractor is consistent across all scenarios, with the tunnel commencing at the 4th layer. ## 4 The tunnel effect under data distribution shift Based on the findings from the previous section and the tunnel's negative impact on transfer learning, we investigate the dynamics of the tunnel in continual learning scenarios, where large models are often used on smaller tasks typically containing only a few classes. We focus on understanding the impact of the tunnel effect on transfer learning and catastrophic forgetting [11]. Specifically, we examine how the tunnel and extractor are altered after training on a new task. ### Exploring the effects of task incremental learning on extractor and tunnel **Motivation** In this section, we aim to understand the tunnel and extractor dynamics in continual learning. Specifically, we examine whether the extractor and the tunnel are equally prone to catastrophic forgetting. **Experiments** We train a VGG-19 on two tasks from CIFAR-10. Each task consists of 5 classes from the dataset. We subsequently train on the first and second tasks and save the corresponding extractors \(E_{t}\) and tunnels \(T_{t}\), where \(t\in\{1,2\}\) is the task number. We also save a separate classifying head trained on each task, which we use during evaluation. **Results** As presented in Table 4, any combination changing \(T_{1}\) to \(T_{2}\) or vice versa has a marginal impact on the performance. This is quite remarkable and suggests that the tunnel is not specific to the training task. It seems that it _compresses the representations in a task-agnostic way_. The extractor part, on the other hand, is _task-specific_ and prone to forgetting, as visible in the first four rows of Table 4. In the last two rows, we present two experiments that investigate how the existence of a tunnel affects the possibility of recovering from this catastrophic forgetting. In the first one, referred to as (\(E_{2}+T_{1}(FT)\)), we use original data from Task 1 to retrain a classifying head attached on top of extractor \(E_{2}\) and the tunnel \(T_{1}\). As visible, it has minimal effect on the accuracy of the first task. In the second experiment, we attach a linear probe directly to the extractor representations (\(E_{2}(FT)\)). This difference hints at a detrimental effect of the tunnel on representations' usability in continual learning. In Appendix D.1 we study this effect further by training tunnels on two tasks with different numbers of classes, where \(n_{1}>n_{2}\).
In this scenario, we observe that the tunnel trained with more classes (\(T_{1}\)) maintains the performance on both tasks, contrary to the tunnel (\(T_{2}\)), which performs poorly on Task 1. This is in line with our previous observations in Section 2.2 that the tunnel compresses to the effective number of classes. These results present a novel perspective in the ongoing debate regarding the layers responsible for causing forgetting. However, they do not align with the observations made in the previous study [47]. In Appendix D, we delve into the origin of this discrepancy and provide a comprehensive analysis of the changes in representations with the setup introduced in this experiment and the CKA similarity. **Takeaway** The tunnel's task-agnostic compression of representations provides immunity against catastrophic forgetting when the number of classes is equal. These findings offer fresh perspectives on studying catastrophic forgetting at specific layers, broadening the current understanding in the literature. \begin{table} \begin{tabular}{l|c c} \hline \hline \multicolumn{1}{c}{} & First Task & Second Task \\ \hline \hline \(E_{1}+T_{1}\) & 92.04\% & 56.8\% \\ \hline \(E_{1}+T_{2}\) & 92.5\% & 58.04\% \\ \hline \(E_{2}+T_{2}\) & 50.84\% & 93.94\% \\ \hline \(E_{2}+T_{1}\) & 50.66\% & 93.72\% \\ \hline \hline \(E_{2}+T_{1}(FT)\) & 56.1\% & – \\ \hline \(E_{2}(FT)\) & 74.4\% & – \\ \hline \end{tabular} \end{table} Table 4: The tunnel part is task-agnostic and can be freely mixed with different extractors, retaining the original performance. We test the model's performance on the first or second task using a combination of extractor \(E_{t}\) and tunnel \(T_{t}\) from tasks \(t\in\{1,2\}\). The last two rows \((FT)\) show how much performance can be recovered by retraining the linear probe attached to the penultimate layer (\(E_{2}+T_{1}\)) or the last layer of \(E_{2}\). ### Reducing catastrophic forgetting by adjusting network depth **Motivation** Experiments from this section verify whether it is possible to retain the performance of the original model by training a shorter version of the network. A shallower model should also exhibit less forgetting in sequential training. **Experiments** We train VGG-19 networks with different numbers of convolutional layers. Each network is trained on two tasks from CIFAR-10. Each task consists of 5 classes from the dataset. **Results** The results shown in Figure 10 indicate that training shorter networks yields similar performance compared to the original model. However, performance differences become apparent when the network becomes shorter than the extractor part in the original model. This observation aligns with previous findings suggesting that the model requires a certain capacity to perform the task effectively. Additionally, the shorter models exhibit significantly less forgetting, which corroborates the conclusions drawn in previous works [32; 34] on the importance of network depth and architecture in relation to forgetting. **Takeaway** It is possible to train shallower networks that retain the performance of the original networks and experience significantly less forgetting. However, the shorter networks need to have at least the same capacity as the extractor part of the original network. ## 5 Limitations and future work This paper empirically investigates the tunnel effect, opening the door for future theoretical research on tunnel dynamics.
Further exploration could involve mitigating the tunnel effect through techniques like adjusting learning rates for specific layers. One limitation of our work is its validation within a specific scenario (image classification); further studies on unsupervised or self-supervised methods with other modalities would shed more light and verify the pertinence of the tunnel elsewhere. In the experiments, we observed that ResNet-based networks exhibited shorter tunnels than plain MLPs or VGGs. This finding raises the question of whether the presence of skip connections plays a role in tunnel formation. In Appendix H, we take the first step toward a deeper understanding of this relationship by examining the emergence of tunnels in ResNets without skip connections. ## 6 Related work The analysis of representations in neural network training is an established field [28; 56; 58]. Previous studies have explored training dynamics and the impact of model width [18; 26; 30; 45; 51; 55], but gaps remain in this understanding [4; 37; 47; 58]. Works have investigated different architectures' impact on continual learning [33; 34] and linear models' behavior [10; 24; 25; 29]. Our work builds upon studies examining specific layers' role in model performance [4; 9; 38; 39; 45; 59] and sheds light on the origins of observed behaviors [12; 16; 42; 62]. While some studies have observed a block structure in neural network representations, their analysis was limited to ResNet architectures and did not consider continual learning scenarios. In our work, we investigate a similar phenomenon, expanding the range of experiments and gaining deeper insights into its origins. In [59], the authors distinguish between critical and robust layers, highlighting the importance of the former for model performance, while individual layers from the latter can be reset without impacting the final performance. Our analysis builds upon this finding and further categorizes these layers into the extractor and tunnel, providing insights into their origins and their effects on model performance and generalization ability. Figure 10: Training shorter networks from scratch gives a similar performance to the longer counterparts (top) and results in significantly lower forgetting (bottom). The horizontal lines denote the original model's performance. Our findings are related to the Neural Collapse phenomenon [42], which has gained recent attention [12; 16; 62]. In our experiments, we also analyze the rank of the representation matrix and observe that the examined tunnel is characterized by a low representation rank. ## 7 Conclusions This work presents new insights into the behavior of deep neural networks during training. We discover the tunnel effect, an intriguing phenomenon in modern deep networks where they split into two distinct parts - the extractor and the tunnel. The extractor part builds representations, and the tunnel part compresses these representations to a minimum rank without contributing to the model's performance. This behavior is prevalent across multiple architectures and is positively correlated with overparameterization, i.e., it can be induced by increasing the model's size or decreasing the complexity of the task. We discuss potential sources of the tunnel and highlight the unintuitive behavior of neural networks during the initial training phase.
This novel finding has significant implications for improving the performance and robustness of deep neural networks. Moreover, we demonstrate that the tunnel hinders out-of-distribution generalization and can be detrimental in continual learning settings. Overall, our work offers new insights into the mechanisms underlying deep neural networks and can potentially improve the performance and robustness of these powerful models.
2309.07684
deepFDEnet: A Novel Neural Network Architecture for Solving Fractional Differential Equations
The primary goal of this research is to propose a novel architecture for a deep neural network that can solve fractional differential equations accurately. A Gaussian integration rule and a $L_1$ discretization technique are used in the proposed design. In each equation, a deep neural network is used to approximate the unknown function. Three forms of fractional differential equations have been examined to highlight the method's versatility: a fractional ordinary differential equation, a fractional order integrodifferential equation, and a fractional order partial differential equation. The results show that the proposed architecture solves different forms of fractional differential equations with excellent precision.
Ali Nosrati Firoozsalari, Hassan Dana Mazraeh, Alireza Afzal Aghaei, Kourosh Parand
2023-09-14T12:58:40Z
http://arxiv.org/abs/2309.07684v1
# deepFDEnet: A Novel Neural Network Architecture for Solving Fractional Differential Equations ###### Abstract The primary goal of this research is to propose a novel architecture for a deep neural network that can solve fractional differential equations accurately. A Gaussian integration rule and an \(L_{1}\) discretization technique are used in the proposed design. In each equation, a deep neural network is used to approximate the unknown function. Three forms of fractional differential equations have been examined to highlight the method's versatility: a fractional ordinary differential equation, a fractional order integrodifferential equation, and a fractional order partial differential equation. The results show that the proposed architecture solves different forms of fractional differential equations with excellent precision. _keywords:_ Neural Networks, Machine-Learning, Partial-Differential Equations, Fractional Calculus ## 1 Introduction Fractional equations are mathematical equations that include one or more fractional derivatives or fractional integral terms. These types of equations are encountered in many branches of science and engineering, such as physics, chemistry, and cognitive sciences [1, 2]. One important application of fractional equations is in modeling physical systems that exhibit non-integer behavior, such as visco-elastic materials or fractional-order circuits. Fractional equations are challenging to solve because they often involve complex algebraic manipulations and require the use of specialized techniques such as partial fraction decomposition. In recent years, there has been a surge of interest in the study of fractional calculus, which extends classical calculus to include fractional derivatives and integrals and provides a powerful mathematical framework for the analysis and modeling of fractional equations. Since neural networks have proven to be a powerful tool for solving differential equations, in this research paper we present a deep neural network (DNN) framework to solve fractional differential equations (FDEs). DNNs are a proper choice for solving FDEs because traditional methods for FDEs can be computationally expensive, difficult to implement for complex problems, and might not scale well to high-dimensional systems. In addition, these days we encounter big data coming from different sensors that measure the boundary and initial conditions of an FDE under consideration. DNNs have also demonstrated a high capability for dealing with large amounts of data. As a result, examining DNNs' ability to solve FDEs seems necessary, which is the primary focus of this paper. Neural networks can learn the underlying dynamics of a system directly from the data. This is useful when one or more conditions (boundary or initial) of the equation under consideration are unknown or difficult to evaluate. In recent years, a powerful DNN architecture called Physics-Informed Neural Networks (PINNs) has been proposed by M. Raissi et al. in [3]. PINNs are a class of neural networks that utilize physical laws, boundary conditions, and initial conditions in their architecture to improve the accuracy and reliability of their predictions. These networks use the partial differential equation (PDE) or ordinary differential equation (ODE) as the loss function. Furthermore, boundary and initial conditions are considered in the loss function as well.
The total loss in these networks is as follows: \[Loss=Loss_{PDE}+Loss_{Boundaries}+Loss_{Initial}.\] The advantage of PINNs over traditional numerical methods is that they can learn from data and generalize to new forms of equations. This makes PINNs useful in situations where the governing equations are complex or where the initial and boundary conditions are uncertain or incomplete. In this work, a fractional PINN has been proposed to overcome the difficulties arising in solving fractional differential equations. Solving fractional PDEs using neural networks has been a pressing issue, and scientists have proposed different methods and techniques. In 2012, Almarashi proposed a neural network to approximate the solution of a two-sided fractional partial differential equation with RBF, and they presented their result in [4]. Dai and Yu proposed an artificial neural network to solve space fractional differential equations with the Caputo definition by applying truncated series expansion terms [5]. In addition, Pang et al. proposed a neural network, used the Grunwald-Letnikov formula to discretize the fractional operator, and compared their convergence against the Finite Difference Method (FDM) [6]. Guo et al. proposed a Monte Carlo neural network for forward and inverse problems. They used an approach similar to fPINNs; however, their approach yields a lower overall computational cost compared to fPINN [7]. The methods described above provide numerous approaches to solving fractional equations, each with its own set of advantages and disadvantages. Regardless, our suggested approach is accurate and versatile since it utilizes L1 discretization to discretize the fractional part of the equations and Gauss-Legendre integration to discretize the integral component. This research article has been divided into multiple key sections that will aid in a thorough comprehension of the proposed method. Section 2 provides background information necessary for understanding the proposed method. Section 3 will provide an overview of the methodology employed, and the results will be presented in Section 4. Finally, in Section 5 we will present our concluding remarks and discuss further research. ## 2 Background ### Fractional Calculus In this paper, we primarily focus on fractional-order equations. There are different definitions for fractional order differential equations. We will examine some of the most important definitions and then introduce our proposed method, utilizing the Gauss-Legendre integration method and the L1-discretization method. Considering the Riemann-Liouville fractional integral with \(f\in C[a,b]\) and \(\alpha\in\mathbb{R}^{+}\), we have: \[{}_{a}\mathcal{I}_{x}^{\alpha}f(x)=\frac{1}{\Gamma(\alpha)}\int_{a}^{x}(x-s)^{\alpha-1}f(s)ds. \tag{1}\] In the above equation, \(\Gamma\) stands for the gamma function. The Riemann-Liouville (RL) fractional derivative can be formulated as follows [8]: \[{}_{a}\mathcal{D}_{x}^{\alpha}f(x)=\frac{1}{\Gamma(n-\alpha)}\frac{d^{n}}{dx^{n}}\int_{a}^{x}(x-s)^{n-\alpha-1}f(s)ds. \tag{2}\] Equation 2 shows the RL definition of the fractional derivative. It can be observed that in this equation, the integral part is computed first, followed by the derivative; however, there is a more straightforward definition in which the derivative is calculated first, followed by the integral.
This is known as the Caputo definition, and it goes as follows [9]: \[{}_{a}D_{x}^{\alpha}f(x)=\frac{1}{\Gamma(n-\alpha)}\int_{a}^{x}(x-s)^{n-\alpha-1}f^{(n)}(s)ds,\quad n-1<\alpha\leq n. \tag{3}\] According to the Caputo definition, for polynomials, we have: \[\partial_{0}^{\alpha}x^{n}=\begin{cases}0&n\in\mathbb{N}_{0},\ n<\lceil\alpha\rceil,\\ \frac{\Gamma(n+1)}{\Gamma(n+1-\alpha)}x^{n-\alpha}&n\in\mathbb{N},\ n\geq\lceil\alpha\rceil.\end{cases} \tag{4}\] The Caputo definition with \(0<\alpha\leq 1\) initiating at zero can be formulated as: \[\partial_{t}^{\alpha}u(x,t)=\frac{1}{\Gamma(1-\alpha)}\int_{0}^{t}\frac{1}{(t-s)^{\alpha}}\frac{\partial u(x,s)}{\partial s}ds. \tag{5}\] ### L1-discretization Because solving the Caputo fractional equation is computationally difficult, numerous scientists have proposed alternative ways of determining the fractional derivative. The L1 and L1-2 discretization methods are two that are used to effectively calculate the fractional derivative. Using L1 discretization to approximate the Caputo derivative at \(t=t_{n+1}\) results in the following expression: \[\partial_{t}^{\alpha}u\left(x,t_{n+1}\right)=\mu(u^{n+1}-(1-b_{1})u^{n}-\sum_{j=1}^{n-1}(b_{j}-b_{j+1})u^{n-j}-b_{n}u^{0})+r_{\Delta t}^{n+1},\quad n\geq 1, \tag{6}\] where \(\alpha\) is the non-integer order, \(\mu=\frac{1}{\Delta t^{\alpha}\Gamma(2-\alpha)}\) and \(b_{j}=(j+1)^{1-\alpha}-j^{1-\alpha}\). This is known as the L1 discretization, which is used in our proposed deep network. ## 3 Methodology This section will cover the Gauss-Legendre integration first, followed by an in-depth examination of the proposed method. ### Gauss-Legendre integration There are different methods to approximate a definite integral numerically, some of which are the midpoint rule, the trapezoidal rule [10], Simpson's rule [11], and Gaussian quadrature [12]. In this paper, we have utilized Gaussian quadrature, and we will briefly explain this method in the following paragraph. The Gaussian quadrature method used in this paper is based on the formulation developed by Carl Gustav Jacobi in 1826. The default domain for integration in the Gaussian quadrature rule is usually taken as \([-1,1]\), and it is stated as: \[\int_{-1}^{1}f(x)\,dx\approx\sum_{i=1}^{n}w_{i}f(x_{i}). \tag{7}\] If the function \(f(x)\) is a polynomial of degree at most \(2n-1\), the rule is exact; functions well approximated by such polynomials are integrated with correspondingly high accuracy. It should also be taken into consideration that the evaluation points in Gaussian quadrature are chosen optimally and are not equally spaced. These points are the roots of the Legendre polynomial of degree \(n\), where \(n\) is the number of points used to approximate the solution. Since the integrals considered in this paper are not all over \([-1,1]\), we make use of the following transformation, which leverages a change of variables to calculate the integral over an arbitrary interval: \[t=\frac{2x-a-b}{b-a}\Longleftrightarrow x=\frac{1}{2}[(b-a)t+a+b]. \tag{8}\] This results in the following formula for the Gaussian quadrature: \[\int_{a}^{b}f(x)dx=\int_{-1}^{1}f\left(\frac{(b-a)t+(b+a)}{2}\right)\frac{(b-a)}{2}dt. \tag{9}\]
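A compact NumPy sketch of the transformed rule (9) follows; the helper name is ours, and `leggauss` returns the Gauss-Legendre nodes and weights on \([-1,1]\):

```python
import numpy as np

def gauss_legendre_integral(f, a, b, n=400):
    """n-point Gauss-Legendre approximation of the integral of f over [a, b],
    using the change of variables x = ((b - a) t + a + b) / 2 from Eq. (8)."""
    t, w = np.polynomial.legendre.leggauss(n)  # nodes and weights on [-1, 1]
    x = 0.5 * ((b - a) * t + (a + b))
    return 0.5 * (b - a) * np.dot(w, f(x))
```

For instance, `gauss_legendre_integral(lambda x: x**2, 0.0, 1.0, n=2)` already returns \(1/3\) up to floating-point error, since the two-point rule is exact for polynomials of degree up to three.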
### Proposed Method We have already covered the necessary material to understand the proposed method. We employ a sequential neural network with \(\tanh(\cdot)\) activation functions; the network computes the approximation \(\psi(\cdot)\), and we utilize automatic differentiation to find the derivatives with respect to each variable. We then use L1-discretization for the fractional derivative and Gauss-Legendre quadrature to calculate the integral part. Consider an equation of the form: \[\wp:=\psi+\psi_{t}+\zeta(\psi)+\Im(\psi), \tag{10}\] where: \[\zeta:=\sum_{i=1}^{n}w_{i}f(\psi_{i}),\quad n\geq 1, \tag{11}\] and: \[\Im:=\mu(\psi^{n+1}-(1-b_{1})\psi^{n}-\sum_{j=1}^{n-1}(b_{j}-b_{j+1})\psi^{n-j}-b_{n}\psi^{0})+r_{\Delta t}^{n+1},\quad n\geq 1, \tag{12}\] where \(\psi(x,t)\) is computed by the neural network, and the derivative with respect to each variable is obtained using automatic differentiation; the integral is approximated with the Gauss-Legendre method, as denoted by \(\zeta\), and the fractional derivative is estimated using L1-discretization, as denoted by \(\Im\). In each iteration, a random point is generated and the corresponding value is computed by the neural network; this random point then serves as the upper limit both in the integration scheme and in the estimate of the fractional derivative. The equation residual enters the loss function, and the parameters are learned from the squared errors of the equation together with the initial and boundary conditions. The shared parameters are learned by minimizing \(SE\) as follows: \[SE=SE_{\wp}+SE_{i}+SE_{b}, \tag{13}\] where: \[SE_{\wp}=|\wp(t_{\wp}^{i},x_{\wp}^{i})|^{2}, \tag{14}\] \[SE_{i}=|\psi(t_{0}^{i},x_{0}^{i})-\psi_{0}^{i}|^{2}, \tag{15}\] and \[SE_{b}=|\psi(t_{b}^{i},x_{b}^{i})-\psi_{b}^{i}|^{2}. \tag{16}\] Figure 1: A diagram illustrating the proposed method. ## 4 Numerical Results In this section, some examples are investigated and solved using the given methods to demonstrate the efficacy of the proposed method. To demonstrate its generality, we have selected examples from different classes of fractional order differential equations, including a fractional order ordinary differential equation, a fractional order integrodifferential equation, and a fractional order partial differential equation. ### Example 1 First, we consider a fractional ordinary differential equation as follows [13]: \[D^{\nu}\psi(x)+\psi^{2}(x)=x+\left(\frac{x^{\nu+1}}{\Gamma(\nu+2)}\right)^{2},\quad 0<\nu\leq 1,\quad 0\leq x\leq 1. \tag{17}\] The initial condition for this equation is \(\psi(0)=0\) and the exact solution is \(\psi(x)=\frac{1}{\Gamma(\nu+2)}x^{\nu+1}.\) The residual consists of the initial condition and the equation itself: \[SE=SE_{\wp}+SE_{i}, \tag{18}\] where \[SE_{\wp}=\Im(\psi)+\psi^{2}-\left(x+\left(\frac{x^{\nu+1}}{\Gamma(\nu+2)}\right)^{2}\right), \tag{19}\] and \[SE_{i}=|\psi(0)-0|. \tag{20}\] The results are presented in Table 1, where the mean absolute error is calculated for different values of \(\alpha\) with \(n=1000\) discretization points and \(1000\) epochs. Figure 2 depicts the exact and predicted solution to the presented problem, and Figure 3 shows the residual.
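To make the training loop concrete, the following is a minimal PyTorch sketch of this procedure applied to Example 1 (Eqs. 17-20) with \(\nu=0.5\). The architecture, optimizer, and hyperparameters are illustrative choices of ours rather than the authors' exact configuration:

```python
import torch
from math import gamma

nu = 0.5
net = torch.nn.Sequential(              # psi(x), approximated by a small MLP
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def caputo_l1(u, dt, alpha):
    """L1 scheme of Eq. (6) at the last grid point, written as a weighted sum
    of increments: the coefficient of u^{j+1} - u^j is b_{n-j}."""
    m = u.shape[0] - 1
    j = torch.arange(m, dtype=u.dtype)
    b = (j + 1) ** (1 - alpha) - j ** (1 - alpha)
    mu = 1.0 / (dt ** alpha * gamma(2.0 - alpha))
    return mu * torch.sum(b.flip(0) * (u[1:] - u[:-1]))

for step in range(2000):
    x_max = 0.05 + 0.95 * torch.rand(1).item()  # random collocation point
    grid = torch.linspace(0.0, x_max, 256).unsqueeze(1)
    psi = net(grid).squeeze(1)
    dt = x_max / 255
    rhs = x_max + (x_max ** (nu + 1) / gamma(nu + 2)) ** 2
    se_eq = (caputo_l1(psi, dt, nu) + psi[-1] ** 2 - rhs) ** 2  # Eq. (19)
    se_init = net(torch.zeros(1, 1)).squeeze() ** 2             # Eq. (20)
    loss = se_eq + se_init
    opt.zero_grad(); loss.backward(); opt.step()
```

As \(\alpha\to 1\), the increment form reduces to the usual backward difference, which is a quick sanity check on the `caputo_l1` weights.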
\begin{table} \begin{tabular}{c c c c c c} \hline \(x\backslash\alpha\) & 0.1 & 0.3 & 0.5 & 0.7 & 0.9 \\ \hline 0 & 3.23E-05 & -7.14E-06 & -6.25E-06 & 3.43E-05 & -6.31E-05 \\ 0.1 & -1.45E-05 & 7.75E-05 & -1.86E-05 & 4.12E-05 & -1.11E-04 \\ 0.2 & 2.51E-04 & -5.37E-05 & -1.42E-05 & -3.04E-05 & -7.69E-05 \\ 0.3 & -8.31E-05 & -6.13E-05 & -4.44E-05 & -1.49E-05 & -1.42E-04 \\ 0.4 & -1.25E-04 & -2.33E-05 & -1.92E-05 & 1.21E-05 & -2.15E-04 \\ 0.5 & 1.56E-05 & 2.59E-05 & 8.61E-06 & -1.89E-06 & -2.37E-04 \\ 0.6 & 9.12E-05 & 2.40E-06 & -2.46E-05 & -2.39E-05 & -2.12E-04 \\ 0.7 & 4.85E-05 & -3.96E-05 & -2.54E-05 & -2.42E-05 & -1.83E-04 \\ 0.8 & -2.86E-05 & -2.48E-05 & 9.58E-06 & -1.28E-05 & -1.91E-04 \\ 0.9 & -3.38E-05 & 1.59E-05 & -2.11E-05 & -1.31E-05 & -2.31E-04 \\ 1 & 1.18E-04 & -3.73E-05 & 2.95E-05 & -1.11E-05 & -2.11E-04 \\ \hline \end{tabular} \end{table} Table 1: Mean absolute error for different values of \(\alpha\) and \(1000\) epochs for example 1. ### Example 2 Now we consider a fractional integro-differential equation [14]: \[\left\{\begin{array}{l}D^{0.5}\psi\left(x\right)=\psi\left(x\right)+\frac{8}{3\Gamma\left(0.5\right)}x^{1.5}-x^{2}-\frac{1}{3}x^{3}+\int_{0}^{x}\psi\left(t\right)dt,\quad 0<x<1,\\ \psi\left(0\right)=0,\end{array}\right. \tag{21}\] The solution to this equation is \(\psi(x)=x^{2}.\) The residual consists of the initial condition and the equation itself: \[SE=SE_{\wp}+SE_{i}, \tag{22}\] where \[SE_{\wp}=\Im(\psi)-\left(\psi(x)+\frac{8}{3\Gamma\left(0.5\right)}x^{1.5}-x^{2}-\frac{1}{3}x^{3}+\zeta(\psi)\right), \tag{23}\] and \[SE_{i}=|\psi(0)-0|, \tag{24}\] and the results are presented in Table 2 for varying numbers of discretization points. Figure 4 depicts the predicted and exact solutions, whereas Figure 5 shows the residual graph for 1000 epochs. Figure 4: Predicted solution of example 2. Figure 5: Residual graph for example 2. As shown in equation 23, this equation consists of a fractional part and an integral part. The fractional component is calculated with various numbers of discretization points, as shown in Table 2; however, we have utilized 400 discretization points in the Gauss-Legendre method to estimate the integral part. ### Example 3 Finally, we consider the initial-boundary problem of a fractional partial differential equation [15]: \[\frac{\partial^{\alpha}\psi(x,t)}{\partial t^{\alpha}}+x\frac{\partial\psi(x,t)}{\partial x}+\frac{\partial^{2}\psi(x,t)}{\partial x^{2}}=2t^{\alpha}+2x^{2}+2,\quad 0<x<1,0<t<1, \tag{25}\] where \(\alpha\) is between 0 and 1, the initial condition is \(\psi(x,0)=x^{2}\), and the boundary conditions are:
\begin{table} \begin{tabular}{c c c c c c} \hline \hline \(x\backslash t\) & 0.1 & 0.3 & 0.5 & 0.7 & 0.9 \\ \hline 0 & 8.07E-03 & 1.65E-02 & 7.71E-03 & 3.81E-03 & 1.13E-02 \\ 0.1 & 6.34E-03 & 1.59E-02 & 8.43E-03 & 2.49E-03 & 1.07E-02 \\ 0.2 & 4.67E-03 & 1.44E-02 & 8.45E-03 & 1.12E-03 & 8.95E-03 \\ 0.3 & 2.91E-03 & 1.21E-02 & 7.64E-03 & 7.14E-05 & 6.59E-03 \\ 0.4 & 1.11E-03 & 9.19E-03 & 6.19E-03 & 6.34E-04 & 3.93E-03 \\ 0.5 & 6.15E-04 & 6.10E-03 & 4.45E-03 & 1.14E-03 & 1.09E-03 \\ 0.6 & 2.09E-03 & 3.18E-03 & 2.73E-03 & 1.58E-03 & 1.85E-03 \\ 0.7 & 3.15E-03 & 6.91E-04 & 1.23E-03 & 2.07E-03 & 4.88E-03 \\ 0.8 & 3.67E-03 & 1.19E-03 & 8.98E-05 & 2.70E-03 & 8.01E-03 \\ 0.9 & 3.51E-03 & 2.36E-03 & 5.91E-04 & 3.60E-03 & 1.14E-02 \\ 1 & 2.56E-03 & 2.73E-03 & 6.55E-04 & 4.98E-03 & 1.52E-02 \\ \hline \hline \end{tabular} \end{table} Table 4: Mean absolute error for example 3 with different values for \(x\) and \(t\) and 1000 epochs.

Figure 6: Predicted solution and the exact solution for example 3.

## 5 Conclusion

In this paper, we proposed a novel framework for solving fractional differential equations using the L1-discretization for the fractional derivative and Gauss-Legendre quadrature for the integral component. The method can be used in several different fields, including biological systems, medical imaging, stock prices, and control systems [17]. To demonstrate the effectiveness of the model, we considered several fractional equations, including an ODE, a PDE, and an integro-differential equation. Solving fractional equations using neural networks is a relatively new research area and, while previous works have contributed to this field, a solid framework for obtaining effective solutions to these equations remains lacking. By utilizing the aforementioned methodologies, we have developed a new and reliable framework to calculate the solutions to these equations. While our work focuses mostly on fractional equations, the Gauss-Legendre method can be extended to broader classes of integral equations as well; furthermore, other discretization methods can also be considered and investigated, including L1-2, another discretization method for estimating Caputo-type fractional derivatives, as well as the Gauss-Lobatto rule and adaptive quadrature for estimating integrals. Our findings demonstrate that different discretization methods can be efficiently incorporated into a neural network; nevertheless, the effectiveness and usability of the method can be affected by parameters such as the depth of the neural network, the activation functions, and the overall structure of the network. Furthermore, the effectiveness of different discretization methods for the various definitions of fractional derivatives is another subject that can be investigated in future work. The choice of a specific discretization method may depend on the characteristics of the equation and the structure of the neural network, notably the order of the fractional equation, the convergence order of the method, and its computational complexity, which could in some cases severely hinder the speed and performance of the model. Finally, our findings suggest that the proposed method, and discretization-based methods in general, are valuable and could serve as a strong foundation for further research in this area.
2304.10476
HL-nets: Physics-informed neural networks for hydrodynamic lubrication with cavitation
Recently, physics-informed neural networks (PINNs) have emerged as a promising method for solving partial differential equations (PDEs). In this study, we establish a deep learning computational framework, HL-nets, for computing the flow field of hydrodynamic lubrication involving cavitation effects. Two classical cavitation conditions, i.e., the Swift-Stieber (SS) condition and the Jakobsson-Floberg-Olsson (JFO) condition, are implemented in the PINNs to solve the Reynolds equation. For the non-negativity constraint of the SS cavitation condition, a penalizing scheme with a residual of the non-negativity and an imposing scheme with a continuous differentiable non-negative function are proposed. For the complementarity constraint of the JFO cavitation condition, the pressure and cavitation fraction are taken as the neural network outputs, and the residual of the Fischer-Burmeister (FB) equation constrains their complementary relationships. Multi-task learning (MTL) methods are applied to balance the newly introduced loss terms described above. To estimate the accuracy of HL-nets, we present a numerical solution of the Reynolds equation for oil-lubricated bearings involving cavitation. The results indicate that the proposed HL-nets can highly accurately simulate hydrodynamic lubrication involving cavitation phenomena. The imposing scheme can effectively improve the accuracy of the training results of PINNs, and it is expected to have great potential to be applied to different fields where the non-negativity constraint is needed.
Yiqian Cheng, Qiang He, Weifeng Huang, Ying Liu, Yanwen Li, Decai Li
2022-12-18T06:19:10Z
http://arxiv.org/abs/2304.10476v1
# HL-nets: Physics-informed neural networks for hydrodynamic lubrication with cavitation

###### Abstract

Recently, physics-informed neural networks (PINNs) have emerged as a promising method for solving partial differential equations (PDEs). In this study, we establish a deep learning computational framework, HL-nets, for computing the flow field of hydrodynamic lubrication involving cavitation effects. Two classical cavitation conditions, i.e., the Swift-Stieber (SS) condition and the Jakobsson-Floberg-Olsson (JFO) condition, are implemented in the PINNs to solve the Reynolds equation. For the non-negativity constraint of the SS cavitation condition, a penalizing scheme with a residual of the non-negativity and an imposing scheme with a continuous differentiable non-negative function are proposed. For the complementarity constraint of the JFO cavitation condition, the pressure and cavitation fraction are taken as the neural network outputs, and the residual of the Fischer-Burmeister (FB) equation constrains their complementary relationships. Multi-task learning (MTL) methods are applied to balance the newly introduced loss terms described above. To estimate the accuracy of HL-nets, we present a numerical solution of the Reynolds equation for oil-lubricated bearings involving cavitation. The results indicate that the proposed HL-nets can highly accurately simulate hydrodynamic lubrication involving cavitation phenomena. The imposing scheme can effectively improve the accuracy of the training results of PINNs, and it is expected to have great potential to be applied to different fields where the non-negativity constraint is needed.

**Keywords**: Hydrodynamic lubrication; Reynolds equation; Cavitation; Non-negativity constraint; PINNs.

## 1 Introduction

Friction, which occurs when solid surfaces are in contact with one another and moving relative to each other, causes energy loss and wear of mechanical systems; so, reducing friction is essential to increasing the cost-effectiveness of energy systems. To reduce friction and wear, hydrodynamic lubrication is widely used to form a thin lubricant film between solid surfaces with normal load-bearing capacity and low shear strength [1]. As a result of Reynolds' work [2], the Reynolds equation was developed, which is an elliptic PDE governing the pressure distribution of thin viscous fluid films; as one of the fundamental equations of the classical lubrication theory, the Reynolds equation has been extensively applied to a wide range of lubrication problems with indisputable success [3, 4]. Cavitation in liquid lubricating films is common. When surfaces covered in hydrodynamic lubrication contain diverging regions, the pressure will drop, and cavitation will occur when the pressure drops to a certain level [5]. Cavitation directly impacts the lubrication film's pressure distribution, affecting the load-carrying capacity and friction. Thus, cavitation phenomena play a crucial role in lubrication modeling. While liquids can be maintained at a certain pressure and the cavitation region contains a mixture of gas and vapor, the details of cavitation are still an area of interest to researchers studying bubble dynamics and cavitation theory [6]. In the domain of tribology, it is usually assumed that a liquid will completely vaporize when the pressure of the liquid in a region is below zero or a constant cavitation pressure. This restriction on the pressure field, introduced into the Reynolds equation to account for cavitation, is called the cavitation condition.
Over the past few decades, several mathematical models and numerical methods have been developed to address the cavitation problem, among which the most important models are the Swift-Stieber (SS) condition and the Jakobsson-Floberg-Olsson (JFO) condition. The SS cavitation condition, formulated independently by Swift [7] and Stieber [8], has been widely used because of its simplicity, ease of implementation, and superior accuracy compared to the full- and half-Sommerfeld conditions. The SS cavitation model is a typical obstacle problem [9]. This condition assumes that the pressure in the cavitation region is the cavitation pressure, and the pressure gradient is zero at the boundary of the cavitation region. However, this boundary condition does not include the film reformation boundary where the cavitation ends and the full film begins; hence, it does not enforce mass conservation. The JFO model for cavitation was proposed [10, 11] to account for film reformation and to ensure mass conservation. In the tribology field, this model is known as the JFO boundary conditions. By introducing a binary switch function, Elrod [12] provided the first algorithm to incorporate the JFO boundary conditions into a single equation valid in both the full-film and cavitated regions, which can predict the cavitation and full-film regions. Several modifications of the Elrod algorithm have been proposed to improve convergence, given the highly nonlinear nature of the algorithm with its binary switch function [13, 14, 15, 16, 17]. Upon considering mass-conserving cavitation, the governing equation remains elliptic within full-film regions but becomes hyperbolic within cavitated regions, forming a mixed system of nonlinear PDEs. For numerical stability and accuracy, the convective terms are discretized by the finite difference method and finite volume method using an upwind scheme [18, 19, 20, 21], and the weak form in the finite element method (FEM) requires a stabilization method, such as the Streamline Upwind/Petrov-Galerkin (SUPG) method [22]. Giacopini et al. [23] converted the cavitation problem into a linear-complementarity problem by constructing a complementarity condition between pressure and cavitation fraction. Woloszynski et al. [24] developed an efficient algorithm, Fischer-Burmeister-Newton-Schur, and reformulated the constrained optimization problem into an unconstrained one. The two parameters, pressure and cavitation fraction, were determined simultaneously by a system of nonlinear equations formed by discretizing the Reynolds equation and the Fischer-Burmeister (FB) complementary equation, respectively. The method has sufficient accuracy and very high efficiency with low-cost gradient-based methods. This complementary function approach has also been combined with many other numerical algorithms [25, 26, 27]. The above-mentioned studies on the Reynolds equation and cavitation are based on the FEM, finite volume method, finite-difference method, or other traditional numerical methods. However, a new deep learning method called physics-informed neural networks (PINNs) has emerged recently as a promising method for solving forward and inverse PDEs. Raissi et al. [28] introduced a general framework of PINNs and verified its ability to solve PDEs and their inverse problems, such as extracting parameters from a few observations. Compared to other deep learning approaches in physical modeling, the PINNs approach incorporates physical laws and PDEs in addition to using data in a flexible manner.
The PINNs are not subject to Courant-Friedrichs-Lewy (CFL) constraints for transient equations, and the time discretization terms can be treated as general boundary conditions [29]. For equations containing convective terms, the flow direction generally requires no additional treatment in the solution of PINNs [30]. PINNs have been extensively applied in recent years to forward and inverse problems in fluid mechanics [31; 32; 33; 34; 35], solid mechanics [36; 37], heat transfer [38; 39], and flow in porous media [40; 41]. Numerous function libraries have been developed for PINNs to make them easier for researchers to use [42; 43; 44]. Some preliminary studies on the solution of PINNs for hydrodynamic lubrication have been published recently. For example, Almqvist [45] and Zhao et al. [46] used PINNs to solve the one- and two-dimensional steady-state Reynolds equation for a linear slider. Li et al. [47] devised a PINN scheme to solve the Reynolds equation to predict the gas bearing's flow fields and aerodynamic characteristics. Despite being an effective numerical tool, PINNs have not been well applied in the field of tribology. The only exploratory studies on PINN methods have been based on the most basic PINNs, and some newer approaches such as adaptive methods and special treatment of boundary conditions have not been discussed. In terms of tribological problems, the solution of the Reynolds equation is still limited to specific cavitation-free applications such as the linear slider and the gas bearing, while more practical hydrodynamic lubrication problems involving cavitation (such as oil- or water-lubricated bearings) still cannot be solved by PINNs. The SS cavitation condition turns hydrodynamic lubrication into a typical obstacle problem, while under the JFO cavitation condition the cavitation fraction has a jump, which makes the problem challenging for PINNs to solve. None of these issues have been addressed to enable the use of PINNs for simulating hydrodynamic lubrication. In this study, HL-nets, a PINNs-based solver, is developed to solve the Reynolds equation, and the SS cavitation and JFO cavitation conditions are introduced into the PINNs to ensure HL-nets is applicable to simulating hydrodynamic lubrication with cavitation. In addition to the penalizing scheme of traditional PINNs, an imposing scheme is developed to impose different conditions, including the Dirichlet boundary (DB) condition, SS cavitation condition, and JFO cavitation condition. Multi-task learning (MTL) methods are applied to balance the newly introduced loss terms described above. The oil-lubricated bearing is studied as a typical hydrodynamic lubrication problem to test the performance of the proposed HL-nets; the traditional penalizing schemes and imposing condition schemes for DB conditions are compared, and cavitation problems with SS or JFO cavitation conditions are studied. The rest of the paper is organized as follows: in Section 2, the mathematical modeling of HL-nets is presented, including an introduction to the Reynolds equation and several cavitation conditions as well as descriptions of PINNs designed for solving the Reynolds equation with the cavitation conditions. Numerical tests of oil-lubricated bearings are performed to validate HL-nets in Section 3. Some concluding remarks and a brief discussion are presented in Section 4.
## 2 Methodology

### 2.1 Reynolds equation and cavitation conditions

The Reynolds equation describes the flow of a thin lubricant film between two surfaces. This equation was derived from the N-S equation based on several assumptions, including ignoring inertial forces and pressure changes along the thickness direction of the lubrication film. For a Newtonian fluid with constant viscosity, the steady-state Reynolds equation can be expressed as \[\mathbf{\nabla}\cdot(\rho h^{3}\mathbf{\nabla}p)=6\mu\mathbf{U}\cdot\mathbf{\nabla}(\rho h), \tag{1}\] where \(p\) is the pressure, \(h\) is the film thickness, \(\mu\) is the viscosity of the fluid, \(\rho\) is the density of the fluid, and \(\mathbf{U}\) is the relative sliding velocity. Without loss of generality, we can assume that the sliding velocity is always directed along the \(x\)-axis. Then, Eq. (1) can be rewritten as \[\mathbf{\nabla}\cdot(\rho h^{3}\mathbf{\nabla}p)=6\mu U\frac{\partial(\rho h)}{ \partial x}. \tag{2}\]

\(\bullet\) DB condition

The pressure at the boundary is always specified as the ambient pressure \(p_{\partial\Omega}\) for the hydrodynamic lubrication problem, which is the DB condition for the Reynolds equation: \[p=p_{\partial\Omega}. \tag{3}\] The Reynolds equation (Eq. (2)) can be solved with the DB condition (Eq. (3)), but there is a possibility that the pressure value might be smaller than the cavitation pressure \(p_{cav}\) in applications where there is an evanescent gap (e.g., sliding bearings and thrust bearings with surface textures). To bridge the gap between simulation and physical reality, two different cavitation conditions have been developed based on the Reynolds equation, namely the SS cavitation condition and the JFO cavitation condition.

\(\bullet\) SS cavitation condition

The cavitation pressure \(p_{cav}\) is set to zero for simplicity in this study, which can be achieved directly by a translational transformation of the pressure. Then, the SS cavitation condition imposes the following pressure constraints in the cavitation region [7, 8]: \[\mathbf{\nabla}p=0,\quad p=p_{cav}=0, \tag{4}\] which indicates that the pressure is continuous and differentiable throughout the flow field, and it is subject to a physical lower limit. This is a typical obstacle problem [9], and the governing equations in the form of a variational inequality are \[\left\{\begin{aligned} -\mathbf{\nabla}\cdot(\rho h^{3}\mathbf{\nabla}p) \geq-6\mu U\frac{\partial(\rho h)}{\partial x},\\ p\geq 0,\\ \left[\mathbf{\nabla}\cdot(\rho h^{3}\mathbf{\nabla}p)-6\mu U\frac{ \partial(\rho h)}{\partial x}\right]p=0.\end{aligned}\right. \tag{5}\]

\(\bullet\) JFO cavitation condition

In the JFO cavitation model, the fluid region consists of a full-film region and a cavitation region with varying fluid densities. The cavitation fraction \(\theta\) assumes its two extreme values, zero and one, in the bulk of the full-film region and of the cavitation region, respectively. The density of the lubricant film \(\rho\) is a function of the cavitation fraction \(\theta\) and the constant reference density of the lubricant \(\rho_{0}\): \[\rho=\rho_{0}(1-\theta), \tag{6}\] where \(\rho_{0}\) is the constant density in the full-film region. Substituting Eq. (6) into Eq. (2) yields \[\mathbf{\nabla}\cdot(h^{3}\mathbf{\nabla}p)=6\mu U\frac{\partial[(1-\theta)h]}{\partial x}. \tag{7}\] In the full-film region, Eq. (7) can be reduced to an incompressible fluid Reynolds equation.
The JFO cavitation model can be converted into a complementarity problem: \[\mathbf{\nabla}\cdot(h^{3}\mathbf{\nabla}p)=6\mu U\frac{\partial[(1-\theta)h]}{\partial x },\qquad p\theta=0,\qquad p\geq 0,\qquad\theta\geq 0. \tag{8}\] The complementary relationship between pressure and cavitation fraction can be constrained by introducing the FB equation: \[p+\theta-\sqrt{p^{2}+\theta^{2}}=0, \tag{9}\] which enforces the non-negativity of the pressure. To improve the numerical stability of the calculation in practice, the above equations need to be nondimensionalized to obtain \[\frac{\partial}{\partial X}\bigg{(}H^{3}\frac{\partial P}{ \partial X}\bigg{)}+\frac{L^{2}}{B^{2}}\frac{\partial}{\partial Y}\bigg{(}H^{3 }\frac{\partial P}{\partial Y}\bigg{)}=\frac{\partial[(1-\theta)H]}{\partial X}, \tag{10}\] \[X=\frac{x}{L};\quad Y=\frac{y}{B};\quad H=\frac{h}{h_{0}};\quad P=\frac{h_{0}^{2}}{6\eta UL}p;\quad L=2\pi R,\] where \(L\) and \(B\) represent the length and width of the lubrication region, respectively, \(p_{cav}\) represents the cavitation pressure, \(h_{0}\) represents the minimum film thickness, and \(R\) represents the radius of the bearing. It should be noted that Eq. (10) can be viewed as a generalized form of the Reynolds equation, where the cavitation fraction \(\theta\) is set to zero if the JFO cavitation condition is not used.

### 2.2 PINNs for solving Reynolds equation in HL-nets

#### 2.2.1 PINNs for solving Reynolds equation

A scheme of the PINN framework is depicted in Fig. 1. In this scheme, \(u\left(\mathbf{x}\right)\) is approximated by a feed-forward fully connected network, for which the \(\mathbf{x}\) coordinate is the input and \(\mathbf{P_{\Theta}}\) is the output, where \(\mathbf{\Theta}\) denotes the trainable parameter set of the neural network. In this study, \(u\left(\mathbf{x}\right)\) can be the pressure \(P\left(\mathbf{x}\right)\) or the cavitation fraction \(\theta\left(\mathbf{x}\right)\). The fully connected neural network is used to approximate the solution \(P\left(\mathbf{x}\right)\). As shown in Fig. 1, the neural network is composed of an input layer, \(M-1\) hidden layers, and an output layer, as follows: \[\begin{array}{ll}\textbf{Input layer:}&\mathbf{\widetilde{u}}^{[0]}=\mathbf{x},\\ \textbf{Hidden layers:}&\mathbf{\widetilde{u}}^{[m]}=\sigma\big{(}\mathbf{W}^{[m]}\mathbf{ \widetilde{u}}^{[m-1]}+\mathbf{b}^{[m]}\big{)},\text{for}\ \ m=1,2,...,M-1,\\ \textbf{Output layer:}&\mathbf{P_{\Theta}}=\mathbf{\widetilde{u}}^{[M]}=\mathbf{W}^{[M]} \mathbf{\widetilde{u}}^{[M-1]}+\mathbf{b}^{[M]},\end{array} \tag{11}\] where \(\sigma(.)\) is the activation function representing a simple nonlinear transformation, such as \(relu(.)\), \(softmax(.)\), or \(sigmoid(.)\), and \(\mathbf{W}^{[m]}\) and \(\mathbf{b}^{[m]}\) are trainable weights and biases, respectively, at the \(m\)-th layer. All the trainable weights and biases form the trainable parameter set of the neural network, \(\mathbf{\Theta}=\big{\{}\mathbf{W}^{[m]},\mathbf{b}^{[m]}\big{\}}_{1\leq m\leq M}\). The traditional scheme of PINNs for solving the Reynolds equation involves defining the loss of the Reynolds equation \(\mathcal{L}_{R}(\mathbf{\Theta})\) and the loss of the constraint conditions \(\mathcal{L}_{i}(\mathbf{\Theta})\). \(\mathbf{P}(\mathbf{x})\) is the solution of the Reynolds equation, which is approximated by the PINN output \(\mathbf{P_{\Theta}(x)}\).
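As a concrete illustration, the following is a minimal PyTorch sketch of the fully connected network of Eq. (11); the six layers of 20 neurons follow the setup reported later in Section 3.1, and the choice of \(\tanh\) as \(\sigma(.)\) is an assumption of the sketch, not a detail stated by the paper.

```python
import torch

class PressureNet(torch.nn.Module):
    """Feed-forward network of Eq. (11): input coordinates -> hidden layers -> linear output."""
    def __init__(self, n_in=2, n_out=1, width=20, n_hidden=6):
        super().__init__()
        layers = [torch.nn.Linear(n_in, width), torch.nn.Tanh()]
        for _ in range(n_hidden - 1):
            layers += [torch.nn.Linear(width, width), torch.nn.Tanh()]
        layers += [torch.nn.Linear(width, n_out)]   # linear output layer
        self.body = torch.nn.Sequential(*layers)

    def forward(self, xy):                          # xy holds the (X, Y) coordinates
        return self.body(xy)

net = PressureNet()
```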
The parameters of the network in the traditional scheme can be trained by minimizing a composite loss function taking the form \[\mathcal{L}(\mathbf{\Theta})\coloneqq\lambda_{R}\mathcal{L}_{R}(\mathbf{ \Theta})+\sum\nolimits_{i=1}^{N_{c}}\lambda_{i}\mathcal{L}_{i}(\mathbf{\Theta}). \tag{12}\] The mean square error is adopted to measure the loss of the neural network; the Reynolds equation loss \(\mathcal{L}_{R}(\mathbf{\Theta})\) can be defined as \[\mathcal{L}_{R}(\mathbf{\Theta})=\frac{1}{N_{R}}\sum\limits_{i=1}^{N_ {R}}\left[\frac{\partial}{\partial X}\bigg{(}H^{3}\big{(}\mathbf{x}^{i}\big{)} \frac{\partial P_{\mathbf{\Theta}}\big{(}\mathbf{x}^{i}\big{)}}{\partial X}\bigg{)}+ \frac{L^{2}}{B^{2}}\frac{\partial}{\partial Y}\bigg{(}H^{3}\big{(}\mathbf{x}^{i} \big{)}\frac{\partial P_{\mathbf{\Theta}}\big{(}\mathbf{x}^{i}\big{)}}{\partial Y} \bigg{)}-\frac{\partial\left[\left(1-\theta\big{(}\mathbf{x}^{i}\big{)}\right)H \big{(}\mathbf{x}^{i}\big{)}\right]}{\partial X}\right]^{2}, \tag{13}\] where \(N_{R}\) is the number of data points for the bulk domain, and \(\lambda_{R}\) and \(\lambda_{i}\) are weight parameters used to balance the different loss terms. It should be noted that the cavitation fraction \(\theta\big{(}\mathbf{x}^{i}\big{)}\) is set to zero unless the JFO cavitation condition is used. The constraint condition loss \(\mathcal{L}_{i}(\mathbf{\Theta})\) is different for the DB, SS, and JFO cavitation conditions; so, the detailed expressions are not given here. Calculating the residuals of Eq. (13) requires derivatives of the outputs with respect to the inputs, which can be computed conveniently by automatic differentiation. Automatic differentiation capabilities are widespread in deep-learning frameworks such as TensorFlow [48] and PyTorch [49]. Using automatic differentiation eliminates the need to perform tedious derivations or numerical discretizations to calculate the derivatives in space and time. The parameters of the fully connected network are trained using gradient-descent methods based on the backpropagation of the loss function as follows: \[\mathbf{\Theta}^{(k+1)}=\mathbf{\Theta}^{(k)}-\eta\nabla_{\mathbf{\Theta}} \mathcal{L}\big{(}\mathbf{\Theta}^{(k)}\big{)}, \tag{14}\] where \(\eta\) is the learning rate, and \(k\) is the iteration step.

Fig. 1: General structure of PINNs for solving Reynolds equation.

An important part of solving a PDE is to apply boundary conditions or other constraint conditions. Using a soft constraint for the penalizing scheme is the traditional method in PINNs. As expressed in Eq. (12), the total loss \(\mathcal{L}(\mathbf{\Theta})\) is composed of different kinds of loss, and the constraint condition is achieved by reducing the residual between the numerical value and the theoretical value. Although this method is effective for training PINNs with boundary and other constraint conditions, the soft penalizing scheme does not ensure that the constraint conditions are strictly satisfied, resulting in computational errors. Recently, the method of embedding the constraint conditions into the network structure was proposed [50, 51]. Sukumar et al. [50] introduced an approach based on distance fields to exactly impose constraint conditions in PINNs, including Dirichlet, Neumann, and Robin boundary conditions.
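The following is a minimal sketch of how the residual of Eq. (13) can be evaluated with automatic differentiation in PyTorch, with \(\theta\) set to zero; the helper names `H`, `dHdX`, and `L_over_B` are assumptions of the sketch (a film-thickness function of \(X\), its analytical derivative, and the aspect ratio \(L/B\)).

```python
import torch

def reynolds_residual(net, xy, H, dHdX, L_over_B):
    """Pointwise residual of Eq. (13) with theta = 0 (no cavitation)."""
    xy = xy.clone().requires_grad_(True)
    P = net(xy)
    grad = torch.autograd.grad(P.sum(), xy, create_graph=True)[0]
    Px, Py = grad[:, :1], grad[:, 1:]
    h3 = H(xy[:, :1]) ** 3                       # H depends only on X here
    # a second autograd pass gives d/dX (H^3 P_X) and d/dY (H^3 P_Y)
    fx = torch.autograd.grad((h3 * Px).sum(), xy, create_graph=True)[0][:, :1]
    fy = torch.autograd.grad((h3 * Py).sum(), xy, create_graph=True)[0][:, 1:]
    return fx + (L_over_B ** 2) * fy - dHdX(xy[:, :1])

# loss_R = (reynolds_residual(net, xy, H, dHdX, L_over_B) ** 2).mean()
```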
In their method, a distance function is multiplied by the output of the neural network and then added to the value required by the constraint condition, and the result is used as the output of the PINNs; the formula for imposing constraint conditions is \[\boldsymbol{P_{\Theta}}=\varphi\big{(}\boldsymbol{\widetilde{u}}^{[M]}\big{)}, \tag{15}\] where \(M\) denotes the last layer of the neural network, and \(\varphi\) is a function used to apply the constraint condition, which can be the DB or SS cavitation condition. When the imposing method is adopted for a constraint condition or a boundary condition, the corresponding loss term for this condition in Eq. (12) can be canceled.

#### 2.2.2 Multi-task learning (MTL)

The optimization direction and optimization efficiency of the network are directly affected by the loss weight selection. The total loss expression in Eq. (12) can be rewritten in a more general form: \[\mathcal{L}(\boldsymbol{\Theta})\coloneqq\sum\nolimits_{i}^{N_{T}}\lambda_{ i}\mathcal{L}_{i}(\boldsymbol{\Theta}), \tag{16}\] where \(N_{T}\) is the number of loss terms. With a large weight coefficient, the corresponding term contributes more to the total loss, and the optimization process preferentially reduces this loss term. Therefore, it is important to choose reasonable weight coefficients. For classical PINNs with only a PDE loss and a boundary loss, a too-large or too-small weighting factor on the boundary residuals will make the computational results less accurate [52, 53]. It is possible to search for suitable multi-loss weight parameters via manual scaling to make the results of PINNs highly accurate, but this approach is very costly and tedious. Balancing different loss terms during training is typical of MTL, a learning technique that allows multiple tasks to be tackled concurrently based on shared representations [54]. Three different MTL methods are adopted in this study, namely the dynamic weight (DW) method, the uncertainty weight (UW) method, and the projecting conflicting gradient (PCGrad) method.

\(\bullet\) DW method

The DW method [53] is widely used for balancing the loss of governing equations and boundary conditions in the field of PINNs [55]. The main idea of DW is to balance the gradients of the different loss terms to the same order of magnitude during PINN training. The weights are updated as \[\hat{\lambda}_{i}^{(k+1)}=\frac{\overline{|\nabla_{\Theta}\mathcal{L}_{R}(\boldsymbol {\Theta})|}}{\overline{|\nabla_{\Theta}\mathcal{L}_{i}(\boldsymbol{\Theta})|}}, \tag{17}\] \[\lambda_{i}^{(k+1)}=(1-\alpha)\lambda_{i}^{(k)}+\alpha\hat{\lambda}_{i}^{(k+1)},\] where \(\alpha\) is a hyperparameter determining how fast the contributions of the previous dynamic weights \(\lambda_{i}^{(k)}\) decay (it is set to 0.1 in this study), and \(k\) is the training step.

\(\bullet\) UW method

The UW method uses homoscedastic uncertainty to balance the multi-task losses based on maximizing the Gaussian likelihood [56]. When a task's homoscedastic uncertainty is high, the effect of this task on the network weight update is small. The loss function is defined as \[\mathcal{L}(\mathbf{\Theta};\mathbf{s})\coloneqq\sum_{i=1}^{N_{T}}\Big{(}\frac{1}{2}\exp{(- s_{i})}\mathcal{L}_{i}(\mathbf{\Theta})+s_{i}\Big{)}, \tag{18}\] where the log variance \(s_{i}\) balances the task-specific losses during training [57], and \(N_{T}\) is the number of loss terms.
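As an illustration, the following is a minimal PyTorch sketch of the UW loss of Eq. (18); the log variances \(s_{i}\) are trainable parameters optimized jointly with the network parameters \(\mathbf{\Theta}\), and the initialization shown is the one reported for the DB problem in Section 3.2.

```python
import torch

class UncertaintyWeighting(torch.nn.Module):
    """UW loss of Eq. (18): sum_i (0.5 * exp(-s_i) * L_i + s_i) with trainable s_i."""
    def __init__(self, s_init):
        super().__init__()
        self.s = torch.nn.Parameter(torch.tensor(s_init, dtype=torch.float32))

    def forward(self, losses):                 # losses: list of scalar loss terms
        total = 0.0
        for s_i, L_i in zip(self.s, losses):
            total = total + 0.5 * torch.exp(-s_i) * L_i + s_i
        return total

uw = UncertaintyWeighting([2.0, -2.0])         # e.g. s = [2, -2] for [L_R, L_b]
# loss = uw([loss_R, loss_b]); the optimizer receives both net.parameters()
# and uw.parameters() so that Theta and s are minimized jointly.
```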
The goal is to find the best model weights \(\mathbf{\Theta}\) and log variance \(\mathbf{s}\) by minimizing the loss \(\mathcal{L}(\mathbf{\Theta};\mathbf{s})\). It should be noted that the training results of the neural network may be affected by the initialization of the log variance \(s_{i}\).

\(\bullet\) PCGrad method

The PCGrad method is a form of gradient surgery that projects a task's gradient onto the normal plane of the gradient of any other task with a conflicting gradient [58]. The PCGrad method acts on the training of the neural network rather than applying constraint conditions directly; it has been applied to the solution of PINNs and proved to be effective [59]. The parameters of the neural network are trained as follows: \[\mathbf{\Theta}^{(k+1)}=\mathbf{\Theta}^{(k)}-\eta\cdot\text{PCGrad}\big{(}\big{\{} \nabla_{\mathbf{\Theta}}[\mathcal{L}_{i}(\mathbf{\Theta}^{(k)})]\big{\}}\big{)}. \tag{19}\] For a general loss function with multiple loss terms, the training processes of all the MTL methods mentioned above are denoted as \[\mathbf{\Theta}^{(k+1)}=\mathbf{\Theta}^{(k)}-\eta\cdot\text{MTL}\big{(}\big{\{} \nabla_{\mathbf{\Theta}}[\mathcal{L}_{i}(\mathbf{\Theta}^{(k)})]\big{\}}\big{)}, \tag{20}\] where MTL(\(\cdot\)) represents an MTL method, specifically DW(\(\cdot\)), UW(\(\cdot\)), or PCGrad(\(\cdot\)) in this study.

### 2.3 PINNs for cavitation conditions in HL-nets

There are three different constraint conditions in HL-nets, namely the DB condition, the SS cavitation condition, and the JFO cavitation condition. As described above, there are two different schemes to apply constraint conditions, the penalizing scheme and the imposing scheme. In this section, the implementation of the two schemes for the three different constraint conditions is described.

#### 2.3.1 PINNs for solving Reynolds equation with Dirichlet boundary condition

Satisfying the DB condition is the basis for solving the Reynolds equation in hydrodynamic lubrication. It should be noted that when applying the SS or JFO cavitation conditions, the DB condition should also be satisfied at the boundary of the computational domain. For simplicity, we denote the penalizing scheme for the DB condition as the "Pe-DB" scheme and the imposing scheme for the DB condition as the "Imp-DB" scheme.

\(\bullet\) Pe-DB scheme

The DB condition loss \(\mathcal{L}_{b}(\mathbf{\Theta})\) is defined as \[\mathcal{L}_{b}(\mathbf{\Theta})=\frac{1}{N_{b}}\sum_{i=1}^{N_{b}}[\mathbf{P}_{\mathbf{ \Theta}}\big{(}\mathbf{x}^{i}\big{)}-P_{\partial\Omega}\big{(}\mathbf{x}^{ i}\big{)}]^{2}. \tag{21}\] The parameters of the fully connected network are trained based on the backpropagation of the loss function consisting of the Reynolds equation loss \(\mathcal{L}_{R}(\mathbf{\Theta})\) and the DB condition loss \(\mathcal{L}_{b}(\mathbf{\Theta})\). The multiple loss terms in the Pe-DB scheme are denoted as \[\mathcal{L}(\mathbf{\Theta})=[\mathcal{L}_{R}(\mathbf{\Theta}),\mathcal{L}_{b}(\mathbf{ \Theta})]. \tag{22}\]

\(\bullet\) Imp-DB scheme

An approximate distance function \(\mathbf{\phi}\) to the boundary of the domain is multiplied by the output of the neural network and then added to the boundary value function \(\mathbf{P}_{\partial\Omega}\) required by the boundary condition, and the result is used as the output of the PINNs [50]. Eliminating one loss term significantly improves the accuracy and stability of the calculation.
The specific formula is \[\mathbf{P}_{\mathbf{\Theta}}=\varphi\big{(}\widetilde{\mathbf{P}}^{[M]}\big{)}=\mathbf{P}_{\partial\Omega}+\mathbf{\phi}\widetilde{\mathbf{P}}^{[M]}, \tag{23}\] where \(\mathbf{\phi}\) is the distance function, with a value of zero at the boundaries. In this scheme, only the Reynolds equation residuals remain in the loss function; so, the loss term is denoted as \[\mathcal{L}(\mathbf{\Theta})=[\mathcal{L}_{R}(\mathbf{\Theta})]. \tag{24}\]

#### 2.3.2 PINNs for solving Reynolds equation with SS cavitation condition

The SS cavitation condition requires that the solved pressure field be non-negative, according to Eq. (5). We propose two methods to approximate the non-negative and differentiable solution, namely the penalizing SS cavitation condition scheme (denoted as the "Pe-SS" scheme) and the imposing SS cavitation condition scheme (denoted as the "Imp-SS" scheme). The Pe-SS scheme penalizes the residuals of the non-negativity by adding a loss term; the Imp-SS scheme constructs a differentiable non-negative output of the neural network, which imposes the condition in the PINNs. Since the solution is constrained to a non-negative pressure field, the pressure converges to zero in the cavitation region in both its value and its gradient. When dealing with the SS cavitation condition, the source term \(\partial H\big{(}\mathbf{x}^{i}\big{)}/\partial X\) on the right-hand side of the Reynolds equation should be set to zero in the cavitation region; otherwise, this term would make the residual at that point non-zero. Since the cavitation region is unknown at the beginning of the calculation, it needs to be identified during the solution process. According to Eq. (4), the cavitation region is characterized by the logical condition \((P<0)\,||\,(P=0\ \&\ |\nabla P|=0)\); accordingly, we define the indicator \(\varepsilon_{P_{\mathbf{\Theta}}}\), which equals zero in the cavitation region and one in the full-film region: \[\varepsilon_{P_{\mathbf{\Theta}}}=1-\varepsilon_{+}(P)-\varepsilon_{-}(|P|) \varepsilon_{-}(|\nabla P|), \tag{25}\] where \(\varepsilon_{+}(x)\) and \(\varepsilon_{-}(x)\) are step functions, which are given as \[\varepsilon_{+}(x)=\begin{cases}0,x\geq 0\\ 1,x<0\end{cases},\quad\varepsilon_{-}(x)=\begin{cases}0,x>0\\ 1,x\leq 0\end{cases}. \tag{26}\] Thus, the Reynolds equation loss with the SS cavitation condition is expressed as \[\mathcal{L}_{R}(\mathbf{\Theta})=\frac{1}{N_{R}}\sum_{i=1}^{N_{R}}\left[\frac{ \partial}{\partial X}\bigg{(}H^{3}\big{(}\mathbf{x}^{i}\big{)}\frac{\partial P _{\mathbf{\Theta}}\big{(}\mathbf{x}^{i}\big{)}}{\partial X}\bigg{)}+\frac{L^{2}}{B^{2}} \,\frac{\partial}{\partial Y}\bigg{(}H^{3}\big{(}\mathbf{x}^{i}\big{)}\frac{ \partial P_{\mathbf{\Theta}}\big{(}\mathbf{x}^{i}\big{)}}{\partial Y}\bigg{)}- \varepsilon_{P_{\mathbf{\Theta}}}\frac{\partial H\big{(}\mathbf{x}^{i}\big{)}}{\partial X}\right]^{2}. \tag{27}\]

\(\bullet\) Pe-SS scheme

During the iterative computation, there are always positive and negative pressures; the goal is to make the positive-pressure region converge to the Reynolds equation and to drive the negative-pressure region to zero, so that the final system of PDEs (Eq. (5)) is satisfied. The non-negativity loss is \[\mathcal{L}_{SS}(\mathbf{\Theta})=\frac{1}{N_{R}}\sum_{i=1}^{N_{R}}\{Relu\big{[} -P_{\mathbf{\Theta}}\big{(}\mathbf{x}^{i}\big{)}\big{]}\}^{2}, \tag{28}\] where \(Relu\) is the linear rectification function.
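A minimal sketch of these two ingredients is given below (an illustration under stated assumptions, not the authors' code): the indicator of Eq. (25) built from the step functions of Eq. (26), and the non-negativity penalty of Eq. (28); `gradP_norm` stands for a precomputed \(|\nabla P|\) at the collocation points.

```python
import torch

def full_film_indicator(P, gradP_norm):
    """Eq. (25): equals 1 in the full-film region and 0 in the cavitation region."""
    eps_plus = (P < 0).float()                 # step function eps_+ of Eq. (26)
    eps_minus_p = (P.abs() <= 0).float()       # eps_- applied to |P|
    eps_minus_g = (gradP_norm <= 0).float()    # eps_- applied to |grad P|
    return 1.0 - eps_plus - eps_minus_p * eps_minus_g

def pe_ss_loss(P):
    """Non-negativity penalty of Eq. (28)."""
    return (torch.relu(-P) ** 2).mean()
```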
\(\bullet\) Imp-SS scheme

Inspired by the scheme for imposing boundary conditions [50], we propose Imp-SS to make the PINNs exactly satisfy the non-negativity requirement, constructing a differentiable and non-negative function as the activation function of the output layer of the neural network. Here, we use the simple and efficient Relu-square function, which is defined as \[Relu^{2}(x)=\left\{\begin{matrix}x^{2},&x\geq 0\\ 0,&x<0\end{matrix}\right. \tag{29}\] so that non-negativity and differential continuity are preserved. Then, the output of the neural network is \[P_{\mathbf{\Theta}}=\varphi\big{(}\tilde{P}^{[M]}\big{)}=Relu^{2}\big{(}\tilde{P}^ {[M]}\big{)}. \tag{30}\] As described above, applying the DB condition is the basis for solving the Reynolds equation. When the two SS schemes and the two DB schemes are combined in pairs, we obtain four different schemes to simulate hydrodynamic lubrication with SS cavitation, and the loss terms of each pair are obtained by summing the loss terms generated by the two schemes comprising the pair; the combinations are listed in Table 1.

#### 2.3.3 PINNs for solving Reynolds equation with JFO cavitation condition

Compared with the SS cavitation condition, the JFO cavitation condition has an additional FB equation (Eq. (9)), which is used to achieve non-negative pressure. The JFO cavitation condition is naturally suited to PINNs, since the cavitation fraction parameter \(\theta\) in the FB complementarity equation is treated as a continuously differentiable network output. Fig. 2 shows the frame of HL-nets for solving the Reynolds equation with the JFO cavitation condition. Based on the dimensionless Reynolds equation (Eq. (10)), the Reynolds equation loss is defined as \[\mathcal{L}_{R}(\mathbf{\Theta})=\frac{1}{N_{R}}\sum_{i=1}^{N_{R}}\left[\frac{ \partial}{\partial X}\bigg{(}H^{3}\big{(}\mathbf{x}^{i}\big{)}\frac{\partial P_{ \mathbf{\Theta}}(\mathbf{x}^{i})}{\partial X}\bigg{)}+\frac{L^{2}}{B^{2}}\frac{ \partial}{\partial Y}\bigg{(}H^{3}\big{(}\mathbf{x}^{i}\big{)}\frac{\partial P_{ \mathbf{\Theta}}(\mathbf{x}^{i})}{\partial Y}\bigg{)}-\frac{\partial\left[\left(1- \theta_{\mathbf{\Theta}}(\mathbf{x}^{i})\right)H(\mathbf{x}^{i})\right]}{\partial X }\right]^{2}. \tag{31}\] In addition to the loss term of the Reynolds equation, a loss term for the FB complementarity function is added to constrain the complementarity of the pressure and cavitation fraction; the FB equation loss is defined as \[\mathcal{L}_{FB}(\mathbf{\Theta})=\frac{1}{N_{R}}\sum_{i=1}^{N_{R}}\left[P_{\mathbf{ \Theta}}\big{(}\mathbf{x}^{i}\big{)}+\theta_{\mathbf{\Theta}}\big{(}\mathbf{x}^{i}\big{)} -\sqrt{P_{\mathbf{\Theta}}(\mathbf{x}^{i})^{2}+\theta_{\mathbf{\Theta}}(\mathbf{x}^{i})^{2}} \right]^{2}. \tag{32}\] Although the FB function (Eq. (9)) contains a constraint for non-negative pressure values, it is not strictly satisfied because it is a soft constraint. When the pressure in the cavitation zone is not exactly equal to zero and fluctuates around zero, the second-order pressure term in the equation residual does not vanish, which changes the nature of the equation and significantly impacts the convection equation for the cavitation fraction. To alleviate this problem in the PINNs, we can add an additional non-negativity constraint on the pressure (\(p\geq 0\)), as in the SS cavitation condition; for the implementation in PINNs, the Imp-SS scheme (see Section 2.3.2) can be adopted here due to its good performance (see Section 3.3).
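A minimal PyTorch sketch of the Imp-SS output of Eq. (30) together with the FB residual of Eq. (32) is given below; the small constant inside the square root is a numerical safeguard added for the sketch (the gradient of \(\sqrt{\cdot}\) is singular at zero), and the sigmoid output for \(\theta\) follows the description in the next paragraph.

```python
import torch

def relu_squared(z):
    """Eq. (29): continuous, differentiable, and non-negative."""
    return torch.clamp(z, min=0.0) ** 2

def jfo_outputs(net_p, net_theta, xy):
    P = relu_squared(net_p(xy))            # Imp-SS pressure output, Eq. (30)
    theta = torch.sigmoid(net_theta(xy))   # cavitation fraction kept in [0, 1]
    return P, theta

def fb_loss(P, theta, eps=1e-12):
    """Fischer-Burmeister residual of Eq. (32) enforcing P * theta = 0."""
    fb = P + theta - torch.sqrt(P ** 2 + theta ** 2 + eps)
    return (fb ** 2).mean()
```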
With the Imp-SS scheme of the previous section, the Relu-square activation function for the output layer of the pressure is used to ensure that the pressure is non-negative. Part of the constraint task of the FB function is thereby satisfied in advance, so that the pressure is exactly equal to zero in the cavitation region. Besides, the output layer of the cavitation fraction uses the sigmoid activation function to limit the cavitation fraction \(\theta\) to between zero and one.

\begin{table} \begin{tabular}{c c} \hline Scheme & Loss terms \(\mathcal{L}(\mathbf{\Theta})\) \\ \hline Pe-DB \& Pe-SS & \([\mathcal{L}_{R}(\mathbf{\Theta}),\,\mathcal{L}_{b}(\mathbf{\Theta}),\,\mathcal{L}_{SS}( \mathbf{\Theta})]\) \\ Imp-DB \& Pe-SS & \([\mathcal{L}_{R}(\mathbf{\Theta}),\,\mathcal{L}_{SS}(\mathbf{\Theta})]\) \\ Pe-DB \& Imp-SS & \([\mathcal{L}_{R}(\mathbf{\Theta}),\,\mathcal{L}_{b}(\mathbf{\Theta})]\) \\ Imp-DB \& Imp-SS & \([\mathcal{L}_{R}(\mathbf{\Theta})]\) \\ \hline \end{tabular} \end{table} Table 1: Loss terms for solving the Reynolds equation with the SS cavitation condition in HL-nets.

When Imp-DB is applied, the multiple loss terms are denoted as \[[\mathcal{L}_{i}(\boldsymbol{\Theta})]=[\mathcal{L}_{R}(\boldsymbol{\Theta}), \mathcal{L}_{FB}(\boldsymbol{\Theta})]. \tag{33}\]

## 3 Results and Discussion

To illustrate the prediction performance of HL-nets in solving the Reynolds equation, the flow fields of a bearing are simulated, which is a typical hydrodynamic lubrication problem. We first solve the Reynolds equation without considering cavitation to compare the accuracy of the Pe-DB scheme and the Imp-DB scheme for the DB condition; then, for hydrodynamic lubrication involving cavitation, we compare the different schemes for the SS and JFO cavitation conditions.

### 3.1 Problem setup

In this section, we apply HL-nets to an idealized oil-lubricated bearing, as shown in Fig. 3, with the dimensions being the same as those of the bearing studied in Ref. [60] and presented in Table 2. An absolute pressure scale is used, with an ambient pressure of \(p_{\partial\Omega}=72\) kPa and the cavitation pressure arbitrarily taken as \(p_{cav}=0\) kPa. The bearing surface is kept free of inlet holes or grooves for simplicity. In this numerical experiment, the journal is fixed at an eccentricity ratio of 0.6, with the journal rotating at 459 r/min.

Figure 3: Geometry of bearing and Reynolds equation domain.

Figure 2: Frame of HL-nets for solving Reynolds equation with JFO cavitation condition.

The film thickness for the journal bearing and the DB condition are set to \[h=e\cos\big{(}2\pi(X+0.5)\big{)}+c;\quad h_{0}=c, \tag{34}\] \[P_{\partial\Omega}=\frac{h_{0}^{2}}{6\eta UL}p_{\partial\Omega}. \tag{35}\] The coordinates \(\{x,y\}\) are mapped to \(\{X,Y\}\in[-0.5,0.5]\times[-0.5,0.5]\), because the input parameters of the neural network are more suitable for a symmetric distribution. The computational domain and the dimensionless film thickness are shown in Fig. 4. To evaluate the accuracy of the HL-nets result, the high-precision FEM result is assumed as the reference value.
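To make the setup concrete, the following is a minimal sketch of the dimensionless film thickness corresponding to Eq. (34) (with \(H=h/h_{0}\), so that the eccentricity ratio 0.6 enters directly) and a uniform collocation grid on the mapped domain; the \(50\times 50\) grid shape is an assumption of the sketch, chosen to be consistent with the 2,500 interior points reported below.

```python
import torch

e = 0.6                                       # eccentricity ratio (Table 2)

def H(X):
    """Dimensionless film thickness H = h / c from Eq. (34)."""
    return e * torch.cos(2.0 * torch.pi * (X + 0.5)) + 1.0

def dHdX(X):
    """Analytical derivative dH/dX used in the Reynolds residual."""
    return -2.0 * torch.pi * e * torch.sin(2.0 * torch.pi * (X + 0.5))

g = torch.linspace(-0.5, 0.5, 50)
GX, GY = torch.meshgrid(g, g, indexing="ij")
xy = torch.stack([GX.flatten(), GY.flatten()], dim=1)   # 2,500 collocation points
```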
The relative \(L_{1}\) error \(L_{1-error}\) and the relative \(L_{2}\) error \(L_{2-error}\) are defined as follows: \[L_{1-error}(P)=\frac{\left\|\hat{P}-P\right\|_{1}}{\left\|\hat{P}\right\|_{1}},\quad L_{2-error}(P)=\frac{\left\|\hat{P}-P\right\|_{2}}{\left\|\hat{P}\right\|_{2}}, \tag{36}\] where \(\hat{P}\) denotes the reference pressure, \(P\) represents the pressure inferred by the PINNs, and \(\left\|\cdot\right\|_{1}\) and \(\left\|\cdot\right\|_{2}\) are the \(L_{1}\)-norm and \(L_{2}\)-norm, respectively. The absolute error \(P_{error}\) is defined as \[P_{error}=\left|\hat{P}-P\right|, \tag{37}\] and the errors of the cavitation fraction \(\theta\) can be calculated in the same way. The reference pressure field \(\hat{P}\) and cavitation fraction fields are obtained in this study by discretizing the equations using the high-precision FEM. The SS cavitation condition is applied by setting the negative pressure to zero at each iterative computational step. The JFO cavitation condition is applied using the stabilization method SUPG [20], and the pressure and cavitation fraction are solved simultaneously using the Fischer-Burmeister-Newton-Schur method [22].

\begin{table} \begin{tabular}{c c} \hline Parameter & Value \\ \hline Radius \(R\) (mm) & 50 \\ Length \(L\) (mm) & \(8R/3\) \\ Radial clearance \(c\) (m) & 145.5e-6 \\ Viscosity \(\eta\) (Pa \(\cdot\) s) & 0.0127 \\ Eccentricity ratio \(e\) & 0.6 \\ Rotating speed \(n\) (r/min) & 459 \\ Cavitation pressure \(p_{cav}\) (kPa) & 0 \\ Ambient pressure \(p_{\partial\Omega}\) (kPa) & 72 \\ \hline \end{tabular} \end{table} Table 2: Structural and operating parameters of the bearing.

In the following numerical tests, the points selected in the domain and on the boundary are fixed; a total of 2,500 points in the computational domain are used to compute the equation loss of the PINNs, and 100 points on each boundary are used to compute the loss of the boundary conditions. All collocation points are generated using uniformly distributed sampling for better stability. The fully connected neural network consists of six layers, each containing 20 neurons. In the following case studies, the training procedure applies the full-batch Adam optimizer with decreasing learning rates \(\eta=10^{-3},\ 10^{-4}\), and \(10^{-5}\), each stage involving 20,000 epochs.

### 3.2 Comparison of schemes for Dirichlet boundary condition

In this subsection, to verify the treatment of the DB condition, the use of PINNs to solve the Reynolds equation without cavitation is considered, where unphysical negative pressure is allowed. The DB condition treatment is important because it can greatly affect the accuracy and stability of the calculation with cavitation in the next section. In this section, the Pe-DB and Imp-DB schemes are used separately. Since the Pe-DB scheme contains multiple loss terms, the three MTL methods UW, DW, and PCGrad are combined with the Pe-DB scheme to balance the loss of the Reynolds equation and the DB condition. The initial parameter of the UW method is set to \(\mathbf{s}=[2,-2]\). The DB condition loss \(\mathcal{L}_{b}\) and Reynolds equation loss \(\mathcal{L}_{R}\) of the different DB condition schemes and MTL methods during the training process are shown in Fig. 5. From the training graph of losses in Fig.
5, we can observe that the traditional penalizing scheme (Pe-DB) without the MTL method produces both the largest \(\mathcal{L}_{b}\) and the largest \(\mathcal{L}_{R}\), and the \(\mathcal{L}_{b}\) is 1-2 orders of magnitude larger than the results of Pe-DB with the MTL method. Furthermore, the DW method is the most effective in restraining the DB condition loss \(\mathcal{L}_{b}\), while it produces a larger value of equation loss \(\mathcal{L}_{R}\). In terms of equation loss \(\mathcal{L}_{R}\), the Imp-DB scheme and the Pe-DB scheme with the UW method produce the same order of magnitude of the loss, which is smaller than that of Pe-DB with PCGrad or DW. Besides, the Imp-DB scheme has no boundary loss, and its equation loss \(\mathcal{L}_{R}\) can reach a low order of magnitude more quickly during the training process. Fig. 5 Training loss of equation residuals \(\mathcal{L}_{R}\) and boundary condition residuals \(\mathcal{L}_{b}\) with different schemes. Fig. 6 shows the comparison of the pressure value of FEM, the pressure predicted by the Imp-DB scheme, and the pressure predicted by the Pe-DB scheme without an MTL method. It is found that the results of the traditional Pe-DB scheme without an MTL method do not match the FEM results, whereas the results of the Imp-DB scheme agree well with the FEM results. However, when any one of the three MTL methods is employed in the Pe-DB scheme, the accuracy is greatly improved; the results of the Pe-DB scheme with MTL methods are not included in Fig. 6 because the results are indistinguishable from the results of the Imp-DB scheme. Both the Pe-DB scheme with MTL and the Imp-DB scheme produce results with sufficient accuracy for this hydrodynamic lubrication problem without cavitation. In addition, it is difficult to balance the boundary condition loss \(\mathcal{L}_{b}\) and Reynolds equation loss \(\mathcal{L}_{R}\) in this hydrodynamic lubrication problem; so, an MTL method is necessary for the Pe-DB scheme to obtain results with acceptable accuracy. Fig. 7 shows the error \(P_{error}\) between the PINN solution and the solution obtained by FEM. The results of the Imp-DB scheme are the most accurate, and \(P_{error}\) at the boundary is equal to zero, which proves that the Imp-DB scheme can ensure that the DB conditions are strictly satisfied. However, for the traditional Pe-DB scheme, the primary error lies in the boundary conditions since the penalizing scheme does not ensure that the boundary conditions are strictly satisfied. Besides, the maximum error on the boundary extends into the bulk area of the computational domain, which indicates that the satisfaction of the boundary conditions affects the global computational accuracy. Fig. 6 Comparison of the exact pressure value calculated by FEM, the pressure predicted by the Imp-DB scheme, and the pressure predicted by the traditional Pe-DB scheme. The left part represents the exact pressure field calculated by FEM. For a more quantitative analysis, the \(L_{1-error}\) and \(L_{2-error}\) of the pressure field obtained by different schemes are presented in Table 3. The results represent the mean \(\pm\) standard deviation from five independent runs with independent initial network parameters. According to the results in Table 3, the Pe-DB scheme without an MTL method performs worst, and both its \(L_{1-error}\) and \(L_{2-error}\) are at least two orders of magnitude larger than those of any other scheme. 
The three MTL methods adopted in the Pe-DB scheme are listed in descending order of accuracy as DW, UW, and PCGrad in this case. The DW method outperforms the other MTL methods with \(L_{1-error}\) = 2.389e-03 \(\pm\) 6.950e-04 and \(L_{2-error}\) = 2.665e-03 \(\pm\) 7.574e-04; the relative error of the UW method has the same order of magnitude as that of the DW method, and the relative error of the PCGrad method is an order of magnitude larger than that of the other MTL methods. Combined with the training loss curves in Fig. 5, it is found that the boundary loss \(\mathcal{L}_{b}\) is more important than the equation loss \(\mathcal{L}_{R}\): the MTL method that satisfies the boundary loss \(\mathcal{L}_{b}\) more precisely obtains better overall computational accuracy. The DW method tends to produce a larger weight \(\lambda_{b}\) for the boundary loss \(\mathcal{L}_{b}\) than the other two MTL methods; so, it can obtain more precise prediction results. In addition, we consider different sets of initial log variance, and the experimental results show that the log variance \(\boldsymbol{s}\) converges to \([-1.186\text{e}01\pm 6.203\text{e}{\text{-}}01,-1.706\text{e}01\pm 5.302\text{e} {\text{-}}01]\), which indicates that the UW method is not sensitive to the initial weights in HL-nets (see Appendix A). Besides, the PCGrad method can also effectively balance the loss to obtain a result with acceptable accuracy, although lower than that of the other methods.

Figure 7: Error \(P_{error}\) between the PINN solution and the solution obtained by FEM.

The Imp-DB scheme changes the neural network optimization into a single-objective unconstrained optimization, which allows the optimization objective to focus on the loss of the equations without considering the residuals of the output values at the boundary, so that the equations can be solved with high accuracy to obtain the pressure field. The \(L_{1-error}\) and \(L_{2-error}\) of the Imp-DB scheme can reach \(3.368\text{e}{\text{-}}05\pm 8.274\text{e}{\text{-}}06\) and \(5.508\text{e}{\text{-}}05\pm 1.487\text{e}{\text{-}}05\), respectively, which are two orders of magnitude smaller than those of the traditional Pe-DB scheme with any MTL method. Besides, as shown in Fig. 7(E), the errors of the Imp-DB scheme are mainly concentrated in the region of the positive-to-negative pressure transition in the middle of the flow field, where the pressure gradient is large; so, there are some difficulties for PINNs in solving this region with larger gradients.

### 3.3 Cavitation problem with Swift-Stieber cavitation condition

The SS cavitation condition is implemented in solving the Reynolds equation in this section, which can make the pressure field closer to physical reality. When applying the SS cavitation condition, the DB condition should also be applied at the boundaries. There are two schemes for the DB condition and two schemes for the SS cavitation condition; by combining these schemes into pairs, we can obtain four different methods for the cavitation problem with the SS cavitation condition: 1) Pe-DB & Pe-SS, 2) Imp-DB & Pe-SS, 3) Pe-DB & Imp-SS, and 4) Imp-DB & Imp-SS. In this section, we examine the computational accuracy of these four different methods with different MTL methods for cavitation problems. The initial parameter of the UW method is set to \([5,-5,10]\), \([5,10]\), and \([5,-5]\) for Pe-DB & Pe-SS, Imp-DB & Pe-SS, and Pe-DB & Imp-SS, respectively. The influence of the initial parameter is discussed in Appendix A.
The loss variation during the PINN training process, as shown in Fig. 8, indicates that the PINNs reached a steady state after \(6\times 10^{4}\) epochs. Fig. 9 shows the absolute error \(P_{error}\) distribution between the PINN solution and the solution obtained by FEM. The relative errors \(L_{1-error}\) and \(L_{2-error}\) of the pressure field obtained by the different schemes are presented in Table 4. The results represent the mean \(\pm\) standard deviation from five independent runs with independent initial network parameters.

\begin{table} \begin{tabular}{c c c c} \hline BC scheme & MTL method & \(L_{1-error}\) (P) & \(L_{2-error}\) (P) \\ \hline Pe-DB & - & 1.319e-01 \(\pm\) 7.362e-02 & 1.489e-01 \(\pm\) 8.777e-02 \\ Pe-DB & DW & 2.389e-03 \(\pm\) 6.950e-04 & 2.665e-03 \(\pm\) 7.574e-04 \\ Pe-DB & UW & 5.322e-03 \(\pm\) 1.846e-03 & 5.568e-03 \(\pm\) 1.804e-03 \\ Pe-DB & PCGrad & 9.977e-03 \(\pm\) 1.347e-03 & 1.154e-02 \(\pm\) 1.873e-03 \\ \hline **Imp-DB** & **-** & **3.368e-05 \(\pm\) 8.274e-06** & **5.508e-05 \(\pm\) 1.487e-05** \\ \hline \end{tabular} \end{table} Table 3: Performance comparison for the Reynolds equation without cavitation.

The Pe-DB & Pe-SS scheme contains three loss terms, and only combining this scheme with the UW method with the appropriate initial parameters can deliver barely acceptable results, whereas combining this scheme with either of the other two MTL methods results in failure. As shown in Fig. 9(A2), Pe-DB & Pe-SS with UW can better meet the DB condition and ensure that the error is mainly distributed in the transition zone between the pressure extremes and the cavitation zone, which also has a large gradient; the relative \(L_{1}\) error and \(L_{2}\) error of Pe-DB & Pe-SS with the UW scheme are \(1.126\)e-\(02\pm 5.301\)e-\(03\) and \(1.755\)e-\(02\pm 7.533\)e-\(03\), respectively. Applying the imposing scheme can reduce the loss terms and thereby simplify optimization.

Fig. 8: Training losses of different residuals.

According to Table 1, the Imp-DB & Pe-SS scheme contains the Reynolds equation loss \(\mathcal{L}_{R}\) and the SS loss \(\mathcal{L}_{SS}\); the boundary loss \(\mathcal{L}_{b}\) is canceled due to the implementation of the Imp-DB scheme. The Imp-DB & Pe-SS scheme with the UW or PCGrad method can effectively treat the loss of the Reynolds equation and the SS cavitation condition. The relative \(L_{1}\) error and the relative \(L_{2}\) error of the UW method can reach 4.697e-03 \(\pm\) 1.291e-03 and 6.269e-03 \(\pm\) 1.689e-03, respectively. As shown in Fig. 9(B2), the maximum values of the absolute error \(P_{error}\) are located between the peak pressure and the cavitation region, where the pressure obtained by the PINNs with the UW method is larger than the reference value. For the Imp-DB & Pe-SS scheme with the PCGrad method, the relative \(L_{1}\) error and the relative \(L_{2}\) error are 6.034e-03 \(\pm\) 8.113e-04 and 8.704e-03 \(\pm\) 1.917e-03, respectively. Even though the PCGrad method appears to produce a larger error than the UW method, unlike UW it does not require any initial parameters. From Fig. 9(B3), we can see that the absolute error \(P_{error}\) is located between the peak pressure and the cavitation region, and the pressure obtained by the PINNs is smaller than the reference value. As discussed in the previous section, the DW method outperforms the other two MTL methods for hydrodynamic lubrication with the DB condition, but it produces the worst results when the Pe-SS scheme is adopted.
The non-negativity constraint cannot be strictly satisfied in the cavitation region when penalizing schemes are used, and we find that the DW method handles the non-negativity loss with too-large weights, thereby preventing the convergence of the loss of the Reynolds equation. As shown in Fig. 8(A, B), the non-negativity loss \(\mathcal{L}_{SS}\) of the DW method is far smaller than that of the other two MTL methods, while the equation loss \(\mathcal{L}_{R}\) is far larger than that of the others. The Pe-DB & Imp-SS scheme with the DW method can obtain high solution accuracy, with relative \(L_{1}\) and \(L_{2}\) errors of 6.999e-04 \(\pm\) 2.275e-05 and 8.398e-04 \(\pm\) 2.637e-05, respectively, which are both one order of magnitude smaller than those of the other schemes. As shown in Fig. 9(C1), in addition to the boundary error, the error is also distributed in the transition region between the high-pressure and cavitation regions near the boundary. The DW method is more suitable for dealing with the equation loss and the boundary condition loss than the other MTL methods. It can be noted that the PCGrad method achieves a barely acceptable level of accuracy, and the error is mainly distributed on the boundary. With appropriate initial parameters, the UW method can also achieve high accuracy. The Imp-DB & Imp-SS scheme can eliminate both the boundary and SS cavitation condition residuals, and it obtains the best solution accuracy, with relative \(L_{1}\) and \(L_{2}\) errors of 1.331e-04 \(\pm\) 5.038e-06 and 3.491e-04 \(\pm\) 1.240e-04, respectively, which are significantly smaller than those of all the other schemes. As shown in Fig. 9(D), the errors are likewise mainly distributed in the transition part of the high-pressure and cavitation regions near the boundary, where there are fluctuations. This scheme transforms the multi-objective optimization problem into an unconstrained single-objective optimization with the best accuracy and stability. The comparison of the exact pressure and the pressure predicted by the Imp-DB & Imp-SS scheme is shown in Fig. 10, which illustrates the high accuracy of HL-nets for solving the Reynolds equation with the SS cavitation condition.

Figure 9: Absolute error \(\mathit{P}_{error}\) between the PINN solution and the solution obtained by FEM.

### 3.4 Cavitation problem with JFO cavitation condition

In this section, the JFO cavitation condition is implemented in solving the Reynolds equation. Our computational experience shows that the Pe-DB scheme fails at solving the pressure and cavitation fraction fields; so, only the Imp-DB scheme is adopted to implement the DB condition in this cavitation problem with the JFO cavitation condition. Besides, the Imp-SS scheme can be used to apply another non-negativity constraint on the pressure in addition to the FB function, which is optional in this section. Since the loss function contains multiple loss terms, the three MTL methods UW, DW, and PCGrad are used to balance the loss. The initial parameter of the UW method is set to \([5,-3]\). The influence of the initial parameter is discussed in Appendix A. Training losses of the Reynolds equation residuals \(\mathcal{L}_{R}\) and FB function residuals \(\mathcal{L}_{FB}\) with different MTL methods are shown in Fig. 11. The training loss curve is smoother over time when the Imp-SS scheme is applied to strengthen the non-negativity constraint of the pressure. In addition, the loss values in Fig. 11(A) are overall smaller than those in Fig.
In addition, the loss values in Fig. 11(A) are overall smaller than those in Fig. 11(B), which indicates the effectiveness of imposing non-negativity with the JFO cavitation condition.

\begin{table}
\begin{tabular}{c c c c c}
\hline
BC scheme & SSC scheme & MTL method & \(L_{1-error}(P)\) & \(L_{2-error}(P)\) \\
\hline
Pe-DB & Pe-SS & - & \(>\)1 & \(>\)1 \\
Pe-DB & Pe-SS & DW & \(>\)1 & \(>\)1 \\
Pe-DB & Pe-SS & UW & 1.126e-02 \(\pm\) 5.301e-03 & 1.755e-02 \(\pm\) 7.533e-03 \\
Pe-DB & Pe-SS & PCGrad & 3.789e-01 \(\pm\) 8.934e-02 & 3.877e-01 \(\pm\) 7.895e-02 \\
\hline
Imp-DB & Pe-SS & - & 1.810e-02 \(\pm\) 1.067e-02 & 3.077e-02 \(\pm\) 1.856e-02 \\
Imp-DB & Pe-SS & DW & 1.405e-01 \(\pm\) 3.059e-03 & 2.241e-01 \(\pm\) 4.758e-03 \\
Imp-DB & Pe-SS & UW & 4.697e-03 \(\pm\) 1.291e-03 & 6.269e-03 \(\pm\) 1.689e-03 \\
Imp-DB & Pe-SS & PCGrad & 6.034e-03 \(\pm\) 8.113e-04 & 8.704e-03 \(\pm\) 1.917e-03 \\
\hline
Pe-DB & Imp-SS & - & 3.806e-01 \(\pm\) 3.291e-01 & 3.743e-01 \(\pm\) 3.066e-01 \\
Pe-DB & Imp-SS & DW & 6.999e-04 \(\pm\) 2.275e-05 & 8.398e-04 \(\pm\) 2.637e-05 \\
Pe-DB & Imp-SS & UW & 2.007e-03 \(\pm\) 1.006e-03 & 3.173e-03 \(\pm\) 1.286e-03 \\
Pe-DB & Imp-SS & PCGrad & 1.467e-02 \(\pm\) 3.943e-03 & 1.639e-02 \(\pm\) 4.542e-03 \\
\hline
**Imp-DB** & **Imp-SS** & **-** & **1.331e-04 \(\pm\) 5.038e-06** & **3.491e-04 \(\pm\) 1.240e-04** \\
\hline
\end{tabular}
\end{table} Table 4: Performance comparison of different schemes for the cavitation problem with SS cavitation condition.

Figure 10: Comparison of the exact pressure and the pressure predicted by the Imp-DB & Imp-SS scheme. The left part represents the exact pressure field obtained by FEM.

The losses of the Reynolds equation trained by the PCGrad and UW methods are close. The DW method fails to balance the loss of the Reynolds equation and the FB function: too large a weight factor is assigned to the FB function loss, which makes the complementarity preferentially guaranteed, so the cavitation fraction converges to zero over the whole field, degenerating into the SS cavitation condition of the previous section. There are evident abrupt gradient extreme regions near the boundary of the cavitation fraction field in this cavitation problem with the JFO cavitation condition, and these regions make the problem challenging for the PINNs to solve. Fig. 12 shows the comparison of the exact value and the value predicted with the PCGrad method and the non-negativity constraint; the results of HL-nets agree well with the results of FEM, which demonstrates the accuracy of HL-nets. Fig. 13 shows the absolute errors of the pressure, \(P_{error}\), and the cavitation fraction, \(\theta_{error}\), between the HL-nets solution and the solution obtained by FEM. For a more quantitative analysis, the relative errors \(L_{1-error}\) and \(L_{2-error}\) of the pressure field and the cavitation fraction \(\theta\) obtained by the different schemes are presented in Table 5. The results represent the mean \(\pm\) standard deviation from five independent runs with independent initial network parameters. First, according to Table 5 and the error distribution map in Fig. 13, all calculations fail when no MTL method is adopted; so, an MTL method is necessary, regardless of whether or not the Imp-SS scheme is adopted. For reference, the FB residual used to couple the pressure and the cavitation fraction in this section is sketched below.
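The following is a minimal PyTorch sketch (variable names are illustrative) of the Fischer-Burmeister residual that encodes the complementarity between the pressure and the cavitation fraction; the function itself is standard, and the small epsilon is an added assumption for gradient stability at the origin.

```python
import torch

def fb_residual(p, theta, eps=1e-12):
    """Fischer-Burmeister function FB(a, b) = a + b - sqrt(a^2 + b^2).
    FB(a, b) = 0 holds iff a >= 0, b >= 0 and a * b = 0, so driving this
    residual to zero enforces the complementarity between p and theta."""
    return p + theta - torch.sqrt(p ** 2 + theta ** 2 + eps)

# e.g. the corresponding loss term: L_FB = fb_residual(P, theta).pow(2).mean()
```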
Besides, the Imp-DB & Imp-SS scheme with the PCGrad method can obtain the best solution; its relative \(L_{1}\) and \(L_{2}\) errors of the pressure reach 3.447e-03 \(\pm\) 1.073e-03 and 5.811e-03 \(\pm\) 1.816e-03, respectively, and its relative \(L_{1}\) and \(L_{2}\) errors of the cavitation fraction \(\theta\) reach 8.352e-02 \(\pm\) 2.366e-02 and 2.657e-02 \(\pm\) 5.512e-03, respectively. The pressure error and cavitation fraction error are shown in Fig. 13(B2). The error in the cavitation fraction is mainly located in the region of extreme gradient near the boundary of the cavitation region, where accurate calculation is difficult due to the nonlinear abrupt changes. The corresponding error in the pressure field in this region also remains, except that the main error in the pressure is located at the boundary of the cavitation region near the region of extreme pressure. The Imp-DB & Imp-SS scheme with the UW method can obtain a solution with an average accuracy similar to that of the PCGrad method, but the UW method may not be stable without appropriate initial parameters; the initial parameters of the UW method are further discussed in Appendix A. When the non-negativity is not strictly constrained (without the Imp-SS scheme), the left side of the Reynolds equation of Eq. (7) is not equal to zero in the cavitation zone, and the calculation error of the cavitation fraction is larger than that of the Imp-SS scheme because the convective characteristic in the cavitation region is not captured. With the UW method, the relative \(L_{1}\) and \(L_{2}\) errors of the pressure reach 3.981e-03 \(\pm\) 7.242e-04 and 6.173e-03 \(\pm\) 6.473e-04, respectively, and the relative \(L_{1}\) and \(L_{2}\) errors of the cavitation fraction reach 1.393e-01 \(\pm\) 7.032e-03 and 3.608e-02 \(\pm\) 1.041e-03, respectively. The pressure and cavitation fraction error distributions are shown in Fig. 13(D1). There are significant errors in the pressure and the cavitation fraction within and on the boundary of the cavitation region. The PCGrad method achieves a barely acceptable level of accuracy, and its error distribution is similar to that of the UW method.

Fig. 12: Comparison of the exact value and predicted value with the PCGrad method with non-negativity constraint.

## 4 Conclusions

In this study, we establish a deep learning computational framework, HL-nets, for computing the flow field of hydrodynamic lubrication involving cavitation effects by proposing schemes to apply the SS cavitation condition or the JFO cavitation condition to the PINNs of the Reynolds equation. The results show that HL-nets can simulate hydrodynamic lubrication involving cavitation phenomena with high accuracy. The conclusions of this study can be summarized as follows:

(1) For the non-negativity constraint of the SS cavitation condition, the Pe-SS scheme with a loss penalizing negativity and the Imp-SS scheme with a differentiable non-negative output function are proposed. By using a differentiable non-negative activation function to constrain the output, the SS cavitation condition can be imposed; since the loss function then contains only the equation loss, the unconstrained optimization process leads to good computational stability. Both methods have good computational accuracy.
(2) For the complementarity constraint of the JFO cavitation condition, the pressure and cavitation fraction are taken as the neural network outputs, and the residual of the FB equation constrains their complementary relationship. A non-negativity constraint is imposed on the pressure output, partially forcing the FB equation to be satisfied and effectively improving the accuracy of the calculation.

(3) Three MTL methods are applied to balance the newly introduced loss terms described above. The traditional penalizing scheme without an MTL method fails to obtain acceptable results in all the cases above, meaning that an appropriate MTL method is needed to improve the accuracy.

\begin{table}
\begin{tabular}{c c c c c c}
\hline
Imp-SS & MTL method & \(L_{1-error}(P)\) & \(L_{2-error}(P)\) & \(L_{1-error}(\theta)\) & \(L_{2-error}(\theta)\) \\
\hline
with & - & \(>\)0.5 & \(>\)0.5 & \(>\)1 & \(>\)0.5 \\
with & DW & 2.780e-02 \(\pm\) 1.411e-05 & 4.945e-02 \(\pm\) 9.885e-07 & 1.000 \(\pm\) 1.103e-06 & 1.816e-01 \(\pm\) 1.435e-07 \\
with & UW & 3.422e-03 \(\pm\) 3.390e-03 & 5.906e-03 \(\pm\) 5.608e-03 & 8.686e-02 \(\pm\) 3.932e-02 & 2.588e-02 \(\pm\) 7.174e-03 \\
with & **PCGrad** & **3.447e-03 \(\pm\) 1.073e-03** & **5.811e-03 \(\pm\) 1.816e-03** & **8.352e-02 \(\pm\) 2.366e-02** & **2.657e-02 \(\pm\) 5.512e-03** \\
\hline
without & - & \(>\)0.5 & \(>\)0.5 & \(>\)1 & \(>\)0.5 \\
without & DW & 1.513e-01 \(\pm\) 8.168e-02 & 2.126e-01 \(\pm\) 1.111e-01 & 8.071e-01 \(\pm\) 3.859e-01 & 1.479e-01 \(\pm\) 6.746e-02 \\
without & UW & 3.981e-03 \(\pm\) 7.242e-04 & 6.173e-03 \(\pm\) 6.473e-04 & 1.393e-01 \(\pm\) 7.032e-03 & 3.608e-02 \(\pm\) 1.041e-03 \\
without & PCGrad & 1.167e-02 \(\pm\) 3.845e-04 & 1.753e-02 \(\pm\) 3.668e-04 & 2.701e-01 \(\pm\) 6.116e-03 & 5.529e-02 \(\pm\) 9.951e-04 \\
\hline
\end{tabular}
\end{table} Table 5: Performance comparison of different schemes in HL-nets for the cavitation problem with JFO cavitation condition.

## Appendix A

In this section, we study the effects of the initial parameters of the UW method in the tests.

(1) Case 1: DB condition: Pe-DB, UW

We test the robustness of the UW method with different initial parameters for the Reynolds equation with the DB condition. The results, presented in Table A1, indicate that the UW method is not sensitive to the initial parameters. However, it is worth noting that the UW method can be solved effectively only when the initial parameters are set so that the initial weight of the DB condition loss is large. When the weight of the boundary loss is not large enough relative to the weight of the equation loss, the iterative calculation of the UW method will intensify the imbalance between the losses. This will cause the weights of the boundary condition loss to become too small, resulting in poor computational accuracy and in the boundary condition not being effectively satisfied. A minimal sketch of the UW loss and its initial log-variance parameters is given below.
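For reference, the following is a minimal PyTorch sketch of an uncertainty-weighting (UW) loss with trainable log-variance parameters; the form shown (weight \(e^{-s_i}\) plus regularizer \(s_i\)) is one common variant and is an assumption, while the initial values such as \([5,-3]\) correspond to the "initial parameters" discussed in this appendix.

```python
import torch

class UncertaintyWeighting(torch.nn.Module):
    """Multi-task loss with one trainable log-variance s_i per loss term."""
    def __init__(self, init_log_vars=(5.0, -3.0)):
        super().__init__()
        self.s = torch.nn.Parameter(torch.tensor(init_log_vars))

    def forward(self, losses):
        # weight_i = exp(-s_i): a small initial s_i gives that loss term a
        # large initial weight; the + s_i term keeps weights from collapsing
        return sum(torch.exp(-self.s[i]) * L + self.s[i]
                   for i, L in enumerate(losses))
```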
(2) Case 2: Cavitation problem with SS cavitation condition

We test the robustness of the UW method with different initial parameters for the Reynolds equation with the SS cavitation condition. The results are presented in Table A2. For the Imp-DB & Pe-SS scheme, initializing the log variance so that the weight coefficient corresponding to the non-negativity loss is small leads to more stable and accurate solutions, which indicates that under this condition the UW method is not sensitive to the initial parameters. The weight coefficient of the non-negativity loss after the training iterations is larger than that of the equation loss. For the Pe-DB & Imp-SS scheme, only the equation loss and the boundary loss appear in the loss function; so, the result is similar to that of the Reynolds equation with the DB condition in the previous section. As shown in Table A3, initializing the log variance so that the weight coefficient corresponding to the DB condition loss is small leads to more stable and accurate solutions, which indicates that under this condition the UW method is not sensitive to the initial parameters. Similarly, the calculation results are poor when the weight coefficients of the boundary condition loss are not large enough relative to the weights of the equation loss.

(3) Case 3: Cavitation problem with JFO cavitation condition

We test the robustness of the UW method with different initial parameters for the Reynolds equation with the JFO cavitation condition. The results are presented in Table A4. With the Imp-DB & Imp-SS scheme, only the Reynolds equation loss and the FB equation loss remain in the loss function. Initializing the log variance so that the weight coefficient corresponding to the FB equation loss is large enough leads to a more stable and accurate solution, which indicates that under this condition the UW method is not sensitive to the initial parameters. It is noteworthy that the log variance with initial values of [5, -4] performs significantly better than the other initial values, which suggests that fine-tuning the initial parameters within a stable range can benefit the computational results.
\begin{table}
\begin{tabular}{c c c c c c}
\hline
Initial & Final & \(L_{1-error}(P)\) & \(L_{2-error}(P)\) & \(L_{1-error}(\theta)\) & \(L_{2-error}(\theta)\) \\
\hline
[5, 0] & [-6.326e00 \(\pm\) 5.472e00, -1.636e01 \(\pm\) 3.337e00] & 2.382e-01 \(\pm\) 2.398e-01 & 3.252e-01 \(\pm\) 3.200e-01 & 5.560e-01 \(\pm\) 4.441e-01 & 1.070e-01 \(\pm\) 7.459e-02 \\
[5, -1] & [-7.952e00 \(\pm\) 4.131e00, -1.557e01 \(\pm\) 2.749e00] & 1.996e-01 \(\pm\) 3.420e-01 & 2.319e-01 \(\pm\) 3.952e-01 & 8.369e-01 \(\pm\) 1.327e00 & 1.113e-01 \(\pm\) 1.488e-01 \\
[5, -2] & [-1.014e01 \(\pm\) 8.123e-01, -1.767e01 \(\pm\) 5.577e-01] & 3.355e-03 \(\pm\) 2.536e-03 & 5.922e-03 \(\pm\) 4.537e-03 & 7.881e-02 \(\pm\) 3.575e-02 & 2.292e-02 \(\pm\) 7.518e-03 \\
[5, -3] & [-1.067e01 \(\pm\) 3.462e-01, -1.772e01 \(\pm\) 1.671e-01] & 3.422e-03 \(\pm\) 3.390e-03 & 5.906e-03 \(\pm\) 5.608e-03 & 8.686e-02 \(\pm\) 3.932e-02 & 2.588e-02 \(\pm\) 7.174e-03 \\
**[5, -4]** & **[-1.001e01 \(\pm\) 3.811e-02, -1.854e01 \(\pm\) 2.435e-02]** & **8.103e-04 \(\pm\) 4.805e-04** & **1.193e-03 \(\pm\) 6.178e-04** & **5.381e-02 \(\pm\) 1.400e-02** & **2.032e-02 \(\pm\) 3.371e-03** \\
[5, -5] & [-9.098e00 \(\pm\) 4.288e-01, -1.793e01 \(\pm\) 3.841e-01] & 3.171e-03 \(\pm\) 2.729e-03 & 5.321e-03 \(\pm\) 4.431e-03 & 8.133e-02 \(\pm\) 2.730e-02 & 2.301e-02 \(\pm\) 5.909e-03 \\
\hline
\end{tabular}
\end{table} Table A4: Performance comparison of different initial parameters of the UW method in HL-nets for the cavitation problem with JFO cavitation condition with the Imp-DB & Imp-SS scheme.

## Appendix B

The dimensionless total load capacity \(W\) and the attitude angle \(\Psi\) are critical performance parameters of the bearing, given by:

\[\begin{bmatrix}W\cos\Psi\\ W\sin\Psi\end{bmatrix}=\int_{-\frac{1}{2}}^{\frac{1}{2}}\int_{-\frac{1}{2}}^{\frac{1}{2}}P\begin{bmatrix}\cos(2\pi X)\\ \sin(2\pi X)\end{bmatrix}\,dX\,dY\]

The relative absolute error \(L_{1}\) is defined as follows:

\[L_{1}(W)=\frac{\left|\widehat{W}-W\right|}{\left|\widehat{W}\right|}\]

where \(\widehat{W}\) denotes the dimensionless total load capacity obtained from the reference solution and \(W\) represents the dimensionless total load capacity inferred by the PINNs. The relative absolute errors of the maximum pressure \(L_{1}(P_{max})\), the maximum cavitation fraction \(L_{1}(\theta_{max})\), and the attitude angle \(L_{1}(\Psi)\) are defined analogously. To assess the bearing performance obtained by HL-nets for the cavitation problem with the JFO cavitation condition, the relative absolute errors \(L_{1}\) are presented in Table B1; the results represent the mean \(\pm\) standard deviation from five independent runs with independent initial network parameters. A numerical sketch of the load integral is given below.
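The following is a minimal NumPy sketch of the load integral above, assuming the predicted pressure \(P\) is sampled on a uniform grid over \(X,Y\in[-1/2,1/2]\); array names are illustrative.

```python
import numpy as np

def load_capacity(P, X, Y):
    """P: (nx, ny) pressure samples; X, Y: 1-D coordinate vectors in [-1/2, 1/2]."""
    # integrate P*cos(2*pi*X) and P*sin(2*pi*X) over Y first, then over X
    cx = np.trapz(np.trapz(P * np.cos(2 * np.pi * X)[:, None], Y, axis=1), X)
    cy = np.trapz(np.trapz(P * np.sin(2 * np.pi * X)[:, None], Y, axis=1), X)
    W = np.hypot(cx, cy)        # dimensionless total load capacity
    Psi = np.arctan2(cy, cx)    # attitude angle
    return W, Psi
```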
\begin{table}
\begin{tabular}{c c c c c c}
\hline
Imp-SS & MTL method & \(L_{1}(P_{max})\) & \(L_{1}(\theta_{max})\) & \(L_{1}(W)\) & \(L_{1}(\Psi)\) \\
\hline
with & - & \(>\)0.5 & \(>\)0.5 & \(>\)0.5 & \(>\)1 \\
with & DW & 9.966e-04 \(\pm\) 1.951e-05 & - & 4.655e-02 \(\pm\) 4.112e-05 & 9.728e-03 \(\pm\) 2.752e-05 \\
with & UW & 1.455e-03 \(\pm\) 1.347e-03 & 2.339e-02 \(\pm\) 1.182e-02 & 1.232e-03 \(\pm\) 8.940e-04 & 3.068e-03 \(\pm\) 3.505e-03 \\
with & **PCGrad** & **1.974e-03 \(\pm\) 9.759e-04** & **3.006e-02 \(\pm\) 2.548e-02** & **1.165e-03 \(\pm\) 3.595e-04** & **3.821e-03 \(\pm\) 2.360e-03** \\
\hline
without & - & \(>\)0.5 & \(>\)0.5 & \(>\)0.5 & \(>\)1 \\
without & DW & 9.670e-02 \(\pm\) 9.421e-02 & 8.121e-01 \(\pm\) 3.759e-01 & 7.617e-02 \(\pm\) 4.958e-02 & 1.472e-01 \(\pm\) 8.639e-02 \\
without & UW & 1.427e-03 \(\pm\) 3.085e-04 & 3.345e-02 \(\pm\) 1.113e-02 & 3.327e-03 \(\pm\) 6.206e-04 & 4.951e-04 \(\pm\) 3.795e-04 \\
without & PCGrad & 2.623e-03 \(\pm\) 3.364e-04 & 4.497e-02 \(\pm\) 2.331e-02 & 1.144e-02 \(\pm\) 9.904e-04 & 2.092e-03 \(\pm\) 2.988e-04 \\
\hline
\end{tabular}
\end{table} Table B1: Performance parameter comparison of different schemes in HL-nets for the cavitation problem with JFO cavitation condition.
**Nomenclature**

- PINNs: physics-informed neural networks
- HL-nets: physics-informed neural networks for hydrodynamic lubrication with cavitation
- PDEs: partial differential equations
- SS: Swift-Stieber
- JFO: Jakobsson-Floberg-Olsson
- FB: Fischer-Burmeister
- MTL: multi-task learning
- DB: Dirichlet boundary
- DW: dynamic weight
- UW: uncertainty weight
- PCGrad: projecting conflicting gradient
- SUPG: Streamline Upwind/Petrov-Galerkin
- FEM: finite element method
- \(\rho\): density of the fluid; \(\rho_{0}\): constant density in the full-film region
- \(p\): pressure; \(p_{\partial\Omega}\): ambient pressure; \(p_{cav}\): cavitation pressure
- \(\theta\): cavitation fraction
- \(h\): film thickness; \(h_{0}\): minimum film thickness
- \(\mu\): viscosity of the fluid
- \(\mathbf{U}\): relative sliding velocity
- \(L\): length of the lubrication region; \(B\): width of the lubrication region
- \(R\): radius of bearings
- \(x,y\): coordinates; \(X,Y\): dimensionless coordinates
- \(\mathbf{W}^{[m]}\), \(\mathbf{b}^{[m]}\): trainable weights and biases at the \(m\)-th layer
- \(\mathbf{\Theta}\): trainable parameter set of the network
- \(\sigma\): activation function
- \(\mathcal{L}\): total loss; \(\mathcal{L}_{R}\): loss of the Reynolds equation residual
- \(N_{R}\): number of data points in the bulk domain; \(N_{T}\): number of loss terms
- \(\lambda_{R},\lambda_{i}\): weight parameters
- \(\eta\): learning rate
- \(\varphi\): function used to apply the constraint condition
- \(\alpha\): hyperparameter in DW
2309.03390
A novel method for iris recognition using BP neural network and parallel computing by the aid of GPUs (Graphics Processing Units)
In this paper, we present a new method for designing an iris recognition system. In this method, Haar wavelet features are first extracted from the iris images; the advantages of these features are their high-speed extraction and their uniqueness to each iris. The back propagation neural network (BPNN) is then used as a classifier. In this system, parallel BPNN algorithms are implemented on GPUs with the aid of CUDA in order to speed up the learning process. Finally, the system's performance and the speed-up achieved over the serial implementation of the algorithm are presented.
Farahnaz Hosseini, Hossein Ebrahimpour, Samaneh Askari
2023-09-06T22:50:50Z
http://arxiv.org/abs/2309.03390v1
# A novel method for iris recognition using BP neural network and parallel computing by the aid of GPUs (Graphics Processing Units)

###### Abstract

In this paper, we present a new method for designing an iris recognition system. In this method, Haar wavelet features are first extracted from the iris images; the advantages of these features are their high-speed extraction and their uniqueness to each iris. The back propagation neural network (BPNN) is then used as a classifier. In this system, parallel BPNN algorithms are implemented on GPUs with the aid of CUDA in order to speed up the learning process. Finally, the system's performance and the speed-up achieved over the serial implementation of the algorithm are presented.

Keywords: Iris recognition system, Haar wavelet, Graphics Processing Units (GPUs), neural network BP, CUDA.

## 1 Introduction

Establishing human identity has been a long-standing goal of humanity itself. Since technologies and services are developing rapidly, there is an urgent need to identify individuals reliably; examples include passport control, computer login and, in general, security system control. The requirements for this identification are speed and increased reliability. Biometrics, as a model for human identification, is an attractive domain for researchers who intend to increase speed and security. Different biometric features offer various degrees of reliability and efficiency. The iris is an internal organ of the eye, yet it is easily visible from a distance of one meter, which makes it well suited to biometric identification. Iris recognition is widely accepted as one of the best biometric identification methods throughout the world. It has several advantages. First, its pattern variability is high among different individuals; therefore it meets the need for unique identification. Second, the iris remains stable throughout life. Third, the iris image is almost insensitive to the angle of illumination, and every change in the viewing angle leads only to some relative translations. Fourth, according to Sandipan (2009), the ease of locating the eyes in the face and the distinctive ring shape of the iris add to its reliability and make its isolation more accurate. Several algorithms have been developed for iris recognition. All of them include a variety of steps, such as acquiring an image, localizing the iris, extracting the features, and matching and classifying them. The first automatic iris recognition system was proposed by Daugman (1993), who used Gabor filters to extract the features. While Daugman was improving his algorithm [3], several other researchers were also working on iris recognition. Wildes (1997) used a Laplacian pyramid to represent the iris tissue. Boles and Boashash (1998) employed one-dimensional wavelet transforms at different resolutions of concentric circles on an iris image. Using the two-dimensional Haar wavelet, Lim (2001) decomposed the iris image into 4 levels and used a competitive learning neural network (LVQ) as a classifier. In all the above-mentioned methods, a serial algorithm has been used for iris recognition. In this paper, by using parallel algorithms and GPUs, we present a method that increases the response speed substantially. The database used in this article is CASIA V.3. The rest of the paper is organized as follows: in Section 2, the iris recognition system will be explained.
Section 3 describes the parallel implementation of the BPNN using CUDA, Section 4 presents the results obtained from this implementation, and finally, Section 5 includes the conclusion along with directions for future work.

## 2 Iris Recognition System

An iris recognition system generally consists of the following three parts: pre-processing and normalizing the iris, extracting the features, and classifying the extracted features.

### 2-1 Iris Pre-processing

Iris pre-processing includes localizing and normalizing the iris, and basically consists of two main operations: one is to detect the eyelids and eyelashes, and the other is to detect the boundaries. The first step is to extract the circular region of the iris by removing noisy areas. The eyelids and eyelashes cover the upper and lower portions of the iris; therefore, these areas should be segmented out. The second step involves detecting the inner and outer boundaries of the iris: one at the transition between the iris and the sclera, and the other between the iris and the pupil. For this purpose, Canny edge detection in both the horizontal and vertical directions was suggested by Wildes (1999). We have used the circular Hough transform to localize the iris and the linear Hough transform to separate the eyelids. The Hough transform is a standard algorithm that can be used to detect simple geometric shapes such as circles and lines. More accurate results are obtained if the Hough transform is applied first to the iris-sclera edge and then to the iris-pupil edge. Thus, Canny edge detection is first used to create an edge map, and then the circles surrounding the iris region are obtained using the Hough transform. If the maximum in the Hough space is less than a threshold, the region is considered not to be occluded by the eyelids. The eyelashes are easier to separate by thresholding, because they are darker than the other structures in the eye. Daugman suggested a Cartesian-to-polar conversion that maps every pixel of the iris region to a pair of polar coordinates \((r,\theta)\), where \(r\) and \(\theta\) are in the ranges \([0,1]\) and \([0,2\pi]\), respectively. This mapping can be formulated as follows:

\[I(x(r,\theta),y(r,\theta))\to I(r,\theta) \tag{1}\]

\[x(r,\theta)=(1-r)\,x_{p}(\theta)+r\,x_{i}(\theta) \tag{2}\]

\[y(r,\theta)=(1-r)\,y_{p}(\theta)+r\,y_{i}(\theta) \tag{3}\]

where \(I(x,y)\) is the iris image region, \((x,y)\) are the Cartesian coordinates, \((r,\theta)\) are the polar coordinates, and \((x_{p},y_{p})\) and \((x_{i},y_{i})\) are the coordinates of the pupil and iris boundaries in the direction of \(\theta\). According to Pandara and Chandra (2009), rotational inconsistencies are not accounted for in this representation. An example of this polar transform can be seen in Fig. 2.

Figure 2: Example of polar transform for an iris.

### 2-2 Feature extraction

One of the key issues in the speed of an identification system is the size of the feature vector extracted from every iris. To reduce the size of the iris feature vector, an algorithm must be applied that does not lose the global and local information of the iris. We have used the Haar wavelet, one of the fastest wavelet transforms, to extract the iris feature vector. In this paper we have used the two-level Haar wavelet: the iris image is decomposed, the coefficients that represent the iris pattern are retained, and the subbands carrying redundant information are discarded, as sketched in the code below.
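A minimal Python sketch of this step (assuming the PyWavelets library; names and sizes are illustrative, following the normalized iris strip used later in this paper) could look as follows:

```python
import numpy as np
import pywt

def haar_features(iris_strip):
    """iris_strip: normalized iris image, e.g. a 20 x 480 array.
    Returns the flattened level-2 Haar approximation subband."""
    coeffs = pywt.wavedec2(iris_strip, 'haar', level=2)
    cA2 = coeffs[0]                    # approximation subband (upper-left corner)
    return np.asarray(cA2).ravel()     # 1-D feature vector
```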
As can be seen in Figure 3, in the 2-level transform the iris image is decomposed into 7 subbands, only one of which (the approximation subband in the upper-left corner) is retained as the main feature; the other subbands are discarded as redundant information.

### 2-3 Classification

We have used the back propagation neural network (BPNN) to classify the feature vectors. According to Shylaja (2011), BPNN is a systematic method for training multi-layer artificial neural networks. This network provides a computationally efficient way to update the weights in a feed-forward network. The basic structure of a BPNN consists of an input layer, at least one hidden layer, and an output layer. The network is trained by adjusting the weight values so as to reduce the error between the actual and desired outputs. The BPNN pseudo-code used in this section is presented in Figure 4:

    while MSE is not satisfied:
        for all input vectors x:
            Flayer(x)            # forward pass through the hidden (first) layer
            Slayer(x)            # forward pass through the output (second) layer
            Backprop_Slayer(x)   # update hidden-to-output weights
            Backprop_Flayer(x)   # update input-to-hidden weights
        recompute MSE

Fig. 4: BPNN algorithm.

As can be seen, for a BP network with one middle (hidden) layer, updating the weights for a given input has four stages, assuming that the network has n input neurons, h neurons in the hidden layer, and o neurons in the output layer. In the first stage, the values of the middle layer are computed for each input. In the second stage, the output layer's values are obtained from the middle layer. In the third stage, considering the errors at the output layer, the weights between the hidden layer and the output layer are updated, and finally, in the fourth stage, the weights between the input layer and the middle layer are updated. These operations are repeated successively in order to bring the overall error rate down to an acceptable level.

## 3 Parallel processing using CUDA

### An Introduction to GPUs

Today, the GPUs installed on graphics cards have exceptional processing power compared to central processors, which has extended their application to areas beyond computer games. Modern graphics processors, with their parallel architecture, are considered very fast processors. Graphics processing units, or GPUs, are specific devices used for graphics rendering (creating natural-looking images) in personal computers, workstations, or gaming consoles. CUDA (Compute Unified Device Architecture) is a parallel computing architecture presented by NVIDIA in 2006 in order to carry out massively parallel computations with high efficiency on the GPUs developed by this company. CUDA is the computational engine of NVIDIA GPUs, and it is made available to software developers in the form of functions callable from programming languages. CUDA comes with a software environment that allows developers to program in the C language and run their code on GPUs. Each CUDA program consists of two parts: host and device. The host is the program that runs sequentially on the CPU, and the device is the program that runs in parallel on the GPU cores. From a software perspective, each parallel program can be considered to consist of a number of threads.
These threads are lightweight processes, each of which performs an independent operation. A number of related threads form a block, and a number of blocks form a grid. There are various types of memory on a GPU. Each thread has a local memory of its own; each block has a shared memory that the threads within it can access. There is a global memory that all threads can access. Besides, there is another type of memory called texture memory which, like the global memory, is accessible to all threads, but its addressing mode is different and it is used for specific data. In the host code, the number of threads (in other words, the number of lightweight processes to be run on the GPU cores) must be specified. The device code is run according to the number of threads defined in the host. Each thread can find its position through functions provided by CUDA and can perform its task according to that position. Finally, the computed results are returned to the main memory. GPUs are an excellent tool for implementing image processing algorithms, since many operators that act on an image are local and must be applied to all pixels. Thus, by assigning one thread to each pixel (when the required number of threads can be defined), the computation time can be reduced significantly. Yang (2008) implemented a number of the most popular image processing operations with CUDA. In addition, in one of our previous studies, CUDA was used for spatial image processing, and Gray (2008) also employed CUDA in his work.

### BPNN paralleling by CUDA

Since the iterations of the BP learning algorithm are interdependent, the iterations themselves cannot be parallelized; only the operations within an iteration can be. In this paper, all four steps of the BPNN within an iteration are parallelized with CUDA. In the first stage, the values of the middle layer must be computed for the inputs. For this purpose, we use one block with h threads. Initially, each thread determines which hidden-layer neuron it is assigned to; it then updates the value of that neuron according to the values of the n input neurons. The pseudo-code of these threads is given in Figure 5, and a sketch of the same kernel logic follows below. Using this method, all values of the middle-layer neurons are calculated simultaneously. Similarly, the rest of the process is implemented with CUDA.

Figure 5: Pseudo-code of parallel BPNN algorithm in the first layer.
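The following is a minimal sketch of the first-stage kernel written in Python using Numba's CUDA support (the original work used C/CUDA; the names and the logistic activation are illustrative assumptions). Each thread computes one hidden neuron from the n inputs, mirroring the pseudo-code of Figure 5:

```python
import math
import numpy as np
from numba import cuda

@cuda.jit
def hidden_layer_kernel(x, W, b, out):
    j = cuda.grid(1)                      # index of this thread's hidden neuron
    if j < out.shape[0]:
        s = b[j]
        for i in range(x.shape[0]):       # accumulate the weighted inputs
            s += W[i, j] * x[i]
        out[j] = 1.0 / (1.0 + math.exp(-s))   # logistic activation

# Example launch: n = 300 inputs, h = 50 hidden neurons -> one block of h threads
n, h = 300, 50
x = cuda.to_device(np.random.rand(n).astype(np.float32))
W = cuda.to_device(np.random.rand(n, h).astype(np.float32))
b = cuda.to_device(np.zeros(h, dtype=np.float32))
out = cuda.device_array(h, dtype=np.float32)
hidden_layer_kernel[1, h](x, W, b, out)
```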
## 4 Results

To test the system, the iris images of 100 people from the CASIA3 database were used, with 5 iris images selected for each person; the selected database thus includes 500 iris images. After pre-processing and normalizing the iris images (as described in the previous section), the dimensions of the extracted iris area are 20 x 480. The Haar wavelet features extracted from the images are therefore obtained as a 5 x 60 array after two levels. After arranging the obtained values in one dimension, a vector of 300 elements is finally obtained as the feature of every iris image. A BPNN with 300 neurons in the input layer and 7 neurons in the output layer is used to classify the obtained features. The output of this network is 7-bit binary data that identifies the person to whom the input feature vector belongs. Of the total data, 80 and 20 percent were used as training and test data, respectively. To test the system, different numbers of neurons were used in the BPNN middle layer. The average precision obtained over 5 training runs of the system is reported for each configuration in Table 1:

\begin{table}
\begin{tabular}{c c}
\hline
Neurons in the middle layer & Precision obtained by the BPNN \\
\hline
20 & 44\% \\
30 & 79.6\% \\
40 & 85\% \\
50 & 98.4\% \\
60 & 98\% \\
\hline
\end{tabular}
\end{table} Table 1: Recognition precision of the BPNN for different numbers of middle-layer neurons.

As can be seen from Table 1, the best result is obtained with 50 neurons in the middle layer; beyond that, increasing the number of neurons in this layer does not improve the system's precision and only increases the amount of computation. To parallelize this system with CUDA, a GeForce GT 430 graphics card with 96 cores is used. As mentioned before, the parallelization covers all four steps of the BPNN; the resulting run times, compared with the serial version run on a Core(TM) i7 3.2 GHz CPU, are shown in Table 2:

\begin{table}
\begin{tabular}{c c c}
\hline
Processor & Run time (seconds) & Speed enhancement \\
\hline
CPU & 3521 & \\
GPU & 96 & 36 \\
\hline
\end{tabular}
\end{table} Table 2: Run times of the serial (CPU) and parallel (GPU) implementations.

## 5 Conclusion and future works

As seen in the previous section, we were able to speed up iris recognition by a factor of about 36 by using a parallel algorithm implemented with CUDA. The interesting point is that this speed-up is obtained with an ordinary graphics card; if a stronger, many-core graphics card were used, the enhancement would be much higher. Another interesting point is that real-world applications commonly involve voluminous databases, so the pattern-matching time in such systems is much longer. CUDA alone cannot solve this problem, since the memory on the graphics card is limited and the whole database cannot be loaded onto it. To overcome this problem, a cluster can be used, distributing the database across the cluster nodes: when an input is received, a copy of it is sent to all the nodes, every node performs the recognition process, and the results are finally sent back to the source PC.
2309.09694
Noise-Augmented Boruta: The Neural Network Perturbation Infusion with Boruta Feature Selection
With the surge in data generation, both vertically (i.e., volume of data) and horizontally (i.e., dimensionality), the burden of the curse of dimensionality has become increasingly palpable. Feature selection, a key facet of dimensionality reduction techniques, has advanced considerably to address this challenge. One such advancement is the Boruta feature selection algorithm, which successfully discerns meaningful features by contrasting them with their permuted counterparts, known as shadow features. However, the significance of a feature is shaped more by the data's overall traits than by its intrinsic value, a sentiment echoed in the conventional Boruta algorithm, where shadow features closely mimic the characteristics of the original ones. Building on this premise, this paper introduces an innovative approach to the Boruta feature selection algorithm by incorporating noise into the shadow variables. Drawing parallels from the perturbation analysis framework of artificial neural networks, this evolved version of the Boruta method is presented. Rigorous testing on four publicly available benchmark datasets revealed that this proposed technique outperforms the classic Boruta algorithm, underscoring its potential for enhanced, accurate feature selection.
Hassan Gharoun, Navid Yazdanjoe, Mohammad Sadegh Khorshidi, Amir H. Gandomi
2023-09-18T11:59:06Z
http://arxiv.org/abs/2309.09694v1
# Noise-Augmented Boruta: The Neural Network Perturbation Infusion with Boruta Feature Selection

###### Abstract

With the surge in data generation, both vertically (i.e., volume of data) and horizontally (i.e., dimensionality), the burden of the curse of dimensionality has become increasingly palpable. Feature selection, a key facet of dimensionality reduction techniques, has advanced considerably to address this challenge. One such advancement is the Boruta feature selection algorithm, which successfully discerns meaningful features by contrasting them with their permuted counterparts, known as shadow features. However, the significance of a feature is shaped more by the data's overall traits than by its intrinsic value, a sentiment echoed in the conventional Boruta algorithm, where shadow features closely mimic the characteristics of the original ones. Building on this premise, this paper introduces an innovative approach to the Boruta feature selection algorithm by incorporating noise into the shadow variables. Drawing parallels from the perturbation analysis framework of artificial neural networks, this evolved version of the Boruta method is presented. Rigorous testing on four publicly available benchmark datasets revealed that this proposed technique outperforms the classic Boruta algorithm, underscoring its potential for enhanced, accurate feature selection.

Feature Selection, Boruta, Neural networks, Perturbation analysis, Feature importance.

## I Introduction

With the emergence of data centers and the advent of big data technologies in recent years, there has been a marked influence on the processes of data generation and storage; these advancements have acted as powerful enablers for high-throughput systems, substantially augmenting the capacity to generate data both in terms of the number of data points (sample size) and the range of attributes or features collected for each data point (dimensionality) [1]. The explosive surge in the volume of gathered data has heralded unprecedented opportunities for data-driven insights. Yet, high dimensionality simultaneously poses distinct challenges that obstruct the success of machine learning algorithms. This dichotomy is particularly emphasized in the so-called _"curse of dimensionality"_, a term coined by Richard Bellman [2], which encapsulates the challenges faced in handling high-dimensional data spaces. The curse of dimensionality is effectively addressed by employing a collection of techniques collectively referred to as dimensionality reduction. Dimensionality reduction can be categorized into two primary branches:

1. Feature extraction: the process of creating a smaller collection of new features from the original dataset while still preserving the majority of the vital information.
2. Feature selection: the process of identifying and choosing the most relevant features from the original dataset based on their contribution to a predetermined relevance criterion.

Feature selection, similar to machine learning models, is classified into supervised, unsupervised, and semi-supervised types, depending on the availability of well-labeled datasets. Furthermore, supervised feature selection is divided into four main subcategories, namely (readers interested in delving deeper into feature extraction and feature selection, and their various types, are encouraged to refer to [3]):

1. Filter methods: rank features based on statistical measures and select the top-ranked features.
2. Wrapper methods: evaluate subsets of features according to how well they contribute to the accuracy of the model.
3. Hybrid methods: leverage the strengths of both filter and wrapper methods by first implementing a filter method to simplify the feature space and generate potential subsets, and then using a wrapper method to identify the most suitable subset [1].
4. Embedded methods: utilize machine learning models with feature-weighting functionality embedded in the model to select the optimal subset during the model's training [4].

Random Forest is a widely used algorithm for embedded feature selection. The Random Forest (RF) algorithm is a type of ensemble classifier that uses a concurrent set of decision trees, termed component predictors. RF applies a bootstrapping technique that randomly creates \(n\) training subsets from the main dataset, and this process is performed \(m\) times, leading to the construction of \(m\) independent decision trees. Each tree is built using a random subset of features. The ultimate decision is made based on the majority vote of the component predictors [5]. RF leverages permutation importance to calculate feature importance. Each tree in the forest classifies the instances that were not used to train it. After this initial classification, the values of a feature are shuffled randomly and the classification is repeated. The importance of a feature is determined by comparing the correct classifications before and after the permutation of its values: if shuffling a feature's values leads to a large decrease in accuracy, that feature is considered important. The final feature importance is obtained by averaging the importances over all individual trees. The Z-score is another viable measure, wherein the average value is divided by its standard deviation to derive an importance score [6]. Algorithm 1 outlines the process of calculating feature importance in an RF model.

```
Input:  Random Forest, Instances, Features
Output: Feature Importance I
V_orig ← 0; V_perm ← 0
for each tree t in the Random Forest do
    if tree t did not use the current instances for training then
        Classify all such (out-of-bag) instances
        V_orig(t) ← number of correct votes
        Permute the values of the feature under study
        Classify all instances again
        V_perm(t) ← number of correct votes
        I(t) ← (V_orig(t) − V_perm(t)) / number of instances
    end if
end for
I ← (1 / number of trees) · Σ_t I(t)
return I
```

**Algorithm 1** Calculate Feature Importance by RF

[6] argued that the trustworthiness of this evaluation of feature significance rests on the presumption that the separate trees grown within the random forest are uncorrelated, while numerous analyses have demonstrated that this presupposition may not hold for certain datasets. Furthermore, they contended that distinguishing genuinely important features becomes difficult when dealing with a large number of variables, as some may seem important merely due to random correlations in the data. Accordingly, the importance score by itself is inadequate to pinpoint significant associations between features and the target [6]. They address this issue by proposing the Boruta algorithm. In Random Forest, the importance of features is calculated relative to one another; a minimal code sketch of this permutation importance is given below.
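As a concrete illustration of the permutation importance of Algorithm 1, the following is a minimal scikit-learn sketch; the dataset here is a random placeholder, not one of the benchmarks used later.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = np.random.rand(200, 10), np.random.randint(0, 2, 200)  # placeholder data
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# shuffle each feature in turn and measure the drop in accuracy
result = permutation_importance(rf, X, y, n_repeats=10, random_state=0)
print(result.importances_mean)   # mean importance per feature
print(result.importances_std)    # its variability across repeats
```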
However, in Boruta, the main idea is to evaluate the importance of features in competition with a set of random features called shadow features. In this process, every feature in the dataset is duplicated and the duplicated values are shuffled randomly. The Random Forest algorithm is applied repeatedly, randomizing the shadow features each time and calculating the feature importance of all attributes (original features and shadow features). If the importance of a given feature consistently exceeds the highest importance among all the shadow features, it is classified as important. The measure of consistency is established through a statistical test, grounded on the binomial distribution, which quantifies how frequently the feature's importance overtakes the Maximum Importance of the Random Attributes (MIRA). If this count (called 'hits') significantly outnumbers or undershoots the expected count, the feature is deemed 'important' or 'unimportant', respectively. This process iterates until either all features are conclusively categorized or a predetermined iteration limit is reached. Algorithm 2 succinctly illustrates the steps of the Boruta algorithm [6].

```
Let F be the set of all features
Let H be an empty list to store the importance history
Let maxIter be the maximum number of iterations
for iter = 1 to maxIter do
    Create F' = F ∪ {shuffled copies of all f ∈ F}   (the shadow features)
    Train an RF classifier on the dataset using F'
    Compute I = RF.importance, the importance score for all features in F'
    Compute maxShadow = max(I_f') over the shadow features f' ∈ F'
    for each f ∈ F do
        Add I_f to H_f, the importance history for feature f
        if Ī_f > maxShadow then
            Mark f as important
        else if Ī_f < maxShadow for some threshold number of times in H_f then
            Mark f as unimportant
        end if
    end for
end for
return the set of features marked as important
```

**Algorithm 2** Boruta Algorithm for Feature Selection

Since its introduction, the Boruta algorithm has been extensively and successfully utilized across diverse research domains, including medicine [7, 8, 9, 10], cybersecurity [11], engineering [12, 13], and environmental [14, 15, 16, 17] studies. The Boruta algorithm has even been successfully employed to reduce the dimensionality of features extracted from images by deep networks [18]. While Boruta has indeed been successful in feature selection, contributing to improved predictive performance as highlighted in the literature, it is crucial to note that in Boruta the features are merely permuted, and this permutation does not alter the inherent attributes of a feature. A similar situation occurs in the Random Forest algorithm when calculating feature importance through permutation. However, the relevance of a feature is determined by the data's characteristics, not its value [19]. Therefore, in this study, we introduce a new variant of the Boruta algorithm. In the traditional Boruta algorithm, shadow features are constructed merely by random shuffling, which does not alter a feature's properties. To address this, in our study the shadow features are not only shuffled but also perturbed with normal noise.
Additionally, instead of employing RF for calculating feature importance, we have utilized the perturbation analysis approach of neural networks. The remainder of this paper is structured as follows: Section II offers a comprehensive discussion of the proposed algorithm. Section III details the datasets used and outlines the experimental design. The findings from the experiments are presented and analyzed in Section IV. Lastly, Section V provides concluding remarks and suggests avenues for future research.

## II Proposed Method

### _Noise-augmented shadow features_

The value of a feature in a dataset is often viewed as less important than the overall characteristics of the data in terms of providing insight into the predictive modeling process [19]. This perspective holds true for the Boruta algorithm, where the shadow features, bearing the same characteristics as the original ones, are utilized. Yet, it should be noted that even though the permutation of these shadow features disrupts the original relationship with the target variable, the essence of the Boruta algorithm (each original feature competing with random features mirroring its own characteristics) remains intact. This study aims to further the current understanding of the role and potential of shadow features by questioning the foundational assumption of their inherent similarity to the original features. To this end, the concept of 'noise-augmented shadow features' is proposed, in which the characteristics of the shadow features are deliberately modified. This new approach allows an exploration of whether diversifying the characteristics of the shadow features can lead to improved feature selection performance. The theoretical rationale behind this approach is to provide a broader spectrum of random features for the original ones to compete against, thereby enriching the competition space and potentially enhancing the robustness of the feature selection process. This investigation is driven by the belief that the performance of a feature selection algorithm may be influenced not only by the relevancy of the features but also by the diversity and characteristics of the shadow features. Algorithm 3 clearly outlines the steps involved in the generation of noise-augmented shadow features.

```
Input:  F, the set of all features
Output: F_NS, the noise-augmented shadow features
for each feature f in F do
    δ ← standard deviation of feature f
    Noise ← white noise with mean 0 and standard deviation δ
    Shadow_f ← (f + Noise), permuted randomly
end for
return F_NS
```

**Algorithm 3** Generation of Noise-Augmented Shadow Features

In this approach, each original feature is augmented with white noise: a random value with zero mean and a standard deviation equal to that computed from the original feature. This noise generation step mimics the statistical characteristics of the original feature while simultaneously disrupting its inherent relationship with the target variable. Subsequently, a random permutation is applied, which further ensures that any patterns or dependencies present in the original feature set do not unduly influence the shadow features. A minimal code sketch of this procedure is given below.
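The following is a minimal NumPy sketch of Algorithm 3 (illustrative, not the authors' code):

```python
import numpy as np

def noise_augmented_shadows(X, seed=0):
    """Return noise-augmented, randomly permuted shadow copies of X's columns."""
    rng = np.random.default_rng(seed)
    shadows = np.empty_like(X, dtype=float)
    for j in range(X.shape[1]):
        sigma = X[:, j].std()                                  # per-feature std
        noisy = X[:, j] + rng.normal(0.0, sigma, X.shape[0])   # add white noise
        shadows[:, j] = rng.permutation(noisy)                 # break target link
    return shadows
```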
### _Perturbation-based assessment of feature importance_

The concept of perturbation analysis offers a way to quantify the influence of each variable within the framework of neural network models. In this procedure, disturbances are intentionally introduced to the neural network's inputs. To keep the experiment controlled, only one input variable is altered in each iteration, with the rest kept unchanged. The variable whose perturbation yields the most significant impact on the dependent variable is then recognized as the most important variable [20]. Figure 1 shows the general scheme of perturbation analysis.

Fig. 1: Perturbation analysis scheme.

In light of the above, this study introduces a novel variant of the Boruta feature selection method, inspired by the perturbation analysis paradigm employed in Artificial Neural Networks (ANNs). This approach incorporates the use of noise-augmented shadow features and utilizes a shallow ANN as the underlying base model. Let us consider a dataset \(D=\{(x_{1},y_{1}),(x_{2},y_{2}),\ldots,(x_{N},y_{N})\}\), where \(x_{i}\) represents the \(i^{th}\) observation vector in a \(d\)-dimensional feature space, and \(y_{i}\) corresponds to the label of the \(i^{th}\) observation. The first stage involves the creation of training and testing datasets, denoted by \(D_{train}\) and \(D_{test}\), respectively. In this proposed variant, \(D_{train}\) is solely used for feature selection, while \(D_{test}\) is reserved exclusively for evaluating the performance of the selected features; the feature selection process thus has no access to, or influence from, the test dataset, ensuring an unbiased assessment. Algorithm 4 offers a step-by-step delineation of the proposed method. In the proposed method, given \(D_{train}\), a new training set \(\mathcal{D}^{\prime}\) is constructed as the combination of the original features and their noise-augmented counterparts (shadow features). This set is then normalized to prepare it for the learning algorithm (a shallow ANN learner). The F1 score of the model trained on \(\mathcal{D}^{\prime}\) is then used as the baseline performance metric. Next, each feature in \(\mathcal{D}^{\prime}\) is perturbed individually, by adding a noise factor and shuffling, while keeping the other features unchanged. The perturbed F1 score of the model is computed, and the difference between the baseline and perturbed scores is recorded. This difference effectively quantifies the influence of perturbing each feature, and the differences are then normalized, allowing the influence of each feature on the model's performance to be measured. Afterward, a competition takes place between the original features and their noise-augmented shadow counterparts: the most influential shadow feature (i.e., the one with the highest normalized difference in F1 score after perturbation) sets a threshold, and if an original feature's impact on the F1 score exceeds this threshold, it is considered important and one 'hit' is recorded. This process is repeated for a pre-specified number of iterations. At the end of these iterations, the features that have accumulated at least one hit are selected. A minimal sketch of the per-feature perturbation step is given below.
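The following is a minimal illustration of the per-feature perturbation step with placeholder names; the standard-deviation multiplier 'n' (set to 50 in the next section) is exposed as an argument, and the weighted F1 average is an assumption.

```python
import numpy as np
from sklearn.metrics import f1_score

def perturbation_influence(model, X, y, n=50, seed=0):
    """Influence of each feature: drop in F1 after perturbing that column."""
    rng = np.random.default_rng(seed)
    base = f1_score(y, model.predict(X), average='weighted')
    drops = np.empty(X.shape[1])
    for j in range(X.shape[1]):
        Xp = X.copy()
        noise = rng.normal(0.0, n * X[:, j].std(), X.shape[0])
        Xp[:, j] = rng.permutation(X[:, j] + noise)   # perturb + shuffle column j
        drops[j] = base - f1_score(y, model.predict(Xp), average='weighted')
    return drops / (np.abs(drops).sum() + 1e-12)      # normalized influence
```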
Brief descriptions of each dataset are presented below: * Smartphone-based recognition of human activities and postural transitions (SB-RHAPT) [21]: comprises of data collected from smartphone sensors recording basic activities and postural transitions. Each record is associated with an activity encoded as six different classes. * APS Failure at Scania Trucks (APSF) [22]: consists of data collected from the Air Pressure system (APS) of Scania Trucks. This dataset is imbalanced as most records represent normal operation while a small fraction corresponds to the APS failure. Missing values within this dataset are replaced with the mean value of the respective feature. * Epileptic seizure recognition (ESR) [23]: constitutes recorded EEG values over a certain period, aiming to distinguish between the presence and absence of epileptic seizure activity. The original target variable involves 5 different categories of brain activity patterns, four of them correspond to non-seizure activity and one category indicates seizure activity. In this study, the target variable is converted into a binary classification task to discern between seizure and non-seizure activities. This simplification leads to an imbalance in the dataset. * Parkinson's disease classification (PDC) [24]: incorporates instances representing various biomedical voice measurements from individuals, some of whom are afflicted with Parkinson's Disease. The dataset, designed for binary classification, categorizes instances into two classes of Parkinson's Disease and Healthy. Summarized information about the utilized datasets, including the number of instances, features, and classes, can be found in Table I. ### _Experiment configurations_ In this study, the performance of the proposed method has been compared with the original Boruta algorithm. For feature selection using the Boruta algorithm, Random Forest is utilized. Two principal parameters used in this study to tune the Random Forest are the number of estimators and the maximum depth. To obtain the optimum value for these two parameters, a Random Forest was initially trained on each dataset with all features, and the best value was determined via a greedy search. Table II presents the optimal parameter values for Boruta's estimators across each dataset. The Random Forest model obtained at this stage is employed in the Boruta algorithm for feature selection. In configuring the method proposed in this study, more parameters need to be decided upon. The first set of these parameters pertains to the learner model based on the artificial neural network. Given that in the proposed methodology, the learner is solely used for feature selection, and features are chosen based on the impact their perturbation has on reducing model accuracy, thus fine-tuning the learner at this stage is not critical. What is required here is to select a network architecture that can generate a minimum accuracy above 50 percent. Therefore, through trial and error, simple models capable of achieving an accuracy above 50 percent with all features are utilized. The chosen architecture can vary for each dataset based on the dataset's inherent characteristics. 
\begin{table} \begin{tabular}{l c c c} \hline Dataset & Instances & Features & Classes \\ \hline Recognition of Human Activities & 10299 & 561 & 6 \\ Failure at Scania Trucks & 76000 & 171 & 2 \\ Epileptic Seizure Recognition & 11500 & 179 & 2 \\ Parkinson’s Disease (PD) classification & 756 & 755 & 2 \\ \hline \end{tabular} \end{table} TABLE I: Summary of datasets

\begin{table} \begin{tabular}{l l} \hline Dataset & Random Forest \\ \hline Smartphone-Based Recognition of Human Activities & (200, None) \\ Failure at Scania Trucks & (200, None) \\ Epileptic Seizure Recognition & (200, None) \\ Parkinson’s Disease (PD) classification & (200, 10) \\ \hline \end{tabular} Note: The sequence of numbers (\(i_{1}\), \(i_{2}\)) presents the number of estimators and the max depth, respectively. \end{table} TABLE II: Boruta estimator configuration

\begin{table} \begin{tabular}{l l} \hline Dataset & Shallow learner \\ \hline Smartphone-Based Recognition of Human Activities & (5) \\ Failure at Scania Trucks & (16) \\ Epileptic Seizure Recognition & (5,8,5) \\ Parkinson’s Disease (PD) classification & (5) \\ \hline \end{tabular} Note: Each number signifies the number of neurons in a layer. In the cases where a sequence of numbers is presented, such as (\(i_{1}\), \(i_{2}\), \(i_{3}\)), these correspond to multiple hidden layers within the network. \end{table} TABLE III: Shallow learner configuration

Table III depicts the architecture employed for each dataset. For all models, the number of epochs is set to 100. An observation that can be made from this table is that most models are very lightweight, which contributes to reducing the problem's complexity. The subsequent parameter, denoted as 'n', serves as the standard deviation multiplier during the perturbation of the features, used to assess the degree of accuracy reduction in the model. In this study, an 'n' value of 50 has been adopted.

## IV Discussion and Results

This section presents the numerical results and discussion. As mentioned in the previous sections, the evaluation of the model is based on the F1 score, which provides a more robust measure in scenarios involving imbalanced datasets. Each dataset has been randomly divided into training and testing sets at a ratio of 70% to 30%, with stratified sampling on the target variable. The proposed method and the original Boruta algorithm are each run on the training dataset, with a maximum iteration limit of 100. For the final evaluation of the selected features, the training set used in feature selection is filtered down to the selected features, the model is retrained, and it is then evaluated on the test set. It is important to note that the test set is never exposed to the model at any stage of the training.

It should be noted that in the proposed method an ANN, or more specifically a multi-layer perceptron (MLP), is likewise employed for the evaluation of the selected features on the test set. At this stage, in contrast to the feature selection stage, it is necessary to fine-tune the neural network for evaluating the derived feature set. Table V shows the architecture of the tuned network for each dataset. Here, the number of epochs is set to 1000.

\begin{table} \begin{tabular}{l l} \hline Dataset & MLP \\ \hline Smartphone-Based Recognition of Human Activities & (512, 512, 256) \\ Failure at Scania Trucks & (64, 256, 64) \\ Epileptic Seizure Recognition & (128, 512, 128) \\ Parkinson’s Disease (PD) classification & (1024, 1024, 512) \\ \hline \end{tabular} \end{table} TABLE V: MLP optimal architecture

The evaluation results for the proposed method after fine-tuning the MLP are shown in Table IV. From an initial observation of the results, it can be inferred that the Noise-augmented Boruta consistently outperforms the standard Boruta in terms of F1 score across all datasets. Notably, this improvement is achieved with a significantly reduced number of selected features in most cases, indicating a more efficient feature selection by the Noise-augmented Boruta.

\begin{table} \begin{tabular}{l c c c c} \hline Dataset & \multicolumn{2}{c}{Boruta} & \multicolumn{2}{c}{Noise-augmented Boruta} \\ \cline{2-5} & Sel. Feat. & F1 score (\%) & Sel. Feat. & F1 score (\%) \\ \hline SB-RHAPT & 479 & 98.0886\(\pm\)0.3068 & 104 & 98.8012\(\pm\)0.2209 \\ APSF & 55 & 87.2672\(\pm\)0.7972 & 22 & 87.6904\(\pm\)0.9294 \\ ESR & 178 & 95.0774\(\pm\)0.4941 & 138 & 95.8550\(\pm\)0.4358 \\ PDC & 78 & 80.2250\(\pm\)3.0855 & 29 & 81.1630\(\pm\)2.9937 \\ \hline \end{tabular} \end{table} TABLE IV: Comparison result of the proposed method with Boruta

To gain a more robust understanding of performance variability - considering the inherent randomness in the MLP (e.g., weight initialization) and the RF (e.g., random subsamples) - each model was run 100 times. In each run, the entire dataset, filtered to include only the selected features, was randomly split into training and testing subsets. This procedure can be likened to K-fold cross-validation but with a higher number of repetitions. It allows the model to experience various potential distributions within the dataset, thus bolstering its robustness against unseen distributions. Furthermore, it captures the effects of variability originating from the inherent randomness of the algorithms. Table VI summarizes the performance results from the 100 runs, while Figure 2 illustrates the distribution of the evaluation metric (F1 score) for both the proposed method and Boruta across the four datasets. A sketch of this evaluation protocol is given below.

\begin{table} \begin{tabular}{l c c c c} \hline Dataset & \multicolumn{2}{c}{Boruta} & \multicolumn{2}{c}{Noise-augmented Boruta} \\ \cline{2-5} & Sel. Feat. & F1 score (\%) & Sel. Feat. & F1 score (\%) \\ \hline SB-RHAPT & 479 & 98.0886\(\pm\)0.3068 & 104 & 98.8012\(\pm\)0.2209 \\ APSF & 55 & 87.2672\(\pm\)0.7972 & 22 & 87.6904\(\pm\)0.9294 \\ ESR & 178 & 95.0774\(\pm\)0.4941 & 138 & 95.8550\(\pm\)0.4358 \\ PDC & 78 & 80.2250\(\pm\)3.0855 & 29 & 81.1630\(\pm\)2.9937 \\ \hline \end{tabular} \end{table} TABLE VI: Comparison result of the proposed method with Boruta - 100 times run
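A minimal sketch of this repeated-evaluation protocol, assuming scikit-learn; selected (the indices returned by feature selection) and mlp_params (the tuned architecture of Table V) are hypothetical names:

```
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import f1_score

def evaluate_runs(X, y, selected, mlp_params, n_runs=100):
    scores = []
    for run in range(n_runs):
        # New stratified 70/30 split in every run, as described above.
        X_tr, X_te, y_tr, y_te = train_test_split(
            X[:, selected], y, test_size=0.3, stratify=y, random_state=run)
        model = MLPClassifier(**mlp_params, max_iter=1000).fit(X_tr, y_tr)
        scores.append(f1_score(y_te, model.predict(X_te), average="weighted"))
    return np.mean(scores), np.std(scores)  # mean and spread over the runs
```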
The comparative analysis displayed in Table VI convincingly establishes the remarkable superiority of the Noise-augmented Boruta method over the conventional Boruta approach. Examining each dataset, it becomes apparent that the proposed method is more effective in the elimination of redundant or non-essential features, consistently selecting a significantly smaller feature set. A smaller feature set leads to simpler, less complex models that offer more interpretable results and reduce computational demand. Most impressively, this winnowing process does not compromise model performance. In fact, the Noise-augmented Boruta method equals or surpasses the F1 score achieved by the traditional Boruta across all tested datasets. The improvement is clear, from an increase in the F1 score on the SB-RHAPT dataset from 98.0886% to 98.8012%, to a rise on the PDC dataset from 80.2250% to 81.1630%. Even in instances like the APSF and ESR datasets, where the F1 score sees only slight growth, the proposed method proves its robustness, maintaining competitive performance despite the substantial reduction in the number of features. It is also worth mentioning that the Noise-augmented Boruta method yielded lower variance in the F1 scores compared to the traditional Boruta method.

Building on this comparative analysis, Table VII presents the results of a rigorous statistical analysis comparing the proposed method and the Boruta method, based on the outcomes of 100 runs for each method across the four distinct datasets: SB-RHAPT, APSF, ESR, and PDC. The Shapiro-Wilk test was first applied to check for normality in the distribution of results. For three out of the four datasets (SB-RHAPT, APSF, and PDC), the p-values observed in the Shapiro-Wilk test for both methods exceeded the 0.05 threshold, indicating a reasonable assumption of normal distribution. Therefore, the two-sample t-test was employed for these datasets. However, for the ESR dataset, the proposed method's results deviated from a normal distribution, as evidenced by its p-value of 0.0002063. As a result, the Mann-Whitney U test was employed as an appropriate non-parametric alternative for this dataset.

TABLE VII: Statistical comparison of the proposed method with Boruta
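The test-selection logic just described can be sketched with SciPy; this is an illustration, not the authors' code, and f1_a, f1_b are assumed to hold the 100 per-run F1 scores of the two methods on one dataset:

```
from scipy import stats

def compare_runs(f1_a, f1_b, alpha=0.05):
    # Shapiro-Wilk normality check on both samples of per-run F1 scores.
    normal = (stats.shapiro(f1_a).pvalue > alpha and
              stats.shapiro(f1_b).pvalue > alpha)
    if normal:
        result = stats.ttest_ind(f1_a, f1_b)  # two-sample t-test
    else:
        result = stats.mannwhitneyu(f1_a, f1_b,
                                    alternative="two-sided")  # non-parametric
    return ("t-test" if normal else "Mann-Whitney U"), result.pvalue
```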
Across all datasets, the p-values resulting from the comparative tests were significantly below the 0.05 level, reinforcing that the performance difference between the two methods is statistically significant. Furthermore, as the proposed method consistently yielded higher accuracies, it can be concluded that the proposed method outperforms the Boruta method on the considered datasets.

For a deeper understanding of the comparison between the two models, Figure 3 offers detailed insights into the models' confidence levels, as assessed by their prediction entropy. The prediction entropy for every instance \(x_{j}\) in the test set is calculated as follows [25]: \[H(x_{j})=-\sum_{i=1}^{n}p(c_{i}|x_{j})\log_{2}p(c_{i}|x_{j}) \tag{1}\] where \(H(x_{j})\) represents the entropy for the \(j^{th}\) instance and \(p(c_{i}|x_{j})\) denotes the probability that instance \(x_{j}\) belongs to class \(c_{i}\) (\(i=1,...,n\)). The prediction entropy is normalized by dividing by the maximum possible entropy, making the results range between 0 (indicating complete certainty in the prediction) and 1 (indicating complete uncertainty); a minimal sketch of this computation is given below. A prediction with lower entropy means the model is more certain of its decision, while a higher entropy suggests the opposite. To elucidate the relationship between prediction confidence and accuracy, Figure 3 separates and visualizes the distribution of entropies for correctly and incorrectly predicted samples. This segregation provides a valuable perspective: if, for instance, incorrect predictions predominantly have high entropy, it indicates that the model is generally unsure when it errs. On the other hand, if incorrect predictions have low entropy, it suggests that the model is confidently making those mistakes. Figure 3 provides evidence of the proposed method's superiority, showcasing its enhanced confidence across all four datasets compared to the Boruta algorithm.

### _Ablation study_

As mentioned previously, \(n\) (the multiplier of \(\sigma\)) has been fixed at a value of 50 in this investigation. This particular selection is motivated by the fact that the normalized data often yields a diminutive standard deviation; assigning a larger figure for the standard deviation ensures sufficient perturbation of the features. Nonetheless, it raises a pertinent question about the influence of this coefficient on the efficacy of the proposed methodology. Table VIII and Figure 4 display the results of the proposed method with 'n' values set to 5, 20, and 50. It should be noted that during this analysis the other parameters, including the shallow learner structure, the number of iterations, and the number of epochs, were kept constant. The results are obtained from 100 evaluations after feature selection (as in the previous section). From the results of the ablation study, it can be inferred that increasing 'n', and therefore the perturbation, influences the feature selection process and subsequently the ANN's performance in different ways across the datasets. For instance, the 'Smartphone-Based Recognition of Human Activities' dataset demonstrates that a greater perturbation might lead to more stringent feature selection, resulting in fewer features being selected while maintaining similar performance levels. This could suggest that larger perturbations help to highlight only the most influential features, as minor ones might be 'washed out' due to the higher noise levels.
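Before continuing with the ablation results, the normalized prediction entropy of Eq. (1) referenced above can be made concrete. A minimal NumPy sketch, assuming proba is an (instances x classes) array of predicted class probabilities, e.g., from an MLP's predict_proba:

```
import numpy as np

def normalized_entropy(proba, eps=1e-12):
    p = np.clip(proba, eps, 1.0)
    h = -(p * np.log2(p)).sum(axis=1)  # Eq. (1), entropy per instance
    return h / np.log2(p.shape[1])     # divide by the maximum entropy log2(n)
```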
In the 'Failure at Scania Trucks' and 'Epileptic Seizure Recognition' datasets, an increase in 'n' appears to reveal more features that contribute to the performance of the ANN, indicating that a greater degree of perturbation might be useful in uncovering hidden or complex relationships in the data. However, the results from the 'Parkinson's Disease (PD) classification' dataset provide a nuanced view, suggesting that there might not be a linear relationship between the magnitude of perturbation and the performance of the ANN. Here, the number of selected features and the F1 score do not demonstrate a consistent trend with increasing perturbation, highlighting the intricacies of the ANN's response to perturbations in this context.

Fig. 2: Box-plots of F1 score for the proposed method and Boruta.

Fig. 3: Histogram graphs of the predictive entropy results. B and NB indicate Boruta and Noise-augmented Boruta, respectively.

Fig. 4: Ablation study results for n = 5, 20, 50.

\begin{table} \begin{tabular}{l c c c c c c c} \hline \hline Dataset & Number of all Features & \multicolumn{2}{c}{n = 5} & \multicolumn{2}{c}{n = 20} & \multicolumn{2}{c}{n = 50} \\ \cline{3-8} & & Sel. Feat. & F1 score (\%) & Sel. Feat. & F1 score (\%) & Sel. Feat. & F1 score (\%) \\ \hline SB-RHAPT & 561 & 119 & 99.0087\(\pm\)0.2428 & 117 & 98.8782\(\pm\)0.2288 & 104 & 98.8012\(\pm\)0.2209 \\ APSF & 171 & 8 & 79.8743\(\pm\)2.0264 & 23 & 86.7719\(\pm\)1.2292 & 22 & 87.6904\(\pm\)0.9294 \\ ESR & 178 & 31 & 95.5689\(\pm\)0.4350 & 112 & 95.8388\(\pm\)0.4324 & 138 & 95.8550\(\pm\)0.4358 \\ PDC & 755 & 36 & 81.7975\(\pm\)2.6048 & 25 & 79.3420\(\pm\)2.6942 & 29 & 81.1630\(\pm\)2.9937 \\ \hline \hline \end{tabular} \end{table} TABLE VIII: Ablation analysis results on \(n\)

Thus, while the perturbation multiplier 'n' clearly impacts the ANN's behavior, the nature and extent of this impact can vary greatly based on the dataset's inherent properties. This underscores the importance of fine-tuning 'n' based on specific dataset characteristics to optimize the ANN's performance. Overall, the proposed method has proven to be capable of selecting crucial features even with variations in \(n\). The comparison with the number of features selected by the Boruta algorithm also demonstrates that it continues to select fewer features. In other words, the proposed method consistently outperforms the Boruta algorithm with respect to the quantity of selected features. This aligns with the objective of feature selection, which is to select the minimum possible number of features while still maintaining adequate performance in modeling the response variable.

## V Conclusion

The innovation of this method lies in the intentional modification of the shadow features' characteristics, differing from traditional approaches where shadow features retain the same statistical properties as their original counterparts. This study therefore proposed a new variant of Boruta, called Noise-augmented Boruta. In light of the comprehensive evaluation conducted in this study, it can be conclusively stated that the proposed noise-augmented Boruta methodology offers substantial improvements over the classic Boruta algorithm: the proposed model consistently outperforms it, selecting fewer but more essential features across multiple datasets. This performance adheres to the fundamental principle of feature selection: reducing model complexity while preserving predictive power. Moreover, the conducted ablation study provides valuable insights into the role and impact of the standard deviation multiplier 'n' within the proposed methodology. The multiplier, by influencing the perturbation, demonstrates substantial control over the feature selection process and the subsequent performance of the Artificial Neural Network. Importantly, this relationship is not linear, and the specific characteristics of the dataset strongly influence the optimal value for 'n'.

In conclusion, the proposed noise-augmented Boruta methodology presents a promising advance in the domain of feature selection. Its superior performance, coupled with the insightful findings from the ablation study, demonstrates its potential for broad applicability across various machine-learning tasks. However, careful tuning of its perturbation parameter 'n' is critical to ensure optimal results, emphasizing the need for a context-specific approach when applying this technique. A possible direction to extend this work is incorporating uncertainty metrics. This would pivot the focus towards not just discerning features that decrease model performance when perturbed, but also understanding the model's certainty regarding such perturbations.
2310.03033
Benchmarking Local Robustness of High-Accuracy Binary Neural Networks for Enhanced Traffic Sign Recognition
Traffic signs play a critical role in road safety and traffic management for autonomous driving systems. Accurate traffic sign classification is essential but challenging due to real-world complexities like adversarial examples and occlusions. To address these issues, binary neural networks offer promise in constructing classifiers suitable for resource-constrained devices. In our previous work, we proposed high-accuracy BNN models for traffic sign recognition, focusing on compact size for limited computation and energy resources. To evaluate their local robustness, this paper introduces a set of benchmark problems featuring layers that challenge state-of-the-art verification tools. These layers include binarized convolutions, max pooling, batch normalization, and fully connected layers. The difficulty of the verification problem is given by the high number of network parameters (905k - 1.7M), the input dimension (2.7k-12k), and the number of regions (43), as well as by the fact that the neural networks are not sparse. The proposed BNN models and local robustness properties can be checked at https://github.com/ChristopherBrix/vnncomp2023_benchmarks/tree/main/benchmarks/traffic_signs_recognition. The results of the 4th International Verification of Neural Networks Competition (VNN-COMP'23) revealed that 4 out of 7 solvers can handle many of our randomly selected benchmarks (minimum 6, maximum 36, out of 45). Surprisingly, tools also output wrong results or missing counterexamples (ranging from 1 to 4). Currently, our focus lies in exploring the possibility of achieving a greater count of solved instances by extending the allotted time (previously set at 8 minutes). Furthermore, we are intrigued by the reasons behind the erroneous outcomes provided by the tools for certain benchmarks.
Andreea Postovan, Mădălina Eraşcu
2023-09-25T01:17:14Z
http://arxiv.org/abs/2310.03033v1
Benchmarking Local Robustness of High-Accuracy Binary Neural Networks for Enhanced Traffic Sign Recognition ###### Abstract Traffic signs play a critical role in road safety and traffic management for autonomous driving systems. Accurate traffic sign classification is essential but challenging due to real-world complexities like adversarial examples and occlusions. To address these issues, binary neural networks offer promise in constructing classifiers suitable for resource-constrained devices. In our previous work, we proposed high-accuracy BNN models for traffic sign recognition, focusing on compact size for limited computation and energy resources. To evaluate their local robustness, this paper introduces a set of benchmark problems featuring layers that challenge state-of-the-art verification tools. These layers include binarized convolutions, max pooling, batch normalization, and fully connected layers. The difficulty of the verification problem is given by the high number of network parameters (905k - 1.7M), the input dimension (2.7k-12k), and the number of regions (43), as well as by the fact that the neural networks are not sparse. The proposed BNN models and local robustness properties can be checked at [https://github.com/ChristopherBrix/vnncomp2023_benchmarks/tree/main/benchmarks/traffic_signs_recognition](https://github.com/ChristopherBrix/vnncomp2023_benchmarks/tree/main/benchmarks/traffic_signs_recognition). The results of the 4th International Verification of Neural Networks Competition (VNN-COMP'23) revealed that 4 out of 7 solvers can handle many of our randomly selected benchmarks (minimum 6, maximum 36, out of 45). Surprisingly, tools also output wrong results or missing counterexamples (ranging from 1 to 4). Currently, our focus lies in exploring the possibility of achieving a greater count of solved instances by extending the allotted time (previously set at 8 minutes). Furthermore, we are intrigued by the reasons behind the erroneous outcomes provided by the tools for certain benchmarks.

## 1 Introduction

Traffic signs play a crucial role in ensuring road safety and managing traffic flow, both in urban and highway driving. For autonomous driving systems, the accurate recognition and classification of traffic signs, known as _traffic sign classification (recognition)_, are essential components. This process involves two main tasks: firstly, isolating the traffic sign within a bounding box, and secondly, classifying the sign into a specific traffic category. The focus of this work lies on the latter task. Creating a robust traffic sign classifier is challenging due to the complexity of real-world traffic scenes. Common issues faced by classifiers include a lack of _robustness_ against _adversarial examples_ [20] and occlusions [22]. _Adversarial examples_ are inputs that cause classifiers to produce erroneous outputs, and _occlusions_ occur naturally due to various factors like weather conditions, lighting, and aging, which make traffic scenes unique and diverse. To address the lack of robustness, one approach is to formally verify that the trained classifier can handle both adversarial and occluded examples. Binary neural networks (BNNs) have shown promise in constructing traffic sign classifiers, even in devices with limited computational resources and energy constraints, often encountered in autonomous driving systems.
BNNs are neural networks (NNs) with binarized weights and/or activations constrained to \(\pm 1\), reducing model size and simplifying image recognition tasks. The long-term goal of this work is to provide formal guarantees that specific properties, like robustness, hold for a trained classifier. This objective leads to the formulation of the _verification problem_: given a trained model and a property to be verified, does the model satisfy that property? The verification problem is translated into a constraint satisfaction problem, and existing verification tools can be employed to solve it. However, due to its NP-complete nature [15], this problem is experimentally challenging for state-of-the-art tools.

In our previous work [17], we proposed high-accuracy BNN models explicitly for traffic sign recognition, with a thorough exploration of accuracy, model size, and parameter variations for the produced architectures. The focus was on BNNs with high accuracy and compact model size, making them suitable for devices with limited computation and energy resources, while also reducing the number of parameters to facilitate the verification task. The German Traffic Sign Recognition Benchmark (GTSRB) [6] was used for training, and testing involved similar images from GTSRB, as well as the Belgian [2] and Chinese [5] datasets.

This paper builds upon the models with the best accuracy from the previous study [17] and presents a set of benchmark problems to verify local robustness properties of these models. The novelty of the proposed benchmarks lies in the fact that traffic sign recognition is done using binarized neural networks; to the best of our knowledge, this was not done before [9, 19]. Compared to existing benchmarks, the types of layers used determine a complex verification problem and include _binarized convolution layers_ to capture advanced features from the image dataset, _max pooling layers_ for model size reduction while retaining relevant features, _batch normalization layers_ for scaling, and _fully connected (dense) layers_. The difficulty of the verification problem is given by the high number of network parameters (905k - 1.7M), the input dimension (2.7k-12k), and the number of regions (43), as well as by the fact that the neural networks are not sparse. Discussions with organizers and competitors in the Verification of Neural Network Competition (VNN-COMP)1 revealed that no tool competing in 2022 could handle the proposed benchmark. Additionally, in VNN-COMP 2023 [4], the benchmark was considered fairly complex by the main developer of the winning solver \(\alpha,\beta\)-CROWN2. Footnote 1: [https://github.com/stanleybak/vnncomp2023/issues/2](https://github.com/stanleybak/vnncomp2023/issues/2) Footnote 2: [https://github.com/Verified-Intelligence/alpha-beta-CROWN](https://github.com/Verified-Intelligence/alpha-beta-CROWN) We publicly released our benchmark in May 2023. In the VNN-COMP 2023, which took place in July 2023, our benchmark was used in scoring, being nominated by at least 2 competing tools. Four out of 7 tools were able to find an answer for the randomly selected instances. Most instances were solved by \(\alpha,\beta\)-CROWN (39 out of 45), but it received penalties for 3 results due to either an incorrect answer or a missing counterexample. Most correct answers were given by Marabou3 (19), with only 1 incorrect answer.
Footnote 3: [https://github.com/NeuralNetworkVerification/Marabou](https://github.com/NeuralNetworkVerification/Marabou)

Currently, we are investigating the reasons why the tools were not able to solve all instances and why incorrect answers were given. Additionally, more tests will be performed on randomly generated instances, and we will examine the particularities of the input images and of the trained networks which cannot be handled by the solvers due to timeouts or incorrect answers. The rest of the paper is organized as follows. In Section 2 we present related work, focusing on comparing the proposed benchmark with others competing in VNN-COMP. Section 3 briefly describes deep neural networks and binarized neural networks and formulates the robustness property. In Section 4 we describe the anatomy of the trained neural networks whose local robustness is checked. In Section 5 we introduce the verification problem and its canonical representation (VNN-LIB and ONNX formats). Section 6 presents the methodology for benchmark generation and the results of the VNN-COMP 2023.

## 2 Related Work

There exist many approaches for the verification of neural networks, see [21] for a survey; however, few tackle the verification of binarized neural networks. Verifying properties using Boolean encoding [16] is an alternative approach to validate characteristics of a specific category of neural networks, known as binarized neural networks. These networks possess binary weights and activations. The proposed technique involves reducing the verification problem from a mixed integer linear programming problem to a Boolean satisfiability problem. By encoding the problem in Boolean logic, they exploit the capabilities of modern SAT solvers, combined with a counterexample-guided search method, to verify various properties of these networks. A primary focus of their research is assessing the networks' resilience against adversarial perturbations. The experimental outcomes demonstrate the scalability of this approach when applied to medium-sized deep neural networks employed in image classification tasks. However, their neural networks do not have convolution layers and can handle only a simple dataset like MNIST, where images are black and white and there are just 10 classes to classify. Also, no tool implementing the approach was released to be tested. Paper [7] focuses on the verification of binarized neural networks; it extended the Marabou [15] tool to support _Sign Constraints_ and verified a network that uses both binarized and non-binarized layers. For testing, they used the Fashion-MNIST dataset, which was trained using the XNOR-Net architecture and obtained an accuracy of only 70.97%. This extension could not be used in our case due to the fact that we have binarized convolution layers, which the tool cannot handle. In the verification of neural networks competition (VNN-COMP) in 2022, there were various benchmarks subject to verification [3]; however, there is none involving traffic signs. To the best of our knowledge, there is only one paper which deals with a traffic sign dataset [12], namely GTSRB. However, they considered only subsets of the dataset, and their trained models consist of only fully connected (FC) layers with ReLU activation functions, not convolutions, ranging from 70 to 1300 neurons. Furthermore, they do not mention the accuracy of their trained models, so we cannot compare it with ours. Moreover, the benchmarks from VNN-COMP 2022 [10] used for image classification tasks are shown in Table 1.
As one could observe, no benchmark uses binarized convolutions and batch normalization layers. Discussions with competition organizers revealed the fact that no tool from the 2022 competition could handle our benchmark4.

Footnote 4: See [https://github.com/stanleybak/vnncomp2023/issues/2](https://github.com/stanleybak/vnncomp2023/issues/2), intervention from user stanleybak on May 17, 2023

The report of this year's neural network verification competition (VNN-COMP 2023) is in the draft version, but we present here the differences between our benchmark and the others.

\begin{table} \begin{tabular}{c c c c c} **Category** & **Benchmark** & **Network Types** & **\#Neurons** & **Input Dimension** \\ \hline \multirow{5}{*}{CNN \& ResNet} & Cifar Bias Field & Conv. + ReLU & 45k & 16 \\ & Large ResNets & ResNet (Conv. + ReLU) & 55k - 286k & 3.1k - 12k \\ & Oval21 & Conv. + ReLU & 3.1k - 6.2k & 3.1k \\ & SRI ResNet A/B & ResNet (Conv. + ReLU) & 11k & 3.1k \\ & VGGNet16 & Conv. + ReLU + MaxPool & 13.6M & 1 - 95k \\ \hline Fully-Connected & MNIST FC & FC. + ReLU & 512 - 1.5k & 784 \\ \end{tabular} \end{table} Table 1: Benchmarks proposed in the VNN-COMP 2022 for image classification tasks

Table 2, taken from the draft report, presents all the scored benchmarks, i.e., benchmarks which were nominated by at least 2 competing tools and are used in their ranking. The column Network Type presents the types of layers of the trained neural network, the column # of Params represents the number of parameters of the trained neural network, the column Input Dimension represents the dimension of the input (for example, for an image of dimension 30x30 pixels with an RGB channel the dimension is 30x30x3, which means that the verification problem contains 30x30x3 variables), the Sparsity column represents the degree of sparsity of the trained neural network and, finally, the column # of Regions represents the number of regions determined by the verification problem (for example, for our German Traffic Sign Recognition Benchmark there are 43 traffic sign classes). Our proposed benchmark, Traffic Signs Recognition, is more complex than the others, as it cumulatively involves a high number of parameters, a high input dimension, a high number of regions, and no sparsity.

## 3 Theoretical Background

### Deep Neural Networks

_Neural networks_, inspired by the human brain, are computational models composed of interconnected nodes called artificial neurons. These networks have gained attention for their ability to learn and perform complex tasks. The nodes compute outputs using _activation functions_, and synaptic _weights_ determine the strength of connections between nodes. Training is achieved through optimization algorithms, such as _backpropagation_, which adjust the weights iteratively to minimize the network's error. A _deep neural network (DNN)_ [7] can be conceptualized as a directed graph, where the nodes, also known as neurons, are organized in _layers_. The input layer is responsible for receiving initial values, such as pixel intensities in the case of image inputs, while the output layer generates the final predictions or results. Hidden layers, positioned between the input and output layers, play a crucial role in extracting and transforming information. During the evaluation or inference process, the input values propagate through the network, layer by layer, using connections between neurons.
Each neuron applies a specific mathematical operation to the inputs it receives, followed by the _activation function_ that introduces _nonlinearity_ to the network. The activation function determines the neuron's output based on the weighted sum of its inputs and an optional bias term. Different layer types are employed in neural networks to compute the values of neurons based on the preceding layer's neuron values. Those relevant for our work are introduced in Section 3.2.

\begin{table} \begin{tabular}{c c c c c c} **Name** & **Network Type** & **\# of Params** & \begin{tabular}{c} **Input** \\ **Dimension** \\ \end{tabular} & **Sparsity** & **\# of Regions** \\ \hline nn4sys & Conv, FC, Residual + ReLU, Sigmoid & 33k - 37M & 1-308 & 0-66\% & 1 - 11k \\ \hline VGGNet16 & Conv + ReLU + MaxPool & 138M & 150k & 0-99\% & 1 \\ \hline Collins Rul CNN & Conv + ReLU, Dropout & 60k - 262k & 400-800 & 50-99\% & 2 \\ \hline TLL Verify Bench & FC + ReLU & 17k - 67M & 2 & 0\% & 1 \\ \hline Acas XU & FC + ReLU & 13k & 5 & 0-20\% & 1-4 \\ \hline cGAN & FC, Conv, ConvTranspose, Residual + ReLU, BatchNorm, AvgPool & 500k-68M & 5 & 0-40\% & 2 \\ \hline Dist Shift & FC + ReLU, Sigmoid & 342k-855k & 792 & 98.9\% & 1 \\ \hline ml4acopf & FC, Residual + ReLU, Sigmoid & 4k-680k & 22-402 & 0-7\% & 1-600 \\ \hline Traffic Signs Recogn & Conv+Sign+MaxPool+BatchNorm, FC & 905k-1.7M & 2.7k-12k & 0\% & 43 \\ \hline ViT & Conv, FC, Residual + ReLU, Softmax, BatchNorm & 68k-76k & 3072 & 0\% & 9 \\ \hline \end{tabular} \end{table} Table 2: Benchmarks proposed in the VNN-COMP 2023

### Binarized Neural Networks

A BNN [12] is a feedforward network where weights and activations are mainly binary. [15] describes BNNs as a sequential composition of blocks, each block consisting of linear and non-linear transformations. One could distinguish between _internal_ and _output blocks_. There are typically several _internal blocks_. The layers of the blocks are chosen in such a way that the resulting architecture fulfills the requirements of accuracy, model size, and number of parameters, for example. Typical layers in an internal block are: _1)_ linear transformation (LIN), _2)_ binarization (BIN), _3)_ max pooling (MP), _4)_ batch normalization (BN). A linear transformation of the input vector can be based on a fully connected layer or a convolutional layer. In our case it is a convolutional layer, since our experiments have shown that a fully connected layer cannot synthesize well the features of traffic signs and, therefore, the accuracy is low. The linear transformation is followed either by a binarization or a max pooling operation. Max pooling helps in reducing the number of parameters. One can swap binarization with max pooling; the result would be the same. We use this sequence because Larq [9], the library we used in our experiments, implements convolution and binarization in the same function. Finally, scaling is performed with a batch normalization operation [13]. There is _one output block_ which produces the predictions for a given image. It consists of a dense layer that maps its input to a vector of integers, one for each output label class. It is followed by a function which outputs the index of the largest entry in this vector as the predicted label. We make the observation that, if the MP and BN layers are omitted, then the input and output of the internal blocks are binary, in which case so is the input to the output block.
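To make this block structure concrete, the following is a minimal sketch of one internal block written with the Larq library [9] mentioned above; the filter count and kernel size are illustrative placeholders, not the exact architectures of Section 4.

```
import tensorflow as tf
import larq as lq

# Binarize both inputs and weights with the straight-through sign estimator.
quant = dict(input_quantizer="ste_sign",
             kernel_quantizer="ste_sign",
             kernel_constraint="weight_clip")

def internal_block(x, filters=32):
    x = lq.layers.QuantConv2D(filters, (3, 3), padding="same",
                              use_bias=False, **quant)(x)      # LIN + BIN
    x = tf.keras.layers.MaxPooling2D((2, 2))(x)                # MP
    return tf.keras.layers.BatchNormalization(scale=False)(x)  # BN
```

For the first block one would omit the input quantizer since, as noted next, its input is not binarized.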
The input of the first block is never binarized, as binarizing it drastically reduces the accuracy.

### Properties of (Binarized) Neural Networks: Robustness

_Robustness_ is a fundamental property of neural networks that refers to their ability to maintain stable and accurate outputs in the presence of perturbations or adversarial inputs. Adversarial inputs are intentionally crafted inputs designed to deceive or mislead the network's predictions. As defined by [15], _local robustness_ ensures that for a given input \(x\) from a set \(\chi\), the prediction of the neural network \(F\) remains unchanged within a specified perturbation radius \(\epsilon\), implying that small variations in the input space do not result in different outputs. The output for the input \(x\) is represented by its label \(l_{x}\). We consider the \(L_{\infty}\) norm, defined as \(||x||_{\infty}=\sup\limits_{n}|x_{n}|\), but other norms can also be used, e.g., \(L_{0}\) [17].

**Definition 3.1** (Local robustness.): A feedforward neural network \(F\) is locally \(\epsilon\)-robust for an input \(x,x\in\chi\), if there does not exist \(\tau,||\tau||_{\infty}\leq\epsilon\), such that \(F(x+\tau)\neq l_{x}\).

Figure 1: A fully connected DNN with 4 input nodes, 3 output nodes and 3 hidden layers
Also, the second architecture gave the best average accuracy and the decrease in accuracy for GTSRB and Belgium is small, namely \(1,17\%\) and \(0,39\%\), respectively. One could observe that the best architectures were obtained for input size images \(48\times\)\(48\) and \(64\times\)\(64\) pixels with max pooling and batch normalization layers which reduce the number of neurons, namely perform scaling which leads to good accuracy. We also propose for benchmarking an XNOR architecture, i.e. containing only binary parameters, (Figure 4) for which we obtained the best results for input size images \(30\)x\(30\) pixels and GTSRB (see Table 5). \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{**Input size**} & \multirow{2}{*}{**\#Neur**} & \multicolumn{3}{c|}{**Accuracy**} & \multicolumn{3}{c|}{**\#Params**} & \multicolumn{1}{c|}{**Model Size (in KiB)**} \\ \cline{3-9} & & **German** & **China** & **Belgium** & **Binary** & **Real** & **Total** & **Binary** & **Float-32** \\ \hline 64px \(\times\) 64px & \(1024\) & **96.45** & **81.50** & **88.17** & 1772896 & 2368 & 1775264 & 225.67 & 6932.48 \\ \hline \end{tabular} \end{table} Table 3: Best results for the architecture from Figure 2. Dataset for train: GTSRB. Figure 2: Accuracy Efficient Architecture for GTSRB and Belgium dataset ## 5 Model and Property Specification: VNN-LIB and ONNX Formats The VNN-LIB (Verified Neural Network Library) format [10] is a widely used representation for encoding and exchanging information related to the verification of neural networks. It serves as a standardized format that facilitates the communication and interoperability of different tools and frameworks employed in the verification of neural networks. The VNN-LIB format typically consists of two files that provide a detailed specification of the neural network model (see Section 5.1), along with relevant properties and constraints (see Section 5.2). These files encapsulate important information, including the network architecture, weights and biases, input and output ranges, and properties to be verified. ### Model Representation In machine learning, the representation of models plays a vital role in facilitating their deployment and interoperability across various frameworks and platforms. One commonly used format is the H5 format, which is an abbreviation for _Hierarchical Data Format version 5_. The H5 format provides a structured and efficient means of storing and organizing large amounts of data, including the parameters and architecture of machine learning models. It is widely supported by popular deep learning frameworks, such as TensorFlow and Keras, allowing models to be saved, loaded, and shared in a standardized manner. However, while the H5 format serves as a convenient model representation for specific frameworks, it may lack compatibility when transferring models between different frameworks or performing model verification. This is where the _Open Neural Network Exchange_ (ONNX) format comes into play. ONNX offers a vendor-neutral, open-source alternative that allows models to be represented in a standardized format, enabling seamless exchange and collaboration across multiple deep learning frameworks. The VNN-LIB format, which is used for the formal verification of neural network models, leverages ONNX as its underlying model representation. ### Property specification For property specification, VNN-LIB standard uses the SMT-LIB format. 
The SMT-LIB (Satisfiability Modulo Theories-LIBrary) language [7] is a widely recognized formal language utilized for the formalization of Satisfiability Modulo Theories (SMT) problems.

\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{**Input size**} & \multirow{2}{*}{**\#Neur**} & \multicolumn{3}{c|}{**Accuracy**} & \multicolumn{3}{c|}{**\#Params**} & \multicolumn{2}{c|}{**Model Size (in KiB)**} \\ \cline{3-10} & & **German** & **China** & **Belgium** & **Binary** & **Real** & **Total** & **Binary** & **Float-32** \\ \hline 48px \(\times\) 48px & 256 & **95.28** & **83.90** & **87.78** & 904288 & 832 & 905120 & 113.64 & 3532.80 \\ \hline \end{tabular} \end{table} Table 4: Best results for the architecture from Figure 3. Dataset for train: GTSRB.

Figure 3: Accuracy Efficient Architecture for Chinese dataset

A VNN-LIB file is structured as follows5, and the elements involved have the following semantics for the considered image classification task:
In order to assess the adversarial robustness of these networks, the problem specifications encompassed perturbations within the infinity norm around zero, with radius denoted as \(\epsilon=\{1,3,5,10,15\}\). To achieve this, we randomly selected three distinct images from the test set of the GTSRB dataset for each model and have generated the VNNLIB files for each epsilon in the set, in the way we ended up having 45 VNNLIB files in total. We were constrained to generate the small benchmark which includes just 45 VNNLIB files because of the total timeout which should not exceed 6 hour, this is the maximum timeout for a solver to address all instances, consequently a timeout of 480 seconds was allocated for each instance. For checking the generated VNNLIB specification files for submitted in the VNNCOMP 2023 as specified above as well as to generate new ones you can check [https://github.com/apostovan21/vnncomp2023](https://github.com/apostovan21/vnncomp2023). Our benchmark was used for scoring the competing tools. The results for our benchmark, as presented by the VNN-COMP 2023 organizers, are presented in Table 6. The meaning of the columns is as follows. Verified is number of instances that were UNSAT (no counterexample) and proven by the tool. Falsifieid is number that were SAT (counterexample was found) and reported by the tool. Fastest is the number where the tool was fastest (this did not impact the scoring in this year competition). Penalty is the number where the tool gave the incorrect result or did not produce a valid counterexample. Score is the sum of scores (10 points for each correct answer and \(-150\) for incorrect ones). Percent is the score of the tool divided by the best score for the benchmark (so the tool with the highest score for each benchmark gets 100) and was used to determine final scores across all benchmarks. Currently, we are investigating if the number of solved instances could be higher if the time is increased (the deadline used was 8 minutes). Also, it is interesting why the tools gave incorrect results for some benchmarks. ## 7 Conclusions Building upon our prior study that introduced precise binarized neural network models for traffic sign recognition, this study presents standardized challenges to gauge the resilience of these networks to local variations. These challenges were entered into the VNN-COMP 2023 evaluation, where 4 out of 7 tools produced results. Our current emphasis is on investigating the potential for solving more instances by extending the time limit (formerly set at 8 minutes). Additionally, we are keen to comprehend the factors contributing to incorrect outputs from the tools on specific benchmark tasks. ## Acknowledgements This work was supported by a grant of the Romanian National Authority for Scientific Research and Innovation, CNCS/CCCDI - UEFISCDI, project number PN-III-P1-1.1-TE-2021-0676, within PNCDI III. \begin{table} \begin{tabular}{l c c c c c c} \# & **Tool** & **Verified** & **Falsified** & **Fastest** & **Penalty** & **Score** & **Percent** \\ \hline 1 & Marabou & 0 & 18 & 0 & 1 & 30 & 100\% \\ 2 & PyRAT & 0 & 7 & 0 & 1 & -80 & 0\% \\ 3 & NeuralSAT & 0 & 31 & 0 & 4 & -290 & 0\% \\ 4 & alpha-beta-CROWN & 0 & 39 & 0 & 3 & -60 & 0\% \\ \end{tabular} \end{table} Table 6: VNN-COMP 2023 Results for Traffic Signs Recognition Benchmark
2310.03755
Physics Informed Neural Network Code for 2D Transient Problems (PINN-2DT) Compatible with Google Colab
We present an open-source Physics Informed Neural Network environment for simulations of transient phenomena on two-dimensional rectangular domains, with the following features: (1) it is compatible with Google Colab, which allows automatic execution on a cloud environment; (2) it supports two-dimensional time-dependent PDEs; (3) it provides a simple interface for the definition of the residual loss, boundary condition loss and initial loss, together with their weights; (4) it supports Neumann and Dirichlet boundary conditions; (5) it allows for customizing the number of layers and neurons per layer, as well as for an arbitrary activation function; (6) the learning rate and number of epochs are available as parameters; (7) it automatically differentiates the PINN with respect to spatial and temporal variables; (8) it provides routines for plotting the convergence (with running average), the initial conditions learnt, 2D and 3D snapshots from the simulation, and movies; (9) it includes a library of problems: (a) non-stationary heat transfer; (b) wave equation modeling a tsunami; (c) atmospheric simulations including thermal inversion; (d) tumor growth simulations.
Paweł Maczuga, Maciej Sikora, Maciej Skoczeń, Przemysław Rożnawski, Filip Tłuszcz, Marcin Szubert, Marcin Łoś, Witold Dzwinel, Keshav Pingali, Maciej Paszyński
2023-09-24T07:08:36Z
http://arxiv.org/abs/2310.03755v2
Physics Informed Neural Network Code for 2D Transient Problems (PINN-2DT) Compatible with Google Colab ###### Abstract We present an open-source Physics Informed Neural Network environment for simulations of transient phenomena on two-dimensional rectangular domains, with the following features: (1) it is compatible with Google Colab, which allows automatic execution on a cloud environment; (2) it supports two-dimensional time-dependent PDEs; (3) it provides a simple interface for the definition of the residual loss, boundary condition loss and initial loss, together with their weights; (4) it supports Neumann and Dirichlet boundary conditions; (5) it allows for customizing the number of layers and neurons per layer, as well as for an arbitrary activation function; (6) the learning rate and number of epochs are available as parameters; (7) it automatically differentiates the PINN with respect to spatial and temporal variables; (8) it provides routines for plotting the convergence (with running average), the initial conditions learnt, 2D and 3D snapshots from the simulation, and movies; (9) it includes a library of problems: (a) non-stationary heat transfer; (b) wave equation modeling a tsunami; (c) atmospheric simulations including thermal inversion; (d) tumor growth simulations. **Keywords:** Physics Informed Neural Networks, 2D non-stationary problems, Google Colab, Wave equations, Atmospheric simulations, Tumor growth simulations

## 1 Program summary

_Program Title:_ PINN-2DT _Licensing provisions:_ MIT license (MIT) _Programming language:_ Python _Nature of problem:_ Solving non-stationary problems in 2D _Solution method:_ Physics Informed Neural Networks. The implementation requires definition of the PDE loss, the initial conditions loss, and the boundary conditions loss _Additional comments including Restrictions and Unusual features:_ The code is prepared in a way to be compatible with Google Colab

## 2 Introduction

The goal of this paper is to replace the functionality of the time-dependent solver we published using isogeometric analysis and a fast alternating directions solver [5, 6, 7] with a Physics Informed Neural Network (PINN) python library that can be easily executed on Colab. The PINN, proposed in 2019 by Prof. Karniadakis, revolutionized the way in which neural networks find solutions to initial-value problems described using partial differential equations [1]. This method treats the neural network as a function approximating the solution of the given partial differential equation, \(u(x)=PINN(x)\). After computing the necessary differential operators, the neural network and its differential operators are inserted into the partial differential equation. The residuum of the partial differential equation together with the boundary and initial conditions is taken as the loss function. The learning process involves sampling the loss function at different points by calculating the PDE residuum and the residua of the initial and boundary conditions. The PINN methodology has had exponential growth in the number of papers and citations since its creation in 2019. It has multiple applications, from solid mechanics [15], geology [4], medical applications [11], and even the phase-field modeling of fracture [14]. Why use PINN solvers instead of classical or higher order finite element methods (e.g., isogeometric analysis) solvers? PINN/VPINN solvers have affordable computational costs. They can be easily implemented using pre-existing libraries and environments (like PyTorch and Google Colab). They are easily parallelizable, especially on GPU.
They have great approximation capabilities, and they enable finding solutions to a family of problems. With the introduction of modern stochastic optimizers such as ADAM [3], they easily find high-quality minimizers of the loss functions employed. In this paper, we present the PINN library with the following features:

* It is implemented in PyTorch and compatible with Google Colab.
* It supports two-dimensional problems defined on a rectangular domain.
* It is suitable for smooth problems without singularities resulting from large contrast material data.
* It enables the definition of the PDE residual loss function in the space-time domain.
* It supports the loss function for defining the initial condition.
* It provides loss functions for Neumann and Dirichlet boundary conditions.
* It allows for customization of the loss functions and their weights.
* It allows for defining an arbitrary number of layers of the neural network and an arbitrary number of neurons per layer.
* The learning rate, the kind of activation function, and the number of epochs are problem-specific parameters.
* It automatically performs differentiation of the PINN with respect to spatial and temporal variables.
* It provides tools for plotting the convergence of all the loss functions, together with the running average.
* It enables the plotting of the exact and learned initial conditions.
* It plots 2D or 3D snapshots from the simulations.
* It generates gifs with the simulation animation.

We illustrate our PINN-2DT code with four numerical examples. The first one concerns the model heat transfer problem. The second one presents the solution to the wave equation. The third one is the simulation of thermal inversion, and the last one is the simulation of brain tumor growth.

There are the following available PINN libraries. First and most important is the DeepXDE library [12] by the team of Prof. Karniadakis. It is an extensive library with huge functionality, including ODEs, PDEs, complex geometries, different initial and boundary conditions, and forward and inverse problems. It supports several tensor libraries such as TensorFlow, PyTorch, JAX, and PaddlePaddle. Another interesting library is IDRLnet [13]. It uses pytorch, numpy, and Matplotlib. This library is illustrated on four different examples, namely the wave equation, Allen-Cahn equations, Volterra integrodifferential equations, and variational minimization problems. What is the novelty of our library? Our library is very simple to use and compatible with Google Colab. It is a natural "copy" of the functionality of the IGA-ADS library [5] into the PINN methodology. It contains a simple, straightforward interface for solving different time-dependent problems. Our library can be executed without accessing an HPC center, just by using the Colab functionality. The structure of the paper is the following. In Section 3, we recall the general idea of PINN on the example of the heat transfer problem. Section 4 is devoted to our code structure, from the Colab implementation, model parameters, and basic Python classes, to how we define initial and boundary conditions and loss functions, how we run the training, and how we process the output. Section 5 provides four examples from heat transfer, the wave equation, thermal inversion, and tumor growth simulations. We conclude the paper in Section 6.

## 3 Physics Informed Neural Network for transient problems on the example of heat transfer problem

Let us consider a strong form of the exemplary transient PDE, the heat transfer problem.
Find \(u\in C^{2}(0,1)\) for \((x,y)\in\Omega=[0,1]^{2}\), \(t\in[0,T]\) such that

\[\underbrace{\frac{\partial u(x,y,t)}{\partial t}}_{\text{temperature evolution}}-\underbrace{\varepsilon\frac{\partial^{2}u(x,y,t)}{\partial x^{2}}-\varepsilon\frac{\partial^{2}u(x,y,t)}{\partial y^{2}}}_{\text{diffusion term}}=\underbrace{f(x,y,t)}_{\text{forcing}},\quad(x,y,t)\in\Omega\times[0,T], \tag{1}\]

with initial condition

\[u(x,y,0)=u_{0}(x,y) \tag{2}\]

and zero-Neumann boundary condition

\[\frac{\partial u}{\partial n}=0\ (x,y)\in\partial\Omega \tag{3}\]

In the Physics Informed Neural Network approach, the neural network is the solution, namely

\[u(x,y,t)=PINN(x,y,t)=A_{n}\sigma\left(A_{n-1}\sigma(\dots\sigma(A_{1}[x,y,t]+B_{1})\dots)+B_{n-1}\right)+B_{n} \tag{4}\]

where \(A_{i}\) are matrices representing DNN layers, \(B_{i}\) represent bias vectors, and \(\sigma\) is the non-linear activation function, e.g., sigmoid, which, as we have shown in [2], is the best choice for PINN. We define the loss function as the residual of the PDE

\[LOSS_{PDE}(x,y,t)=\left(\frac{\partial PINN(x,y,t)}{\partial t}-\varepsilon\frac{\partial^{2}PINN(x,y,t)}{\partial x^{2}}-\varepsilon\frac{\partial^{2}PINN(x,y,t)}{\partial y^{2}}-f(x,y,t)\right)^{2} \tag{5}\]

We also define the loss for training the initial condition as the residual of the initial condition

\[LOSS_{Init}(x,y,0)=\left(PINN(x,y,0)-u_{0}(x,y)\right)^{2} \tag{6}\]

as well as the loss of the residual of the boundary condition

\[LOSS_{BC}(x,y,t)=\left(\frac{\partial PINN(x,y,t)}{\partial n}(x,y,t)-0\right)^{2} \tag{7}\]

The sketch of the training procedure is the following:

* Select points \((x,y,t)\in\Omega\times[0,T]\) randomly
* Correct the weights using the strong loss
\[A_{i,j}^{k}=A_{i,j}^{k}-\eta\frac{\partial LOSS_{PDE}(x,y,t)}{\partial A_{i,j}^{k}} \tag{8}\]
\[B_{i}^{k}=B_{i}^{k}-\eta\frac{\partial LOSS_{PDE}(x,y,t)}{\partial B_{i}^{k}} \tag{9}\]
where \(\eta\in(0,1)\) is the training rate.
* Select point \((x,y)\in\partial\Omega\) randomly
\[A_{i,j}^{k}=A_{i,j}^{k}-\eta\frac{\partial LOSS_{BC}(x,y,t)}{\partial A_{i,j}^{k}} \tag{10}\]
\[B_{i}^{k}=B_{i}^{k}-\eta\frac{\partial LOSS_{BC}(x,y,t)}{\partial B_{i}^{k}} \tag{11}\]
where \(\eta\in(0,1)\).
* Select point \((x,y,0)\in\Omega\times\{0\}\) randomly
\[A_{i,j}^{k}=A_{i,j}^{k}-\eta\frac{\partial LOSS_{Init}(x,y,0)}{\partial A_{i,j}^{k}} \tag{12}\]
\[B_{i}^{k}=B_{i}^{k}-\eta\frac{\partial LOSS_{Init}(x,y,0)}{\partial B_{i}^{k}} \tag{13}\]
where \(\eta\in(0,1)\).
* Repeat until \(w_{PDE}LOSS_{PDE}+w_{BC}LOSS_{BC}+w_{Init}LOSS_{Init}\leq\delta\)

In practice, this simple stochastic gradient method is replaced by a more sophisticated one, e.g., the ADAM method [3].

## 4 Structure of the code

### Colab implementation

Our code is available at [https://github.com/pmaczuga/pinn-notebooks](https://github.com/pmaczuga/pinn-notebooks)

The code can be downloaded, opened in Google Colab, and executed in the fully automatic mode. The code has been created to be compatible with Google Colab, and it employs the PyTorch library.

```
from typing import Callable
import matplotlib.pyplot as plt
import numpy as np
import torch
...
```

The code can automatically run on a cluster of GPUs, as provided by the Google Colab computing environment

```
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
```

### Parameters

There are the following model parameters that the user can define:

* LENGTH, TOTAL_TIME. The code works in the space-time domain, where the training is performed by selecting points along the \(x\), \(y\), and \(t\) axes.
The LENGTH parameter defines the dimension of the domain along the \(x\) and \(y\) axes. The domain dimension is [0,LENGTH]x[0,LENGTH]x[0,TOTAL_TIME]. The TOTAL_TIME parameter defines the length of the space-time domain along the \(t\) axis. It is the total time of the transient phenomena we want to simulate.
* N_POINTS. This parameter defines the number of points used for training. By default, the points are selected randomly along the \(x\), \(y\), and \(t\) axes. It is easily possible to extend the code to support different numbers of points or different distributions of points along different axes of the coordinate system.
* N_POINTS_PLOT. This parameter defines the number of points used for probing the solution and plotting the output plots after the training.
* WEIGHT_RESIDUAL, WEIGHT_INITIAL, WEIGHT_BOUNDARY. These parameters define the weights for the training of the residual, initial condition, and boundary condition loss functions.
* LAYERS, NEURONS_PER_LAYER. These parameters define the neural network by providing the number of layers and the number of neurons per neural network layer.
* EPOCHS and LEARNING_RATE provide the number of epochs and the training rate for the training procedure.

Below we provide the exemplary values of the parameters as employed for the wave equation simulations

```
# Parameters
LENGTH = 2.
TOTAL_TIME = .5
N_POINTS = 15
N_POINTS_PLOT = 150
WEIGHT_RESIDUAL = 0.03
WEIGHT_INITIAL = 1.0
WEIGHT_BOUNDARY = 0.0005
LAYERS = 10
NEURONS_PER_LAYER = 120
EPOCHS = 150_000
LEARNING_RATE = 0.00015
GRAVITY = 9.81
```

### PINN class

The PINN class defines the functionality for a simple neural network accepting three features as input, namely the values of \((x,y,t)\), and returning a single output, namely the value of the solution \(u(x,y,t)\). We provide the following features:

* The f routine computes the values of the approximate solution at point \((x,y,t)\).
* The routines dfdt, dfdx, dfdy compute the derivatives of the approximate solution at point \((x,y,t)\) with respect to either \(x\), \(y\), or \(t\) using the PyTorch autograd method.

```
class PINN(nn.Module):
    def __init__(self, num_hidden: int, dim_hidden: int, act=nn.Tanh()):
        ...

def f(pinn: PINN, x: torch.Tensor, y: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
    return pinn(x, y, t)

def df(output: torch.Tensor, input: torch.Tensor, order: int = 1) -> torch.Tensor:
    df_value = output
    for _ in range(order):
        df_value = torch.autograd.grad(
            df_value,
            input,
            grad_outputs=torch.ones_like(input),
            create_graph=True,
            retain_graph=True,
        )[0]
    return df_value

def dfdt(pinn: PINN, x: torch.Tensor, y: torch.Tensor, t: torch.Tensor, order: int = 1):
    f_value = f(pinn, x, y, t)
    return df(f_value, t, order=order)

def dfdx(pinn: PINN, x: torch.Tensor, y: torch.Tensor, t: torch.Tensor, order: int = 1):
    f_value = f(pinn, x, y, t)
    return df(f_value, x, order=order)

def dfdy(pinn: PINN, x: torch.Tensor, y: torch.Tensor, t: torch.Tensor, order: int = 1):
    f_value = f(pinn, x, y, t)
    return df(f_value, y, order=order)
```
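To make this interface concrete, the following is a minimal usage sketch of the class and derivative helpers above; the constructor arguments and tensor shapes are our assumptions for illustration, not part of the library's documented API.

```
# Minimal usage sketch (assumed shapes and constructor arguments).
pinn = PINN(num_hidden=4, dim_hidden=80).to(device)

# A few sample space-time points; requires_grad is needed for autograd derivatives.
x = torch.rand(10, 1, device=device, requires_grad=True)
y = torch.rand(10, 1, device=device, requires_grad=True)
t = torch.rand(10, 1, device=device, requires_grad=True)

u = f(pinn, x, y, t)                  # approximate solution u(x, y, t), shape (10, 1)
u_t = dfdt(pinn, x, y, t)             # first derivative with respect to t
u_xx = dfdx(pinn, x, y, t, order=2)   # second derivative with respect to x
```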
### Processing initial and boundary conditions

Since the training is performed in the space-time domain [0,LENGTH]x[0,LENGTH]x[0,TOTAL_TIME], we provide
* in get_interior_points, the functionality to identify the points for the training of the residual loss,
* in get_initial_points, the functionality to identify the points for the training of the initial loss, and
* in get_boundary_points, the functionality to identify the points for the training of the boundary loss.

```
def get_boundary_points(x_domain, y_domain, t_domain, n_points,
                        device=torch.device("cpu"), requires_grad=True):
    """Returns points on the four side walls of the space-time domain."""
    x_linspace = torch.linspace(x_domain[0], x_domain[1], n_points)
    y_linspace = torch.linspace(y_domain[0], y_domain[1], n_points)
    t_linspace = torch.linspace(t_domain[0], t_domain[1], n_points)

    x_grid, t_grid = torch.meshgrid(x_linspace, t_linspace, indexing="ij")
    y_grid, _ = torch.meshgrid(y_linspace, t_linspace, indexing="ij")

    x_grid = x_grid.reshape(-1, 1).to(device)
    x_grid.requires_grad = requires_grad
    y_grid = y_grid.reshape(-1, 1).to(device)
    y_grid.requires_grad = requires_grad
    t_grid = t_grid.reshape(-1, 1).to(device)
    t_grid.requires_grad = requires_grad

    x0 = torch.full_like(t_grid, x_domain[0], requires_grad=requires_grad)
    x1 = torch.full_like(t_grid, x_domain[1], requires_grad=requires_grad)
    y0 = torch.full_like(t_grid, y_domain[0], requires_grad=requires_grad)
    y1 = torch.full_like(t_grid, y_domain[1], requires_grad=requires_grad)

    down = (x_grid, y0, t_grid)
    up = (x_grid, y1, t_grid)
    left = (x0, y_grid, t_grid)
    right = (x1, y_grid, t_grid)

    return down, up, left, right

def get_initial_points(x_domain, y_domain, t_domain, n_points,
                       device=torch.device("cpu"), requires_grad=True):
    x_linspace = torch.linspace(x_domain[0], x_domain[1], n_points)
    y_linspace = torch.linspace(y_domain[0], y_domain[1], n_points)
    x_grid, y_grid = torch.meshgrid(x_linspace, y_linspace, indexing="ij")
    x_grid = x_grid.reshape(-1, 1).to(device)
    x_grid.requires_grad = requires_grad
    y_grid = y_grid.reshape(-1, 1).to(device)
    y_grid.requires_grad = requires_grad
    t0 = torch.full_like(x_grid, t_domain[0], requires_grad=requires_grad)
    return (x_grid, y_grid, t0)

def get_interior_points(x_domain, y_domain, t_domain, n_points,
                        device=torch.device("cpu"), requires_grad=True):
    x_raw = torch.linspace(x_domain[0], x_domain[1], steps=n_points, requires_grad=requires_grad)
    y_raw = torch.linspace(y_domain[0], y_domain[1], steps=n_points, requires_grad=requires_grad)
    t_raw = torch.linspace(t_domain[0], t_domain[1], steps=n_points, requires_grad=requires_grad)
    grids = torch.meshgrid(x_raw, y_raw, t_raw, indexing="ij")
    x = grids[0].reshape(-1, 1).to(device)
    y = grids[1].reshape(-1, 1).to(device)
    t = grids[2].reshape(-1, 1).to(device)
    return x, y, t
```
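As a quick illustration of how these sampling routines are typically called, here is a short sketch; the domain tuples and the printed shape are our assumptions for this example, not prescribed by the library.

```
# Sketch: sampling training points from the space-time domain (assumed domain tuples).
x_domain = (0.0, LENGTH)
y_domain = (0.0, LENGTH)
t_domain = (0.0, TOTAL_TIME)

x, y, t = get_interior_points(x_domain, y_domain, t_domain, N_POINTS, device)
print(x.shape)  # (N_POINTS**3, 1): one column vector per coordinate

x0, y0, t0 = get_initial_points(x_domain, y_domain, t_domain, N_POINTS, device)
down, up, left, right = get_boundary_points(x_domain, y_domain, t_domain, N_POINTS, device)
```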
### Loss functions

Inside the Loss class, we provide interfaces for the definition of the loss functions. Namely, we define the residual_loss, initial_loss, and boundary_loss. Since the initial and boundary losses are universal, and the residual loss is problem specific, we provide fixed implementations for the initial and boundary losses, assuming that the initial state is prescribed in the initial_condition routine and that the boundary conditions are zero Neumann. The code can be easily extended to support different boundary conditions.

```
class Loss:
    ...

    def residual_loss(self, pinn: PINN):
        x, y, t = get_interior_points(self.x_domain, self.y_domain, self.t_domain,
                                      self.n_points, pinn.device())
        u = f(pinn, x, y, t)
        z = self.floor(x, y)
        loss = ...  # HERE DEFINE THE RESIDUAL LOSS
        return loss.pow(2).mean()

    def initial_loss(self, pinn: PINN):
        x, y, t = get_initial_points(self.x_domain, self.y_domain, self.t_domain,
                                     self.n_points, pinn.device())
        pinn_init = self.initial_condition(x, y)
        loss = f(pinn, x, y, t) - pinn_init
        return loss.pow(2).mean()

    def boundary_loss(self, pinn: PINN):
        down, up, left, right = get_boundary_points(self.x_domain, self.y_domain, self.t_domain,
                                                    self.n_points, pinn.device())
        x_down, y_down, t_down = down
        x_up, y_up, t_up = up
        x_left, y_left, t_left = left
        x_right, y_right, t_right = right

        loss_down = dfdy(pinn, x_down, y_down, t_down)
        loss_up = dfdy(pinn, x_up, y_up, t_up)
        loss_left = dfdx(pinn, x_left, y_left, t_left)
        loss_right = dfdx(pinn, x_right, y_right, t_right)

        return loss_down.pow(2).mean() + \
            loss_up.pow(2).mean() + \
            loss_left.pow(2).mean() + \
            loss_right.pow(2).mean()
```

The initial condition is defined in the initial_condition routine, which returns a value of the initial condition at point \((x,y,0)\).

```
# Initial condition
def initial_condition(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    ...
    res = ...  # HERE DEFINE THE INITIAL CONDITION u(x, y, 0)
    return res
```

### Training

During the training, we select the Adam [3] optimizer, and we prescribe that for every 1000 epochs of training, we will write the summary of the values of the residual, initial, and boundary losses. The user can modify this optimizer and the reporting frequency.

```
def train_model(
    nn_approximator: PINN,
    loss_fn: Callable,
    learning_rate: float = 0.01,
    max_epochs: int = 1_000
) -> PINN:
    optimizer = torch.optim.Adam(nn_approximator.parameters(), lr=learning_rate)
    loss_values = []
    residual_loss_values = []
    initial_loss_values = []
    boundary_loss_values = []
    start_time = time.time()
    for epoch in range(max_epochs):
        try:
            loss: torch.Tensor = loss_fn(nn_approximator)
            optimizer.zero_grad()
            loss[0].backward()
            optimizer.step()
            loss_values.append(loss[0].item())
            residual_loss_values.append(loss[1].item())
            initial_loss_values.append(loss[2].item())
            boundary_loss_values.append(loss[3].item())
            if (epoch + 1) % 1000 == 0:
                epoch_time = time.time() - start_time
                start_time = time.time()
                print(f"Epoch: {epoch + 1} - Loss: {float(loss[0].item()):>7f}, "
                      f"Residual Loss: {float(loss[1].item()):>7f}, "
                      f"Initial Loss: {float(loss[2].item()):>7f}, "
                      f"Boundary Loss: {float(loss[3].item()):>7f}")
        except KeyboardInterrupt:
            break
    return nn_approximator, np.array(loss_values), \
        np.array(residual_loss_values), \
        np.array(initial_loss_values), \
        np.array(boundary_loss_values)
```
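A typical call is sketched below, under the assumption that a Loss instance is callable and returns the tuple (total, residual, initial, boundary) consumed by train_model above; the Loss constructor signature shown here is our guess for illustration only.

```
# Sketch: wiring the loss and running the training (assumed Loss constructor signature).
loss_fn = Loss(x_domain, y_domain, t_domain, N_POINTS,
               weight_r=WEIGHT_RESIDUAL, weight_i=WEIGHT_INITIAL, weight_b=WEIGHT_BOUNDARY)

pinn, loss_values, residual_losses, initial_losses, boundary_losses = train_model(
    pinn, loss_fn=loss_fn, learning_rate=LEARNING_RATE, max_epochs=EPOCHS)
```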
### Output

We provide several routines for plotting the convergence of the loss function (see Fig. 1),

```
# Plotting: loss function
average_loss = running_average(loss_values, window=100)
fig, ax = plt.subplots(figsize=(8, 6), dpi=100)
ax.set_title('Loss function (running average)')
ax.set_xlabel("Epoch")
ax.set_ylabel("Loss")
ax.plot(average_loss)
ax.set_yscale('log')
```

for plotting the running average of the initial loss (see Fig. 2),

```
average_loss = running_average(initial_loss_values, window=100)
fig, ax = plt.subplots(figsize=(8, 6), dpi=100)
ax.set_title('Initial loss function (running average)')
ax.set_xlabel("Epoch")
ax.set_ylabel("Loss")
ax.plot(average_loss)
ax.set_yscale('log')
```

for plotting the initial conditions in 2D (see Fig. 3),

```
base_dir = '.'
x, y, _ = get_initial_points(x_domain, y_domain, t_domain, N_POINTS_PLOT, requires_grad=False)
z = initial_condition(x, y)
fig = plot_color(z, x, y, N_POINTS_PLOT, N_POINTS_PLOT, "Initial condition - exact")
t_value = 0.0
t = torch.full_like(x, t_value)
z = pinn(x, y, t)
fig = plot_color(z, x, y, N_POINTS_PLOT, N_POINTS_PLOT, "Initial condition - PINN")
```

for plotting the initial conditions in 3D (see Fig. 4),

```
# Plotting: initial condition
x, y, _ = get_initial_points(x_domain, y_domain, t_domain, N_POINTS_PLOT, requires_grad=False)
z = initial_condition(x, y)
fig = plot_3D(z, x, y, N_POINTS_PLOT, N_POINTS_PLOT, "Initial condition - exact")
z = pinn(x, y, t)
fig = plot_3D(z, x, y, N_POINTS_PLOT, N_POINTS_PLOT, "Initial condition - PINN")
```

Figure 3: Heat equation. Initial conditions in 2D.
Figure 4: Heat equation. Initial conditions in 3D.

for plotting the snapshots of the solution (see Fig. 5),

```
def plot(idx, t_value):
    t = torch.full_like(x, t_value)
    z = pinn(x, y, t)
    fig = plot_color(z, x, y, N_POINTS_PLOT, N_POINTS_PLOT, f"PINN for t = {t_value}")
    fig = plot_3D(z, x, y, N_POINTS_PLOT, N_POINTS_PLOT, f"PINN for t = {t_value}")
    plt.savefig(base_dir + '/img/img_{:03d}.png'.format(idx))

time_values = np.arange(0, TOTAL_TIME, 0.01)
for idx, t_val in enumerate(time_values):
    plot(idx, t_val)
```

and for the generation of the animated gif with the simulation results.

```
from google.colab import drive
drive.mount('/content/drive')

import imageio

frames = []
for idx in range(len(time_values)):
    image = imageio.v2.imread(base_dir + '/img/img_{:03d}.png'.format(idx))
    frames.append(image)

imageio.mimsave('./tsunami_wave12.gif',  # output gif
                frames,                  # array of input frames
                duration=0.1)            # optional: frames per second
```

## 5 Examples of the instantiation

### Heat transfer

In this section, we present the numerical results for the model heat transfer problem described in Section 3.
The residual loss function \(LOSS_{PDE}(x,y,t)=\left(\frac{\partial PINN(x,y,t)}{\partial t}-\frac{\partial^{2}PINN(x,y,t)}{\partial x^{2}}-\frac{\partial^{2}PINN(x,y,t)}{\partial y^{2}}-f(x,y,t)\right)^{2}\) translates into the following code

```
def residual_loss(self, pinn: PINN):
    x, y, t = get_interior_points(self.x_domain, self.y_domain,
                                  self.t_domain, self.n_points, pinn.device())
    u = f(pinn, x, y, t)
    z = self.floor(x, y)
    loss = dfdt(pinn, x, y, t, order=1) - \
        dfdx(pinn, x, y, t, order=2) - \
        dfdy(pinn, x, y, t, order=2)
    return loss.pow(2).mean()
```

We employ the manufactured solution technique, where we assume the solution of the following form

\[u(x,y,t)=e^{-2\pi^{2}t}\sin(\pi x)\sin(\pi y) \tag{14}\]

over \(\Omega=[0,1]^{2}\). To obtain this particular solution, we set up the zero Dirichlet boundary conditions, which require the following code

```
def boundary_loss_dirichlet(self, pinn: PINN):
    down, up, left, right = get_boundary_points(self.x_domain, self.y_domain,
                                                self.t_domain, self.n_points, pinn.device())
    x_down, y_down, t_down = down
    x_up, y_up, t_up = up
    x_left, y_left, t_left = left
    x_right, y_right, t_right = right

    loss_down = f(pinn, x_down, y_down, t_down)
    loss_up = f(pinn, x_up, y_up, t_up)
    loss_left = f(pinn, x_left, y_left, t_left)
    loss_right = f(pinn, x_right, y_right, t_right)

    return loss_down.pow(2).mean() + \
        loss_up.pow(2).mean() + \
        loss_left.pow(2).mean() + \
        loss_right.pow(2).mean()
```

We also set up the initial state

\[u_{0}(x,y)=\sin\left(\pi x\right)\sin\left(\pi y\right) \tag{15}\]

which translates into the following code

```
def initial_condition(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    res = torch.sin(torch.pi * x) * torch.sin(torch.pi * y)
    return res
```

The default setup of the parameters for this simulation is the following:

```
LENGTH = 1.
TOTAL_TIME = 1.
N_POINTS = 15
N_POINTS_PLOT = 150
WEIGHT_RESIDUAL = 1.0
WEIGHT_INITIAL = 1.0
WEIGHT_BOUNDARY = 1.0
LAYERS = 4
NEURONS_PER_LAYER = 80
EPOCHS = 20_000
LEARNING_RATE = 0.002
```

The convergence of the loss function is presented in Fig. 1. The running average of the loss is presented in Fig. 2. The comparison of exact and trained initial conditions is presented in Fig. 3 in 2D and Fig. 4 in 3D. The snapshot from the simulation is presented in Fig. 5 for time moment \(t=0.1\). The mean square error of the computed simulation is presented in Fig. 6. We can see the high accuracy of the trained PINN results.
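As a quick sanity check (our addition, taking the diffusion coefficient \(\varepsilon=1\), consistent with the residual code above), one can verify by hand that this manufactured solution satisfies the heat equation with zero forcing:

\[\frac{\partial u}{\partial t}=-2\pi^{2}e^{-2\pi^{2}t}\sin(\pi x)\sin(\pi y),\qquad\frac{\partial^{2}u}{\partial x^{2}}+\frac{\partial^{2}u}{\partial y^{2}}=-2\pi^{2}e^{-2\pi^{2}t}\sin(\pi x)\sin(\pi y),\]

so that \(\partial_{t}u-\Delta u=0\), i.e., \(f(x,y,t)=0\), while \(u\) vanishes on \(\partial\Omega\) (consistent with the zero Dirichlet boundary conditions) and reduces to \(u_{0}\) at \(t=0\).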
### Wave equation

In our simulation, we run the wave propagation in the "swimming pool"; thus, we assume \(z(x,y)=0\). It implies some simplifications in the PDE

\[\frac{\partial^{2}u(x,y,t)}{\partial t^{2}}-g\left(\frac{\partial u(x,y,t)}{\partial x}-\frac{\partial z(x,y)}{\partial x}\right)\frac{\partial u(x,y,t)}{\partial x}-g\left(u(x,y,t)-z(x,y)\right)\frac{\partial^{2}u(x,y,t)}{\partial x^{2}}-g\left(\frac{\partial u(x,y,t)}{\partial y}-\frac{\partial z(x,y)}{\partial y}\right)\frac{\partial u(x,y,t)}{\partial y}-g\left(u(x,y,t)-z(x,y)\right)\frac{\partial^{2}u(x,y,t)}{\partial y^{2}}=0 \tag{19}\]

In the Physics Informed Neural Network approach, the neural network represents the solution,

\[u(x,y,t)=PINN(x,y,t)=A_{n}\sigma\left(A_{n-1}\sigma(\dots\sigma(A_{1}[x,y,t]+B_{1})\dots)+B_{n-1}\right)+B_{n} \tag{20}\]

with \(A_{i}\) being the matrices representing layers, \(B_{i}\) the vectors representing biases, and \(\sigma\) the sigmoid activation function [2].

We define the loss function as the residual of the PDE

\[LOSS_{PDE}(x,y,t)=\left(\frac{\partial^{2}PINN(x,y,t)}{\partial t^{2}}-g\left(\frac{\partial PINN(x,y,t)}{\partial x}\right)^{2}-g\left(PINN(x,y,t)-z(x,y)\right)\frac{\partial^{2}PINN(x,y,t)}{\partial x^{2}}\right.\]
\[\left.-g\left(\frac{\partial PINN(x,y,t)}{\partial y}\right)^{2}-g\left(PINN(x,y,t)-z(x,y)\right)\frac{\partial^{2}PINN(x,y,t)}{\partial y^{2}}\right)^{2} \tag{21}\]

This residual translates into the following code

```
def residual_loss(self, pinn: PINN):
    x, y, t = get_interior_points(self.x_domain, self.y_domain,
                                  self.t_domain, self.n_points, pinn.device())
    u = f(pinn, x, y, t)
    z = self.floor(x, y)
    loss = dfdt(pinn, x, y, t, order=2) - GRAVITY * (
        dfdx(pinn, x, y, t) ** 2 + (u - z) * dfdx(pinn, x, y, t, order=2) +
        dfdy(pinn, x, y, t) ** 2 + (u - z) * dfdy(pinn, x, y, t, order=2))
    return loss.pow(2).mean()
```

We also define the loss for the training of the initial condition. It is defined as the residual of the initial condition

\[LOSS_{Init}(x,y,0)=\left(PINN(x,y,0)-u_{0}(x,y)\right)^{2} \tag{22}\]

Similarly, we define the loss of the residual of the boundary conditions

\[LOSS_{BC}(x,y,t)=\left(\frac{\partial PINN(x,y,t)}{\partial n}(x,y,t)-0\right)^{2} \tag{23}\]

We do not have to change the code for the initial and boundary conditions; we just provide an implementation of the initial state

```
def initial_condition(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    r = torch.sqrt((x - LENGTH / 2) ** 2 + (y - LENGTH / 2) ** 2)
    res = 2 * torch.exp(-(r) ** 2 * 30) + 2
    return res
```

The convergence of the loss is summarized in Fig. 7. The snapshots of the simulation are presented in Fig. 8.

Figure 6: Heat equation. Numerical error of the trained PINN solution to the heat transfer problem with manufactured solution.
Figure 7: Wave equation. Convergence of the loss function.
Figure 8: Wave equation simulation.

### Thermal inversion

In this example, we aim to model the thermal inversion effect. The numerical results presented in this section are the PINN version of the thermal inversion simulation performed using the isogeometric finite element method code [5] described in [9]. The scalar field \(u\) in our simulation represents the water vapor forming a cloud. The source represents the evaporation of water particles near the ground. The thermal inversion effect is obtained by introducing the advection field as the gradient of the temperature. Following [10] we define \(\frac{\partial T}{\partial y}=-2\) for the lower half of the domain (\(y<0.5\)), and \(\frac{\partial T}{\partial y}=2\) for the upper half of the domain (\(y>0.5\)).

We focus on advection-diffusion equations in the strong form. We seek the cloud vapor concentration field \([0,1]^{2}\times[0,1]\ni(x,y,t)\to u(x,y,t)\in\mathcal{R}\)

\[\frac{\partial u(x,y,t)}{\partial t}+\left(b(x,y,t)\cdot\nabla\right)u(x,y,t)-\nabla\cdot\left(K\nabla u(x,y,t)\right)=f(x,y,t)\ (x,y,t)\in\Omega\times(0,T] \tag{24}\]
\[\nabla u\cdot n=0\ \text{in}\ \partial\Omega\times(0,T] \tag{25}\]
\[u(x,y,0)=u_{0}(x,y)\ \text{in}\ \Omega\times 0 \tag{26}\]

This PDE translates into

\[\frac{\partial u(x,y,t)}{\partial t}+\frac{\partial T(y)}{\partial y}\frac{\partial u(x,y,t)}{\partial y}-0.1\frac{\partial^{2}u(x,y,t)}{\partial x^{2}}-0.01\frac{\partial^{2}u(x,y,t)}{\partial y^{2}}=f(x,y,t)\ (x,y,t)\in\Omega\times(0,T] \tag{28}\]
\[\nabla u\cdot n=0\ \text{in}\ \partial\Omega\times(0,T] \tag{29}\]
\[u(x,y,0)=u_{0}(x,y)\ \text{in}\ \Omega\times 0 \tag{30}\]

In PINN, the neural network represents the solution,

\[u(x,y,t)=PINN(x,y,t)=A_{n}\sigma\left(A_{n-1}\sigma(\dots\sigma(A_{1}[x,y,t]+B_{1})\dots)+B_{n-1}\right)+B_{n} \tag{31}\]
where \(A_{i}\) are matrices representing DNN layers, \(B_{i}\) represent bias vectors, and \(\sigma\) is the sigmoid activation function. We define the loss function as the residual of the PDE

\[\left(\frac{\partial PINN(x,y,t)}{\partial t}+\frac{\partial T(y)}{\partial y}\frac{\partial PINN(x,y,t)}{\partial y}-0.1\frac{\partial^{2}PINN(x,y,t)}{\partial x^{2}}-0.01\frac{\partial^{2}PINN(x,y,t)}{\partial y^{2}}-f(x,y,t)\right)^{2} \tag{33}\]

This residual translates to the following code

```
def residual_loss(self, pinn: PINN):
    x, y, t = get_interior_points(self.x_domain, self.y_domain,
                                  self.t_domain, self.n_points, pinn.device())
    loss = dfdt(pinn, x, y, t).to(device) \
        + self.dTy(y, t) * dfdy(pinn, x, y, t).to(device) \
        - self.Kx * dfdx(pinn, x, y, t, order=2).to(device) \
        - self.Ky * dfdy(pinn, x, y, t, order=2).to(device) \
        - self.source(y, t)
    return loss.pow(2).mean()
```

We add the definitions of the Kx and Ky variables into the Loss class. We do not change the implementation of the initial and boundary conditions, but we provide the definition of the initial state and forcing

```
def source(self, y, t):
    d = 0.7
    res = torch.clamp((torch.cos(t * math.pi) - d) * 1 / (1 - d), min=0)
    res2 = (150 - 1200 * y) * res
    res3 = torch.where(t <= 0.3, res2, 0)
    res4 = torch.where(y <= 0.125, res3, 0)
    return res4.to(device)
```

During the training, we use the following global parameters

```
LENGTH = 1.
TOTAL_TIME = 1.
N_POINTS = 15
N_POINTS_PLOT = 150
WEIGHT_RESIDUAL = 20.0
WEIGHT_INITIAL = 1.0
WEIGHT_BOUNDARY = 10.0
LAYERS = 2
NEURONS_PER_LAYER = 600
EPOCHS = 30_000
LEARNING_RATE = 0.002
```

The convergence of the loss function is summarized in Fig. 9. The snapshots from the simulations are presented in Fig. 10. In the thermal inversion, the cloud vapor that evaporated from the ground stays close to the ground, due to the distribution of the temperature gradients.

Figure 9: Thermal inversion. Convergence of the loss function.
Figure 10: Thermal inversion simulation.

### Tumor growth

The last example concerns the brain tumor growth model, as described in [11]. We seek the tumor cell density \([0,1]^{2}\times[0,1]\ni(x,y,t)\to u(x,y,t)\in\mathcal{R}\), such that

\[\frac{\partial u(x,y,t)}{\partial t}=\nabla\cdot\left(D(x,y)\nabla u(x,y,t)\right)+\rho u(x,y,t)\left(1-u(x,y,t)\right)\ (x,y,t)\in\Omega\times(0,T] \tag{34}\]
\[\nabla u\cdot n=0\ \text{in}\ \partial\Omega\times(0,T] \tag{35}\]
\[u(x,y,0)=u_{0}(x,y)\ \text{in}\ \Omega\times 0 \tag{36}\]

which translates into

\[\frac{\partial u(x,y,t)}{\partial t}-\frac{\partial D(x,y)}{\partial x}\frac{\partial u(x,y,t)}{\partial x}-D(x,y)\frac{\partial^{2}u(x,y,t)}{\partial x^{2}}-\frac{\partial D(x,y)}{\partial y}\frac{\partial u(x,y,t)}{\partial y}-D(x,y)\frac{\partial^{2}u(x,y,t)}{\partial y^{2}}-\rho u(x,y,t)\left(1-u(x,y,t)\right)=0 \tag{38}\]

Here, \(D(x,y)\) represents the tissue density coefficient, where \(D(x,y)=0.13\) for the white matter, \(D(x,y)=0.013\) for the gray matter, and \(D(x,y)=0\) for the cerebrospinal fluid (see [11] for more details). Additionally, \(\rho=0.025\) denotes the proliferation rate of the tumor cells. We simplify the model and remove the derivatives of the tissue density coefficient:

\[\frac{\partial u(x,y,t)}{\partial t}-D(x,y)\frac{\partial^{2}u(x,y,t)}{\partial x^{2}}-D(x,y)\frac{\partial^{2}u(x,y,t)}{\partial y^{2}}-\rho u(x,y,t)\left(1-u(x,y,t)\right)=0. \tag{39}\]

As usual, in PINN, the neural network represents the solution,

\[u(x,y,t)=PINN(x,y,t)=A_{n}\sigma\left(A_{n-1}\sigma(\dots\sigma(A_{1}[x,y,t]+B_{1})\dots)+B_{n-1}\right)+B_{n} \tag{40}\]
with \(A_{i}\) and \(B_{i}\) representing matrices and bias vectors, and \(\sigma\) the sigmoid activation function. We define the loss function as the residual of the PDE

\[LOSS_{PDE}(x,y,t)=\left(\frac{\partial u(x,y,t)}{\partial t}-\frac{\partial D(x,y)}{\partial x}\frac{\partial u(x,y,t)}{\partial x}-D(x,y)\frac{\partial^{2}u(x,y,t)}{\partial x^{2}}-\frac{\partial D(x,y)}{\partial y}\frac{\partial u(x,y,t)}{\partial y}-D(x,y)\frac{\partial^{2}u(x,y,t)}{\partial y^{2}}-\rho u(x,y,t)\left(1-u(x,y,t)\right)\right)^{2} \tag{41}\]

This translates into the following code:

```
def residual_loss(self, pinn: PINN):
    x, y, t = get_interior_points(self.x_domain, self.y_domain,
                                  self.t_domain, self.n_points, pinn.device())
    rho = 0.025

    def D_fun(x, y) -> torch.Tensor:
        res = torch.zeros(x.shape, dtype=x.dtype, device=pinn.device())
        dist = (x - 0.5) ** 2 + (y - 0.5) ** 2
        res[dist < 0.25] = 0.13
        res[dist < 0.02] = 0.013
        return res

    D = D_fun(x, y)
    u = f(pinn, x, y, t)
    loss = dfdt(pinn, x, y, t) \
        - D * dfdx(pinn, x, y, t, order=2) \
        - D * dfdy(pinn, x, y, t, order=2) \
        - rho * u * (1 - u)
    return loss.pow(2).mean()
```

The initial and boundary condition loss functions are unchanged. The initial state is prescribed in the initial_condition routine.

We summarize in Fig. 11 the convergence of the loss function. We also show how the initial data has been trained in Fig. 12. Additionally, Fig. 13 presents the snapshots from the simulation.

Figure 11: Tumor growth. Convergence of the loss function.
Figure 12: Tumor growth. The trained initial condition.
Figure 13: Tumor growth. Snapshots from the simulation.

## 6 Conclusions

We have created a code [https://github.com/pmaczuga/pinn-notebooks](https://github.com/pmaczuga/pinn-notebooks) that can be downloaded and opened in Google Colab. It can be automatically executed using the Colab functionality. The code provides a simple interface for running two-dimensional time-dependent simulations on a rectangular grid. It provides an interface to define the residual loss, the initial condition loss, and the boundary condition loss. It provides examples of Dirichlet and Neumann boundary conditions. The code also provides routines for plotting the convergence, generating snapshots of the simulations, verifying the initial condition, and generating the animated gifs. We also provide four examples: the heat transfer, the wave equation, the thermal inversion from advection-diffusion equations, and the brain tumor model.

## 7 Acknowledgements

The work of Maciej Paszynski, Witold Dzwinel, Pawel Maczuga, and Marcin Los was supported by the program "Excellence initiative - research university" for the AGH University of Science and Technology. The visit of Maciej Paszynski at the Oden Institute was partially supported by the J. T. Oden Research Faculty Fellowship.
2309.09469
Spiking-LEAF: A Learnable Auditory front-end for Spiking Neural Networks
Brain-inspired spiking neural networks (SNNs) have demonstrated great potential for temporal signal processing. However, their performance in speech processing remains limited due to the lack of an effective auditory front-end. To address this limitation, we introduce Spiking-LEAF, a learnable auditory front-end meticulously designed for SNN-based speech processing. Spiking-LEAF combines a learnable filter bank with a novel two-compartment spiking neuron model called IHC-LIF. The IHC-LIF neurons draw inspiration from the structure of inner hair cells (IHC) and they leverage segregated dendritic and somatic compartments to effectively capture multi-scale temporal dynamics of speech signals. Additionally, the IHC-LIF neurons incorporate the lateral feedback mechanism along with spike regularization loss to enhance spike encoding efficiency. On keyword spotting and speaker identification tasks, the proposed Spiking-LEAF outperforms both SOTA spiking auditory front-ends and conventional real-valued acoustic features in terms of classification accuracy, noise robustness, and encoding efficiency.
Zeyang Song, Jibin Wu, Malu Zhang, Mike Zheng Shou, Haizhou Li
2023-09-18T04:03:05Z
http://arxiv.org/abs/2309.09469v2
# Spiking-LEAF: A Learnable Auditory Front-End for Spiking Neural Networks

###### Abstract

Brain-inspired spiking neural networks (SNNs) have demonstrated great potential for temporal signal processing. However, their performance in speech processing remains limited due to the lack of an effective auditory front-end. To address this limitation, we introduce Spiking-LEAF, a learnable auditory front-end meticulously designed for SNN-based speech processing. Spiking-LEAF combines a learnable filter bank with a novel two-compartment spiking neuron model called IHC-LIF. The IHC-LIF neurons draw inspiration from the structure of inner hair cells (IHC) and they leverage segregated dendritic and somatic compartments to effectively capture multi-scale temporal dynamics of speech signals. Additionally, the IHC-LIF neurons incorporate the lateral feedback mechanism along with spike regularization loss to enhance spike encoding efficiency. On keyword spotting and speaker identification tasks, the proposed Spiking-LEAF outperforms both SOTA spiking auditory front-ends and conventional real-valued acoustic features in terms of classification accuracy, noise robustness, and encoding efficiency.

Zeyang Song\({}^{1}\), Jibin Wu\({}^{2,*}\), Malu Zhang\({}^{3}\), Mike Zheng Shou\({}^{1}\), Haizhou Li\({}^{1,4}\)
\({}^{1}\)National University of Singapore, Singapore \({}^{2}\)The Hong Kong Polytechnic University, Hong Kong SAR, China \({}^{3}\)University of Electronic Science and Technology of China, China \({}^{4}\)Shenzhen Research Institute of Big Data, School of Data Science, The Chinese University of Hong Kong, Shenzhen (CUHK-Shenzhen), China

**Index Terms:** Spiking neural networks, speech recognition, learnable audio front-end, spike encoding

Footnote †: This work was supported in part by National University of Singapore under FD-GhrbICS: Joint Lab for FD-SOI Always-on Intelligent & Connected Systems (Award L2001E0053) and the Hong Kong Polytechnic University under Project P0043563, P0046094, and P0046810.

## 1 Introduction

Recently, brain-inspired spiking neural networks (SNNs) have demonstrated superior performance in sequential modeling [1, 2]. However, their performance in speech processing tasks still lags behind that of state-of-the-art (SOTA) non-spiking artificial neural networks (ANNs) [3, 4, 5, 6, 7, 8, 9]. This is primarily due to the lack of an effective auditory front-end that can synergistically perform acoustic feature extraction and neural encoding with high efficacy and efficiency.

The existing SNN-based auditory front-ends first extract acoustic features from raw audio signals, followed by encoding these real-valued acoustic features into spike patterns that can be processed by the SNN. For feature extraction, many works directly adopt the frequently used acoustic features based on the Mel-scaled filter bank [3, 4, 5] or the GammaTone filter bank [10]. Despite the simplicity of this approach, these handcrafted filter banks are found to be suboptimal in many tasks when compared to learnable filter banks [11, 12, 13, 14]. In another vein of research, recent works have also looked into the neurophysiological processes happening in the peripheral auditory system and developed more complex biophysical models to enhance the effectiveness of feature extraction [15, 16]. However, these methods not only require fine-tuning a large number of hyperparameters but are also computationally expensive for resource-constrained neuromorphic platforms.
For neural encoding, several methods have been proposed that follow the neurophysiological processes within the cochlea [15, 16]. For instance, Cramer et al. proposed a biologically inspired cochlear model with the model parameters directly taken from biological studies [15]. Additionally, other methods propose to encode the temporal variations of the speech signals that are critical for speech recognition. The Send-on-Delta (SOD) [17] and threshold coding methods [10, 18, 19], for instance, encode the positive and negative variations of signal amplitude into spike trains. However, these neural encoding methods lack many essential characteristics seen in the human peripheral auditory system that are known to be important for speech processing, such as feedback adaptation [20].

To address these limitations, we introduce a Spiking LEarnable Audio Front-end model, called Spiking-LEAF. The Spiking-LEAF leverages a learnable auditory filter bank to extract discriminative acoustic features. Furthermore, inspired by the structure and dynamics of the inner hair cells (IHCs) within the cochlea, we further propose a two-compartment neuron model for neural encoding, namely the IHC-LIF neuron. Its two neuronal compartments work synergistically to capture the multi-scale temporal dynamics of speech signals. Additionally, the lateral inhibition mechanism along with a spike regularization loss is incorporated to enhance the encoding efficiency. The main contributions of this paper can be summarized as follows:

* We propose a learnable auditory front-end for SNNs, enabling the joint optimization of feature extraction and neural encoding processes to achieve optimal performance in the given task.
* We propose a two-compartment spiking neuron model for neural encoding, called IHC-LIF, which can effectively extract multi-scale temporal information with high efficiency and noise robustness.
* Our proposed Spiking-LEAF shows high classification accuracy, noise robustness, and encoding efficiency on both keyword spotting and speaker identification tasks.

## 2 Methods

As shown in Fig. 1, similar to other existing auditory front-ends, the proposed Spiking-LEAF model consists of two parts responsible for feature extraction and neural encoding, respectively. For feature extraction, we apply the Gabor 1d-convolution filter bank along with Per-Channel Energy Normalization (PCEN) to perform frequency analysis. Subsequently, the extracted acoustic feature is processed by the IHC-LIF neurons for neural encoding. Given that both the feature extraction and neural encoding parts are parameterized, they can be optimized jointly with the backend SNN classifier.

### Parameterized acoustic feature extraction

In Spiking-LEAF, the feature extraction is performed with a 1d-convolution Gabor filter bank along with the PCEN that is tailored for dynamic range compression [21]. The Gabor 1d-convolution filters have been widely used in speech processing [22, 14], and their formulation can be expressed as:

\[\phi_{n}(t)=e^{i2\pi\eta_{n}t}\frac{1}{\sqrt{2\pi}\sigma_{n}}e^{-\frac{t^{2}}{2\sigma_{n}^{2}}} \tag{1}\]

where \(\eta_{n}\) and \(\sigma_{n}\) denote learnable parameters that characterize the center frequency and bandwidth of filter \(n\), respectively. In particular, for input audio with a sampling rate of 16 kHz, a total of 40 convolution filters, with a window length of 25 ms ranging over \(t=-L/2,...,L/2\) (\(L=401\) samples), are employed in Spiking-LEAF.
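To make Eq. (1) concrete, the following is a minimal sketch (our illustration, not the authors' released code) of how such a complex Gabor kernel can be materialized and applied as a 1d convolution in PyTorch; the real and imaginary parts are implemented as two real-valued kernels, and the initialization values are assumptions.

```
import math
import torch

def gabor_kernels(eta, sigma, L=401):
    # eta, sigma: learnable tensors of shape (n_filters,); eta is a normalized frequency.
    t = torch.arange(L, dtype=torch.float32) - L // 2          # t = -L/2, ..., L/2
    gauss = torch.exp(-t[None, :] ** 2 / (2 * sigma[:, None] ** 2))
    gauss = gauss / (math.sqrt(2 * math.pi) * sigma[:, None])  # 1/(sqrt(2*pi)*sigma_n)
    phase = 2 * math.pi * eta[:, None] * t[None, :]
    return gauss * torch.cos(phase), gauss * torch.sin(phase)  # real and imaginary parts

n_filters = 40
eta = torch.rand(n_filters) * 0.5        # assumed initialization; learned in practice
sigma = 10 + torch.rand(n_filters) * 90  # assumed initialization; learned in practice

real_k, imag_k = gabor_kernels(eta, sigma)
kernels = torch.cat([real_k, imag_k]).unsqueeze(1)             # (2*n_filters, 1, L)

waveform = torch.randn(1, 1, 16000)                            # 1 s of audio at 16 kHz
out = torch.nn.functional.conv1d(waveform, kernels, padding=200)
re, im = out[:, :n_filters], out[:, n_filters:]
F = re ** 2 + im ** 2                                          # squared-modulus filter response
```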
These 1d-convolution filters are applied directly to the audio waveform \(x\) to get the time-frequency representation \(F\). Following the neurophysiological process in the peripheral auditory system, the PCEN [14, 21] is applied subsequently to further compress the dynamic range of the obtained acoustic features:

\[PCEN(F(t,n))=\left(\frac{F(t,n)}{(\varepsilon+M(t,n))^{\alpha_{n}}+\delta_{n}}\right)^{r_{n}}-\delta_{n}^{r_{n}} \tag{2}\]

\[M(t,n)=(1-s)M(t-1,n)+sF(t,n) \tag{3}\]

In Eqs. 2 and 3, \(F(t,n)\) represents the time-frequency representation for channel \(n\) at time step \(t\). \(r_{n}\) and \(\alpha_{n}\) are coefficients that control the compression rate. The term \(M(t,n)\) is the moving average of the time-frequency feature with a smoothing rate of \(s\). Meanwhile, \(\varepsilon\) and \(\delta_{n}\) stand for a positive offset introduced specifically to prevent the occurrence of imaginary numbers in PCEN.

Figure 1: The overall architecture of the proposed SNN-based speech processing framework.
Figure 2: Computational graphs of LIF and IHC-LIF neurons.

### Two-compartment spiking neuron model

The Leaky Integrate-and-Fire (LIF) neuron model [23], with a single neuronal compartment, has been widely used in brain simulation and neuromorphic computing [3, 4, 5, 7, 8]. The internal operations of a LIF neuron, as illustrated in Fig. 2 (a), can be expressed by the following discrete-time formulation:

\[I[t]=\Sigma_{i}w_{i}S[t-1]+b \tag{4}\]
\[U[t]=\beta*U[t-1]+I[t]-V_{th}S[t-1] \tag{5}\]
\[S[t]=\mathbb{H}(U[t]-V_{th}) \tag{6}\]

where \(S[t-1]\) represents the input spike at time step \(t-1\). \(I[t]\) and \(U[t]\) denote the transduced synaptic current and membrane potential, respectively. \(\beta\) is the membrane decaying constant that governs the information decaying rate within the LIF neuron. As indicated by the Heaviside step function in Eq. 6, once the membrane potential exceeds the firing threshold \(V_{th}\), an output spike will be emitted.

Despite its ubiquity and simplicity, the LIF model possesses inherent limitations when it comes to long-term information storage. These limitations arise from two main factors: the exponential leakage of its membrane potential and the resetting mechanism. These factors significantly affect the model's efficacy in sequential modeling. Motivated by the intricate structure of biological neurons, recent work has developed a two-compartment spiking neuron model, called TC-LIF, to address the limitations of the LIF neuron [24]. The neuronal dynamics of TC-LIF neurons are given as follows:

\[I[t]=\Sigma_{i}w_{i}S[t-1]+b \tag{7}\]
\[U_{d}[t]=U_{d}[t-1]+\beta_{d}*U_{s}[t-1]+I[t]-\gamma*S[t-1] \tag{8}\]
\[U_{s}[t]=U_{s}[t-1]+\beta_{s}*U_{d}[t-1]-V_{th}S[t-1] \tag{9}\]
\[S[t]=\mathbb{H}(U_{s}[t]-V_{th}) \tag{10}\]

where \(U_{d}[t]\) and \(U_{s}[t]\) represent the membrane potentials of the dendritic and somatic compartments. \(\beta_{d}\) and \(\beta_{s}\) are two learnable parameters that govern the interaction between the dendritic and somatic compartments. Facilitated by the synergistic interaction between these two neuronal compartments, TC-LIF can retain both short-term and long-term information, which is crucial for effective speech processing [24].
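For readers who prefer code, here is a minimal sketch of one discrete-time update of a layer of TC-LIF neurons, transcribing Eqs. 7-10 directly (our paraphrase, not the authors' implementation); in training, the hard threshold would typically be replaced by a surrogate-gradient function.

```
import torch

def tc_lif_step(s_in, u_d_prev, u_s_prev, s_prev, w, b, beta_d, beta_s, gamma, v_th=1.0):
    """One discrete-time update of a layer of TC-LIF neurons (Eqs. 7-10)."""
    i_t = s_in @ w.T + b                                       # Eq. 7: synaptic current
    u_d = u_d_prev + beta_d * u_s_prev + i_t - gamma * s_prev  # Eq. 8: dendritic compartment
    u_s = u_s_prev + beta_s * u_d_prev - v_th * s_prev         # Eq. 9: somatic compartment (uses U_d[t-1])
    s_t = (u_s >= v_th).float()                                # Eq. 10: Heaviside spike generation
    return s_t, u_d, u_s
```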
### IHC-LIF neurons with lateral feedback

Neuroscience studies reveal that lateral feedback connections are pervasive in the peripheral auditory system, and they play an essential role in adjusting the frequency sensitivity of auditory neurons [25]. Inspired by this finding, as depicted in Figure 2 (b), we further incorporate lateral feedback components into the dendritic compartment and somatic compartment of the TC-LIF neuron, represented by \(I_{f}[t]\) and \(I_{LI}[t]\) respectively. Specifically, each output spike will modulate the neighboring frequency bands with learnable weight matrices \(ZeroDiag(W_{f})\) and \(ZeroDiag(W_{LI})\), whose diagonal entries are all zeros.

The lateral inhibition feedback of hair cells within the cochlea is found to detect sounds below the thermal noise level and in the presence of noise or masking sounds [26, 27]. Motivated by this finding, we further constrain the weight matrix \(W_{LI}\geq 0\) to enforce lateral inhibitory feedback at the somatic compartment, which is responsible for spike generation. This will suppress the activity of neighboring neurons after the spike generation, amplifying the signal of the most activated neuron while suppressing other neurons. This results in a sparse yet informative spike representation of input signals. The neuronal dynamics of the resulting IHC-LIF model can be described as follows:

\[I_{s}[t]=\Sigma_{i}w_{i}S[t-1]+b \tag{11}\]
\[I_{f}[t]=ZeroDiag(W_{f})*S[t-1] \tag{12}\]
\[I_{LI}[t]=ZeroDiag(W_{LI})*S[t-1] \tag{13}\]
\[U_{d}[t]=U_{d}[t-1]+\beta_{d}*U_{s}[t-1]+I_{s}[t]-\gamma*S[t-1]+I_{f}[t] \tag{14}\]
\[U_{s}[t]=U_{s}[t-1]+\beta_{s}*U_{d}[t-1]-V_{th}S[t-1]-I_{LI}[t] \tag{15}\]
\[S[t]=\mathbb{H}(U_{s}[t]-V_{th}) \tag{16}\]

To further enhance the encoding efficiency, we incorporate a spike rate regularization term \(L_{SR}\) into the loss function \(L\). It is applied alongside the classification loss \(L_{cls}\): \(L=L_{cls}+\lambda L_{SR}\), where \(L_{SR}=ReLU(R-SR)\). Here, \(R\) represents the average spike rate per neuron per timestep and \(SR\) denotes the expected spike rate. Any spike rate higher than \(SR\) will incur a penalty, and \(\lambda\) is the penalty coefficient.
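A minimal sketch of this regularized objective follows (our illustration; the tensor shapes and default values are assumptions):

```
import torch
import torch.nn.functional as F

def total_loss(logits, targets, spikes, expected_rate=0.1, lam=1.0):
    # spikes: binary tensor of shape (batch, time, neurons)
    l_cls = F.cross_entropy(logits, targets)
    r = spikes.mean()                    # average spike rate per neuron per timestep (R)
    l_sr = F.relu(r - expected_rate)     # L_SR = ReLU(R - SR)
    return l_cls + lam * l_sr            # L = L_cls + lambda * L_SR
```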
## 3 Experimental Results

In this section, we evaluate our model on keyword spotting (KWS) and speaker identification (SI) tasks. For KWS, we use the Google Speech Commands Dataset V2 [28], which contains 105,829 one-second utterances of 35 commands. For SI, we use the Voxceleb1 dataset [29] with 153,516 utterances from 1,251 speakers, resulting in a classification task with 1,251 classes. We focus our evaluations on the auditory front-end by keeping the model architecture and hyper-parameters of the backend SNN classifier fixed. The source codes will be released to ensure reproducibility.

### Superior feature representation

Table 1 compares our proposed Spiking-LEAF model with other existing auditory front-ends on both KWS and SI tasks. Our results reveal that the Spiking-LEAF consistently outperforms the SOTA spike encoding methods as well as the fbank features [3], demonstrating a superior feature representation power. In the following section, we validate the effectiveness of the key components of Spiking-LEAF: learnable acoustic feature extraction, the two-compartment LIF (TC-LIF) neuron model, lateral feedback \(I_{f}\), lateral inhibition \(I_{LI}\), and the firing rate regularization loss \(L_{SR}\).

### Ablation studies

**Learnable filter bank and two-compartment neuron.** As illustrated in rows 1 and 2 of Table 2, the proposed learnable filter bank achieves a substantial enhancement in feature representation when compared to the widely adopted Fbank feature. Notably, further improvements in classification accuracy are observed (see row 3) when replacing LIF neurons with TC-LIF neurons that offer richer neuronal dynamics. However, it is important to acknowledge that this improvement comes at the expense of an elevated firing rate, which has a detrimental effect on the encoding efficiency.

**Lateral feedback.** Rows 4 and 5 of Table 2 highlight the potential of lateral feedback mechanisms in enhancing classification accuracy, which can be explained by the enhanced frequency sensitivity facilitated by the lateral feedback. Furthermore, the incorporation of lateral feedback is also anticipated to enhance the neuron's robustness in noisy environments. To substantiate this claim, our model is trained on clean samples and subsequently tested on noisy test samples contaminated with noise from the NOISEX-92 [31] and CHiME-3 [32] datasets. Fig. 3 illustrates the results of this evaluation, demonstrating that both the learnable filter bank and lateral feedback mechanisms contribute to enhanced noise robustness. This observation aligns with prior studies that have elucidated the role of the PCEN in fostering noise robustness [14]. Simultaneously, Fig. 4 showcases how the lateral feedback aids in filtering out unwanted spikes.

**Lateral inhibition and spike rate regularization loss.** As seen in Fig. 4 (b), when the spike regularization loss and lateral inhibition are not applied, the output spike representation involves a substantial amount of noise during non-speech periods. Introducing lateral inhibition or spike regularization loss alone cannot fully suppress the noise that appears during such periods (Figs. 4 (b) and (c)). Particularly, introducing the spike regularization loss alone results in a uniform reduction in the output spikes (Fig. 4 (d)). However, this comes along with a notable reduction in accuracy, as highlighted in row 6 of Table 2. Notably, the combination of lateral inhibition and spike rate regularization (Fig. 4 (e)) can effectively suppress the unwanted spikes during non-speech periods, yielding a sparse and yet informative spike representation.

## 4 Conclusion

In this paper, we presented a fully learnable audio front-end for SNN-based speech processing, dubbed Spiking-LEAF. The Spiking-LEAF integrates a learnable filter bank with a novel IHC-LIF neuron model to achieve effective feature extraction and neural encoding. Our experimental evaluation on KWS and SI tasks demonstrated enhanced feature representation power, noise robustness, and encoding efficiency over SOTA auditory front-ends. It, therefore, opens up a myriad of opportunities for ultra-low-power speech processing at the edge with neuromorphic solutions.
\begin{table}
\begin{tabular}{c c c c c}
\hline \hline
Tasks & Front-end & Classifier Structure & Classifier Type & Test Accuracy (\%) \\
\hline
\multirow{9}{*}{KWS} & Fbank [3] & 512-512 & Feedforward & 83.03 \\
 & Fbank+LIF & 512-512 & Feedforward & 85.24 \\
 & Heidelberg [15] & 512-512 & Feedforward & 68.14 \\
 & **Spiking-LEAF** & 512-512 & Feedforward & **92.24** \\
 & Speech2spike [30] & 256-265-256 & Feedforward & 88.5 \\
 & **Spiking-LEAF** & 256-256-256 & Feedforward & **90.47** \\
\cline{2-5}
 & Fbank [3] & 512-512 & Recurrent & 93.58 \\
 & Fbank+LIF & 512-512 & Recurrent & 92.04 \\
 & **Spiking-LEAF** & 512-512 & Recurrent & **93.95** \\
\hline
\multirow{6}{*}{SI} & Fbank & 512-512 & Feedforward & 29.42 \\
 & Fbank+LIF & 512-512 & Feedforward & 27.23 \\
 & **Spiking-LEAF** & 512-512 & Feedforward & **30.17** \\
\cline{2-5}
 & Fbank & 512-512 & Recurrent & 31.76 \\
 & Fbank+LIF & 512-512 & Recurrent & 29.74 \\
 & **Spiking-LEAF** & 512-512 & Recurrent & **32.45** \\
\hline \hline
\end{tabular}
\end{table}

Table 1: Comparison of different auditory front-ends on KWS and SI tasks. Boldface denotes the best model for each network configuration.

\begin{table}
\begin{tabular}{c c c c c c c}
\hline \hline
Acoustic features & Neuron type & \(I_{f}\) & \(I_{LI}\) & \(L_{SR}\) & Firing rate & Accuracy \\
\hline
Fbank & LIF & - & - & - & 17.94\% & 85.24\% \\
Learnable & LIF & - & - & - & 18.25\% & 90.73\% \\
Learnable & TC-LIF & - & - & - & 34.21\% & 91.89\% \\
Learnable & TC-LIF & ✓ & - & - & 40.35\% & 92.24\% \\
Learnable & TC-LIF & ✓ & ✓ & - & 34.54\% & **92.43\%** \\
Learnable & TC-LIF & ✓ & - & ✓ & 15.03\% & 90.82\% \\
Learnable & TC-LIF & ✓ & ✓ & ✓ & **11.96\%** & 92.04\% \\
\hline \hline
\end{tabular}
\end{table}

Table 2: Ablation studies of various components of the proposed Spiking-LEAF model on the KWS task.

Figure 3: Test accuracy on the KWS task with varying SNRs.
Figure 4: The Fbank feature and the spike representations generated by Spiking-LEAF without and with lateral inhibition and spike rate regularization loss.
2310.20601
Functional connectivity modules in recurrent neural networks: function, origin and dynamics
Understanding the ubiquitous phenomenon of neural synchronization across species and organizational levels is crucial for decoding brain function. Despite its prevalence, the specific functional role, origin, and dynamical implication of modular structures in correlation-based networks remains ambiguous. Using recurrent neural networks trained on systems neuroscience tasks, this study investigates these important characteristics of modularity in correlation networks. We demonstrate that modules are functionally coherent units that contribute to specialized information processing. We show that modules form spontaneously from asymmetries in the sign and weight of projections from the input layer to the recurrent layer. Moreover, we show that modules define connections with similar roles in governing system behavior and dynamics. Collectively, our findings clarify the function, formation, and operational significance of functional connectivity modules, offering insights into cortical function and laying the groundwork for further studies on brain function, development, and dynamics.
Jacob Tanner, Sina Mansour L., Ludovico Coletta, Alessandro Gozzi, Richard F. Betzel
2023-10-31T16:37:01Z
http://arxiv.org/abs/2310.20601v1
# Functional connectivity modules in recurrent neural networks: function, origin and dynamics ###### Abstract Understanding the ubiquitous phenomenon of neural synchronization across species and organizational levels is crucial for decoding brain function. Despite its prevalence, the specific functional role, origin, and dynamical implication of modular structures in correlation-based networks remains ambiguous. Using recurrent neural networks trained on systems neuroscience tasks, this study investigates these important characteristics of modularity in correlation networks. We demonstrate that modules are functionally coherent units that contribute to specialized information processing. We show that modules form spontaneously from asymmetries in the sign and weight of projections from the input layer to the recurrent layer. Moreover, we show that modules define connections with similar roles in governing system behavior and dynamics. Collectively, our findings clarify the function, formation, and operational significance of functional connectivity modules, offering insights into cortical function and laying the groundwork for further studies on brain function, development, and dynamics. ## I Introduction The brain is a complex adaptive network with dynamically interacting parts that somehow come together to form the locus of control for our behavior [1; 2; 3]. In the quest to unravel the intricacies of brain function, one observation that has garnered considerable attention is the large-scale coordination of neurons [4; 5; 6]. The synchronization of neural activity across neuronal populations is a pervasive phenomenon, observed across multiple species and levels of neural organization. From the coordinated firing of neurons in cellular assemblies in fish [7; 8; 9], mice [10; 11; 12] and monkeys [13], to large-scale systems like the default mode network [14; 15; 16; 17], it is likely that synchronization plays a pivotal role in brain function. To explore this, correlation-based network approaches serve as an important tool for probing the statistical relationships between neural units at different spatiotemporal scales[18; 19]. Analyses of correlation-based networks often reveal modular structures, characterized by sets of neural units whose activity is highly correlated [20; 21; 22; 23; 24]. Previous studies have shown that the boundaries of modules in correlation networks circumscribe meta-analytic task co-activation patterns [25; 26] and early investigations suggest that correlation-based network modules emerge in the fetal brain and align with areas that will later support vision, movement, and language [27; 28]. In addition, research indirectly related to the origin and function of correlation-based modules at the micro-scale of neurons suggest that modules can form from activity-based plasticity mechanisms [11], and that stimulating these modules can elicit relevant task behavior [10]. Additionally, previous work has modeled this correlation structure as emerging from dynamic interaction with the underlying structural connectome [29; 30; 31]. These correlation-based networks are often referred to as _functional connectivity_ (FC) networks, a term that has sparked some controversy given that the name suggests that these correlations imply a functional relationship (e.g. [32; 33; 34]). 
This is a critique that can doubly be leveled at the analysis of modules in functional connectivity networks, given the temptation to relate them to the rich history of theory and research on the function of modules in structural networks [35; 36; 37; 38; 39; 40; 41; 42; 43]. In this way, the functional relationship implied by functional connectivity modules is contested. Additionally, although previous research has tracked modules in functional networks across development and, indeed, the entire human lifespan [44; 45; 46], the specific origin of functional connectivity modules as well as their specific relationship with the dynamical features of a system, remain unclear. Recurrent neural networks offer trainable systems that operate based on a network of dynamically interacting parts and as such are ideal model organisms for investigating such questions [47; 48; 49; 50; 51; 52; 53; 54; 55]. Not only do these artificial systems provide us with complete access to all the information important for their function, but they also offer a safe and ethically neutral platform for perturbation; an important tool for revealing the underlying causal relationships in a system. While the technology for recording and perturbing neural systems in living brains has been advancing at an increasing pace (e.g. [56; 57; 58; 59; 60]), complete access to _all_ of the information important to the function of a living brain is still an aspirational, and arguably distant goal, and the ethical implications of perturbing these systems remain complex. Our study leverages recurrent neural networks trained on canonical systems neuroscience tasks [61] to investigate three critical questions about functional connectivity modules: 1) What specific functional role do functional connectivity modules serve? 2) What drives their formation? 3) What insights can they provide into system dynamics? Importantly, our research suggests these modules are not merely statistical artifacts but are functional units that encapsulate and specify dynamics that uniquely transform similar types of information. Additionally, our findings indicate that one origin of empirically observed FC modules may lie in the asymmetries of input projections. Finally, this result hints that input projections, particularly from subcortical areas like the thalamus, could serve as a genetically efficient means to encode module specialization in the cortex. Collectively, our findings offer key insights into functional connectivity modules in recurrent neural networks and lay the groundwork for future studies to take advantage of these insights in studying biological brains. ## Results The following results are organized into three sections, speaking to the 1) function, 2) origin, and 3) dynamics of functional connectivity (FC) modules in recurrent neural networks. Each section contains subsections that support the general conclusions for that section. The conclusions for each section are as follows. The _function_ of modules in recurrent neural networks is to hold onto task relevant information. An important _origin_ of these functional modules involves asymmetries in the input projections. Finally, the _dynamics_ of these functional connectivity modules involves the transformation of input information into task relevant information. ### Function In this section, we ask the following question: what is the _function_ of FC modules in recurrent neural networks (RNNs)? 
That is, how do functional modules, specifically, contribute to the RNN's ability to perform its prescribed task? In answering this question, we show that FC modules in RNNs accumulate task-relevant information, which the output layer then reads out in order for the network to make its decision. We also show that the organization of these modules can be used to identify semantically similar information in feed-forward neural networks.

#### Functional connectivity modules take on task-relevant functional significance

In this section we show that FC modules in the recurrent layer of RNNs take on task-relevant functional significance after training on a perceptual decision-making task and a go _vs_ no-go task. These two tasks are common systems neuroscience tasks used to probe information transformation, decision-making, and memory [10; 55; 61]. Additionally, we extend this result by showing a similar effect in feed-forward neural networks trained on the recognition of hand-written digits, and a transformer deep neural network previously trained by OpenAI (weights from GPT-2 [62]).

The perceptual decision-making task presents the RNN with two stimuli drawn from normal distributions with different means (same variance). The task of the RNN is to compute which of the two stimuli come from the distribution with the greater mean. This task requires the RNN to track previous values of both stimuli and to compare them. After a fixation period, the RNN makes its decision based on the relative activity of two output nodes. The output node with the greatest activity corresponds to the RNN's decision about which stimulus came from the distribution with the greater mean (see Fig. 1a for a schematic of this task).

Here, we found that FC modules in the recurrent layer of these RNNs store information about the difference between the means of both these stimuli across the fixation period. We found that, before the dynamics in the recurrent layer, the modules defined by the input layer carry information about the current value of each stimulus (Fig. 1e,g; \(r=0.77,p<10^{-15}\); \(r=0.54,p<10^{-15}\)). After the recurrence, however, this information is transformed into FC modules that carry information about the current difference between the cumulative means of each stimulus during the fixation period (Fig. 1f,h; \(r=0.78,p<10^{-15}\); \(r=-0.86,p<10^{-15}\)). Indeed, the activity of the output nodes that make the decision for this network is highly correlated with the mean activity of these FC modules (Fig. 1b-d; \(r=0.77,p<10^{-15}\); \(r=0.80,p<10^{-15}\)), suggesting that these modules are causally involved in the decision-making process. To verify this, we performed a lesioning analysis wherein we selectively removed connections from the output layer that received information from neurons within a module ("output lesions"; Fig. 1i). This lesioning effectively removes the information within a functional connectivity module from consideration during the decision process. By performing an output lesion on module 1, we saw that task accuracy dropped to nearly 0% for trials where stimulus 1 was the correct decision, while trials where stimulus 2 was the correct decision were not affected (Fig. 1j; two-sample \(t\)-test \(p<10^{-15}\)). In contrast, when we performed an output lesion on module 2, task accuracy was high for trials where stimulus 1 was the correct decision, while dropping to nearly zero for trials where stimulus 2 was the correct decision (Fig. 1k; two-sample \(t\)-test \(p<10^{-15}\)).
The specificity of the effects of these output lesions on different trial types confirms that the information from these FC modules is being used by the network to make its decisions.

We find a similar result with an RNN trained to perform a go _vs_ no-go task. With this task, the RNN receives either a go signal or a no-go signal, and after some delay must indicate which signal it received (see Fig. S1a for a schematic of this task). We find that the FC of the recurrent layer during this task maintains four modules (Fig. S1b) whose activity changes according to which signal was presented (Fig. S1c). Similar to the previous task, we found that the mean activity within the first two FC modules is highly correlated with the output activity in the RNN where the final decision is made by the network (Fig. S1d; \(r=0.72\); \(r=0.73\)). Finally, output lesions to one of the modules significantly dropped the accuracy for trials of the task when the go signal was given, but not for trials where the no-go signal was given to the network. This suggests that this module holds information regarding the presence of the go signal (Fig. S1e-g; two-sample \(t\)-test \(p<10^{-15}\)).

Figure 1: **Functional connectivity modules take on task-relevant functional significance**, (_a_) Schematic describing the perceptual decision-making task and the architecture of the RNN trained to perform it. The input nodes are given stimulus information and a fixation input. The stimuli come from two distributions with different means. The RNN must determine which of the two stimuli comes from the distribution with the greater mean while the fixation input has a value of 1. When the fixation input is zero, the decision is made based on which of the output nodes has the greater activity. (_b_) Functional connectivity of the recurrent layer of an RNN trained on the perceptual decision-making task, reorganized according to a modular partition found using modularity maximization with Louvain. This algorithm found four modules, two of which are labeled 'module 1' and 'module 2'. The other two modules are associated with the fixation input (one activates at the beginning of the fixation period, and the other activates at the end of the fixation period). (_c_) Mean activity of each module for different fixation periods, as well as the difference in the cumulative mean between the stimuli. Notice how the mean activity in each module tracks with this value. (_d_) Two plots showing the correlation between activity in two of the modules and the output activity of the neurons that make the decision for the RNN. (_e_) Input projections create functional connectivity modules. (_f_) Functional connectivity modules are transformed by the recurrent layer. (_g_) Two plots showing that functional connectivity modules created by input projections are related to the current input stimuli values. (_h_) Two plots showing that the recurrent layer of the RNN transforms the information in modules so that they represent the cumulative stimulus difference. (_i_) Schematic showing the process of information lesioning to modules. Information lesions to modules were performed by removing weights in the output layer that sent information from nodes in that module. (_j_/_k_) Two sets of boxplots showing the accuracy and loss on different trials following an information lesion to module 1/2. Accuracy is considered separately for trials where stimulus 1 had the higher mean than stimulus 2, and vice versa.
Taken together, these results suggest that FC modules track task-relevant information that is then used by the network to make decisions. With both RNNs trained to perform a perceptual decision-making task, as well as RNNs trained to perform a go _vs_ no-go task, the mean activity of FC modules was found to correlate highly with the activity of output neurons (where the decision of the network is made). In addition, in both networks output lesions to these modules had specific effects on decision-making related to the information they were tracking.

In addition to our analysis of the function of FC modules in recurrent neural networks, we performed a supplemental analysis to investigate whether or not FC modules played a similar role in feed-forward neural networks. We found that the specific organization of neurons into FC modules in the feed-forward neural network would carry information about the categorical or semantic content of the input (see **Supplemental Section** 1, and Figs. S2 and S3 for more details on our analysis).

### Origin

In the previous section we investigated the function of modules in RNNs, showing that modules carry task-relevant information and relay this information to the output layer. Here, we investigate the origin and development of these modules. First, we demonstrate that FC modules emerge spontaneously in random networks and show that these modules emerge due to asymmetries in the sign and weight of projections from the input layer (input projections) that can be approximated using cosine similarity. Furthermore, we use this approximation to demonstrate a relationship between FC modules in the cortex of mice and humans and the cosine similarity of connections from the thalamus. Finally, we investigate the potential developmental role of initial asymmetries in input projections to the cortex from areas like the thalamus, showing that initial weights can be used to guide the development of FC modules across learning.

#### Input projections influence the modular structure of functional connectivity in both synthetic and real brains

In previous sections, we showed that FC modules in artificial neural networks carry task-relevant information, but what are the origins of these FC modules? Surprisingly, we found that FC modules are present prior to and throughout the training of these networks. In fact, we found that training these networks - as well as feed-forward neural networks - can be interpreted as a search for the correct set of modules. Indeed, the rate of exploration is related to network performance across training (\(r=-0.87\); see **Supplemental Section** 2 and Fig. S6).

The presence of these FC modules before the networks were trained presented us with an insight about their origins. Before training, the weights of the neural network were randomly assigned. This meant that FC modules could be produced by randomly assigned weights. Upon further investigation, we found that these FC modules were being produced by the input layer of the RNN (Fig. S7b,c). That is, before the activity was transformed by the dynamics of the recurrent layer, this activity was already modular. Furthermore, we could modulate the number of FC modules created by the input layer by changing the number of input neurons (Fig. S7d, \(r=0.84\)). In the parlance of linear algebra, the input layer is simply a projection from an \(N\)-dimensional space to an \(M\)-dimensional space, where \(N\) is the number of input neurons, and \(M\) is the number of neurons in the next layer.
For this reason, these FC modules could be recovering the low-dimensional structure of the \(N\) input dimensions. Although appealing, this suggestion is incomplete given that these modules are still found when \(N>M\), as in the case of our supplemental analyses on feed-forward neural networks (Fig. S2). Instead, we hypothesized that FC modules are created through weight-based "competition" for output activity. That is, some sets of outputs will receive more weight from input neuron 1, another set will receive more weight from input neuron 2, and so on. Each of these sets will correspond to a different module (and each module's activity will be related to the "winning" input; see Fig. 1e). When the number of input neurons \(N\) gets large enough, some of the input neurons do not "win" any of the output neurons. In a supplemental analysis, we demonstrate this, and show how the outcome of this competition, and therefore the structure of these modules, is also dependent upon the statistics of the input (see **Supplemental Section** 4 and Fig. S14). In other words, this "competition" involves asymmetries in the weights of the connections from input neurons. If one neuron has more weight than the other neuron, it "wins". These asymmetries come in two basic types: asymmetries in sign (+/-), and asymmetries in weight. In a supplemental analysis, we develop generative models of these asymmetries in sign and weight to show how they modulate the modularity of the resulting FC (see **Supplemental Section** 3 and Fig. S4). In this analysis we also show that in a simple model system with only two input neurons, the modules produced by the input layer can be nearly perfectly circumscribed by six connectivity-based rules (Fig. S4n-p). Importantly, we find that these connectivity-based rules can be approximated for larger systems (where \(N>2\)) by taking the cosine similarity of the connection weights (Fig. S8). More specifically, for an \(N\times M\) weight matrix, we can compute the cosine similarity between the input weights of every pair of output neurons \(i\) and \(j\) to produce a connection similarity matrix of size \(M\times M\) (see Fig. 2a for a schematic). By clustering this matrix of connection similarity we find modules that approximate the connectivity-based rules (Fig. 2b and Fig. S8d). Additionally, this similarity matrix is nearly a perfect estimate of the FC of activity produced by sending Gaussian noise through the input projections (Fig. 2c-d; \(r=0.99\)). That said, in RNNs this projection of activity from the input layer acts as a perturbation to the current state of the recurrent layer. This means that these functional connectivity modules will be altered by the _dynamics_ of the recurrent layer. As an example, we have seen how the input projection modules in an RNN trained on the perceptual decision-making task carry information about the current input stimulus value (Fig. 1e,g). However, as this input activity is transformed by the dynamics of the recurrent layer, the new FC modules begin to accumulate information about the current difference between the cumulative means of each stimulus (Fig. 1f,h). It is therefore an open question whether or not the FC modules defined by input projections (_before dynamics_) offer a reasonable partition of the modules found in the FC of the recurrent layer (_after dynamics_).
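To make this construction concrete, the following is a minimal sketch (in NumPy, with illustrative variable names; not the exact code used in this study) of building the connection similarity matrix from an input weight matrix and comparing it to the FC produced by the input layer alone:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 2, 100                            # input neurons, recurrent neurons
W_in = rng.standard_normal((N, M))       # input projection weights (N x M)

# Cosine similarity between the input-weight vectors of every pair of
# recurrent neurons -> an M x M connection similarity matrix.
norms = np.linalg.norm(W_in, axis=0)
similarity = (W_in.T @ W_in) / np.outer(norms, norms)

# FC produced by pushing Gaussian noise through the input layer alone
# (no recurrent dynamics); it should be nearly identical to `similarity`.
noise = rng.standard_normal((N, 5000))   # N input channels x time
fc_before_dynamics = np.corrcoef(W_in.T @ noise)

print(np.corrcoef(similarity.ravel(), fc_before_dynamics.ravel())[0, 1])
```

With independent, unit-variance noise at the inputs, the covariance between two output neurons reduces to the dot product of their input-weight vectors, which is why the correlation structure so closely matches the cosine similarity.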
Figure 2: **Before dynamics, functional connectivity modules are created from the input projections.** (_a_) Schematic showing how to create the cosine similarity matrix representing input connection similarity. The input weights of each pair of output neurons \(i\) and \(j\) are related to one another using cosine similarity. (_b_) The process described in the previous panel results in a cosine similarity matrix representing the similarity of the input weights between all pairs of output neurons. We used modularity maximization with the Louvain algorithm to find modules in this matrix. (_c_) Using the same modular partition from the cosine similarity matrix, we reordered the functional connectivity that emerges from the input projections when inputting Gaussian noise. The modules are a near perfect match. (_d_) The cosine similarity and the functional connectivity that emerges from the input layer are nearly identical.

Here, we directly tested if the modular partition of the connection similarity matrix can be used to partition the FC of the recurrent layer of an RNN trained on a perceptual decision-making task. We can estimate the quality of a partition of FC into modules using the measure of modularity \(Q\). By imposing the modular partition of the connection similarity matrix on the FC of the recurrent layer during task trials, we induced a modularity of \(Q=0.33\) (Fig. 3c). We tested if this value was greater than chance by generating a null distribution of induced \(Q\) values. Briefly, we randomly permuted the order of this partition 1000 times, while preserving the number and size of each community, and calculated the \(Q\) induced by imposing the permuted partition on the network for every permutation. We found that the real induced \(Q\) value was significantly greater than expected by our null model (Fig. 3d; \(p<10^{-15}\)). Indeed, we found a strong positive relationship between the input projection's connection similarity and the FC of the recurrent layer (Fig. 3e; \(r=0.51,p<10^{-15}\)), suggesting that the input projections greatly influence both the overall FC and the modules that it forms, even in the presence of recurrent dynamics.

Taken together, these results suggest that input projections greatly influence the development of FC modules in recurrent neural networks, but it remains unclear whether or not the same can be said for the development of FC modules in _real_ brains. Here, we explore this using structural and functional connectivity data from the brains of both mice and humans. Given our interest in the development of FC modules in the cortex, we hypothesized that the thalamus could be modeled as the input projections to the cortex. In addition to its varied roles in information propagation and modulation [63; 64; 65], the thalamus is also the primary hub for relaying sensory information from the periphery to the cortex. As such, the thalamus might be a good analog of the input projections found in our model. Sensory information from the eyes, ears, and body often passes through the thalamus on its path towards the cortex [65; 66; 67].

First, we tested this in mice (Fig. 3f). We obtained weights for the structural connections from the thalamus to the cortex using publicly available tract tracing data from the Allen Brain Institute [68]. We took the cosine similarity of all in-weights from the thalamus to the left hemisphere of the cortex to create an \(M\times M\) matrix, where \(M\) is the number of cortical regions in the left hemisphere (\(M=2166\)). We then used modularity maximization to partition this similarity matrix into modules (Fig. 3g). We also collected functional magnetic resonance imaging (fMRI) data from lightly anaesthetized mice [69]. When we applied the modular partition of the thalamocortical connection similarity matrix onto the FC of the mouse's cortex, it induced a modularity of \(Q=0.17\) (Fig. 3h). We then generated a null distribution of induced \(Q\) values that we should expect given spatial autocorrelation in fMRI cortical data [70]. This involved randomly reordering nodes in a way that approximately preserved the variogram of the original data [70]. In this way, we produced 100 random partitions. We found that we could induce more modularity (\(Q\)) in mouse cortical FC using the connection similarity matrix partition than expected by chance (Fig. 3i; \(p<10^{-15}\)). We also found a strong positive relationship between thalamocortical similarity and FC values in the cortex (Fig. 3j; \(r=0.5,p<10^{-15}\)). We performed a similar analysis with other subcortical areas (subplate, pallidum, hypothalamus, pons, medulla, midbrain, hippocampal formation, striatum, olfactory areas, cerebellum) and found that the relationship with the thalamus was the greatest (Fig. S15; two-sample \(t\)-test \(p=0.0057\)). This suggests that thalamocortical connections contribute to the FC modules found in the mouse cortex.

Next, we tested this in humans using dense structural and functional connectivity data from the human connectome project [71; 72; 73]. Using the same method that we used with the mice data, we again found that when we applied the modular partition of the thalamocortical projections onto the FC of the human cortex (left hemisphere; number of nodes \(M=29696\); Fig. 3k-m), we could induce more modularity (\(Q\)) than expected by chance (Fig. 3n). We also found a strong positive relationship between thalamocortical similarity and FC values in the cortex in humans (Fig. 3o; \(r=0.26,p<10^{-15}\)). In a supplemental analysis, we found that the thalamocortical connection similarity values were significantly concentrated in 7 brain systems defined based on the correlational structure of resting-state fMRI across 1000 subjects [15; 74] (Fig. S12a-b; two-sample \(t\)-test, all \(p<10^{-15}\)), but the brain systems for which the concentrations were highest were primary sensory systems (visual and somatomotor; Fig. S12c, two-sample \(t\)-test \(p<10^{-15}\)).

Figure 3: **After dynamics, input projections contribute to functional connectivity modules**, (_a_) Schematic of input projections onto the recurrent layer. (_b_) Input projection similarity matrix reordered using modularity maximization. (_c_) Functional connectivity of the recurrent layer reordered by the partition of the input projection similarity matrix. (_d_) Boxplot showing the null distribution of induced modularity (Q) values that we should expect by chance. This was produced by randomly permuting the partition labels from _b_ and applying them to _c_. The real induced modularity (Q) value is in blue. (_e_) Plot showing the relationship between the cosine similarity of input projections and the functional connectivity of the recurrent layer. Dot color and size indicate the number of points that fell in each bin. (_f_/_k_) Schematic showing the thalamus and the cortex in mice/humans. (_g_/_l_) Thalamocortical input projection similarity matrix reordered using modularity maximization (for mice/humans; human matrix down-sampled for plotting). (_h_/_m_) Functional connectivity of the cortex reordered by the partition of the thalamocortical input projection similarity matrix (for mice/humans; human matrix down-sampled for plotting). (_i_/_n_) Boxplot showing the null distribution of induced modularity (Q) values that we should expect by chance. This was produced by randomly permuting the partition labels from _g_/_l_ in a way that maintains the spatial autocorrelation in fMRI data and applying them to _h_/_m_. The real induced modularity (Q) value is in blue. (_j_/_o_) Plot showing the relationship between the cosine similarity of input projections and the functional connectivity of the cortex. Dot color and size indicate the number of points that fell in each bin (for mice/humans).

Taken together, these results suggest that thalamocortical connections contribute to the structure of FC modules found in the cortex of both mice and humans. But what - if anything - does this say about the developmental origin of FC modules in the cortex? We hypothesize that initial weights for the connections from the thalamus to the cortex can bias the development of FC modules. In this way, the thalamus might determine the broad placement of FC modules in the cortex. Importantly though, mice and humans learn, and this learning involves synaptic changes. Therefore, this relationship between initial weights and modules should be robust to weight changes that occur with learning. In a supplemental analysis, we find evidence of this robustness of the development of FC modules to weight changes that occur with learning in recurrent neural networks (see **Supplemental Section** 5 and Fig. S9).

### Dynamics

In this section we report our findings investigating the intersection between functional connectivity (FC) modules and the dynamics of the recurrent layer in RNNs. In the previous sections, we have shown that FC modules can emerge from asymmetries in the weights of input projections (Fig. S4) and that these asymmetries also produce a relationship between the current value of stimulus input information and the activity of the FC modules that emerge from the input projection (see Fig. 1e,g & Fig. S5). That said, RNNs have _dynamics_, and the tasks that they are well-suited to perform involve the memory and/or transformation of information across time. Therefore, the information regarding the current value of the input stimulus must either be maintained or transformed by the dynamics, depending on the nature of the task. Here, we investigate this process directly by comparing the structure of FC modules with the effects of lesioning neurons (or their connections) in the recurrent layer. In the first section on dynamics, we describe the result of lesioning connections within _input projection modules_ to show that the dynamics within these modules specialize in accumulating information within these modules. In the next section, we describe the result of lesioning neurons within _functional connectivity modules in the recurrent layer_ to show that lesions to neurons within the same FC module are likely to have similar effects on the phase portrait of the system.
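Before turning to dynamics, the permutation test used repeatedly in the analyses above (imposing a fixed partition on an FC matrix and comparing the induced \(Q\) against permuted partitions) can be sketched as follows. This toy version uses a simple Newman null model on synthetic data and omits the variogram-preserving spatial null applied to the fMRI data; all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def induced_q(a, labels):
    """Modularity Q induced on connectivity matrix `a` by a fixed
    partition, using a simple Newman null model (the signed variant
    used in the paper is analogous)."""
    k = a.sum(axis=1)
    two_m = a.sum()
    b = a - np.outer(k, k) / two_m                 # B = A - P
    same = labels[:, None] == labels[None, :]
    return b[same].sum() / two_m

# Toy FC with two planted modules and the partition to be tested.
labels = np.repeat([0, 1], 50)
a = 0.1 + 0.4 * (labels[:, None] == labels[None, :]).astype(float)
a += 0.05 * rng.standard_normal((100, 100))
a = (a + a.T) / 2
np.fill_diagonal(a, 0)

q_real = induced_q(a, labels)
q_null = np.array([induced_q(a, rng.permutation(labels))
                   for _ in range(1000)])          # permuted partitions
print(q_real, (q_null >= q_real).mean())           # induced Q, p-value
```

Permuting the labels preserves the number and size of each community while destroying their correspondence to the data, which is exactly the property the null distribution requires.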
#### Lesioning recurrent connections within input projection modules has circumscribed effects on behavior

In this section, we show that the input projection modules, while concentrating stimulus information into certain nodes in the recurrent layer, also circumscribe the specific effects caused by lesions to the weights in the recurrent layer. In order to test the specificity of lesions to the tracking of task-relevant information from each stimulus, we set up a perturbation paradigm where we artificially sent a large amount of input into only one of the input neurons, corresponding to one of the stimuli (see Fig. 4a for a schematic). Given such a perturbation, the RNN will infer that the artificial stimulus comes from a distribution with a much larger mean. We tested the hypothesis that lesions to weights in the recurrent layer that were associated with each input projection module would have specific effects on this inference. One input projection module is defined by having more weight from one stimulus, and the other module is defined as having more weight from the other stimulus. Our hypothesis was that lesions to each module would have specific effects on tracking the stimulus that it was associated with.

Previous work has shown that RNNs trained to perform this perceptual decision-making task set up a dynamical object referred to as a line attractor [55; 76]. A line attractor is a line formed by many fixed-point attractors such that the system's long-term behavior will end up somewhere on this line in state space (Fig. 4b). When such a system is resting on this line attractor and is perturbed, the state of the system will return to another location along the line attractor. These line attractors have previously been shown to be used often by RNNs when they are trained to track continuous variables. In the case of this perceptual decision-making task, stimulation from each input neuron will perturb the state of the RNN in one of two general directions along the line attractor, corresponding to accumulating information about the current difference between the stimulus means. Along the center of this line attractor is a decision boundary (see Fig. 4c for a schematic). When the state of the system is on one side of this boundary it will make one decision, and it will make a different decision on the other side of this boundary. In this way, movement away from the decision boundary corresponds to increasing evidence for one decision and against another decision. Indeed, when we perturbed the RNN at the input for stimulus 1, the state of the system traveled away from the decision boundary and fixed itself on a leftward portion of the line attractor. In contrast, when we perturbed the RNN at the input for stimulus 2, the state of the system traveled away from the decision boundary in the other direction (Fig. 4c).

For our lesioning analysis, we used distance from the decision boundary as a proxy for the network's ability to accumulate relevant evidence for each stimulus. We operationalized the decision boundary based on the difference between the activity of the output neurons that made the decision for the RNN. We then trained 100 RNNs and ran each through four lesioning conditions: _1)_ lesion module 1 and perturb stimulus 1, _2)_ lesion module 1 and perturb stimulus 2, _3)_ lesion module 2 and perturb stimulus 1, and _4)_ lesion module 2 and perturb stimulus 2. For each condition, we gradually lesioned a larger number of positively weighted connections (ordered from largest to smallest).
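A minimal sketch of this lesioning procedure, under the assumption that a module mask over recurrent neurons has already been derived from the input projections (all names illustrative; the perturbation trial and the distance-from-boundary measurement are omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
M = 100
W_rec = rng.standard_normal((M, M)) / np.sqrt(M)   # toy recurrent weights
module = np.zeros(M, dtype=bool)
module[:50] = True                                 # toy module mask

def lesion_module(w_rec, module_mask, k):
    """Zero the k largest positive recurrent weights among connections
    between neurons belonging to the given module."""
    w = w_rec.copy()
    rows, cols = np.where(np.outer(module_mask, module_mask) & (w > 0))
    order = np.argsort(w[rows, cols])[::-1][:k]    # largest weights first
    w[rows[order], cols[order]] = 0.0
    return w

# Gradually lesion more connections, as in the four conditions above
# (lesioned module x perturbed stimulus).
for k in (50, 100, 200):
    w_lesioned = lesion_module(W_rec, module, k)
    print(k, int((W_rec > 0).sum() - (w_lesioned > 0).sum()))
```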
We found that lesions to module 1 moved the system closer to the decision boundary when stimulus 1 was perturbed, but had a limited effect on perturbations to stimulus 2 (sometimes causing the system to move further from the decision boundary; Fig. 4f). In contrast, lesions to module 2 moved the system closer to the decision boundary when stimulus 2 was perturbed, but had a limited effect on perturbations to stimulus 1 (Fig. 4g). We found this general feature of the specificity of lesioning to modules across all 100 RNN models (Fig. 4h and Fig. S10c; two-sample \(t\)-test between stimulus 1 distances and stimulus 2 distances after 200 lesions to module 1: \(p<10^{-15}\), and to module 2: \(p=5.25\times 10^{-13}\)). We also tested this effect on modules defined using the sign of the input projections, and we found the same effect (Fig. S10a,b; two-sample \(t\)-test between stimulus 1 distances and stimulus 2 distances after 200 lesions to module 1: \(p<10^{-15}\), and to module 2: \(p=2.01\times 10^{-14}\)).

Figure 4: **Lesioning recurrent connections within module projections has circumscribed effects on behavior**, (_a_) Schematic showing how we perturb different input neurons during our lesioning trials. (_b_) Plot of the activity in the recurrent layer of a trained RNN projected into the first two principal components. In red, we plot the fixed/slow points approximated using a gradient-descent based method [75]. The activity trajectory is colored according to the correct decision about which stimulus came from the distribution with the greater mean. Note that these colors sometimes overlap given that the cumulative mean can be artificially higher for the incorrect stimulus early in the trial due to sampling variability. (_c_) Same as _b_, but instead of plotting the fixed points, we plot the decision boundary for the network (this is a visual estimate; see main text for how the boundary was calculated). (_d_) Same as _b_, but instead of plotting the fixed points, we plot the trajectories of two perturbation trials. In the green trial we perturbed stimulus 1. In the purple trial we perturbed stimulus 2. The state of the system starts at the black star, and perturbations result in activity that stably rests at the colored stars. (_e_) Schematic showing how we lesioned the weights of the recurrent layer of the network based on the modules defined by the input projections. (_f_) Lesions to module 1 of the input projection cause the end points of stimulus 1 perturbations to move closer to the decision boundary, whereas the end points for the stimulus 2 perturbation in this example move further away from the decision boundary. (_g_) A similar but opposite effect when lesioning module 2. (_h_) These plots show results from our four lesioning conditions (as described in the main text) when applied based on modules defined by the input node giving the most weight to each output node in the input projection. When increasingly lesioning input projection module 1, stimulus 1 perturbations move closer to the decision boundary, but stimulus 2 perturbations do not. The opposite is shown for increasingly lesioning input projection module 2. These lines represent the average distance from the decision boundary across 100 trained RNNs.

Taken together, these results suggest that the FC modules defined by the input projections circumscribe functionally relevant effects of lesions to the weights of the recurrent layer.
This further suggests that the symmetry breaking represented by these weight and sign asymmetries in the input layer is relevant for the development of task-based dynamics.

Figure 5: **Similarity of lesioning effects on flow is related to functional connectivity modules**, (_a_) Quiver plots showing the flow of the activity in the recurrent layer of an RNN trained on the go vs. no-go task (projected onto the first two principal components). Recurrent activity states start in a grid of points arrayed within the total state space that is explored during task trials. Arrows show where the system will end up after a single time step (from the current time step). The arrows eventually settle onto a limit cycle. Blue and red trajectories in the first panel indicate trajectories for different task conditions. (_b_) Plot of the effect of lesions on the flow shown in panel _a_. A scatterplot of colored points is superimposed on a quiver plot describing the flow. The colors indicate the Euclidean distance between the non-lesioned flow after one time step and the flow after lesioning a neuron in the recurrent layer. Although the plot is shown in two principal component dimensions, the distances are calculated in the original dimensions of recurrent layer activity. (_c_) Functional connectivity of the recurrent layer (in an RNN which has _not_ been lesioned). Modularity maximization was used to find modules in this matrix. This partition is also used to reorder the matrix in the next panel representing lesion similarity. (_d_) We individually lesioned the weights from and to every _i_-th neuron in the recurrent layer. This produced a grid of flow distance values for every neuron describing the effects of lesions on the dynamic flow. We then flattened this grid and compared the flow distances between all pairs of neurons \(i\) and \(j\), producing a matrix of similarity values telling us how similar the effects of lesions were between all neurons in the recurrent layer. This matrix was reordered by the partition of functional connectivity into modules found in the previous panel. (_e_) Boxplot comparing the induced modularity (Q) of the lesion similarity matrix when using the modular partition of functional connectivity to a null model that randomly permuted this partition 1000 times. (_f_) Plot showing the relationship between lesion similarity and the functional connectivity of the recurrent layer. Dot color and size indicate the number of points that fell in each bin.

#### Functional connectivity modules circumscribe sets of neurons with similar contributions to dynamics

In the previous section, we showed that the FC modules defined by the input projections delimit dynamics that are specific to the transformation of the input information concentrated in each input projection module. However, these modules are defined before the recurrent layer transforms this input activity via dynamics. An important open question is how the FC modules defined _after dynamics_ relate to the dynamics produced by the weights in the recurrent layer. Here we use a novel lesioning analysis to show that FC modules defined _after dynamics_ circumscribe sets of neurons with similar contributions to the dynamics.

We begin our analysis by visualizing the phase portrait of an RNN trained on the go _vs._ no-go task (Fig. 5a).
A phase portrait can be used to visualize the dynamics of a system by initializing the state of that system in many different locations in state space (typically in a grid) and plotting the direction that each of these points moves in the phase space after some period of time. Here, we created a grid of initial points in the two-dimensional state space defined by the first two principal components of the activity in the recurrent layer. We then propagated this activity by stepping the RNN forward for a single time step and plotted the direction and magnitude of the resulting movement through state space using a quiver plot (Fig. 5a). After a sufficient period of time, all initial points fell onto a limit cycle (Fig. 5b,c).

In order to quantify the effects of lesions to the weights of neurons in the recurrent layer, we analyzed how the dynamics shown in these phase portraits changed following each lesion. Briefly, after lesioning a single neuron, we created a new phase portrait for the lesioned RNN and took the Euclidean distance between the lesioned and non-lesioned location in state space after one time step. Note that this distance was calculated in the original dimensions of the recurrent layer, where \(N=100\) (we refer to this as _flow distance_; see Fig. 5d-f). After lesioning the input and output connection weights for every neuron in the recurrent layer separately, we had an array of flow distance values for lesions to each neuron. We then measured the similarity of the dynamical effects of each neuron on the phase portrait by taking the Pearson correlation between every pair of flow distance arrays, resulting in an \(M\times M\) matrix of similarity values, where \(M\) is the number of neurons in the recurrent layer.

We found that this matrix of lesion similarity values, describing the similarity of the dynamical contributions of different neurons to the phase portrait, was highly related to the structure of the FC matrix _after dynamics_. Not only was there a positive linear relationship between lesion similarity and FC values (Fig. 5f, \(r=0.59,p<10^{-15}\)), but we also found that a modular partition of the FC matrix could be used to identify modules in the lesion similarity matrix (Fig. 5d,e; \(p<10^{-15}\)). We replicated these results in an RNN trained on the perceptual decision-making task (Fig. S11), where again we found a positive relationship between FC and lesion similarity (Fig. S11c,d,f; \(r=0.26,p<10^{-15}\)), and a significant relationship between the modules in the FC matrix and the modules in the lesion similarity matrix (Fig. S11d,e; \(p<10^{-15}\)). These results suggest that FC modules _after dynamics_ circumscribe sets of neurons with similar contributions to the dynamics of the system. Taken together with the results of lesioning input projection FC modules (_before dynamics_), these results suggest that FC modules can be used to identify meaningful task-relevant dynamics that not only hold onto task-relevant information, but also circumscribe the dynamics responsible for transforming input information into its task-relevant counterpart.

## Discussion

Through our research, we offer preliminary answers to three fundamental inquiries about functional connectivity modules, addressing their 1) function, 2) origin, and 3) dynamics. In the ensuing section, we explore the broader implications of these results for understanding functional connectivity modules in brains, suggesting prospective research paths, while acknowledging the limitations of our study.
### The origins of functional connectivity modules

In this paper we have shown that functional connectivity modules in recurrent neural networks are created by asymmetries in the weights of projections from the input layer. We found that these asymmetries concentrate input information into different areas of the recurrent layer, in what we refer to as _"input projection modules"_. Although this information is then transformed via the dynamics of the recurrent layer, we found that we still see a correspondence between the modules defined by the weights of the input layer and the functional connectivity modules of the recurrent layer. We then replicated these results in empirical structural and functional data from both mice and humans, finding agreement between the asymmetries of thalamocortical projections and cortical FC modules. Here, we argue for the plausibility of these results and their potential implications for the development of functional connectivity modules in biological brains.

If we model the cortex as the recurrent layer of this RNN, then a good analog for the input projections would be the thalamus. In tandem with its varied and complex role in information processing and modulation [64; 65; 77; 78], the thalamus is also a critical hub for the communication of sensory information from the periphery to the cortex [65; 66]. Visual information from the retina passes through the lateral geniculate nucleus of the thalamus before being passed on to the visual cortex [66; 67]. Auditory information from the ears first passes through the medial geniculate body before being passed to the auditory cortex [79]. The ventral posterolateral nucleus receives information about pain, crude touch, and temperature from the spine before passing it on to the somatosensory cortex [67; 80]. The thalamus is crucial for relaying information from the senses to the cortex for further processing. Our results suggest that this important role of the thalamus as a relay also gives it a special place in the development of functional connectivity modules in the cortex. Given the clear functional roles these modules play in information processing and dynamics in these RNNs, and empirical results suggesting that functional connectivity modules in the cortex are highly related to the functional specialization of different regions (visual, somatosensory, etc.) [21; 25; 26], this might also have bearing on the development of functional specialization.

By asymmetrically delivering sensory information to different areas of the cortex, the thalamus could perform an important role in _symmetry-breaking_ during the development of cortical functional specialization. Symmetry-breaking is a concept from dynamical systems theory wherein wells or basins form in a previously flat attractor landscape [81]. Where before every state was equally likely, symmetry-breaking biases the evolution (or development [82]) of the system into a certain area of state space. In the context of the development of functional specialization in the cortex, the symmetry-breaking performed by the thalamus could bias the development of areas like the visual, auditory, and somatosensory cortices by coding for asymmetries in the weights of thalamocortical connections. To developmental neurobiologists this is not an unfamiliar idea. Indeed, research from as early as the 1980s has suggested that redirecting thalamic projections can redirect functional specialization to new areas of the cortex [83; 84].
Indeed, more recent research involving transcriptomic identification of cell types in mice found that cortical modules identified by cell type were not only highly similar to those identified by connectivity-based methods, but that thalamocortical connections refined cell type composition within these cortical modules [85]. Our work enriches and supplies further evidence for this general hypothesis regarding the importance of thalamic connections for the development of functional specialization in the cortex. Not only do we find that the asymmetries in thalamocortical projections predict modules in the functional connectivity of the cortex, but we also find evidence suggesting that coding for the weights of these connections genetically would result in similar modules following development and learning. More specifically, we demonstrate that when the weights of input projections in RNNs are initialized with the same template prior to training, the resulting trained RNNs exhibit similar modular partitions. This suggests a possible developmental pathway - from genes to phenotype - for functional connectivity modules. That is, if some simple genetically coded mechanism, perhaps a chemical gradient, could bias asymmetries in the weights of thalamocortical projections, it might also have an outsized effect on the formation and organization of functional specialization in the cortex.

Indeed, recent empirical results involving the development of thalamocortical projections in premature human infants support this general idea. Thalamocortical connections develop from a transient brain area just below the cortex known as the cortical subplate [86; 87]. During a period known as the _"waiting period"_ (between 20-38 weeks), thalamic projections embed themselves in the cortical subplate and _wait_ for their cortical postsynaptic neurons or target neurons [86; 87]. Because of this period's proximity to birth at 40 weeks, premature infants are particularly vulnerable to interruptions to this period [86; 88]. Many studies have found that premature birth can result in cognitive deficits (e.g. [89; 90]), but some work also suggests that these cognitive deficits are directly due to malformed thalamocortical projections [88; 91]. Perhaps one reason that malformation of these thalamocortical projections is detrimental to cognition is that it interrupts the development of functional specialization in the cortex. Indeed, more recent work from our lab has shown that the development of functional connectivity modules in premature human infants is a predictor of better cognitive outcomes one year later (unpublished). Together with recent work on the community structure of thalamocortical projections [92; 93], the recapitulation of cortical gradients by thalamic anatomy [63], recent work showing that a genetic and connectomic axis of variation in the thalamus relates to an anterior-posterior pattern in cortical activity [94], and general theory about the relationship between the thalamus and cortical functional connectivity [95], our work supports an emerging theory about the importance of the thalamus for the development of functional connectivity modules, as well as the development of functional specialization in the cortex.

Although we primarily focus our discussion here on the role of the thalamus in providing input projections to the cortex, it is important to note that our results on the origins of functional connectivity modules are not limited to the thalamus/cortex.
Instead, our results could apply broadly to the connections impinging on any population of neurons, and suggest that you could predict the functional connectivity modules that emerge in that population by measuring the asymmetries in the weights of those connections. This general result is consistent with recent work showing that low-rank recurrent neural networks produce low-dimensional dynamics [96]. Indeed, when our measure of similarity for grouping asymmetries in input projections (cosine similarity) is high, it suggests that the columns of the weight matrix are not independent and therefore exhibit lower rank.

### The function and dynamics of functional connectivity modules

Functional connectivity modules are nearly universally present in neural activity, from calcium imaging data of single cell activity in the larval zebrafish [7] to fMRI imaging of regional activity in the cortex of human beings [15; 20; 21]. Although previous research has attempted to disambiguate the functional roles played by functional connectivity modules, their specific roles remain unclear. For example, meta-analytic work on functional connectivity modules during the resting-state suggests that they recapitulate the co-activation of brain regions during different tasks such that categories of tasks map onto different modules [25; 26]. Indeed, common parcellations of cortical activity into large scale systems using functional connectivity analysis suggest a mapping exists between FC modules and cytoarchitectural features of primary sensory cortices, as well as functional systems defined by neuropsychological and animal ablation studies [21; 97]. Additionally, recent work has used task and resting-state fMRI in tandem to infer that large-scale functional connectivity modules can be sub-divided into smaller functional connectivity modules with task-specific functional domains [98; 99]. That said, it is unknown whether or not a complete picture of the function of these modules would require cell-level imaging (beyond the current spatial resolution capability of fMRI) and perturbation (beyond that which is ethically permissible in humans).

At the microscale of neurons, some recent research involving imaging and perturbation in mice provides an even clearer picture of the possible function of functional connectivity modules. For example, after training mice on a go _vs._ no-go task, researchers found cellular assemblies related to the go signal and no-go signal in layer 2/3 of the mouse primary visual cortex. This suggested that these assemblies might be keeping track of the presence of either signal. In fact, they found that optogenetic stimulation of the go signal assembly was sufficient to produce go behavior [10]. Taken together, this research is suggestive of various roles that functional connectivity modules _might_ play in functional specialization, but it leaves open the question of the _specific_ function of these modules.

Recurrent neural networks provide a good platform for exploring such functional questions. Not only do recurrent neural networks maintain functional dynamics that emerge from network interactions in a manner similar to complex adaptive systems like the brain, but with recurrent neural networks we have complete access to all of the information responsible for behavior. Additionally, with recurrent neural networks there are no ethical issues with perturbing/lesioning the system to further probe the nature of its behavior.
Here, we directly explored the _function_ of functional connectivity modules in recurrent neural networks. We found that the function of these modules is to hold onto, or accumulate, task-relevant information. For example, by restricting the network from using information from these modules for its decision, we were able to show that the network exhibits specific behavioral deficits. Furthermore, these deficits could be predicted by the network's lack of access to the information found in these modules.

Similarly, previous research has used the dynamical systems lens to probe the low-dimensional dynamics of neural data [4; 6; 100]. Such research has shown that axes in these low-dimensional spaces can represent task-relevant variables [6; 55; 100; 101] and that dynamics in these spaces are responsible for transforming this data and using it for behavior [55; 101]. While functional connectivity modules imply that a system is exhibiting low-dimensional behavior, it remains unclear how these dynamics are related to functional connectivity modules and their function. Here, we directly explored the _dynamics_ of functional connectivity modules in recurrent neural networks. We found that these functional connectivity modules delimit neurons with similar dynamic effects. For example, we found that lesions to neurons within a module produced similar changes to the system's phase space. Additionally, we found that lesions within input projection modules result in specific disruptions to the dynamics responsible for transforming input information from different stimuli.

Taken together with the previous results on the origins of functional connectivity modules, our results suggest an intriguing story about their _function_ and _dynamics_. When the RNN begins its task, input information is projected preferentially into different neurons in the recurrent layer. Where this information is preferentially sent can be determined by the input projection modules. Now, this information must be transformed or maintained by the dynamics of the system. In the case of the perceptual decision-making task, these initial modules hold information about the current values of each stimulus. The dynamics of the system then transform this information into an estimate of the difference between the cumulative means of the two input stimuli, such that _after dynamics_ functional connectivity modules hold onto this task-relevant variant of the input information.

So, what might these results suggest about functional connectivity modules when we see them in brains? At the scale of the entire cortex, our results suggest that the thalamus concentrates input information onto different areas of the cortex, and then the cortex transforms this information via dynamics. The functional connectivity modules of the cortex maintain a similar profile to the input projection modules defined by the thalamocortical projections, but they begin to reflect the dynamical signature of each node, and the information that they carry is likely some task-relevant transformation of sensory information. At the scale of neurons in cellular assemblies, these results suggest something similar. Input projections to a given population of neurons reflect how input information is concentrated in that population. Functional connectivity modules in this population could also delimit neurons with similar dynamical contributions to the phase space of the system. These functional connectivity modules also likely hold onto task-relevant information.
One important open question is how these scales relate to one another. While the general dynamical and information-processing perspective described here might apply well to both micro- and macro-scales, the relationship between these scales is likely a hierarchical one. That is, functional connectivity modules at the cortical scale defined by regional BOLD activity are likely composed of the functional connectivity modules at the neural scale. Indeed, modules at the macro scale have been consistently shown to have a hierarchical structure [23; 24; 102]. Although a true bridge between these spatial scales has not been directly explored, this research suggests that large modules are composed of smaller modules, perhaps all the way down to the level of cell assemblies. Future research should explore how such a hierarchical structure might emerge in recurrent neural networks and how the dynamics and information-processing across these scales relate and interact.

In closing, it is important to note that, although it is unlikely, functional connectivity modules could _just_ be the signature of dynamics in the system and play no specific functional role. In this way, modules in functional connectivity networks would be epiphenomenal, present as a result of the system's dynamics, but ultimately not used by the system. In our work, we diminished this possibility by allowing the system dynamics to persist while not allowing the information from these modules to be used for decision-making. Importantly, this destroyed the system's ability to make the correct decisions.

### Low-dimensional manifolds and functional connectivity modules

Our results suggest that functional connectivity modules hold onto, or accumulate, task-relevant information. Importantly, the same can be said for low-dimensional manifolds in neural activity [6; 55; 100; 101]. Here, we take the opportunity to clarify that we see these as complementary perspectives on a similar underlying phenomenon. Both functional connectivity modules and low-dimensional manifolds share the feature of being about the statistical dependence between sets of neural units. Indeed, low-dimensional manifolds are often based on the eigendecomposition of a covariance matrix, which itself is a non-standardized version of a functional connectivity matrix. Although the mathematical relationship between the two is more difficult to work out, given that functional connectivity modules are often found through optimization, the relationship is nonetheless non-trivial. Many of our results extend the literature on the dynamic behavior of recurrent neural networks by showing that the familiar properties of low-dimensional manifolds can also be seen at the level of functional connectivity modules. Additionally, we also show a link between such dynamics and network properties of the weights in these systems. As such, our work offers a bridge between the various disciplines that model the brain as a network and those that model the brain as a dynamical system. It is our belief that these two important frameworks can be used together to tell a richer story about the function of recurrent neural networks, and hopefully also, brains.

### Limitations

Our study has a number of important limitations to consider. First, recurrent neural networks are abstract models of brains, and as such remove some biological details from consideration in the model. This biological detail could prove to be fundamental to understanding the system in question (e.g. [103; 104; 105; 106]).
Additionally, these systems might solve problems differently than brains do. As such, it is essential that the insights provided by recurrent neural networks be tested on empirical data from brains. We were able to test a portion of our results on the origin of functional connectivity modules, but future research should attempt to replicate our results on the function and dynamics of functional connectivity modules in biological brains. In addition, the tasks that these RNNs solved, although based on classic systems neuroscience tasks, are also highly simplified. Biological organisms evolve in a complex environment replete with uncertainty, many different ill-defined tasks, and serious stakes regarding survival and reproduction. Future work should explore the function, origin, and dynamics of functional connectivity modules in recurrent neural networks trained to solve more ethologically relevant tasks.

Additionally, one of the primary claims made here is that modules carry information relevant to a task. One unaddressed concern relates to whether these two properties - modular structure and task-relevance - tend to be correlated. That is, in the universe of all possible RNNs trained on the tasks used here or elsewhere, if a network exhibits modules, must the modules also carry task-relevant information? Are there networks that solve the task equally well whose correlation structure is modular (note that correlation structure, itself, may artificially inflate estimates of modularity)? If so, how prevalent are they, and how likely is it that RNNs trained using backpropagation will discover such networks as solutions to systems neuroscience tasks? An important direction for future work is to investigate the interrelationship of modular correlation structure and the task-relevance of the detected modules.

Finally, one might argue that we see functional connectivity modules in these recurrent neural networks because the number of neurons in the recurrent layer is too large. Perhaps if the network only needs to keep track of three task-relevant variables, then three neurons would be sufficient to track this information. While we acknowledge that this could be the case, it is important to clarify that there is a difference between neurons holding onto information, and neurons producing dynamics that can transform input information into task-relevant information. In fact, it may be the case that the number of neurons needed to easily find dynamics to solve a task might be greater than the number of neurons whose state could theoretically hold onto task-relevant information. Indeed, although the perceptual decision-making task only requires keeping track of roughly three variables, in a supplemental analysis we found that recurrent neural networks with a recurrent size of three had a more difficult time finding the dynamics that led to high accuracy on this task than networks with larger recurrent sizes (Fig. S13). Additionally, the same critique can be leveled at research on the low-dimensional manifolds found in recurrent neural networks [5; 101], and yet we find both functional connectivity modules and low-dimensional manifolds in neural data from biological brains, suggesting again that the number of neurons required to easily find and produce task-related dynamics might be larger than the number of task-relevant variables being tracked.
That being said, after these dynamics transform this information, it can then be read off of the relevant functional connectivity modules by reader cells, similar to the way in which our output layer uses single read-out neurons to make decisions based on the current state of the recurrent layer. Indeed, this could also be the reason why neuroscience will continue to find single cells encoding complex variables (e.g. [107]). These cells would then be reading out information from upstream functional connectivity modules [12], but nonetheless the original functional connectivity modules would hold onto the information being transformed, because they are the statistical signature of the dynamics transforming this information across time.

## Materials and Methods

### Recurrent Neural Network

We used a specific type of recurrent neural network (RNN) referred to as a continuous-time recurrent neural network (CTRNN). CTRNNs are defined by the following equation [47; 108; 109; 110]:

\[\tau\frac{d\mathbf{R}}{dt}=-\mathbf{R}(t)+f(W_{rec}\mathbf{R}(t)+W_{in}\mathbf{Input}(t)+\mathbf{b}).\]

Here, \(\tau\) is a time constant (set to 100), \(\mathbf{R}\) is the recurrent state, \(W_{rec}\) are the weights of the recurrent layer, \(W_{in}\) are the weights of the input layer (input projection), and \(\mathbf{b}\) is a bias. In practice, this equation is simulated in discrete time using an Euler approximation,

\[\mathbf{R}(t+\Delta t)=\mathbf{R}(t)+\frac{\Delta t}{\tau}\left(-\mathbf{R}(t)+f(W_{rec}\mathbf{R}(t)+W_{in}\mathbf{Input}(t)+\mathbf{b})\right),\]

with a fixed step size \(\Delta t\). For all implementations of RNNs in this project we chose CTRNNs with a recurrent layer size of \(M=100\). Finally, a linear transformation of the recurrent state \(\mathbf{R}\) is used to create output activity:

\[\mathbf{O}(t)=g(W_{out}\mathbf{R}(t)+\mathbf{b}_{out}).\]

For more information on the design and implementation of this CTRNN see the github repository [https://github.com/gyyang/nn-brain/blob/master/RNN_tutorial.ipynb](https://github.com/gyyang/nn-brain/blob/master/RNN_tutorial.ipynb) [47; 108]. For more information on the dynamics and parameter space of CTRNNs generally, see [50; 110; 111].

### Systems neuroscience tasks for recurrent neural networks

We used a machine learning toolbox called neurogym ([https://github.com/neurogym](https://github.com/neurogym)). The toolbox is built upon the popular machine learning framework _gym_ from OpenAI and provides a variety of systems neuroscience tasks that have been designed for easy presentation to recurrent neural networks (see [109]). We chose two tasks from this toolbox to focus on: 1) the perceptual decision-making task, and 2) the go _vs_ no-go task.

The perceptual decision-making task implements a simplified version of a random-dot motion task wherein the subject is presented with randomly moving dots that have coherent motion in some direction [55; 112]. The task of the subject is to indicate the direction of coherent motion during some trial period. In the neurogym version of this task, the trial period is determined by a fixation input to the RNN. Two stimuli are presented to the RNN during this fixation period, and the task is to indicate which of the two stimuli has the larger mean value. Each of these stimuli comes from a Gaussian distribution with a different mean (equal standard deviation). The means of these distributions represent coherent motion, and the random distribution of values around the mean represents the random motion.

The go _vs_ no-go task also has a trial structure that is determined by a fixation input. At the beginning of the fixation period, the RNN is presented with one of two signals: a go signal or a no-go signal. These two signals are represented by two different input neurons.
Following presentation of this signal, there is a delay period wherein the RNN only receives the fixation signal. The delay period ends when the fixation signal is no longer present, and the RNN must then determine which of the two input signals it received before the delay period (the go signal or the no-go signal). RNNs were trained on both of these tasks in PyTorch, an open-source machine learning library for the implementation and training of machine learning models using backpropagation [113]. PyTorch simplifies the training of such models by automatically tracking the gradient of all computations in the forward pass of a neural network and storing them in a computational graph that can later be used to backpropagate (via the chain rule).

### Modularity maximization

In this paper we analyzed network modules in the functional connectivity network of RNNs. In order to detect these modules, we used a method referred to as _modularity maximization_. The method of modularity maximization [114] is based on a straightforward principle: comparing the observed connectivity data with what is expected by chance. This entails comparing the observed connectivity matrix \(A\) with another matrix of the same dimensions, \(P\). The elements \(P_{ij}\) of matrix \(P\) represent the expected weight of the connection between nodes \(i\) and \(j\) under a null model. From these two matrices we define the modularity matrix, \(B\), as: \[B=A-P.\] Each element \(B_{ij}\) in the modularity matrix represents whether the observed connection between nodes \(i\) and \(j\) is stronger (\(B_{ij}>0\)) or weaker (\(B_{ij}<0\)) than expected under the null model. Modularity maximization then uses the modularity matrix to assess the quality or "goodness" of a modular partition, which is a division of the network's nodes into non-overlapping modules or communities. The quality of a partition is quantified using the modularity function, \(Q\), calculated as: \[Q=\sum_{ij}B_{ij}\delta(\sigma_{i},\sigma_{j}),\] where \(\delta\) is the Kronecker delta function, and \(\sigma_{i}\) and \(\sigma_{j}\) are the community labels of nodes \(i\) and \(j\), respectively. In addition to simply assessing the quality of a given partition, the variable \(Q\) can be optimized outright to identify high-quality partitions of a network's nodes into modules. We used the Louvain algorithm to optimize \(Q\). For a tutorial on using this method, see Esfahlani _et al._ [115]. For our specific implementation, we used a variant of the modularity function (\(Q\)) that treats the contribution of positive and negative edges asymmetrically [116]. For an implementation of this method, see the **community_louvain** function in the brain connectivity toolbox: [https://sites.google.com/site/bctnet/](https://sites.google.com/site/bctnet/)

### Output lesions

In our **Results** section on the _Function_ of FC modules in recurrent neural networks, we use a lesioning method that we refer to as "output lesions" to test if the RNN is using information from its modules to perform the task. These output lesions are lesions to the weights of the output layer of the RNN. The output layer performs a linear transformation of activity in the recurrent layer such that the \(M\)-dimensional activity of the recurrent layer is translated into the activity of \(O\) output neurons.
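To ground the preceding definitions, the following is a minimal, illustrative PyTorch sketch of the Euler-discretized CTRNN update together with the linear readout just described. The step size `dt`, the choice of ReLU for the nonlinearity \(f\), and an identity readout nonlinearity \(g\) are our own assumptions for illustration; they are not specified in the text above.

```python
import torch
import torch.nn as nn

class CTRNN(nn.Module):
    """Euler-discretized continuous-time RNN:
    tau * dR/dt = -R + f(W_rec R + W_in Input + b),
    with linear readout O(t) = g(W_out R(t) + b_out)."""

    def __init__(self, n_input, n_output, M=100, tau=100.0, dt=10.0):
        super().__init__()
        self.alpha = dt / tau                    # Euler integration factor
        self.input2h = nn.Linear(n_input, M)     # W_in and bias b
        self.h2h = nn.Linear(M, M, bias=False)   # W_rec
        self.readout = nn.Linear(M, n_output)    # W_out, b_out; g = identity here

    def forward(self, inputs):
        # inputs has shape (time, batch, n_input)
        T, B, _ = inputs.shape
        r = torch.zeros(B, self.h2h.in_features, device=inputs.device)
        states, outputs = [], []
        for t in range(T):
            drive = torch.relu(self.input2h(inputs[t]) + self.h2h(r))  # f(...)
            r = (1 - self.alpha) * r + self.alpha * drive              # Euler step
            states.append(r)
            outputs.append(self.readout(r))
        return torch.stack(outputs), torch.stack(states)
```

The stacked recurrent states are the quantities on which the functional connectivity (correlation) matrices, and hence the output-lesion analysis below, operate.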
The output lesions were performed in the following way: we implemented the modularity maximization method, optimized using the Louvain algorithm, to partition the functional connectivity of the recurrent activity into \(modules\). We then used these modules to lesion the connections from a given module to the output nodes: \[\text{for selected module }m\in modules,\quad W_{out}[:,m]=0.\] Here, \(W_{out}\) is the weight matrix for connections from the recurrent layer to the output layer.

### Generative models

In the **Supplemental Section** on the origin of functional connectivity modules in RNNs (**Section** 3), we developed three generative models of functional connectivity modules. Each of these generative models involved creating a new input weight matrix, and the parameters of the models allowed us to explore how modularity (\(Q\)) changed as we varied parameters in the model. The first generative model was the sign-based model. This model creates an input weight matrix \(W\) of size \(N\times M\) out of two values \((+1,-1)\). That is: \[W_{ij}=\begin{cases}-1&\text{if }i\in S\\ 1&\text{otherwise}\end{cases}\] \[S=\{i_{1},i_{2},\dots,i_{k}\}.\] \[k=\left\lceil N\times p\right\rceil.\] where \(S\) is a random subset of row indices up to \(k\), and \(k\) is defined based on the percentage \(p\) of negative weights to be added to the weight matrix \(W\). Within each row, all \(M\) columns of this matrix are given the same value. The second generative model was the difference-based generative model. This model creates an input weight matrix \(W\) of size \(N\times 2\) by randomly assigning one column's value in each row to \(\alpha\) and assigning the other column's value to \(\alpha+\gamma\): \[W_{i1},W_{i2}=\begin{cases}(\alpha_{i},\alpha_{i}+\gamma)&\text{with probability }0.5\\ (\alpha_{i}+\gamma,\alpha_{i})&\text{with probability }0.5\end{cases}\] \[\alpha_{i}=\left|x_{i}\right|,\text{ where }x_{i}\sim\mathcal{N}(0,1).\] That is, \(\alpha\) is the absolute value of a random variable drawn from a Gaussian distribution with a mean of zero and a variance of 1. Our third and final generative model included both sign and difference elements and is likewise referred to as the sign-difference generative model. The only difference between the previous model and this model is the definition of \(\alpha\): \[\alpha_{i}\sim\mathcal{N}(0,1).\] That is, whereas the previous model took the absolute value of the randomly drawn variable, this model does not, allowing approximately half of these values to be negative.

### Fixed point approximation

In our **Results** section on _Dynamics_, we illustrate that the perceptual decision-making task results in an approximate line attractor (Fig. 4b). In order to approximate the attractors that form this line, we used a simple gradient-based method. For the implementation of the fixed-point approximation method that we used (also showing an approximate line attractor with the perceptual decision-making task) see: [https://github.com/gyyang/nn-brain/blob/master/RNNY2BDynamicalSystemAnalysis.ipynb](https://github.com/gyyang/nn-brain/blob/master/RNNY2BDynamicalSystemAnalysis.ipynb). Briefly, this process involves optimizing the hidden/recurrent activity of the RNN such that the mean squared error (MSE) between that activity and the hidden/recurrent activity one step forward in time is minimized. Attractors occur where the derivative of a system is equal to zero.
Intuitively then, if the difference between a current state and the next state is brought to zero, the system is in an attractor state. Alternatively, the loss could be very low but not zero, a case that is sometimes referred to as a slow point [75]. This optimization is implemented by randomly initializing the recurrent state of the RNN in many different states, and then using back-propagation to minimize the MSE between the current activity and the future activity.

### Input projection lesions

In our **Results** section on _Dynamics_, we explored the effects of lesions to the weights of the recurrent layer of RNNs trained on the perceptual decision-making task. We started by using sign-based partitions or difference-based partitions to define the input projection modules (Fig. S4d and i, respectively). Briefly, sign-based partitions were determined by taking the sign of weights from one of the input neurons. Neurons receiving positive weights were in one module. Neurons receiving negative weights were in another. Difference-based partitions were determined by comparing the weights of the two input neurons onto each recurrent neuron. If the weight from input 1 was greater than the weight from input 2, then the recurrent neuron was placed in module 1, and if the weight from input 1 was less than the weight from input 2, then the recurrent neuron was placed in module 2. Then, we tested the effects of lesions to RNNs that were "perturbed" with a large continual input value of 4. More specifically, we lesioned positively weighted connections in the recurrent layer that were associated with the same input projection module: \[W^{\prime}_{ij}=\begin{cases}0&\text{if }i,j\in M_{k}\text{ and }W_{ij}=\max_{a,b\in M_{k}}(W_{ab}),\\ W_{ij}&\text{otherwise}.\end{cases}\] Here, \(W\) is the weight matrix of the recurrent layer, \(W^{\prime}\) is the modified weight matrix after the lesioning, \(M_{k}\) is a module in the network, and \(i\) and \(j\) are neurons in the recurrent layer. If neurons \(i\) and \(j\) belong to the same module \(M_{k}\) and the weight \(W_{ij}\) is the maximum weight among all the connections within the module, then the weight is set to zero. Otherwise, the weight remains the same. We iterated this equation such that \(W^{\prime}\) became \(W\) on future iterations. In this way, we lesioned an increasing number of positively weighted connections within the same module, and on each iteration we tested the distance of the perturbed activity from the decision boundary. Given that these RNNs were trained on cross-entropy loss, the output node with the greatest activity corresponded to the RNN's "choice". For this reason, we defined the decision boundary as the difference between the output activity of the two output neurons that correspond to decision 1 and decision 2.

### Lesion similarity

In our **Results** section on _Dynamics_, we explored the effects of lesions to neurons in the recurrent layer of the RNN by quantifying how they changed the phase space of the system. First, we defined the phase space of the RNN by performing principal component analysis (PCA) on the activity of its recurrent layer across many task trials. Then, we defined a grid of \(25\times 25\) points (for a total of 625) in the 2-dimensional space defined by the first two principal components. These points were spread equally so that they spanned the entire activity space explored during task trials. Then, we used each of these points as an initial recurrent state for the RNN (projecting it back into the original activity space) and stepped it forward in time by one time step.
This results in a new set of 625 points corresponding to the location of the recurrent state after one time step from each point on the original grid. We stored these points as the original points \(Orig\) (importantly, we stored each of these points in the original \(M\)-dimensional activity space). For the lesioning analysis, we would lesion a node and then use the same grid of initial recurrent states to initiate the system and run it for a single time step. This resulted in a new set of forward stepped points \(L\) for the lesioned model. We created \(M\) separate lesioned models where \(M\) is the number of neurons in the recurrent layer. For each model, we lesioned a different neuron, and then took the Euclidean distance between each point in \(L_{i}\) and the corresponding point in \(Orig\), resulting in a 625 length array of lesion distance values for every lesioned model. Next, in order to quantify how similar the effects of different lesions were, we calculated the similarity of lesions by correlating the lesion distances from each lesioned model, resulting in an \(M\times M\) matrix of similarity values that we refer to as the _lesion similarity_ matrix. ### Mouse resting state fMRI data All in vivo experiments were conducted in accordance with the Italian law (DL 2006/2014, EU 63/2010, Ministero della Sanita, Roma) and the recommendations in the Guide for the Care and Use of Laboratory Animals of the National Institutes of Health. Animal research protocols were reviewed and consented by the animal care committee of the Italian Institute of Technology and Italian Ministry of Health. The rsfMRI dataset used in this work consists of \(n=19\) scans in adult male C57BL/6J mice that are publicly available [117; 118]. Animal preparation, image data acquisition, and image data preprocessing for rsfMRI data have been described in greater detail elsewhere [119]. Briefly, mice were anesthetized with isoflurane (5% induction), intubated and artificially ventilated (2%, surgery). The left femoral artery was cannulated for continuous blood pressure monitoring and terminal arterial blood sampling. At the end of surgery, isoflurane was discontinued and substituted with halothane (0.75%). Functional data acquisition commenced 45 minutes after isoflurane cessation. Mean arterial blood pressure was recorded throughout imaging sessions. Arterial blood gasses (\(paCO_{2}\) and \(paO_{2}\)) were measured at the end of the functional time series to exclude non-physiological conditions. rsfMRI data were acquired on a 7.0-T scanner (Bruker BioSpin, Ettlingen) equipped with BGA-9 gradient set, using a 72-mm birdcage transmit coil, and a four-channel solenoid coil for signal reception. Single-shot BOLD echo planar imaging time series were acquired using an echo planar imaging sequence with the following parameters: repetition time/echo time, 1200/15 ms; flip angle, 30\({}^{\circ}\); matrix, \(100\times 100\); field of view, \(2\times 2cm^{2}\); 18 coronal slices; slice thickness, 0.50 mm; 1500 volumes; and a total rsfMRI acquisition time of 30 minutes. Timeseries were despicked, motion corrected, skull stripped and spatially registered to an in-house EPI-based mouse brain template. Denoising and motion correction strategies involved the regression of mean ventricular signal plus 6 motion parameters. The resulting time series were band-pass filtered (0.01-0.1 Hz band) and then spatially smoothed with a Gaussian kernel of 0.5 mm full width at half maximum. 
After preprocessing, mean regional time-series were extracted for 15314 regions of interest (ROIs) derived from a voxelwise version of the mouse structural connectome [120; 68; 121].

### Mouse Anatomical Connectivity Data

The mouse anatomical connectivity data used in this work were derived from a voxel-scale model of the mouse connectome made available by the Allen Brain Institute [120; 68] and recently made computationally tractable [121]. Briefly, the structural connectome was obtained from imaging enhanced green fluorescent protein (eGFP)-labeled axonal projections derived from 428 viral microinjection experiments, and registered to a common coordinate space [122]. Under the assumption that structural connectivity varies smoothly across major brain divisions, the connectivity at each voxel was modeled as a radial basis kernel-weighted average of the projection patterns of nearby injections [120]. Leveraging the smoothness induced by the interpolation, neighboring voxels were aggregated according to a Voronoi diagram based on Euclidean distance, resulting in a \(15314\times 15314\) whole-brain, weighted and directed connectivity matrix [121].

### Human connectomic data

Structural, diffusion, and functional human brain magnetic resonance imaging (MRI) data was sourced from the Human Connectome Project's (HCP) young adult cohort (S1200 release). This contained structural MRI (T1w), resting-state functional MRI (rs-fMRI), and diffusion weighted imaging (DWI) data from 1000 adult participants (53.7% female, mean age = 28.75, standard deviation of age = 3.7, age range = 22-37). A comprehensive report of imaging acquisition and preprocessing is available elsewhere [123]. In brief, imaging was acquired with a Siemens 3T Skyra scanner with a 32-channel head coil. The rs-fMRI data was collected with a gradient-echo echo-planar imaging (EPI) sequence (run duration = 14:33 min, TR = 720 ms, TE = 33.1 ms, flip angle = 52\({}^{\circ}\), 2-mm isotropic voxel resolution, multi-band factor = 8) with eyes open and instructions to fixate on a cross [124]. DWI data was acquired using a spin-echo planar imaging sequence (TR = 5520 ms, TE = 89.5 ms, flip angle = 78\({}^{\circ}\), 1.25 mm isotropic voxel resolution, b-values = 1000, 2000, 3000 s/mm\({}^{2}\), 90 diffusion-weighted volumes for each shell, 18 b = 0 volumes) [125]. The HCP minimal preprocessing pipeline was used to preprocess the functional and diffusion imaging data [123]. In particular, rs-fMRI underwent gradient distortion correction, motion correction, registration to template space, intensity normalization and ICA-FIX noise removal [123; 126]. The diffusion preprocessing pipeline consisted of b0 intensity normalization, EPI distortion correction, eddy-current-induced distortion correction, registration to native structural space, and skull stripping [123].

#### Diffusion tractography

A probabilistic streamline tractography pipeline tailored for the computation of high-resolution human connectomes [73] was implemented in MRtrix3 [127], which also adopted recent recommendations detailed elsewhere [73]. In particular, an unsupervised heuristic was used to estimate macroscopic tissue response functions for white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) [128]. Multi-shell, multi-tissue constrained spherical deconvolution was used to estimate fiber orientation distributions (FODs) [129]. This information was used to apply combined intensity normalization and bias field correction [130].
Liberal and conservative brain masks were respectively utilized in the last two steps to mitigate the detrimental effects of an imperfect mask on the respective procedures [73]. The normalized FODs were used to perform anatomically constrained tractography (ACT) [131]. A tissue-type segmentation was used to create a mask at the GM-WM boundary to enable brain-wide streamline seeding. Whole-brain tractography was conducted using 2nd-order integration over FODs (iFOD2) [132]. A total of five million streamlines (per participant) were generated that satisfied length (minimum length = 4mm) and ACT constraints.

#### Human structural connectome

The high-resolution structural connectivity network was constructed from the whole-brain tractograms [73]. Notably, streamline endpoints were mapped onto HCP's CIFTI space comprising two surface meshes of the cortex, in addition to volumetric delineations of several subcortical structures. The surface meshes included a subset of the fs-LR template mesh after removal of the medial wall, with respectively 29696 and 29716 vertices for the left and right cortex. In addition, 31870 voxels from the MNI template space were included for subcortical brain regions, resulting in a total of 91282 high-resolution network nodes. Euclidean distance was used to assign each streamline to its closest node pair via nearest neighbor mapping. Streamlines with endpoints falling more than 2mm away from all nodes were discarded. Connectome spatial smoothing with a 6mm FWHM kernel was performed to account for accumulated errors in streamline endpoint location and to increase intersubject reliability of connectomes [72]. A group-level consensus connectome was constructed by aggregating individual connectivity matrices across all participants. To minimize the pipeline's computational complexity, group aggregation was applied before smoothing, considering that both aggregation and connectome spatial smoothing entailed linear operations, and the order of implementation does not impact the resulting connectomes.

#### Human thalamic projections

We next sought to estimate the thalamocortical structural connectivity projections from the smoothed high-resolution group-level human connectome model. To this end, the CIFTI structures for the left and right thalamus were combined to form a binary mask of the thalamus. This mask was subsequently used to aggregate group-level thalamic projections to the left cortex.

#### Human functional network

The dense group-level functional connectivity network was provided by HCP. Specifically, this network was constructed from a pipeline combining high-resolution individual rs-fMRI data across all participants. First, the minimally preprocessed individual rs-fMRI data were aligned by a multimodal surface matching algorithm (MSMAll) [133]. Next, all timeseries were temporally demeaned followed by a variance normalization [134] and were passed to MELODIC's Incremental Group-PCA [135]. The group-PCA outputs were renormalized, eigenvalue reweighted, and correlated to create a dense functional connectivity matrix (\(91282\times 91282\)). Finally, a subset of this dense connectome was extracted to denote left cortical functional connectivity (\(29696\times 29696\)).
2309.08171
Unveiling Invariances via Neural Network Pruning
Invariance describes transformations that do not alter data's underlying semantics. Neural networks that preserve natural invariance capture good inductive biases and achieve superior performance. Hence, modern networks are handcrafted to handle well-known invariances (ex. translations). We propose a framework to learn novel network architectures that capture data-dependent invariances via pruning. Our learned architectures consistently outperform dense neural networks on both vision and tabular datasets in both efficiency and effectiveness. We demonstrate our framework on multiple deep learning models across 3 vision and 40 tabular datasets.
Derek Xu, Yizhou Sun, Wei Wang
2023-09-15T05:38:33Z
http://arxiv.org/abs/2309.08171v1
# Unveiling Invariances via Neural Network Pruning ###### Abstract Invariance describes transformations that do not alter data's underlying semantics. Neural networks that preserve natural invariance capture good inductive biases and achieve superior performance. Hence, modern networks are handcrafted to handle well-known invariances (ex. translations). We propose a framework to learn novel network architectures that capture data-dependent invariances via pruning. Our learned architectures consistently outperform dense neural networks on both vision and tabular datasets in both efficiency and effectiveness. We demonstrate our framework on multiple deep learning models across 3 vision and 40 tabular datasets.

## Introduction

Preserving invariance is a key property in successful neural network architectures. Invariance occurs when the semantics of data remains unchanged under a set of transformations [1]. For example, an image of a cat can be translated, rotated, and scaled without altering its underlying contents. Neural network architectures that represent data passed through invariant transformations with the same representation inherit a good inductive bias [20, 17, 18] and achieve superior performance [21, 22]. Convolutional Neural Networks (CNNs) are one such example. CNNs achieve translation invariance by operating on local patches of data and weight sharing. Hence, early CNNs outperform large multilayer perceptrons (MLPs) in computer vision [14, 15]. Recent computer vision works explore more general spatial invariances, such as rotation and scaling [23, 24, 25, 26, 27, 28, 29, 30, 31, 32]. Other geometric deep learning works extend CNNs to non-Euclidean data by considering additional data-type-specific invariances, such as permutation invariance [33, 25, 26]. Designing invariant neural networks requires substantial human effort: both to determine the set of invariant transformations and to handcraft architectures that preserve said transformations. In addition to being labor-intensive, this approach has not yet succeeded for all data types [24, 25, 26, 27]. For example, designing neural architectures for tabular data is especially hard because the set of invariant tabular transformations is not clearly defined. Thus, the state-of-the-art deep learning architecture on tabular data remains the MLP [1, 26]. Existing invariance learning methods operate at the data augmentation level [10, 25, 26, 27], where a model is trained on sets of transformed samples rather than individual samples. This makes the network resilient to invariant transformations at test time. Contrastive learning (CL) is shown to be an effective means of incorporating invariance [25], and has seen success across various tasks [24, 25, 26, 27, 28, 29], including tabular learning [1]. While these approaches train existing neural networks to capture new data-dependent invariances, the model architecture itself still suffers from a weak inductive bias. In contrast, existing network pruning works found shallow MLPs can automatically be compressed into sparse subnetworks with good inductive bias by pruning the MLP itself [26]. Combining pruning and invariance learning has largely been unsuccessful [27]. Furthermore, pruning for invariance does not scale to deep MLPs, possibly due to issues in the lazy training regime [24, 25], where performance improves while weight magnitudes stay near static over training. Combining invariance learning with network pruning remains an open question.
We propose Invariance **U**nveiling **N**eural **N**etworks, IUNet, a pruning framework that discovers invariance-preserving subnetworks from deep and dense supernetworks. We hypothesize pruning for invariances fails on deep networks due to the lazy training issue [12], where performance improvement decouples from weight magnitudes. We address this with a proactive initialization scheme (PIS), which prevents important weights from being accidentally pruned by assigning low magnitudes to the majority of weights. To capture useful invariances, we propose a novel invariance learning objective (ILO) that successfully combines CL with network pruning by regularizing CL with the supervised objective. To the best of our knowledge, we are the first to automatically design deep architectures that incorporate invariance using pruning. We summarize our contributions below: * Designing architectures from scratch is difficult when desired invariances are either unknown or hard to incorporate. We automatically discover an invariance-preserving subnetwork that outperforms an invariance-agnostic supernetwork on both computer vision and tabular data. * Network pruning is used to compress models for mobile devices. Our approach consistently improves compression performance for existing vision and tabular models. * Contrastive learning traditionally fails with network pruning. We are the first to successfully combine contrastive learning with network pruning by regularizing it in our invariance learning objective. * In the lazy training regime, performance improves drastically while weight magnitudes stay relatively constant; hence, weights important for downstream performance may not have large magnitudes and may therefore be falsely pruned. We provide a simple yet effective approach that encourages only important weights to have large magnitudes before the lazy training regime begins.

## Related Work

### Learning Invariances

Most invariant networks are handcrafted for spatial invariances [1, 1, 1, 2, 3, 4, 16, 17, 18, 19]. Interestingly, subnetworks rarely outperform the original supernetwork, which has been dubbed the "Jackpot" problem [13]. In contrast to existing works, we successfully combine OMP with contrastive learning, alleviate the lazy learning issue, and outperform the original supernetwork.

## Proposed Method: IUNet

### Problem Setting

We study the classification task with inputs, \(x\in\mathcal{X}\), class labels, \(y\in\mathcal{Y}\), and hidden representations, \(h\in\mathcal{H}\). Our neural network architecture, \(f(x,\theta):\mathcal{X}\rightarrow\mathcal{Y}\), is composed of an encoder, \(f_{\mathcal{E}}(\cdot,\theta):\mathcal{X}\rightarrow\mathcal{H}\), and a decoder, \(f_{\mathcal{D}}(\cdot,\theta):\mathcal{H}\rightarrow\mathcal{Y}\), where \(\theta\in\Theta\) are the weights and \(f=f_{\mathcal{E}}\circ f_{\mathcal{D}}\). In the context of training, we denote the weights after \(0<t<T\) iterations of stochastic gradient descent as \(\theta^{(t)}\). Figure 1: Overview for the IUNet Framework. The supernetwork, \(f^{M}(\cdot,\theta_{M})\), is initialized using PIS and trained on the ILO objective to obtain \(\theta_{M}^{(T)}\). Magnitude-based pruning is used to get a new architecture \(f^{P}=\mathcal{P}(\theta_{M}^{(T)})\). The new architecture, \(f^{P}(\cdot,\theta_{P})\), is initialized via lottery ticket reinitialization and finetuned with supervised maximum likelihood loss. First, we define our notion of invariance.
Given a set of invariant transformations, \(\mathcal{S}\), we wish to discover a neural network architecture \(f^{*}(x,\theta)\), such that all invariant input transformations map to the same representation, as shown in Equation 1. We highlight that our task focuses on the discovery of novel architectures, \(f^{*}(\cdot,\theta)\), not weights, \(\theta\), because good architectures improve the inductive bias [20]. \[f^{*}_{\mathcal{E}}(x,\theta)=f^{*}_{\mathcal{E}}(g(x),\theta),\forall g\in \mathcal{S},\forall\theta\in\Theta. \tag{1}\]

### Framework

We accomplish this by first training a dense supernetwork, \(f^{M}(\cdot,\theta_{M})\), with enough representational capacity to capture the desired invariance properties, as shown in Equation 2. A natural choice for \(f^{M}(\cdot,\theta_{M})\) is a deep MLP, which is a universal approximator [17]. \[\exists\theta^{*}_{M}\in\Theta_{M}:f^{M}_{\mathcal{E}}(x,\theta^{*}_{M})=f^{M }_{\mathcal{E}}(g(x),\theta^{*}_{M}),\forall g\in\mathcal{S}. \tag{2}\] Next, we initialize the supernetwork's weights, \(\theta^{(0)}_{M}\), using our Proactive Initialization Scheme, PIS, and train the supernetwork with our Invariance Learning Objective, ILO, to obtain \(\theta^{(T)}_{M}\). We discuss both PIS's and ILO's details in the following sections. We construct our new untrained subnetwork, \(f^{P}(\cdot,\theta^{(0)}_{P})\), from the trained supernetwork, \(f^{M}(\cdot,\theta^{(T)}_{M})\), where the subnetwork contains a fraction of the supernetwork's weights, \(\theta^{(0)}_{P}\subset\theta^{(T)}_{M}\) and \(|\theta^{(0)}_{P}|\ll|\theta^{(T)}_{M}|\), and is architecturally different from the supernetwork, \(f^{P}(\cdot,\cdot)\neq f^{M}(\cdot,\cdot)\). For this step, we adopt standard One-shot Magnitude-based Pruning (OMP), where the smallest-magnitude weights and their connections in the supernetwork architecture are dropped. We adopt OMP because of its success in neural network pruning [16, 1]. We represent this step as an operator mapping supernetwork weights into subnetwork architectures \(\mathcal{P}:\Theta_{M}\rightarrow\mathcal{F}_{P}\), where \(\mathcal{F}_{P}\) denotes the space of subnetwork architectures. We hypothesize the trained subnetwork, \(f^{P}(\cdot,\theta^{(T)}_{P})\), can outperform the trained original supernetwork, \(f^{M}(\cdot,\theta^{(T)}_{M})\), if it learns to capture the right invariances and hence achieves a better inductive bias. The ideal subnetwork, \(f^{P*}(\cdot,\theta_{P*})\), could capture invariances without training, as shown in Equation 3. \[f^{P*}_{\mathcal{E}}(x,\theta_{P*})=f^{P*}_{\mathcal{E}}(g(x),\theta_{P*}), \forall g\in\mathcal{S},\forall\theta_{P*}\in\Theta_{P*} \tag{3}\] Finally, we re-initialize the subnetwork's weights, \(\theta^{(0)}_{P}\), using the Lottery Ticket Re-initialization scheme [16], then finetune the subnetwork with maximum likelihood loss to obtain \(\theta^{(T)}_{P}\). We call this framework, including the ILO objective and PIS initialization scheme, IUNet\({}^{1}\), as shown in Figure 1. Footnote 1: IUNet prunes an ineffective supernetwork into an efficient, effective subnetwork. OMP prunes an inefficient, effective supernetwork into an efficient but slightly less effective subnetwork.
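To summarize the framework end to end, the following is a minimal sketch of the IUNet pipeline under our own simplifying assumptions: the function names (`train_ilo`, `finetune`) are hypothetical placeholders for the corresponding training loops, pruning is applied globally over all weights (the paper does not state the granularity), and the \(\kappa\) value shown is purely illustrative.

```python
import copy
import torch

def magnitude_masks(model, keep_frac):
    """One-shot magnitude pruning (OMP): keep the globally largest-|w| weights."""
    flat = torch.cat([p.detach().abs().flatten() for p in model.parameters()])
    k = int(len(flat) * (1.0 - keep_frac))
    thresh = flat.kthvalue(k).values if k > 0 else flat.new_tensor(-1.0)
    return {n: (p.detach().abs() > thresh).float()
            for n, p in model.named_parameters()}

def iunet(supernet, train_ilo, finetune, kappa=0.1, keep_frac=1.0 / 8):
    # 1) PIS: shrink the standard (Kaiming/Glorot) initialization by kappa.
    for p in supernet.parameters():
        p.data.mul_(kappa)
    theta_init = copy.deepcopy(supernet.state_dict())  # kept for lottery-ticket reinit

    # 2) Train the supernetwork on the ILO objective (supervised + contrastive).
    train_ilo(supernet)

    # 3) OMP: the mask defines the subnetwork architecture P(theta_M^(T)).
    masks = magnitude_masks(supernet, keep_frac)

    # 4) Lottery-ticket reinitialization: restore theta^(0) and zero pruned weights.
    supernet.load_state_dict(theta_init)
    for n, p in supernet.named_parameters():
        p.data.mul_(masks[n])

    # 5) Finetune with maximum likelihood; the loop is expected to keep the
    #    masks fixed (e.g., by re-zeroing pruned weights after each step).
    finetune(supernet, masks)
    return supernet, masks
```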
\begin{table} \begin{tabular}{|c|c|c c c|} \hline Dataset & \(\text{MLP}_{\text{vis}}\) & \(\text{OMP}^{(\text{MLP}_{\text{vis}})}\) & \(\beta\text{-LASSO}^{(\text{MLP}_{\text{vis}})}\) & IUNet (\(\text{MLP}_{\text{vis}}\)) \\ \hline CIFAR10 & 59.266 \(\pm\) 0.050 & 59.668 \(\pm\) 0.171 & 59.349 \(\pm\) 0.174 & **64.847 \(\pm\) 0.121** \\ CIFAR100 & 31.052 \(\pm\) 0.371 & 31.962 \(\pm\) 0.113 & 31.234 \(\pm\) 0.354 & **32.760 \(\pm\) 0.288** \\ SVHN & 84.463 \(\pm\) 0.393 & 85.626 \(\pm\) 0.026 & 84.597 \(\pm\) 0.399 & **89.357 \(\pm\) 0.156** \\ \hline Dataset & ResNet & OMP (ResNet) & \(\beta\text{-LASSO}^{(\text{ResNet})}\) & IUNet (ResNet) \\ \hline CIFAR10 & 73.939 \(\pm\) 0.152 & 75.419 \(\pm\) 0.290 & 74.166 \(\pm\) 0.033 & **83.729 \(\pm\) 0.153** \\ CIFAR100 & 42.794 \(\pm\) 0.133 & 44.014 \(\pm\) 0.163 & 42.830 \(\pm\) 0.412 & **53.099 \(\pm\) 0.243** \\ SVHN & 90.235 \(\pm\) 0.127 & 90.474 \(\pm\) 0.192 & 90.025 \(\pm\) 0.201 & **94.020 \(\pm\) 0.291** \\ \hline \end{tabular} \end{table} Table 1: Comparing different pruning approaches to improve the inductive bias of \(\text{MLP}_{\text{vis}}\) and ResNet on computer vision datasets. Notice, IUNet performs substantially better than existing pruning-based methods by discovering novel architectures that better capture the inductive bias. IUNet flexibly boosts performance of off-the-shelf models. \begin{table} \begin{tabular}{|c|c|c c|} \hline Dataset & \(g(\cdot)\) & \(\text{MLP}_{\text{vis}}\) & IUNet (\(\text{MLP}_{\text{vis}}\)) \\ \hline \multirow{4}{*}{CIFAR10} & resize. & 44.096 \(\pm\) 0.434 & **97.349 \(\pm\) 4.590** \\ & horiz. & 80.485 \(\pm\) 0.504 & **99.413 \(\pm\) 1.016** \\ & color. & 56.075 \(\pm\) 0.433 & **98.233 \(\pm\) 3.060** \\ & graysc. & 81.932 \(\pm\) 0.233 & **99.077 \(\pm\) 1.598** \\ \hline Dataset & \(g(\cdot)\) & \(\text{MLP}_{\text{TAB}}\) & IUNet (\(\text{MLP}_{\text{TAB}}\)) \\ \hline mfeat. & feat. & 46.093 \(\pm\) 1.353 & **51.649 \(\pm\) 4.282** \\ \hline \end{tabular} \end{table} Table 2: Comparing the pruned IUNet (\(\text{MLP}_{\text{vis}}\)) model to an equivalent CNN. Although IUNet (\(\text{MLP}_{\text{vis}}\)) cannot outperform CNN, it bridges the gap between MLP and CNN architectures without any human design intervention. **Invariance Learning Objective: ILO.** The goal of supernetwork training is to create a subnetwork, \(f^{P}(\cdot,\theta_{P}^{(0)})\), within the supernetwork, \(f^{M}(\cdot,\theta_{M}^{(T)})\), such that: 1. \(\mathcal{P}(\theta_{M}^{(T)})\) achieves superior performance on the classification task after finetuning. 2.
\(\mathcal{P}(\theta_{M}^{(T)})\) captures desirable invariance properties as given by Equation 3. 3. \(\theta_{P}^{(0)}\) has higher weight values than \(\theta_{M}^{(T)}\setminus\theta_{P}^{(0)}\). Because subnetworks pruned from randomly initialized weights, \(\mathcal{P}(\theta_{M}^{(0)})\), are poor, they include harmful inductive biases that hinder training. Thus, we optimize the trained supernetwork, \(f^{M}(\cdot,\theta_{M}^{(T)})\), on goals (1) and (2) as a surrogate training objective. Goal (3) is handled by PIS, described in the next section. To achieve (1), we maximize the log likelihood of the data. To achieve (2), we minimize a distance metric, \(\phi(\cdot,\cdot)\), between representations of inputs under invariant perturbations and maximize the metric between different input samples, as given by Equation 4. We prove this is equivalent to Supervised Contrastive Learning (SCL) in the Supplementary Material. Hence, (2) can be achieved through SCL. \[\theta_{M}^{\star}=\underset{\theta_{M}}{argmax}\;\underset{\begin{subarray}{c}x_{i},x_{j}\sim\mathcal{X}\\ g\sim\mathcal{S}\end{subarray}}{\mathbb{E}}\left[\frac{\phi(f_{\mathcal{E}}^{M}(x_{i},\theta_{M}),f_{\mathcal{E}}^{M}(x_{j},\theta_{M}))}{\phi(f_{\mathcal{E}}^{M}(x_{i},\theta_{M}),f_{\mathcal{E}}^{M}(g(x_{i}),\theta_{M}))}\right] \tag{4}\] Our final Invariance Learning Objective (ILO) loss function combines these two ideas as shown in Equation 5, where \(\mathcal{L}_{SUP}\) is standard maximum likelihood loss, \(\mathcal{L}_{NCE}\) is a contrastive loss (described in the Appendix), \(D_{tr}\) is a labelled training dataset of \((x,y)\) pairs, and \(\lambda\) is a hyperparameter. \[\mathcal{L}(\theta_{M};\mathcal{S})=\underset{x,y\sim D_{tr}}{\mathbb{E}}\left[\mathcal{L}_{SUP}(x,y,\theta_{M})+\lambda\mathcal{L}_{NCE}(x,y,\theta_{M};\mathcal{S})\right] \tag{5}\] Both loss components are crucial to IUNet. With just \(L_{NCE}\), the supernetwork will overfit the contrastive objective [10, 11], causing weights critical for finetuning the supervised objective to be pruned. With just \(L_{SUP}\), the architecture is not explicitly optimized to capture desired invariances. A short code sketch of this objective follows the dataset description below.

**Proactive Initialization Scheme: PIS.** Deep neural networks often enter the lazy training regime [13, 12], where the loss steadily decreases while weights barely change. This is particularly harmful to neural network pruning [12], especially when low-magnitude weights contribute to decreasing the loss and hence should not be pruned. We propose a simple solution by scaling the weight initialization by a small multiplier, \(\kappa\). We find this alleviates the aforementioned issue by forcing the model to assign large values only to important weights prior to lazy training. Because lazy training is only an issue for pruning, we only apply \(\kappa\)-scaling to the pre-pruning training stage, not the fine-tuning stage. This is done by scaling the initial weights \(\theta_{M}^{(0)}=\kappa\theta_{M^{\dagger}}^{(0)}\), where \(\theta_{M^{\dagger}}^{(0)}\) follows the Kaiming [10] or Glorot [11] initialization.

## Experiment Setup

### Datasets

IUNet is evaluated on _image_ and _tabular_ classification.\({}^{2}\) Footnote 2: More details are provided in the Supplementary. * **Vision**: Experiments are run on CIFAR10, CIFAR100, and SVHN [13, 12], following baseline work [2]. * **Tabular**: Experiments are run on 40 tabular datasets from a benchmark paper [1], covering a diverse range of problems. The datasets were collected from OpenML [10], UCI [1], and Kaggle.
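As promised above, here is a sketch of one ILO training step (Equation 5). The exp-cosine similarity with in-batch negatives follows the contrastive formulation detailed in the appendix; the batch construction and the `augment` function (which samples \(g\sim\mathcal{S}\)) are our own illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def ilo_loss(encoder, decoder, x, y, augment, lam=1.0):
    """L = L_SUP + lambda * L_NCE (Equation 5)."""
    z = encoder(x)                          # hidden representations, shape (B, d)
    l_sup = F.cross_entropy(decoder(z), y)  # maximum likelihood term L_SUP

    z1 = F.normalize(z, dim=1)
    z2 = F.normalize(encoder(augment(x)), dim=1)
    sim = z1 @ z2.t()                       # cosine similarities, shape (B, B)
    # Diagonal entries pair x with its own augmentation g(x) (positives);
    # off-diagonal entries act as negatives, giving the -log softmax ratio.
    targets = torch.arange(x.size(0), device=x.device)
    l_nce = F.cross_entropy(sim, targets)
    return l_sup + lam * l_nce
```

Under PIS, the supernetwork optimized with this loss starts from \(\kappa\)-scaled weights, so only weights that matter for reducing the loss grow large before pruning.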
Figure 2: Effect of PIS and ILO on pruned models. The y-axis is the validation accuracy (%) and the x-axis is the compression ratio. PIS experiments only alter the supernetwork's initialization. \(\kappa=1.0\) means normal initialization. ILO experiments only alter the training objective during supernetwork training. After supernetwork training, subnetworks are pruned under different compression ratios, then finetuned. Validation accuracy of the trained pruned models is reported.

### Model Setup

IUNet is compared against One-shot Magnitude Pruning (OMP) [1] and \(\beta\)-Lasso pruning [20] on all datasets. We denote the supernetwork used by each pruning method with a superscript. Unless otherwise specified, models are trained via maximum likelihood. In addition, we compare against the following dataset-specific supernetworks (\(\text{MLP}_{\text{VIS}}\), \(\text{MLP}_{\text{TAB}}\), ResNet) and models: * **Vision**: We consider ResNet [16], \(\text{MLP}_{\text{VIS}}\), an MLP that contains a CNN subnetwork [20], and the aforementioned CNN subnetwork. * **Tabular**: We consider \(\text{MLP}_{\text{TAB}}\), a 9-layer MLP with hidden dimension 512 [14], XGB [2], TabN [1], a handcrafted tabular deep learning architecture, and \(\text{MLP}_{\text{TAB+C}}\) [15], the state-of-the-art MLP, which was heavily tuned from a cocktail of regularization techniques.

### Considered Invariances

Because contrastive learning is successful on both vision and tabular datasets, our invariant transformations, \(\mathcal{S}\), come from existing works. For computer vision, SimCLR [2] transformations are used: (1) resize crops, (2) horizontal flips, (3) color jitter, and (4) random grayscale. For tabular learning, SCARF [1] transformations are used: (5) randomly corrupting features by drawing the corrupted versions from their empirical marginal distributions.

## Results

### On Inductive Bias

In this section, we compare the effectiveness of the trained subnetwork discovered by IUNet, \(f^{P}(\cdot,\theta_{P}^{(T)})\), against the trained supernetwork, \(f^{M}(\cdot,\theta_{M}^{(T)})\). As seen in Tables 1 and 5, the pruned subnetwork outperforms the original supernetwork, even though the supernetwork has more representational capacity. This supports our claim that IUNet prunes subnetwork architectures with better inductive biases than the supernetwork. Importantly, IUNet substantially improves upon existing pruning baselines by explicitly including invariances via ILO and alleviating the lazy learning issue [11] via PIS. On _vision_ datasets: As seen in Table 1, IUNet is a general and flexible framework that improves the inductive bias of not only models like \(\text{MLP}_{\text{VIS}}\) but also specialized architectures like ResNet. As seen in Table 2, IUNet bridges the gap between MLPs and CNNs. Unlike previous work [16], IUNet does this in an entirely automated procedure. While CNN outperforms IUNet (\(\text{MLP}_{\text{VIS}}\)), we can also apply IUNet to specialized networks, IUNet (ResNet), which achieves the best overall performance. Tables 1 and 2 show IUNet is a useful tool for injecting inductive bias into arbitrary neural architectures. On _tabular_ datasets: As seen in Table 5, the subnetworks derived from MLPs outperform both the original \(\text{MLP}_{\text{TAB}}\) and hand-crafted architectures: TabN and XGB. Unlike vision, how to encode invariances for tabular data is highly nontrivial, making IUNet particularly effective.
Note, although IUNet performs competitively against \(\text{MLP}_{\text{TAB+C}}\), they are orthogonal approaches. \(\text{MLP}_{\text{TAB+C}}\) focuses on tuning regularization hyperparameters during finetuning, whereas IUNet improves the model architecture. Note, IUNet (\(\text{MLP}_{\text{TAB}}\)) did not use the optimal hyperparameters found by \(\text{MLP}_{\text{TAB+C}}\) [15].

### Ablation Study

To study the effectiveness of (1) pruning, (2) PIS, and (3) ILO, each one is removed from the optimal model. As seen in Table 4, each is crucial to IUNet. Pruning is necessary to encode the inductive bias into the subnetwork's neural architecture. PIS and ILO improve the pruning policy by ensuring weights crucial to finetuning and capturing invariance are not pruned. Notice, without pruning, IUNet No-prune performs worse than the original supernetwork. This highlights an important notion that PIS aims to improve the pruning policy, not the unpruned performance. By sacrificing unpruned performance, PIS ensures important weights are not falsely pruned. PIS is less effective on tabular datasets, where the false pruning issue seems less severe. Combining pruning, ILO, and PIS, IUNet most consistently achieves the best performance, improving upon existing pruning policies. First, our results support existing findings that (1) OMP does not produce subnetworks that substantially outperform the supernetwork [1] and (2) while unpruned models trained with SCL can outperform supervised ones, pruned models trained with SCL perform substantially worse [14]. PIS flips the trend from (1): by slightly sacrificing unpruned performance due to poorer initialization, IUNet discovers pruned models with better inductive biases, which improves downstream performance. ILO fixes the poor performance of SCL in (2) by preserving information pathways for both invariance and maximum likelihood over training. We highlight that both of these findings are significant for the network pruning community. Finally, Figure 2 confirms IUNet achieves the best performance by combining both PIS and ILO. In addition to being more effective than the supernetwork, \(f^{M}(\cdot,\theta_{M}^{(T)})\), the pruned network, \(f^{P}(\cdot,\theta_{P}^{(T)})\), is also more efficient. Figure 2 shows IUNet can reach 8-16\(\times\) compression while still keeping superior performance.

### Effect of Proactive Initialization

To further study the role of PIS, the histogram of weight magnitudes is monitored over the course of training. As shown in Figure 3, under the standard OMP pruning setup, the histogram changes little over the course of training, which supports the lazy training hypothesis [13], where performance rapidly improves while weight magnitudes change very little, decoupling each weight's importance from its magnitude. With PIS, only important weights grow over the course of training, while most weights remain near zero, barely affecting the output activations of each layer. This phenomenon alleviates the lazy training problem by ensuring (1) pruning safety, as pruned weights are already near zero and thus have minimal effect on layer activations, and (2) importance-magnitude coupling, as structurally important connections must grow to affect the output of the layer.

### On Invariance Consistency

To further study whether particular invariances are learned, we compute the consistency metric [20], which measures the percentage of samples whose predicted label would flip when an invariant transformation is applied to the input.
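A minimal sketch of this consistency computation follows. We report the complement, the percentage of predictions that remain unchanged under the transformation, which matches the convention in the tables where higher is better; the `transform` argument stands in for any invariant transformation \(g\in\mathcal{S}\).

```python
import torch

@torch.no_grad()
def consistency(model, loader, transform):
    """Percentage of samples whose predicted label does not flip under g."""
    stable, total = 0, 0
    model.eval()
    for x, _ in loader:
        preds = model(x).argmax(dim=1)
        preds_t = model(transform(x)).argmax(dim=1)
        stable += (preds == preds_t).sum().item()
        total += x.size(0)
    return 100.0 * stable / total
```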
As seen in Table 3, the subnetwork found by IUNet, \(f^{P}(\cdot,\theta_{P}^{(0)})\), is able to preserve invariances specified in ILO much better than the supernetwork, \(f^{M}(\cdot,\theta_{M}^{(0)})\). This shows IUNet indeed captures desirable invariances. ### On Weight Visualization We visualize the supernetwork weights, \(\theta_{M}^{(T)}\), when trained with IUNet compared to standard maximum likelihood (MLP) to determine what structures preserve invariance. On _vision_ datasets: As seen in Figure 6, IUNet learns more locally connected structures, which improves translation invariance. Prior work [22] found network structure (as opposed to inductive bias) to be the limiting factor for encoding CNN inductive biases into MLPs, which IUNet successfully replicates. On _tabular_ datasets: As seen in Figure 6, IUNet weights focus more on singular features. This preserves invariance over random feature corruption, as the absence of some tabular features does not greatly alter output activations of most neurons. This structure can also be likened to tree ensembles [11], whose leaves split individual features rather than all features. ## Conclusion In this work, we study the viability of network pruning for discovering invariant-preserving architectures. Under the computer vision setting, IUNet bridges the gap between deep MLPs and deep CNNs, and reliably boosts ResNet performance. Under the tabular setting, IUNet reliably boosts performance of existing MLPs, comparable to applying the state-of-the-art regularization cocktails. Our proposed novelties, ILO and PIS, flexibly improves existing OMP pruning policies by both successfully integrating contrastive learning and alleviating lazy training. Thus, IUNet effectively uses pruning to tackle invariance learning. Figure 4: Visualization of weight magnitudes, \(|\theta_{M}^{(T)}|\), trained with different policies. The top row was trained on CIFAR10 and shows the magnitude of each RGB pixel for 6 output logits. The bottom row was trained on arrhythmia and shows the weight matrix of the 1st layer with 280 input and 512 output dimensions. Lighter color means larger magnitude. Figure 3: Histogram of weight magnitudes, \(|\theta_{M}^{(t)}|\), plotted over each epoch under different \(\kappa\) initializations settings. \(\kappa=1.0\) means normal initialization. Results shown for MLP\({}_{\text{VIS}}\) on the CIFAR10 dataset. \begin{table} \begin{tabular}{|c|c|c c c|c c c|} \hline Dataset & MLP\({}_{\text{TAB}}\) & OMP (\({}^{\text{MLP}_{\text{TAB}}}\)) & \(\beta\)-Lasso (\({}^{\text{MLP}_{\text{TAB}}}\)) & IUNet (\({}^{\text{MLP}_{\text{TAB}}}\)) & XGB & TabN & MLP\({}_{\text{TAB+C}}\) \\ \hline credit-g & 70.000 & 70.000 \(\pm\) 0.000 & 67.205 \(\pm\) 0.718 & 63.166 \(\pm\) 0.000 & 68.929 & 61.190 & **74.643** \\ anneal & 99.490 & 99.691 \(\pm\) 0.234 & 99.634 \(\pm\) 0.518 & **99.712 \(\pm\) 0.101** & 85.416 & 84.248 & 89.270 \\ kr-vs-kp & 99.158 & 99.062 \(\pm\) 0.142 & 99.049 \(\pm\) 0.097 & 99.151 \(\pm\) 0.064 & **99.850** & 93.250 & **99.850** \\ arrhythmia & 67.086 & 55.483 \(\pm\) 5.701 & 67.719 \(\pm\) 3.483 & **74.138 \(\pm\) 2.769** & 48.779 & 43.562 & 61.461 \\ mfeat. 
& 98.169 & 97.959 \(\pm\) 0.278 & 97.204 \(\pm\) 0.620 & **98.176 \(\pm\) 0.121** & 98.000 & 97.250 & 98.000 \\ vehicle & 80.427 & 81.115 \(\pm\) 2.214 & 80.611 \(\pm\) 1.244 & 81.805 \(\pm\) 2.065 & 74.973 & 79.654 & **82.576** \\ kc1 & 80.762 & **84.597 \(\pm\) 0.000** & 83.587 \(\pm\) 1.010 & **84.597 \(\pm\) 0.000** & 66.846 & 52.517 & 74.381 \\ adult & 81.968 & 82.212 \(\pm\) 0.582 & 82.323 \(\pm\) 0.424 & 78.249 \(\pm\) 3.085 & 79.824 & 77.155 & **82.443** \\ walking. & 58.466 & 60.033 \(\pm\) 0.112 & 58.049 \(\pm\) 0.309 & 59.789 \(\pm\) 0.456 & 61.616 & 56.801 & **63.923** \\ phoneme & 84.213 & 86.733 \(\pm\) 0.194 & 84.850 \(\pm\) 1.548 & 87.284 \(\pm\) 0.436 & **87.972** & 86.824 & 86.619 \\ skin-seg. & 99.869 & 99.866 \(\pm\) 0.016 & 99.851 \(\pm\) 0.015 & 99.876 \(\pm\) 0.006 & **99.968** & 99.961 & 99.953 \\ ldpa & 66.590 & 68.458 \(\pm\) 0.140 & 62.362 \(\pm\) 4.605 & 64.816 \(\pm\) 4.535 & **99.008** & 54.815 & 68.107 \\ nomao & 95.776 & 95.682 \(\pm\) 0.046 & 95.756 \(\pm\) 0.074 & 95.703 \(\pm\) 0.110 & **96.872** & 95.425 & 96.826 \\ cnea & 94.080 & 92.742 \(\pm\) 0.404 & 94.808 \(\pm\) 0.254 & **96.075 \(\pm\) 0.242** & 94.907 & 89.352 & 95.833 \\ blood. & 68.965 & 61.841 \(\pm\) 10.012 & 65.126 \(\pm\) 20.792 & **70.375 \(\pm\) 5.255** & 62.281 & 64.327 & 67.617 \\ bank. & **88.300** & **88.300 \(\pm\) 0.000** & 86.923 \(\pm\) 1.948 & **88.300 \(\pm\) 0.000** & 72.658 & 70.639 & 85.993 \\ connect. & 72.111 & 72.016 \(\pm\) 0.112 & 72.400 \(\pm\) 0.214 & 74.475 \(\pm\) 0.445 & 72.374 & 72.045 & **80.073** \\ shuttle & 99.709 & 93.791 \(\pm\) 3.094 & 99.687 \(\pm\) 0.027 & 93.735 \(\pm\) 2.303 & 98.563 & 88.017 & **99.948** \\ higgs & 72.192 & 72.668 \(\pm\) 0.039 & 72.263 \(\pm\) 0.149 & 73.215 \(\pm\) 0.384 & 72.944 & 72.036 & **73.546** \\ australian & 82.153 & 83.942 \(\pm\) 1.578 & 81.667 \(\pm\) 1.572 & 82.562 \(\pm\) 1.927 & **89.717** & 85.278 & 87.088 \\ car & 99.966 & **100.000 \(\pm\) 0.000** & **100.000 \(\pm\) 0.000** & 99.859 \(\pm\) 0.200 & 92.376 & 98.701 & 99.587 \\ segment & 91.504 & 91.603 \(\pm\) 0.508 & 91.317 \(\pm\) 0.074 & 91.563 \(\pm\) 0.000 & **93.723** & 91.775 & **93.723** \\ fashion. & 91.139 & 90.784 \(\pm\) 0.158 & 90.864 \(\pm\) 0.090 & 90.817 \(\pm\) 0.040 & 91.243 & 89.793 & **91.950** \\ jungle. & 86.998 & 92.071 \(\pm\) 0.420 & 87.400 \(\pm\) 0.489 & 95.130 \(\pm\) 0.807 & 87.325 & 73.425 & **97.471** \\ numerai & 51.621 & 51.443 \(\pm\) 0.370 & 51.905 \(\pm\) 0.299 & 51.839 \(\pm\) 0.067 & 52.363 & 51.599 & **52.668** \\ deuragari & 97.550 & 97.573 \(\pm\) 0.031 & 97.549 \(\pm\) 0.014 & 97.517 \(\pm\) 0.014 & 93.310 & 94.179 & **98.370** \\ helena & 29.342 & 28.459 \(\pm\) 0.531 & 29.834 \(\pm\) 0.354 & **29.884 \(\pm\) 0.991** & 21.994 & 19.032 & 27.701 \\ jannis & 68.647 & 66.302 \(\pm\) 3.887 & 69.302 \(\pm\) 0.248 & **69.998 \(\pm\) 1.232** & 55.225 & 56.214 & 65.287 \\ volkert & 70.066 & 68.781 \(\pm\) 0.045 & 69.655 \(\pm\) 0.189 & 70.104 \(\pm\) 0.215 & 64.170 & 59.409 & **71.667** \\ miniboone & 86.539 & 87.575 \(\pm\) 0.855 & 87.751 \(\pm\) 0.398 & 81.226 \(\pm\) 6.569 & **94.024** & 62.173 & 94.015 \\ apsfailure & 97.041 & **98.191 \(\pm\) 0.000** & 98.048 \(\pm\) 0.203 & **98.191 \(\pm\) 0.000** & 88.825 & 51.444 & 92.535 \\ christine & 70.295 & 69.819 & & & & & \\ \hline \end{tabular} \end{table} Table 5: Comparing different pruning approaches and tabular baselines on the 40 tabular datasets.

## Additional Related Work

### Tabular Machine Learning

Tabular data is a difficult regime for deep learning, where deep learning models struggle against decision tree approaches.
Early methods use forests, ensembling, and boosting Shwartz-Ziv and Armon (2022); Borisov et al. (2022); Chen and Guestrin (2016). Later, researchers handcrafted new deep architectures that mimic trees Popov, Morozov, and Babenko (2019); Arik and Pfister (2021). Yet, when evaluated on large datasets, these approaches are still beaten by XGB Chen and Guestrin (2016); Grinsztajn, Oyallon, and Varoquaux (2022). Recent work found MLPs with heavy regularization tuning Kadra et al. (2021) can outperform decision tree approaches, though this conclusion does not hold on small tabular datasets Joseph and Raj (2022). To specially tackle the small data regime, Bayesian learning and Hopfield networks are combined with MLPs Hollmann et al. (2022); Schafl et al. (2022). There is also work on tabular transformers Huang et al. (2020), though said approaches require much more training data. Without regularization, tree-based models still outperform MLPs due to a better inductive bias and resilience to noise Grinsztajn, Oyallon, and Varoquaux (2022). To the best of our knowledge, the state-of-the-art on general tabular datasets remains heavily regularized MLPs (MLP\({}_{\texttt{TAB+C}}\)) Kadra et al. (2021). We aim to further boost regularized MLP performance by discovering model architectures that capture good invariances from tabular data.

### Contrastive Learning

Contrastive learning, initially proposed for metric learning Chopra et al. (2005); Schroff, Kalenichenko, and Philbin (2015); Oh Song et al. (2016), trains a model to learn shared features among images of the same type Jaiswal et al. (2020). It has been widely used in self-supervised pretraining Chen et al. (2020); Chen et al. (2021), where dataset augmentation is crucial. Although contrastive learning was originally proposed for images, it has also shown promising results in graph data Zhu et al. (2019); You et al. (2020), speech data Baewski et al. (2020), and tabular data Bahri et al. (2021). A previous study showed that speech transformers tend to overfit the contrastive loss in deeper layers, suggesting that removing later layers can be beneficial during finetuning Pasad, Chou, and Livescu (2021). While contrastive learning performs well when pretraining unpruned models, its vanilla formulation performs poorly after network pruning Corti et al. (2022). In this work, we establish a connection between contrastive learning and invariance learning and observe that pruned contrastive models fail because of overfitting.

### Neural Architecture Search

Neural Architecture Search (NAS) explores large superarchitectures by leveraging smaller block architectures Wan et al. (2020); Pham et al. (2018); Zoph et al. (2018); Luo et al. (2018); Liu, Simonyan, and Yang (2018). These block architectures are typically small convolutional neural networks (CNNs) or MLPs. The key idea behind NAS is to utilize these blocks Pham et al. (2018); Zoph et al. (2018) to capture desired invariance properties for downstream tasks. Prior works You et al. (2020); Xie et al. (2019) have analyzed randomly selected intra- and inter-block structures and observed performance differences between said structures. However, these works did not propose a method for discovering block architectures directly from data. Our work aims to address this gap by focusing on discovering the architecture within NAS blocks. This approach has the potential to enable NAS in diverse domains, expanding its applicability beyond the current scope.
## Loss Function Details

We provide a more detailed description of our loss function in this section. Following notation from the main paper, we repeat the ILO loss function in Equation 6 below: \[\mathcal{L}(\theta;\mathcal{S})=\mathbb{E}_{x,y\sim D_{train}}\left[\mathcal{L}_{SUP}(x,y,\theta)+\lambda\mathcal{L}_{NCE}(x,y,\theta;\mathcal{S})\right] \tag{6}\] To better explain our loss functions, we introduce some new notation. First, we denote the decoder output probability function over classes, \(\mathcal{Y}\), as \(\tilde{p}_{\theta_{\mathcal{D}}}:\mathcal{H}\rightarrow[0,1]^{|\mathcal{Y}|}\), where \(f_{\mathcal{D}}=\operatorname{argmax}\circ\tilde{p}_{\theta_{\mathcal{D}}}\). We denote the model output probability function by combining \(\tilde{p}_{\theta_{\mathcal{D}}}\) with the encoder as follows: \(p_{\theta}=\tilde{p}_{\theta_{\mathcal{D}}}\circ f_{\mathcal{E}}\). We introduce an integer mapping from classes \(\mathcal{Y}\) as \(\mathcal{I}:\mathcal{Y}\rightarrow\{0,1,2,...,|\mathcal{Y}|-1\}\). We show the maximum likelihood loss, \(\mathcal{L}_{SUP}\), in Equation 7 below. \[\mathcal{L}_{SUP}(x,y,\theta)=-\log\left(p_{\theta}(x)_{\mathcal{I}(y)}\right) \tag{7}\] We show the supervised contrastive loss, \(\mathcal{L}_{NCE}\), in Equation 8 below. Following SimCLR Chen et al. (2020), we assume that the intermediary representations are \(d\)-dimensional embeddings, \(\mathcal{H}=\mathbb{R}^{d}\), and use the cosine similarity as our similarity function, \(\psi^{(cos)}:\mathbb{R}^{d}\times\mathbb{R}^{d}\rightarrow\mathbb{R}\). \[\mathcal{L}_{NCE}(x,y,\theta;\mathcal{S})=\underset{g\sim\mathcal{S}}{\mathbb{E}}\left[-\log\frac{\exp\left(\psi^{(cos)}(f_{\mathcal{E}}(x,\theta),f_{\mathcal{E}}(g(x),\theta))\right)}{\sum\limits_{x^{\prime}\neq x}\exp\left(\psi^{(cos)}(f_{\mathcal{E}}(x,\theta),f_{\mathcal{E}}(g(x^{\prime}),\theta))\right)}\right] \tag{8}\]

### Surrogate Objective

We aim to learn invariance-preserving network architectures from the data. In our framework, this involves optimizing our invariance objective, which we repeat in Equation 9. We now prove that by minimizing the supervised contrastive loss in Equation 8 we equivalently maximize the invariance objective, outlined below. \[\theta^{*}=\underset{\theta}{argmax}\;\underset{\begin{subarray}{c}x_{i},x_{j}\sim\mathcal{X}\\ g\sim\mathcal{S}\end{subarray}}{\mathbb{E}}\left[\frac{\phi(f_{\mathcal{E}}^{M}(x_{i},\theta),f_{\mathcal{E}}^{M}(x_{j},\theta))}{\phi(f_{\mathcal{E}}^{M}(x_{i},\theta),f_{\mathcal{E}}^{M}(g(x_{i}),\theta))}\right] \tag{9}\] We convert the distance metric \(\phi\) into a similarity metric \(\psi\).
\[\theta^{*}=\underset{\theta}{\operatorname{argmax}}\ \underset{x_{i},x_{j}\sim\mathcal{X}}{\mathbb{E}}\left[\frac{\psi\big(f_{\mathcal{E}}^{M}(x_{i},\theta),f_{\mathcal{E}}^{M}(g(x_{i}),\theta)\big)}{\psi\big(f_{\mathcal{E}}^{M}(x_{i},\theta),f_{\mathcal{E}}^{M}(x_{j},\theta)\big)}\right]\]
\[=\underset{\theta}{\operatorname{argmin}}\ \underset{x_{i},x_{j}\sim\mathcal{X}}{\mathbb{E}}\left[\frac{-\psi\big(f_{\mathcal{E}}^{M}(x_{i},\theta),f_{\mathcal{E}}^{M}(g(x_{i}),\theta)\big)}{\psi\big(f_{\mathcal{E}}^{M}(x_{i},\theta),f_{\mathcal{E}}^{M}(x_{j},\theta)\big)}\right]\]
\[=\underset{\theta}{\operatorname{argmin}}\ \underset{x\sim\mathcal{X}}{\mathbb{E}}\left[\frac{-\psi\big(f_{\mathcal{E}}^{M}(x,\theta),f_{\mathcal{E}}^{M}(g(x),\theta)\big)}{\sum\limits_{x^{\prime}\neq x}\psi\big(f_{\mathcal{E}}^{M}(x,\theta),f_{\mathcal{E}}^{M}(g(x^{\prime}),\theta)\big)}\right]\]
\[=\underset{\theta}{\operatorname{argmin}}\ \underset{x,y\sim D_{tr}}{\mathbb{E}}\left[\frac{-\psi\big(f_{\mathcal{E}}^{M}(x,\theta),f_{\mathcal{E}}^{M}(g(x),\theta)\big)}{\sum\limits_{x^{\prime},y^{\prime}\sim D_{tr}}\psi\big(f_{\mathcal{E}}^{M}(x,\theta),f_{\mathcal{E}}^{M}(g(x^{\prime}),\theta)\big)}\right]\]
\[=\underset{\theta}{\operatorname{argmin}}\ \underset{x,y\sim D_{tr}}{\mathbb{E}}\left[-\log\left(\frac{\psi\big(f_{\mathcal{E}}^{M}(x,\theta),f_{\mathcal{E}}^{M}(g(x),\theta)\big)}{\sum\limits_{x^{\prime},y^{\prime}\sim D_{tr}}\psi\big(f_{\mathcal{E}}^{M}(x,\theta),f_{\mathcal{E}}^{M}(g(x^{\prime}),\theta)\big)}\right)\right] \tag{10}\]

We set the similarity metric, \(\psi\), to be the same as in our contrastive loss: \(\psi(\cdot,\cdot)=\exp(\psi^{(cos)}(\cdot,\cdot))\).

\[\theta^{*}=\underset{\theta}{\operatorname{argmin}}\ \underset{x,y\sim D_{train}}{\mathbb{E}}\left[\mathcal{L}_{NCE}(x,y,\theta;\mathcal{S})\right] \tag{11}\]

Here, we showed that the vanilla contrastive loss function, Equation 8, serves as a surrogate objective for optimizing our desired invariance objective, Equation 9. By incorporating contrastive learning alongside the maximum likelihood objective in Equation 6, ILO effectively reveals the underlying invariances in the pruned model.

## Additional Discussion on Lazy Training

The lazy training regime [14, 15] is a phenomenon in which the loss rapidly decreases while weight values stay relatively constant. This phenomenon occurs in large over-parameterized neural networks [14]. Because the weight values stay relatively constant, the magnitude ordering between weights also changes very little. Therefore, network pruning struggles to preserve such loss decreases in the lazy training regime [14]. Because weights with very small magnitudes have minimal effect on the output logits, pruning said weights will not drastically hurt performance. Thus, if the pruning framework can separate very small magnitude weights from normal weights prior to the lazy training regime, we can preserve loss decreases in the lazy training regime. The PIS setting accomplishes this by initializing all weights to be very small so that only important weights will learn large magnitudes. This guarantees that a large percentage of weights will have small magnitudes throughout training, while important larger-magnitude weights emerge over the course of training.

## Additional Experiments

### Effects of Proactive Initialization: Full Results

We provide weight histograms on CIFAR100 and SVHN in Figure 5. As shown, the trends reported in the main text hold on other datasets.

### On Weight Visualization: Full Results

We provide weight visualizations on CIFAR100 and SVHN in Figure 6.
As shown, the trends reported in the main text hold on other datasets.

### On Consistency: Full Results

We provide consistency experiments on CIFAR100 and SVHN in Table 6. As shown, the trends reported in the main text hold on other datasets.

## Implementation Details

### Dataset Details

We considered the following _computer vision_ datasets: CIFAR10, CIFAR100 [14], and SVHN [15]. CIFAR10 and CIFAR100 are multi-domain image classification datasets. SVHN is a street-sign digit classification dataset. Input images are \(32\times 32\) color images. We split the train set 80/20 for training and validation, and test on the separately provided test set. We report dataset statistics in Table 7. We considered 40 tabular datasets from OpenML [16], UCI [10], and Kaggle, following the \(\text{MLP}_{\text{TAB+C}}\) benchmark [14]. These tabular datasets cover a variety of domains, data types, and class imbalances. We used a 60/20/20 train/validation/test split, and report dataset statistics in Table 8. We use a random seed of 11 for the data split, following prior work [14].

\begin{table} \begin{tabular}{|c|c|c c|} \hline Dataset & \(g(\cdot)\) & \(\text{MLP}_{\text{VIS}}\) & \(\text{IUNet}^{\text{(MLP}_{\text{VIS})}}\) \\ \hline \multirow{4}{*}{CIFAR10} & resize. & 44.096 \(\pm\) 0.434 & **97.349 \(\pm\) 4.500** \\ & horiz. & 80.485 \(\pm\) 0.504 & **99.413 \(\pm\) 1.016** \\ & color. & 56.075 \(\pm\) 0.433 & **98.233 \(\pm\) 3.060** \\ & grayscale. & 81.932 \(\pm\) 0.233 & **99.077 \(\pm\) 1.598** \\ \hline \multirow{4}{*}{CIFAR100} & resize. & 32.990 \(\pm\) 1.065 & **39.936 \(\pm\) 2.786** \\ & horiz. & 70.793 \(\pm\) 0.677 & **77.935 \(\pm\) 1.464** \\ & color. & 31.704 \(\pm\) 0.560 & **51.397 \(\pm\) 2.709** \\ & grayscale. & 71.245 \(\pm\) 0.467 & **76.476 \(\pm\) 1.245** \\ \hline \multirow{4}{*}{SVHN} & resize. & 36.708 \(\pm\) 2.035 & **77.440 \(\pm\) 0.627** \\ & horiz. & 71.400 \(\pm\) 1.651 & **95.082 \(\pm\) 0.166** \\ & color. & 61.341 \(\pm\) 0.946 & **91.097 \(\pm\) 0.395** \\ & grayscale. & 90.344 \(\pm\) 0.233 & **99.259 \(\pm\) 0.073** \\ \hline \end{tabular} \end{table} Table 6: Comparing the consistency metric (%) of the untrained supernetwork, \(\text{MLP}_{\text{VIS}}\) and \(\text{MLP}_{\text{TAB}}\), against IUNet’s pruned subnetwork under different invariant transforms, \(g(\cdot)\). IUNet preserves invariances better.

Figure 5: Histogram of weight magnitudes, \(|\theta_{M}^{(t)}|\), plotted over each epoch under different \(\kappa\) initialization settings. \(\kappa=1.0\) means normal initialization. Results shown for \(\text{MLP}_{\text{VIS}}\) on the CIFAR10, CIFAR100, and SVHN datasets.

Figure 6: Visualization of weight magnitudes, \(|\theta_{M}^{(T)}|\), trained with different policies. The models were trained on CIFAR10, CIFAR100, and SVHN. The magnitudes of each RGB pixel for 6 output logits are plotted.

### Hyperparameter Settings

All experiments were run 3 times from scratch starting with different random seeds. We report both the mean and standard deviation of all runs. All hyperparameters were chosen based on validation set results. For all experiments, we used \(\lambda=1\), which was chosen through a grid search over \(\lambda\in\{0.25,0.5,1.0\}\). For all experiments, we used a batch size of 128. For pre-pruning training, we used SGD with Nesterov momentum and a learning rate of 0.001, following past works [1].
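To make this recipe concrete, the following is a minimal PyTorch-style sketch of one pre-pruning training step under the ILO objective of Equation 6: cross-entropy plus a supervised contrastive term with cosine similarity and \(\lambda=1\). The module names and the `supervised_nce` helper are hypothetical illustrations under these stated settings, not the authors' released code:

```python
import torch
import torch.nn.functional as F

def supervised_nce(z, y, tau=1.0):
    """Supervised contrastive (InfoNCE-style) loss on a batch of embeddings.

    Positives are same-label pairs; similarity is cosine, as in Equation 8.
    """
    z = F.normalize(z, dim=1)                  # cosine similarity via dot products
    sim = z @ z.t() / tau                      # pairwise similarity matrix
    n = z.size(0)
    mask = torch.eye(n, dtype=torch.bool, device=z.device)
    sim.masked_fill_(mask, float('-inf'))      # exclude self-pairs
    log_prob = sim - sim.logsumexp(dim=1, keepdim=True)
    pos = (y.unsqueeze(0) == y.unsqueeze(1)) & ~mask
    # average -log p over each anchor's positive pairs (clamp avoids 0-division)
    return -(log_prob * pos).sum(1).div(pos.sum(1).clamp(min=1)).mean()

def ilo_step(encoder, decoder, optimizer, x, y, lam=1.0):
    z = encoder(x)                             # intermediary representation f_E(x)
    loss = F.cross_entropy(decoder(z), y) + lam * supervised_nce(z, y)  # Eq. 6
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```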
For finetuning vision datasets, we used the same optimizer setup with 16-bit operations except for batch normalization, following \(\beta\)-Lasso [22]. For finetuning tabular datasets, we used AdamW [15], a learning rate of 0.001, decoupled weight decay, cosine annealing with restart, an initial restart budget of 15 epochs, a budget multiplier of 2, and snapshot ensembling [16], following prior works [10, 11]. It is important to note we did not tune the dataset and training hyperparameters for each tabular dataset individually like \(\text{MLP}_{\text{TAB+C}}\) [10], instead taking the most effective setting on average. For tabular datasets, we tuned the compression ratio over the range \(r\in\{2,4,8\}\) and the PIS multiplier over the range \(\kappa\in\{0.25,0.125,0.0625\}\), on a subset of 4 tabular datasets. We found that \(r=8\) and \(\kappa=0.25\) perform the most consistently and used this setting for all runs of IUNet in the main paper. It is important to note we did not tune hyperparameters for IUNet on each individual tabular dataset like \(\text{MLP}_{\text{TAB+C}}\) [10], making IUNet a much more efficient model than \(\text{MLP}_{\text{TAB+C}}\). For the tabular baselines [10, 10, 11], we used the same hyperparameter tuning setup as the MLP+C benchmark [10]. For vision datasets, we tuned the compression ratio over the range \(r\in\{2,4,8,16\}\) on each individual dataset for all network pruning models except \(\beta\)-Lasso4. For \(\beta\)-Lasso [22], we tuned the hyperparameters over the range \(\beta=\{50\}\) and L1 regularization in \(l1\in\{10^{-6},2\times 10^{-6},5\times 10^{-6},10^{-5},2\times 10^{-5}\}\) on each individual dataset, as done in the original paper. It is important to note that although we tuned both hyperparameters for both IUNet and the baselines on each individual dataset, our main and ablation table rankings would stay consistent had we chosen a single setting for all datasets, as shown in the detailed pruning experiments in the main paper.

Footnote 4: This is because \(\beta\)-Lasso does not accept a chosen compression ratio as a hyperparameter.

### Supernetwork Architecture

\(\text{MLP}_{\text{VIS}}\) is a deep MLP that contains a CNN subnetwork. Given a scaling factor, \(\alpha\), the CNN architecture consists of 3x3 convolutional layers with the following (out channels, stride) settings: \([(\alpha,1)\), \((2\alpha,2)\), \((2\alpha,1)\), \((4\alpha,2)\), \((4\alpha,1)\), \((8\alpha,2)\), \((8\alpha,1)\), \((16\alpha,2)]\) followed by a hidden layer of dimension \(64\alpha\). It is worth noting that our CNN does not include maxpooling layers for fair comparison with the learned architectures, following the same setup as \(\beta\)-Lasso [22]. To form the MLP network, we ensured the CNN structure exists as a subnetwork within the MLP supernetwork by setting the hidden layer sizes to: \([\alpha s^{2},\frac{\alpha s^{2}}{2},\frac{\alpha s^{2}}{2},\frac{\alpha s^{2}}{4},\frac{\alpha s^{2}}{4},\frac{\alpha s^{2}}{8},\frac{\alpha s^{2}}{8},\frac{\alpha s^{2}}{16},64\alpha]\). This architecture was also introduced in \(\beta\)-Lasso [22]. All layers are preceded by batch normalization and ReLU activation. We chose \(\alpha=8\) such that our supernetwork can fit onto an Nvidia RTX 3070 GPU.
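As a rough illustration of the sizing rule above, here is a sketch of how \(\text{MLP}_{\text{VIS}}\) could be constructed (\(s=32\) for these datasets); the Linear/BatchNorm/ReLU ordering and the final classification head are our assumptions for illustration, not the authors' exact layer layout:

```python
import torch.nn as nn

def mlp_vis(alpha=8, s=32, in_dim=3 * 32 * 32, num_classes=10):
    """Deep MLP whose hidden widths follow the sizing rule described above."""
    widths = [alpha * s**2, alpha * s**2 // 2, alpha * s**2 // 2,
              alpha * s**2 // 4, alpha * s**2 // 4,
              alpha * s**2 // 8, alpha * s**2 // 8,
              alpha * s**2 // 16, 64 * alpha]
    layers, prev = [nn.Flatten()], in_dim
    for w in widths:
        layers += [nn.Linear(prev, w), nn.BatchNorm1d(w), nn.ReLU()]
        prev = w
    layers.append(nn.Linear(prev, num_classes))  # hypothetical decoder head
    return nn.Sequential(*layers)
```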
CNN is the corresponding CNN subnetwork with (out channels, stride) settings: \([(\alpha,1)\), \((2\alpha,2)\), \((2\alpha,1)\), \((4\alpha,2)\), \((4\alpha,1)\), \((8\alpha,2)\), \((8\alpha,1)\), \((16\alpha,2)]\), derived from prior works [22]. Again, we chose \(\alpha=8\) to be consistent with \(\text{MLP}_{\text{VIS}}\). ResNet [10] is the standard ResNet-18 model used in past benchmarks [1]. ResNet differs from CNN in its inclusion of max-pooling layers and residual connections. \(\text{MLP}_{\text{TAB}}\) is a 9-layer MLP with hidden dimension 512, batch normalization, and ReLU activation. We did not use dropout or skip connections as they were found to be ineffective on most tabular datasets in MLP+C [10].

### Pruning Implementation Details

Following Shrinkbench [1], we use magnitude-based pruning only on the encoder, \(f_{\mathcal{E}}\), keeping all weights in the decoder, \(f_{\mathcal{D}}\). This is done to prevent pruning a cutset in the decoder architecture, so that all class logits receive input signal. To optimize performance, we apply magnitude-based pruning globally, instead of layer-wise.

### Hardware

All experiments were conducted on an Nvidia V100 GPU and an AMD EPYC 7402 CPU. The duration of the tabular experiments varied, ranging from a few minutes up to half a day, depending on the specific dataset-model pair and the training phase (pre-pruning training or finetuning). For the vision experiments, a single setting on a single dataset-model pair required a few hours for both pre-pruning training and finetuning.
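As a closing illustration of the encoder-only, global magnitude-based pruning described above, the following sketch ranks all encoder weights by magnitude and keeps the top \(1/r\) fraction; the helper names are hypothetical and the mask bookkeeping is simplified relative to Shrinkbench:

```python
import torch

def global_magnitude_masks(encoder, compression_ratio):
    """Keep the largest-magnitude 1/r fraction of encoder weights, ranked globally.

    Only 'weight' parameters are considered; biases (and, in a sketch like this,
    normalization weights) are left untouched.
    """
    weights = [p for name, p in encoder.named_parameters() if name.endswith('weight')]
    all_mags = torch.cat([p.detach().abs().flatten() for p in weights])
    k = max(1, int(all_mags.numel() / compression_ratio))  # number of weights kept
    threshold = all_mags.topk(k).values.min()              # global magnitude cutoff
    return {id(p): (p.detach().abs() >= threshold).float() for p in weights}

def apply_masks(encoder, masks):
    with torch.no_grad():
        for _, p in encoder.named_parameters():
            if id(p) in masks:
                p.mul_(masks[id(p)])                       # zero out pruned weights
```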
2309.09290
Coarse-Graining with Equivariant Neural Networks: A Path Towards Accurate and Data-Efficient Models
Machine learning has recently entered into the mainstream of coarse-grained (CG) molecular modeling and simulation. While a variety of methods for incorporating deep learning into these models exist, many of them involve training neural networks to act directly as the CG force field. This has several benefits, the most significant of which is accuracy. Neural networks can inherently incorporate multi-body effects during the calculation of CG forces, and a well-trained neural network force field outperforms pairwise basis sets generated from essentially any methodology. However, this comes at a significant cost. First, these models are typically slower than pairwise force fields even when accounting for specialized hardware which accelerates the training and integration of such networks. The second, and the focus of this paper, is the considerable amount of data needed to train such force fields. It is common to use 10s of microseconds of molecular dynamics data to train a single CG model, which approaches the point of eliminating the CG model's usefulness in the first place. As we investigate in this work, this data-hunger trap of neural networks for predicting molecular energies and forces can be remediated in part by incorporating equivariant convolutional operations. We demonstrate that for CG water, networks which incorporate equivariant convolutional operations can produce functional models using datasets as small as a single frame of reference data, while networks without these operations cannot.
Timothy D. Loose, Patrick G. Sahrmann, Thomas S. Qu, Gregory A. Voth
2023-09-17T14:55:08Z
http://arxiv.org/abs/2309.09290v2
# Coarse-Graining with Equivariant Neural Networks: A Path Towards Accurate and Data-Efficient Models ###### Abstract Machine learning has recently entered into the mainstream of coarse-grained (CG) molecular modeling and simulation. While a variety of methods for incorporating deep learning into these models exist, many of them involve training neural networks to act directly as the CG force field. This has several benefits, the most significant of which is accuracy. Neural networks can inherently incorporate multi-body effects during the calculation of CG forces, and a well-trained neural network force field outperforms pairwise basis sets generated from essentially any methodology. However, this comes at a significant cost. First, these models are typically slower than pairwise force fields even when accounting for specialized hardware which accelerates the training and integration of such networks. The second, and the focus of this paper, is the need for the considerable amount of data needed to train such force fields. It is common to use 10s of microseconds of molecular dynamics data to train a single CG model, which approaches the point of eliminating the CG model's usefulness in the first place. As we investigate in this work, this "data-hunger" trap from neural networks for predicting molecular energies and forces can be remediated in part by incorporating equivariant convolutional operations. We demonstrate that for CG water, networks which incorporate equivariant convolutional operations can produce functional models using datasets as small as a single frame of reference data, while networks without these operations cannot. **1. Introduction** Molecular dynamics (MD) has proven to be a powerful tool to study the molecular underpinnings behind a variety of interesting phenomena in the biological and material sciences.[1, 2, 3] The efficient integration of Newton's equation of motion in MD provides a fast method by which statistical information of a molecular system can be averaged over to arrive at conclusions about the thermodynamics and dynamics of the system. A variety of MD techniques have been developed to investigate systems at a range of different accuracies at the all-atom (AA) level. At the other end of the spectrum is coarse-graining (CG), which seeks to accurately simulate a molecular system at a resolution below that of atomistic MD.[4, 5, 6, 7] In recent years there has been a large amount of interest in applications of machine learning (ML) to molecular simulations both at the AA (and _ab initio_)[8, 9, 10, 11, 12, 13, 14, 15, 16] and CG[17, 18, 19, 20, 21] levels. While some of these approaches apply ML to generate a pairwise CG force field (see, e.g., the original work by Voth and co-workers[22, 23, 24, 25]), most work has pursued the idea to treat the ML model - typically a deep neural network (DNN) - as the CG force field itself. On the _ab initio_ MD end, DNN force fields tend to integrate much faster than full quantum treatments of the atoms as they can typically be evaluated as a series of matrix multiplications and thus may be useful as a method to speed up integration while maintaining an acceptable level of accuracy. On the CG end, however, ML-based methods tend to be slower than a simple pairwise CG forcefield, although they can be more accurate as they can naturally incorporate many-body correlations. 
The DNN can also be better at fitting the interaction data than the linear regression[23, 24]\({}^{,}\)[26] or relative entropy minimization[27, 28] (REM) CG'ing-based methods that are more typically employed in "bottom-up" CG model development. ML has a long history of drawing inspiration from nature to create powerful models which can tackle problems that were previously considered intractable. The architecture of the first neural networks was, as their name suggests, inspired by neural function in the brain.[29] Similarly, convolutional neural networks take advantage of processes found in animal eyes to identify and classify features above the single pixel-level in image processing [30, 31]. Both the brain and eye are remarkably powerful tools for learning and image processing respectively, so it is perhaps no surprise that these models can succeed at such tasks. In the application of DNNs to molecular systems, it should also come as no surprise that taking inspiration from physics can lead to better results. Most successful DNN-based methods incorporate physical constraints into their architectures and training schemes to improve the often-nebulous connection between DNN regression and physical reality as well as to speed up training. Typical DNN based force fields, such as CGnet [32] and CGSchnet [33], utilize the same objective function as the multiscale coarse-graining (MS-CG) method [22, 23, 24], the mean-squared error between the forces of the mapped reference data and the CG model [23, 24]. The difference lies in the usage of a neural network to learn these forces versus the force matching approach which employs least squares regression over a simpler set of model parameters, typically either Lennard Jones parameters or B-splines (or both). Further physical intuition can be applied to the structure of the networks themselves. The DNNs work in two stages. The first is a featurization stage in which the raw Cartesian coordinates for each particle in a configuration are converted into more natural internal coordinates while the second is a neural network that learns particle-wise energies and forces from these featurized configurations. The simplest featurization scheme corresponds to converting the coordinates into interparticle distances or their inverses as well as particle types, which can be done directly as is the case for CGnets [17]. These features are then subjected to physically inspired energy priors, i.e., harmonic for bonded particles and repulsive for non-bonded ones. This frees the energy-predicting network from needing to learn those features of the CG Hamiltonian and allows it to learn corrections to the priors instead. Another approach for featurizing molecular configurations which can generate more accurate results is to embed these features into a graph neural network as in Schnets and CGSchnets[8, 18] which are naturally suited to representing molecular systems. Each node of the graph represents a CG site, and each edge a distance between the two CG sites representing each node. Convolutions over these graph elements can be performed analogously to convolutions over pixels in 2-D images, giving graph neural networks a powerful tool to pool information across a variety of spatial scales[34]. These networks can then be trained to learn an effective embedding of the CG configuration which optimally predicts CG forces and energies, improving over the set of hand-selected internal coordinates used by CGNets. 
This embedding network fits into the previously discussed architecture in between the original featurization into internal coordinates and before the energy prediction network. This method also produces networks that may be more inherently transferable, as the embedding network can develop an effective embedding for any configuration so long as there are no CG types that have not been seen by the network. For systems such as proteins, this is possible to accomplish, so long as the training dataset contains all 20 amino acids. Both the handpicked featurization and the graph neural network implementation ensure that all resulting CG features are at least invariant to rotation and translation, and the overall force field is equivariant. Equivariant neural networks specifically guarantee the proper transformation behavior of a physical system under coordinate changes. This relationship can be described more precisely in group-theoretic terms, in which a group \(G\) operates on vector spaces \(X\) and \(Y\). A function \(f(x)\) that maps from \(X\) to \(Y\) is equivariant with respect to \(G\) if

\[D_{Y}[g]f(x)=f(D_{X}[g]x) \tag{1}\]

where \(D_{X}[g]\) and \(D_{Y}[g]\) are representations of element \(g\) in the vector spaces \(X\) and \(Y\), respectively. Recently, a class of equivariant neural networks has been developed for atomistic molecular systems which incorporates equivariance into the hidden layers of the network [10, 11, 15]. These methods incorporate full vector information of the relative positions of atoms, in addition to higher-order tensor information, to guarantee that the magnitudes of the forces produced by these networks are invariant to rotation, translation, and reflection (also known as the Euclidean or E(3) symmetry group) while the unit vectors describing the directions of these forces are equivariant under these operations. These properties impose a restraint on the networks based on the physics, which in theory should make the networks far more capable of representing and predicting molecular forces. Specific implementations of equivariance differ from architecture to architecture, but the present work focuses on the NequIP [11] and Allegro models. These architectures take advantage of the natural translational and permutational equivariance of the convolution, and enforce that the convolutional filters are products of radial functions and spherical harmonics, which transform equivariantly under rotation, to achieve full E(3) equivariance [35]:

\[S_{m}^{(l)}\big(\vec{r}_{ij}\big)=R(r_{ij})\,Y_{m}^{(l)}\big(\hat{r}_{ij}\big) \tag{2}\]

where \(S_{m}^{(l)}\big(\vec{r}_{ij}\big)\) is a convolutional filter over the full distance vectors between atoms, \(r_{ij}\) is the scalar distance associated with \(\vec{r}_{ij}\), and \(\hat{r}_{ij}\) is the corresponding unit vector. Allegro and NequIP differ in that NequIP is globally equivariant. NequIP achieves global equivariance via a message-passing layer which passes messages between adjacent graph nodes. These layers can learn a variety of functions, from graph convolutions to graph-wide targets, which encode information about the entire system [36]. Allegro removes this message-passing layer and achieves only local equivariance, a cost which allows parallelization of network evaluation, thereby allowing it to scale to much larger systems. The results of these equivariant networks can address a key weakness of ML: the large dataset requirements for training neural networks.
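To make the filter construction in Equation 2 concrete, the following is a minimal numpy/scipy sketch with a Gaussian radial function standing in for the learned \(R(r)\); real NequIP/Allegro models learn \(R\) and combine many \((l,m)\) channels, so this is only an illustration:

```python
import numpy as np
from scipy.special import sph_harm

def equivariant_filter(r_vec, l, m, r0=2.0, sigma=0.5):
    """S_m^(l)(r_vec) = R(|r|) * Y_m^(l)(r_hat), cf. Equation 2."""
    r = np.linalg.norm(r_vec)
    theta = np.arctan2(r_vec[1], r_vec[0])            # azimuthal angle of r_hat
    phi = np.arccos(r_vec[2] / r)                     # polar angle of r_hat
    radial = np.exp(-((r - r0) ** 2) / (2 * sigma**2))  # stand-in for learned R(r)
    return radial * sph_harm(m, l, theta, phi)        # complex-valued harmonic

# Example: an l = 1 filter evaluated on one interatomic displacement vector
print(equivariant_filter(np.array([1.0, 1.0, 0.5]), l=1, m=0))
```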
To generate an effective DNN, one must supply a very large amount of training data (usually MD frames in the case of bottom-up CG model development), which can in turn make the resulting CG model less useful (or potentially useless) as it inflates the time required to calculate results. For example, prior training of a CGSchnet architecture to produce a force field for chignolin, a mini-protein containing 10 amino acids, required 180 microseconds of reference simulation [18]. By contrast, a more conventional CG hetero-elastic network model [37] for full-length integrin, containing 1780 residues, was generated using 0.1 microseconds of MD simulation [38]. This disparity may call into question the usefulness of DNNs as CG force fields under certain circumstances when applied to lower-resolution CG models, even considering the highly accurate results they may generate. As it turns out, and as will be shown in this paper below, one primary reason for the large number of training examples required by DNN force fields is the equivariance of molecular forces. Rotation-invariant DNN-based methods must learn the equivariance of forces via training reinforcement, which adds considerably to the data cost of creating these models. Equivariant neural networks, on the other hand, build this information into the model inherently and have been shown to predict interatomic energies and forces for small molecules at atomic resolution with three orders of magnitude less training data than symmetry-invariant architectures, and to do so with even greater accuracy [11]. While training neural networks to predict energies and forces of atomistic-resolution systems from _ab initio_ quantum data is not the same as training a CG model from atomistic data, there is a natural analogy of learning to predict forces at a lower resolution from higher-resolution data. In the former case, the high resolution is the quantum description of the system, while the low resolution is the atomistic description. In the case of CG'ing, the high-resolution model is the atomistic description, while the low resolution is some chosen CG resolution. A key difference is that most methods that learn atomistic descriptions from quantum data treat bonded and non-bonded interactions as the same, relying purely on internal coordinates to distinguish them, while CG DNN methods use labels and alternative energy priors to do this. This is necessary for complex CG systems, as bond breaking and forming are typically ignored for these models, and because the length scales of bonded interactions can easily match and overlap with those of the non-bonded ones. For this reason, the currently available equivariant neural network-based methods must be used carefully when applying them to CG systems. There are certain cases in which the methods are fundamentally identical. The simplest case is that in which there are no bonds whatsoever, and each CG site or "bead" corresponds to an entire molecule. In this case, the act of making a CG model is equivalent to the act of reducing the quantum description of a nonreactive single particle, such as helium, to its atomistic representation. For this reason, the work in this paper is limited in scope to the coarse-graining of single-site liquids, namely single-site water as a key example. Water is also an ideal test case for a DNN CG method due to the high levels of correlation caused by the underlying hydrogen bonding.
For this reason, single-site CG water models tend to fail to predict proper center-of-mass radial distribution functions (RDFs) for water unless they incorporate many-body correlations [39, 40, 41, 42, 43]. In this work we present an analysis of DNN-based CG models of single-site CG water utilizing invariant and equivariant convolutional operations. For the invariant model, the Deep Potential Molecular Dynamics method with smoothed embedding (DeePMD) [44, 45] is utilized. For the equivariant model, the Allegro model [46] is utilized. Each method is applied to water in the limit of low sampling: a maximum of 100 consecutive MD frames are used to train each model. The remainder of this paper is organized as follows: First, a discussion of the methods is given, with the hyperparameters for all ML methods as well as all MD simulation parameters. A discussion of DeePMD and Allegro models is also presented. Following this, results for each model are presented. Pairwise RDFs are analyzed and compared to mapped atomistic reference data. Three-body angular correlations are also analyzed. Finally, the stability of each force field in the low-sampling limit is discussed. These results are then discussed and conclusions on the usefulness of equivariant particle embedding in the field of CG modeling are drawn.

## 2 Methods

In order to generate the dataset used to train the models, the LAMMPS [47] and GROMACS [48] MD programs were used to simulate 512 TIP3P [49] and SPC/E [50] water molecules for a total of 10 nanoseconds in the constant NVT ensemble, respectively. For both models, a Nose-Hoover thermostat [51, 52] was used to maintain the simulation at 300 K, and frames were captured every 2 ps for a total of 5000 frames, though far fewer were used in the training of the Allegro and DeePMD models. The resulting trajectory was mapped to a resolution of one CG site per water using a center of mass (COM) mapping scheme. This was then passed as a training dataset to DeePMD and Allegro. The DeePMD method consists of both an embedding network and a fitting network. The embedding takes pairwise distances as input and outputs a set of symmetry-invariant features which include three-body information such as angular and radial features from nearby atoms, denoted by the authors as the se_e2_a embedding. Notably, this embedding network is not a graph neural network. Before the interatomic distances are fed into this network, they are converted into a set of coordinates based on inverse distances:

\[\{x_{ij},y_{ij},z_{ij}\}\rightarrow\{s(r_{ij}),\hat{x}_{ij},\hat{y}_{ij},\hat{z}_{ij}\} \tag{3}\]

\[s\big(r_{ij}\big)=\begin{cases}\dfrac{1}{r_{ij}},&r_{ij}<r_{c1}\\ \dfrac{1}{r_{ij}}\bigg\{\dfrac{1}{2}\cos\bigg[\dfrac{\pi\big(r_{ij}-r_{c1}\big)}{(r_{c2}-r_{c1})}\bigg]+\dfrac{1}{2}\bigg\},&r_{c1}<r_{ij}<r_{c2}\\ 0,&r_{ij}>r_{c2}\end{cases} \tag{4}\]

where \(x_{ij}\), \(y_{ij}\), and \(z_{ij}\) refer to the x, y, and z projections of \(r_{ij}\), the distance between two particles i and j, and \(\hat{x}_{ij}=\dfrac{s(r_{ij})x_{ij}}{r_{ij}}\), \(\hat{y}_{ij}=\dfrac{s(r_{ij})y_{ij}}{r_{ij}}\), and \(\hat{z}_{ij}=\dfrac{s(r_{ij})z_{ij}}{r_{ij}}\). This set of features is then converted via the embedding network into a matrix of features that preserves the rotational, translational, and permutational symmetry of the system. These features are passed through per-atom subnetworks which compute the energy contribution from each atom to the total system energy.
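A minimal numpy sketch of the smoothed coordinates in Equations 3 and 4 (the cutoff values here are illustrative placeholders, not the settings used for the trained models):

```python
import numpy as np

def s_of_r(r, rc1=6.0, rc2=7.0):
    """Smoothly switched 1/r coordinate from Equation 4 (DeePMD se_e2_a).

    Assumes r > 0; units follow the cutoffs (Angstroms here).
    """
    r = np.asarray(r, dtype=float)
    switch = 0.5 * np.cos(np.pi * (r - rc1) / (rc2 - rc1)) + 0.5
    return np.where(r < rc1, 1.0 / r,
           np.where(r < rc2, switch / r, 0.0))

def deepmd_coords(r_vec):
    """Generalized coordinates {s, x_hat, y_hat, z_hat} of Equation 3."""
    r = np.linalg.norm(r_vec)
    s = s_of_r(r)
    return np.concatenate([[s], s * r_vec / r])
```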
The gradients of these per-atom energies can then be used to calculate interatomic forces during an MD simulation. During training, the DeePMD model sees individual atoms as training samples which can be batched as usual [45]. Allegro is an extension of the NequIP model which trades global equivariance, gained via a message-passing graph neural network, for local equivariance in order to provide much greater scaling capability [46]. In Allegro, two sets of features are generated by the initial featurization for each pair of particles. The first is a scalar set of features which consists of interatomic distances and labels for each chemical species in the interaction. This feature set is symmetry invariant, as in the case of DeePMD. The second feature set contains unit vector information corresponding to these interatomic distances, projected onto spherical harmonic functions. These features are then embedded through a series of layers onto a new equivariant feature set, which is then fed into a multilayer perceptron (MLP) that predicts the energy of the interaction. The total energy of the system can be calculated as the sum of these energies, and the forces can be calculated via the gradients of these energies. For both Allegro and DeePMD models, a common network size was selected to ensure that differences in the performance of the models were most strongly correlated with the number of training samples. The embedding networks were composed of three layers with widths [8, 16, 32]. In the case of DeePMD models, this format is converted into a ResNet [53], for which no timestep was selected. The energy fitting networks were also composed of three layers, each with widths [32, 32, 32]. Each network was given a maximum cutoff for the environment of each atom of 7 Angstroms. Training parameters such as the number of epochs, learning rates, and early stopping were left to the defaults of each model archetype to ensure that each model was trained according to its normal usage. Specific parameters for each model may be found in the Supplementary Information in the form of DeePMD and Allegro input files. A total of 4 models were trained according to the preceding description. For DeePMD, two models were trained, one using 100 frames (or 200 ps) of training data, and one using 10 frames (or 20 ps) of data. Two Allegro models were trained using 100 frames and 1 frame of training data. The TIP3P CG water models were simulated for 2,500,000 timesteps and the SPC/E CG water models were simulated for 1,000,000 timesteps. We note that the models presented here are trained in the data-scarcity limit, and that modern equivariant neural networks are in practice trained on datasets orders of magnitude larger than those considered here. Each model was tested on a simulation of 3916 water molecules using a Nose-Hoover thermostat in the constant NVT ensemble at 300 K, just as for the reference data. All models, except for the 10-frame DeePMD model, were simulated using a timestep of 2 fs, while the 10-frame DeePMD model used a 0.5 fs timestep, a choice which is explained in the Results and Discussion section. To calculate the RDFs, an outer cutoff for each model was selected to be 10 Angstroms.
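For reference, a minimal numpy sketch of how such a COM RDF can be computed for an orthorhombic periodic box (a generic illustration, not the analysis code used in this work):

```python
import numpy as np

def rdf(frames, box, r_max=10.0, n_bins=200):
    """Radial distribution function g(r) for single-site CG frames.

    frames: array (n_frames, n_particles, 3); box: (3,) orthorhombic box lengths.
    """
    edges = np.linspace(0.0, r_max, n_bins + 1)
    counts = np.zeros(n_bins)
    n = frames.shape[1]
    for xyz in frames:
        d = xyz[:, None, :] - xyz[None, :, :]
        d -= box * np.round(d / box)                  # minimum-image convention
        r = np.linalg.norm(d, axis=-1)[np.triu_indices(n, k=1)]
        counts += np.histogram(r, bins=edges)[0]
    rho = n / np.prod(box)                            # number density
    shell = (4.0 / 3.0) * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    ideal = rho * shell * n / 2 * len(frames)         # ideal-gas pair counts
    return 0.5 * (edges[1:] + edges[:-1]), counts / ideal
```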
The 3-body angular distributions, \(P(\theta)\), were calculated for water using the following equation:

\[P(\theta)=\frac{1}{N}\left\langle\sum_{I}\sum_{J\neq I}\sum_{K>J}\delta\big(\theta-\theta_{IJK}\big)\right\rangle_{R<R_{C}} \tag{5}\]

where \(N\) is a normalization constant equal to the largest value in the calculated sum and \(R_{C}\) is the cutoff radius. For these correlation functions, an outer cutoff of 4.5 Angstroms was selected, which corresponds to the second solvation shell of water originating from its tetrahedral ordering [54]. The 3-body correlations were calculated between 30 and 150 degrees with a bin width of 1 degree, which captures the full extent of the 3-body correlations seen in tetrahedral water [42].

## 3 Results and Discussion

_Force Error_. The validation RMSEs for each ML model trained on the TIP3P and SPC/E water models are reported in Tables 1 and 2, respectively. For both water models, the 100-frame Allegro model demonstrates the best performance. Intriguingly, the performance of Allegro on a single frame is identical to the performance of DeePMD trained on two orders of magnitude more data for the TIP3P water model, and is similar in magnitude for the SPC/E water model. The 10-frame DeePMD model exhibits the largest validation RMSE and, as demonstrated in simulation, deviates the furthest in capturing the structural correlations of the reference models.

\begin{table} \begin{tabular}{|l|c|} \hline Model & RMSE (kcal/mol Å) \\ \hline 10-frame DeePMD & 5.81 \\ \hline 100-frame DeePMD & 3.66 \\ \hline 1-frame Allegro & 3.66 \\ \hline 100-frame Allegro & 3.55 \\ \hline \end{tabular} \end{table} Table 1: RMSE values of ML models trained on the TIP3P water model.

\begin{table} \begin{tabular}{|l|c|} \hline Model & RMSE (kcal/mol Å) \\ \hline 10-frame DeePMD & 11.8 \\ \hline 100-frame DeePMD & 3.54 \\ \hline 1-frame Allegro & 3.64 \\ \hline 100-frame Allegro & 3.49 \\ \hline \end{tabular} \end{table} Table 2: RMSE values of ML models trained on the SPC/E water model.

_Simulation stability_. Every model trained, except for the 10-frame DeePMD model of TIP3P water, was stable when simulated using a 2 fs timestep. The 10-frame DeePMD model suffered from severe energy drift at this timestep and the system quickly falls out of a liquid state. Figure 1 shows sample coordinates for the 10-frame DeePMD and 1-frame Allegro TIP3P models, detailing the extent of the instability of the former system. Interestingly, even the repulsive priors present in the DeePMD architecture cannot prevent the particles from reaching unphysically close distances. Within 2,500 timesteps, the water sites coalesce into small clumps. To simulate the model long enough to collect data, a 0.5 fs timestep was selected, as a 2 fs timestep led to an almost immediate loss of multiple CG particles from the simulation box. Furthermore, a 500 fs Nose-Hoover damping coefficient was not strong enough to stabilize the system at 300 K and the system exhibited a lower temperature throughout the simulation. On the other hand, all Allegro models performed stably and did not naturally tend towards a lower temperature or a collapsed state.

Figure 1: Post-equilibration snapshot of (a) the Allegro CG water model trained on 1 frame of AA reference data and (b) the DeePMD water model trained on 10 frames of AA reference data of TIP3P water.

_Radial distribution functions (RDFs)_. Figures 2 and 3 show RDFs for each CG model compared to that of the mapped reference system for TIP3P and SPC/E water, respectively.
Each stable NN model does a good job of capturing the structural correlations of liquid water, a difficult task for a single-site CG water model based on pairwise non-bonded CG potentials. While none of the CG models developed from either AA model fully capture the depth of the well directly beyond the first peak, all models aside from the 1-frame Allegro model are close, with the 100-frame DeePMD model performing the best for both water models. The 1-frame Allegro models overestimate the height of the first peak, while the well depth is overestimated for TIP3P water and underestimated for SPC/E water. None of the models can capture the small peak around 6 Angstroms for TIP3P water and 4.5 Angstroms for SPC/E water, with the 100-frame models performing slightly better than the 1-frame Allegro models. The most noteworthy differences come from the 10-frame DeePMD model, which deviates significantly because of its propensity to collapse individual water molecules onto one another in the TIP3P model, and its poor representation of the underlying liquid structure for the SPC/E model.

Figure 2: a) Radial distribution functions of CG water for Allegro and DeePMD models compared to reference atomistic data for TIP3P water. b) Detail of RDF in top panel between 3 and 7 Angstroms.

Figure 3: Radial distribution functions of CG water for Allegro and DeePMD models compared to reference atomistic data for SPC/E water.

While the 1-frame Allegro model for both TIP3P and SPC/E water is the least accurate of the stable models, it still qualitatively captures the shape of the RDF and, in multiple cases, is slightly more accurate where the peaks and wells are located. Unsurprisingly, there is an overall trend of increasing quality as the number of training examples increases, as seen in both the Allegro and DeePMD models.

_Three-body correlations_.
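Before turning to the results, a minimal numpy sketch of the angular distribution defined in Equation 5, with the angle taken at the central particle I and no periodic-boundary handling (an illustration only):

```python
import numpy as np

def triplet_angles(xyz, r_cut=4.5):
    """Angles theta_IJK (in degrees, vertex at I) for neighbors within r_cut."""
    angles = []
    for i in range(len(xyz)):
        d = xyz - xyz[i]
        r = np.linalg.norm(d, axis=1)
        nbrs = np.where((r > 0) & (r < r_cut))[0]
        for a in range(len(nbrs)):
            for b in range(a + 1, len(nbrs)):
                u, v = d[nbrs[a]], d[nbrs[b]]
                cos_t = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
                angles.append(np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0))))
    return np.array(angles)

# P(theta): 1-degree bins over 30-150 degrees, normalized by the largest bin
hist, edges = np.histogram(triplet_angles(np.random.rand(50, 3) * 10.0),
                           bins=np.arange(30.0, 151.0, 1.0))
p_theta = hist / hist.max() if hist.max() else hist
```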
Each model aside from the 10-frame DeePMD model performs reasonably well by this benchmark, although this is not very surprising given how expressive a DNN-based force field can be. One value of a DNN-based CG force field may be its ability to better capture many-body correlations, which can be difficult for pairwise CG potentials. Figures 4 and 5 show water-water-water triplet angular distributions for each model parameterized, in comparison to the mapped atomistic TIP3P and SPC/E water models. As with the behavior seen in the RDFs, there is an overall trend of increasing accuracy with respect to increased amounts of training data.

Figure 4: Three-body angular distributions between triplets of CG waters for each Allegro and DeePMD model compared to mapped atomistic reference data of TIP3P water. Distributions were calculated using Equation 5.

Between the models trained on 100 frames of data, the Allegro model outperforms DeePMD for the TIP3P AA water model, with a better representation of the minimum around 60 degrees and comparable accuracy everywhere else. For the SPC/E water model, the 100-frame DeePMD model performs slightly better in capturing three-body correlations around the minimum near 60 degrees. Of the stable models, the 1-frame Allegro model is again the least accurate, predicting much lower probability values everywhere except for the first peak at 45 degrees. As before, the 10-frame DeePMD model completely fails to capture the 3-body correlations of both TIP3P and SPC/E water.

Figure 5: Three-body angular distributions between triplets of CG waters for each Allegro and DeePMD model compared to mapped atomistic reference data of SPC/E water. Distributions were calculated using Equation 5.

While structural correlations have been investigated in this manuscript, we note that the dynamical behavior of ML CG models, including diffusion, has not to our knowledge been thoroughly investigated. Such investigations are hindered by the artificial speedup in dynamical behavior exhibited by Newtonian mechanics of the CG PMF compared to the corresponding AA system [55]. This acceleration of CG dynamics is due to the absence of fluctuation and dissipation forces [56, 57]. ML force fields for AA models should ideally perfectly reproduce the dynamical behavior of the reference model [58]. However, ML CG models will ideally match the dynamical behavior of the PMF, whose value is not knowable without an explicit representation of the PMF in the first place. Consequently, we suggest that systematic investigation of the convergence of dynamical behavior of ML CG force fields is paramount, and a logical route forward from this manuscript.

## 4 Conclusions

In this work, we compare symmetry-invariant and symmetry-equivariant neural networks in their capacity to generate accurate CG force fields in the limit of low training data. Two architectures, DeePMD and Allegro, were chosen for symmetry invariance and equivariance, respectively. We show that symmetry-equivariant models can form stable CG water models with even just a single frame of reference condensed-phase MD data. It is shown consistently across both AA water models that the Allegro architecture is more data efficient than that of DeePMD. It is not surprising that, when holding model architecture constant, the models trained on more reference data outperformed those parameterized with less.
While structural correlations have been investigated in this manuscript, we note that the dynamical behavior of ML CG models, including diffusion, has not to our knowledge been thoroughly investigated. Such investigations are hindered by the artificial speedup in dynamical behavior exhibited by Newtonian mechanics of the CG PMF compared to the corresponding AA system [55]. This acceleration of CG dynamics is due to the absence of fluctuation and dissipation forces [56, 57]. ML force fields for AA models should ideally perfectly reproduce the dynamical behavior of the reference model [58]. However, ML CG models will ideally match the dynamical behavior of the PMF, whose value is not knowable without an explicit representation of the PMF in the first place. Consequently, we suggest that a systematic investigation of the convergence of dynamical behavior of ML CG force fields is paramount, and a logical route forward from this manuscript.

## 4 Conclusions

In this work, we compare symmetry-invariant and symmetry-equivariant neural networks in their capacity to generate accurate CG force fields in the limit of low training data. Two architectures, DeePMD and Allegro, were chosen for symmetry invariance and equivariance, respectively. We show that symmetry-equivariant models can form stable CG water models with even just a single frame of reference condensed-phase MD data. It is shown consistently across both AA water models that the Allegro architecture is more data efficient than that of DeePMD. It is not surprising that, when holding model architecture constant, the models trained on more reference data outperformed those parameterized with less. In all cases, the 100-frame models were able to accurately capture RDFs and 3-body correlations, though there is certainly room for improvement in both the Allegro and DeePMD models. Furthermore, it would be worth investigating how similar architectures such as SchNet perform against these architectures. These architectures could likely produce even better models with the same training datasets if more extensive hyperparameter sweeping were performed. In particular, the fitting and embedding networks were chosen to be far smaller than those used in previous studies to generate atomistic force fields from quantum mechanical simulations. For example, the original Allegro models utilized fitting networks with three hidden layers of 1024 neurons each, instead of the 32-width network used in the current work. This choice was made to maximize their speed, as CG models depend on integration speed to further enhance their sampling of the system. Despite this, these models numerically integrate slower than even a corresponding AA system, with the fastest model, Allegro, integrating at a speed of \(\sim\)25 ns/day on a small 3916-particle system, even when utilizing 4 GPUs. In contrast, simulation of the corresponding AA system in GROMACS integrates at a speed of \(\sim\)717 ns/day using 4 GPUs. However, it must be noted that the integration time of the CG model cannot be directly compared to that of the AA model because the "time" in the CG model is not the same as the physical time of the AA model. For example, a one-bead CG water model typically has a diffusion constant 5-10 times larger than the underlying AA system at 300 K. While there is room for improvement in the integration speed of DNN-based CG force fields, the addition of equivariant embeddings can reduce the amount of training data required to generate a stable model by orders of magnitude. Though it was not the most accurate model, Allegro could train a force field that reproduced all qualitative features of the 2- and 3-body correlations of water, with a reasonable amount of quantitative accuracy, even when using a single frame of MD training data containing 512 total training examples. In comparison, DeePMD could not create a model which stably formed a bulk liquid using 10x the amount of training data. This sidesteps one of the biggest hurdles for generating DNN CG force fields and suggests that incorporating physical intuition and restraints may increase their training efficiency. With additional advances in DNN integration speed, methods such as these could become the state of the art for CG modeling in the future. Explicit inclusion of bonded CG beads could also expand the capacity of these models to much more complicated systems for which traditional CG methods fail.

**Supporting Information** Example input files for training of DeePMD and Allegro models

**Acknowledgments** This material is based upon work supported by the National Science Foundation (NSF Grant CHE-2102677). Simulations were performed using computing resources provided by the University of Chicago Research Computing Center (RCC).

**Data Availability** The data that support the findings of this work are available from the corresponding author upon request.
2308.16516
Curvature-based Pooling within Graph Neural Networks
Over-squashing and over-smoothing are two critical issues that limit the capabilities of graph neural networks (GNNs). While over-smoothing eliminates the differences between nodes making them indistinguishable, over-squashing refers to the inability of GNNs to propagate information over long distances, as exponentially many node states are squashed into fixed-size representations. Both phenomena share similar causes, as both are largely induced by the graph topology. To mitigate these problems in graph classification tasks, we propose CurvPool, a novel pooling method. CurvPool exploits the notion of curvature of a graph to adaptively identify structures responsible for both over-smoothing and over-squashing. By clustering nodes based on the Balanced Forman curvature, CurvPool constructs a graph with a more suitable structure, allowing deeper models and the combination of distant information. We compare it to other state-of-the-art pooling approaches and establish its competitiveness in terms of classification accuracy, computational complexity, and flexibility. CurvPool outperforms several comparable methods across all considered tasks. The most consistent results are achieved by pooling densely connected clusters using the sum aggregation, as this allows additional information about the size of each pool.
Cedric Sanders, Andreas Roth, Thomas Liebig
2023-08-31T08:00:08Z
http://arxiv.org/abs/2308.16516v1
# Curvature-based Pooling within Graph Neural Networks ###### Abstract Over-squashing and over-smoothing are two critical issues that limit the capabilities of graph neural networks (GNNs). While over-smoothing eliminates the differences between nodes making them indistinguishable, over-squashing refers to the inability of GNNs to propagate information over long distances, as exponentially many node states are squashed into fixed-size representations. Both phenomena share similar causes, as both are largely induced by the graph topology. To mitigate these problems in graph classification tasks, we propose CurvPool, a novel pooling method. CurvPool exploits the notion of curvature of a graph to adaptively identify structures responsible for both over-smoothing and over-squashing. By clustering nodes based on the Balanced Forman curvature, CurvPool constructs a graph with a more suitable structure, allowing deeper models and the combination of distant information. We compare it to other state-of-the-art pooling approaches and establish its competitiveness in terms of classification accuracy, computational complexity, and flexibility. CurvPool outperforms several comparable methods across all considered tasks. The most consistent results are achieved by pooling densely connected clusters using the sum aggregation, as this allows additional information about the size of each pool. Machine Learning Graph Neural Networks Pooling

## 1 Introduction

Graph neural networks (GNNs) Kipf and Welling (2016) combine the computational power of neural networks with the structure of graphs to exploit both the topology of graphs and the available graph signal. Their applications are manifold, as they are used to classify single nodes within a graph (node classification) Kipf and Welling (2016); Roth and Liebig (2022a), classify entire graphs (graph classification) Zhang et al. (2018), and predict missing edges within the graph (link prediction) Pan et al. (2018). However, GNNs are inhibited by several problems that negatively impact the achieved results. We propose a method to mitigate two of these problems for the graph classification task, namely over-smoothing Nt and Maehara (2019); Oono and Suzuki (2019); Chen et al. (2020) and over-squashing Alon and Yahav (2020); Topping et al. (2021). Over-smoothing describes a phenomenon that results in node representations becoming overly similar when increasing the depth of the GNN. This leads to a loss of relevant information and to worse empirical results across many tasks Kipf and Welling (2016); Li et al. (2018); Oono and Suzuki (2019). Various theoretical investigations confirmed that this problem is greatly enhanced by the underlying structure of the graph Li et al. (2018); Oono and Suzuki (2019); Cai and Wang (2020). Densely connected areas of the graph tend to over-smooth faster than sparsely connected areas Yan et al. (2022). Similarly, over-squashing also leads to a loss of information, albeit in a different way. It describes the inability of GNNs to propagate information over long distances in a graph. A recent theoretical investigation traced this back to bottlenecks in the graph Alon and Yahav (2020), which describe edges connecting denser regions of the graph. With an increased number of layers, exponentially much information has to get passed through these edges, but the feature vectors are of limited constant size.
Since bottlenecks are an inherent attribute of the underlying graph, this problem is also amplified by the graph topology Alon and Yahav (2020); Topping et al. (2021). While various directions addressing over-smoothing have been proposed Topping et al. (2021); Roth and Liebig (2022); Yan et al. (2022), specifically for the graph classification task, pooling methods are a promising direction Ying et al. (2018); Luzhnica et al. (2019), which cluster sets of nodes in the graph into a single node. This can improve the data flow and change the underlying graph topology to one more suited for the respective task. The difficulty in applying pooling methods is the selection of these groups of nodes. Existing methods provide different criteria by which these nodes can be selected. Yet these are often prohibitively rigid and also not designed with over-smoothing and over-squashing in mind. Our work addresses these crucial points. The curvature of a graph has been identified to be a meaningful metric for locating structures responsible for over-squashing and over-smoothing Topping et al. (2021). The curvature between two nodes describes the geodesic dispersion of edges starting at these nodes. Based on this metric, we design CurvPool, a novel pooling method that clusters nodes based on a flexible property of the graph topology. By design, the resulting graph has a suitable structure that alleviates the detrimental effects of over-squashing and over-smoothing. Our empirical results on several benchmark datasets for graph classification confirm the effectiveness of our approach. In addition, CurvPool is theoretically and practically efficient to execute.

## 2 Preliminaries

Notation. We consider graphs of the form \(G=(\mathcal{V},\mathcal{E})\) consisting of a set of \(n=|\mathcal{V}|\) nodes \(\mathcal{V}=\{v_{1},\ldots,v_{n}\}\) and edges indicating whether pairs of nodes are connected. For each node \(v_{i}\), the set of neighboring nodes is denoted by \(\mathcal{N}_{i}\) and its degree by \(d_{i}=|\mathcal{N}_{i}|\). The graph signal \(\mathbf{X}\in\mathbb{R}^{n\times d}\) consists of \(d\) features at each node. We consider the task of graph classification, which aims to find a suitable mapping \(f_{\theta}(\mathbf{X},G)=\mathbf{c}\) predicting class likelihoods \(\mathbf{c}\) for the entire graph using some parameters \(\theta\). ### Graph Neural Networks Graph neural networks operate on graph-structured data and are designed to extract meaningful node representations. These are structured as layer-wise functions to update the node representation \[\mathbf{h}_{i}^{k+1}=\psi(\mathbf{h}_{i}^{k},\phi(\{\mathbf{h}_{j}^{k}\mid j \in\mathcal{N}_{i}\})) \tag{1}\] in each layer \(k\) using some neighbor aggregation function \(\phi\) and some combination function \(\psi\). The graph signal is used for the initial node representations \(\mathbf{h}_{i}^{0}=\mathbf{x}_{i}^{0}\). Many options for realizing the update functions have been proposed Kipf and Welling (2016). However, most methods suffer from two phenomena known as over-smoothing Chen et al. (2020) and over-squashing Alon and Yahav (2020); Topping et al. (2021). Over-smoothing refers to the case that node representations become too similar to carry meaningful information after a few iterations. Over-squashing occurs when exponentially much information is compressed into the representation of a few nodes, preventing information from flowing between distant nodes.
This is induced by the structure of the graph, as so-called bottlenecks cause over-squashing, which means that two parts of the graph are connected by relatively few edges. ### Pooling within GNNs A pooling operation reduces the spatial size of the data by aggregating nodes and their representations using some criterion. It results in a new graph \(G^{\prime}=(\mathcal{V}^{\prime},\mathcal{E}^{\prime})\) and new node representations \(\mathbf{H}^{\prime}\) with \(|\mathcal{V}^{\prime}|\leq|\mathcal{V}|\). Pooling methods offer various advantages which frequently include an increased memory efficiency and improved expressivity regarding the graph isomorphism problem Bianchi and Lachi (2023). Formally, each pool \(p_{i}\subset\mathcal{V}\) contains a subset of nodes, and our goal is to find a suitable complete pooling \[\mathcal{P}=\{p_{i}\subset\mathcal{V}\mid p_{1}\cup\ldots\cup p_{n}=\mathcal{V}\} \tag{2}\] so that every node of the graph is contained in at least one of the pools. This guarantees that no information that was contained in the initial graph is disregarded. The new set of nodes \(\mathcal{V}^{\prime}\) is given by turning each pool \(p_{i}\) into a new node \(v^{\prime}_{i}\). The new set of edges \(\mathcal{E}^{\prime}\) differs between pooling methods. The main challenge towards successful pooling operations within GNNs is finding a suitable pooling criterion \(\mathcal{P}\). ### The Curvature of a Graph Motivated by the Ricci curvature in Riemannian geometry, a recent investigation defined the curvature of a graph as the geodesic dispersion of edges starting at two adjacent nodes Topping et al. (2021). Two edges starting at adjacent nodes can meet at a third node, remain parallel, or increase the distance between the endpoints of the edges. Corresponding to these three cases and based on insights from previous edge-based curvatures Forman (2003); Ollivier (2007, 2009), they propose the Balanced Forman curvature: **Definition 2.1**.: _(Balanced Forman curvature Topping et al. (2021).) For any edge \((i,j)\) in a simple, unweighted graph G, we let BFC(i,j) = 0 if \(\min\{d_{i},d_{j}\}=1\) and otherwise_ \[\text{BFC}(i,j)=\frac{2}{d_{i}}+\frac{2}{d_{j}}-2+2\frac{|\triangle(i,j)|}{ \max\{d_{i},d_{j}\}}+\frac{|\triangle(i,j)|}{\min\{d_{i},d_{j}\}}+\frac{\gamma _{max}^{-1}(i,j)}{\max\{d_{i},d_{j}\}}(|\square^{i}|+|\square^{j}|) \tag{3}\] _where \(\triangle(i,j)\) are the 3-cycles containing edge \((i,j)\), \(\square^{i}(i,j)\) are the neighbors of \(i\) forming 4-cycles containing \((i,j)\) without containing a 3-cycle. \(\gamma_{max}(i,j)\) is the maximal number of 4-cycles containing \((i,j)\) traversing a common node._ We refer to Topping et al. (2021) for a comprehensive definition. This formulation satisfies the desired properties of the geodesic dispersion. The curvature is negative when \((i,j)\) is sparsely connected and positive when redundant paths are available. We provide visualized examples in Figure 1. The important relationship to over-squashing is that edges with a negative curvature are considered to be the bottlenecks of the graph Topping et al. (2021).
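To make Definition 2.1 concrete, the following is a minimal Python sketch of the Balanced Forman curvature for one edge of a simple, unweighted graph stored as neighbor sets. The handling of \(\gamma_{max}\) follows one common reading of the definition (the largest number of these 4-cycles traversing a single outer node); consult Topping et al. (2021) for the exact formulation, and treat the function and variable names as illustrative.

```python
def balanced_forman_curvature(adj, i, j):
    """Sketch of BFC(i, j) from Definition 2.1 for a simple, unweighted graph.

    adj: dict mapping each node to the set of its neighbors.
    """
    d_i, d_j = len(adj[i]), len(adj[j])
    if min(d_i, d_j) == 1:
        return 0.0
    tri = adj[i] & adj[j]  # 3-cycles (triangles) containing the edge (i, j)
    # Neighbors of i (resp. j) lying on a 4-cycle over (i, j) without a 3-cycle
    sq_i = {k for k in adj[i] - adj[j] - {j} if (adj[k] & adj[j]) - adj[i] - {i}}
    sq_j = {k for k in adj[j] - adj[i] - {i} if (adj[k] & adj[i]) - adj[j] - {j}}
    bfc = (2 / d_i + 2 / d_j - 2
           + 2 * len(tri) / max(d_i, d_j)
           + len(tri) / min(d_i, d_j))
    if sq_i or sq_j:
        # gamma_max: largest number of these 4-cycles traversing one common node
        # (one common reading of the definition; see Topping et al., 2021)
        gamma = max([len((adj[k] & adj[j]) - adj[i] - {i}) for k in sq_i]
                    + [len((adj[k] & adj[i]) - adj[j] - {j}) for k in sq_j])
        bfc += (len(sq_i) + len(sq_j)) / (gamma * max(d_i, d_j))
    return bfc
```

In such a sketch, bottleneck edges surface as those with the most negative curvature values, while edges inside densely connected regions receive positive values.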
## 3 Related Work

Various methods for pooling within graph neural networks based on clusters of nodes have been proposed Yuan and Ji (2020); Zhang et al. (2018); Ranjan et al. (2020); Lee et al. (2019); Wang et al. (2020); Gao et al. (2019); Diehl et al. (2019); Li et al. (2020); Nguyen and Grishman (2018); Zhang et al. (2021); Du et al. (2021); Lei et al. (2022); Zhao et al. (2022). Some are based on the topology of the graph, while others are based on the node representations themselves. DiffPool Ying et al. (2018) is one of the most frequently employed strategies for pooling based on representations. For each node, it predicts a soft assignment within a fixed number of clusters, allowing the pooling to be optimized with gradient descent. Several other methods similarly learn a mapping from node representations to pools Noutahi et al. (2019); Bianchi et al. (2020); Khasahmadi et al. (2020); Liu et al. (2021). However, there are two main concerns with this family of strategies. First, the number of clusters is predefined and fixed for all graphs in the considered task. Second, the structure of the graph is only taken into account using node representations, which do not capture all structural properties, as given by their limitations regarding the Weisfeiler-Leman test Morris et al. (2019). To address this, pooling strategies based on the graph topology were proposed. Fey et al. (2020) predefine a fixed set of graph structures and pool only these into single nodes. CliquePool Luzhnica et al. (2019) combines each clique in the graph. However, these methods rely on fixed structures in the graph and are unable to provide any pooling when the graph structure does not perfectly align. As an example, a graph could consist of densely connected communities, but these are only pooled when they constitute complete cliques. For comparison throughout this work, we will use one method from each category, namely DiffPool and CliquePool.

## 4 CurvPool

We aim to construct an adaptive pooling method that can combine arbitrary structures in the graph without requiring explicit knowledge of which structures we are interested in. In addition, the structure of the pooled graph should also be more resilient to over-smoothing and over-squashing, which then allows for a better flow of information. Using the curvature of the graph as the foundation for our pooling method allows us to achieve these properties. ### 4.1 Pooling based on the Curvature of a Graph Since we base our new pooling approach on the Balanced Forman curvature \(\text{BFC}(\cdot)\), we initially calculate the curvature for every edge \((i,j)\in\mathcal{E}\). We then need to convert curvature values of edges to sets of nodes we want to pool together. These values are used to decide if nodes \(i\) and \(j\) will be assigned to the same pool or not. There are different approaches for making this decision that lead to variations of CurvPool. In the general case, we use some criterion \(f(\text{BFC}(i,j))\) on the curvature of each edge to decide whether two nodes are combined. For the initial candidates for pools, this results in a set \[\mathcal{P}^{\prime}=\{\{i,j\}\mid(i,j)\in\mathcal{E}\wedge f(\text{BFC}(i,j))\}\cup\{\{i\}\mid i\in\mathcal{V}\} \tag{4}\] that we augment by each node as an additional pool candidate to fulfill our requirements for a complete pooling. We describe our choices for the criterion \(f\) in the next section. The main challenge arises when combining subsets of \(\mathcal{P}^{\prime}\). Nodes may be contained in multiple pools, as multiple edges of a node may satisfy our criterion for combination. This does not contradict our definition of a pooling but still should be considered since it significantly impacts the resulting graph. Intuitively, we want groups of nodes that are connected by edges of similar curvatures to be combined together.
In this way, clusters of densely connected structures can be aggregated into a single node, and sparsely connected regions will be more closely connected afterward. The authors of CliquePool chose a different approach and removed duplicate nodes from every non-largest pool they were contained in Luzhnica et al. (2019). This approach does not suit CurvPool, since all resulting pools after the initial selection are of the same size. Instead, we merge all pools whose intersections are non-empty, resulting in the final pooling \[\mathcal{P}=\{\bigcup_{p_{i}\in S}p_{i}\mid S\subseteq\mathcal{P}^{ \prime},\forall T\subset S:\left(\bigcup_{p_{i}\in T}p_{i}\right)\cap\left( \bigcup_{p_{j}\in S\setminus T}p_{j}\right)\neq\emptyset,\\ \forall p_{k}\in\mathcal{P}^{\prime}:\exists p_{i}\in S:p_{i} \cap p_{k}\neq\emptyset\Rightarrow p_{k}\in S\}. \tag{5}\] Each element in \(\mathcal{P}\) is then mapped to a new node in the pooled graph. Based on the previous node representations \(\mathbf{H}\in\mathbb{R}^{n\times g}\) of \(\mathcal{V}\) and an aggregation function \(\omega\), we construct new node representations \[\mathbf{h}^{\prime}_{j}=\omega(\{\mathbf{h}_{i}|i\in p_{j}\}) \tag{6}\] for each pool \(p_{j}\in\mathcal{P}\). Any aggregation scheme \(\omega\) can be used to calculate the node features of the resulting pools. We consider the mean (AVG), the sum (SUM), and the maximum (MAX) operators. This still leaves one final question: how is the new set of edges calculated? Since we strive to retain as much of the initial graph structure as possible, we simply remap the old edges from their respective nodes to the new pools they are contained in, resulting in \[\mathcal{E}^{\prime}=\left\{(p_{i},p_{j})\mid\exists\;(m,n)\in\mathcal{E}\colon p_{i}\neq p_{j}\wedge m\in p_{i}\wedge n\in p_{j}\right\}. \tag{7}\] This same method is used for all considered variations of CurvPool, leaving open only the strategy for which curvatures to use for pooling. ### 4.2 Curvature-based Strategies for Pooling We now present our considered strategies for choosing pairs of nodes for our initial pools. Each of the three strategies has slightly differing motivations and carries its own set of advantages and disadvantages. #### 4.2.1 HighCurvPool The fundamental idea of HighCurvPool is to aggregate nodes that are adjacent to edges with high curvature. This strategy combines all nodes that are connected by an edge with a curvature above a fixed threshold \(t_{\text{high}}\). Our initial set of pools \[\mathcal{P}^{\prime}_{highCurv}=\{\{i,j\}\mid(i,j)\in\mathcal{E}\wedge\text{BFC}(i,j)>t_{\text{high}}\}\cup\{\{i\}\mid i\in\mathcal{V}\} \tag{8}\] considers exactly these, for which overlapping sets will be merged as described in Section 4.1. Nodes are combined along edges in dense communities of the graph. As over-smoothing was shown to occur faster in dense communities Yan et al. (2022), these sets of nodes already contain similar representations, thereby being redundant when kept as separate nodes. HighCurvPool should alleviate this effect since the most strongly smoothed representations are aggregated, and the new graph contains more diverse neighboring states from each community. The effects of over-squashing should also be reduced as the average path lengths become smaller and information from fewer nodes needs to be compressed for connecting edges. While HighCurvPool typically leads to an increase in curvature in bottlenecks, these are not directly removed.
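As a concrete illustration of Equations 4-8 (candidate selection, merging of overlapping pools, feature aggregation, and edge remapping), the following Python sketch performs one HighCurvPool-style step with the sum aggregator. It assumes integer node ids, reuses the `balanced_forman_curvature` sketch from Section 2, and uses an illustrative threshold; it is not the authors' implementation.

```python
import numpy as np

def curv_pool(adj, features, t_high=0.5):
    """Sketch of one HighCurvPool step: pool nodes joined by edges with BFC > t_high.

    adj:      dict node -> set of neighbor ids (integer node ids)
    features: dict node -> np.ndarray feature vector
    """
    nodes = sorted(adj)
    parent = {v: v for v in nodes}  # union-find over nodes

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path compression
            v = parent[v]
        return v

    # Eqs. (4)/(8): pair candidates from high-curvature edges; merging all
    # overlapping candidate pools (Eq. (5)) is exactly a union-find pass.
    for u in nodes:
        for v in adj[u]:
            if u < v and balanced_forman_curvature(adj, u, v) > t_high:
                parent[find(u)] = find(v)

    roots = sorted({find(v) for v in nodes})
    pool_idx = {r: t for t, r in enumerate(roots)}
    pool_of = {v: pool_idx[find(v)] for v in nodes}

    # Eq. (6) with the sum aggregator
    dim = len(next(iter(features.values())))
    pooled = np.zeros((len(roots), dim))
    for v in nodes:
        pooled[pool_of[v]] += features[v]

    # Eq. (7): remap original edges onto pools, dropping pool-internal edges
    edges = {(pool_of[u], pool_of[v]) for u in nodes for v in adj[u]
             if pool_of[u] != pool_of[v]}
    return pool_of, pooled, edges
```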
#### 4.2.2 LowCurvPool Analogous to HighCurvPool, LowCurvPool pools nodes that are connected by an edge with low curvature, since these directly represent bottlenecks within the graph. Using a different threshold \(t_{\mathrm{low}}\), this results in an initial pooling of the form \[\mathcal{P}^{\prime}_{lowCurv}=\{\{i,j\}\mid(i,j)\in\mathcal{E}\wedge\text{BFC} (i,j)<t_{\mathrm{low}}\}\cup\{\{i\}\mid i\in\mathcal{V}\}. \tag{9}\] LowCurvPool leads to the removal of exactly those edges that are marked as problematic through the curvature. The aggregation of the two adjacent nodes lets the two separated subgraphs move closer while guaranteeing that all paths through the graph are retained, and no new bottlenecks are created. As a result, the average curvature of the graph rises. Since we assume that the curvature is a good indicator of the over-squashing problem, information will be propagated better through the graph, and over-squashing gets reduced. However, over-smoothing may still be an issue as separate communities of nodes become more closely connected, leading to faster smoothing Yan et al. (2022). #### 4.2.3 MixedCurvPool Finally, MixedCurvPool combines the other approaches. It utilizes both thresholds \(t_{\mathrm{high}}\) and \(t_{\mathrm{low}}\). While one functions as an upper bound, the other works as the lower bound for our initial pooling \[\mathcal{P}^{\prime}_{mixedCurv}=\{\{i,j\}\mid(i,j)\in\mathcal{E}\wedge(\text{BFC}(i,j)<t_{\mathrm{low}}\vee\text{BFC}(i,j)>t_{\mathrm{high}})\}\cup\{\{i\}\mid i\in\mathcal{V}\}. \tag{10}\] Two nodes are combined along their edge either if the connecting edge represents a bottleneck or if they are within the same densely connected community. The idea is that MixedCurvPool combines the advantages of both approaches to be as effective as possible against over-squashing and over-smoothing. Selecting adequate hyperparameters becomes more important, as this approach carries a great risk of simplifying the graph too much and thus losing all the information hidden inside the graph topology that we want to extract in the first place. ### Runtime Complexity The runtime complexity of CurvPool is mainly given by the complexity of the Balanced Forman curvature. This complexity is \(\mathcal{O}(|\mathcal{E}|d_{max}^{2})\), with \(d_{max}\) being the maximum node degree of the graph Topping et al. (2021). The calculation of the pools themselves only has a complexity of \(\mathcal{O}(|\mathcal{E}|)\), while the complexity of merging overlapping pools is \(\mathcal{O}(2|E|)\). All further operations do not differ between the pooling approaches and thus are not considered further for this comparison. As CurvPool only depends on the graph structure, this step is only executed once before optimization and reused in all settings. CliquePool's complexity is given through the calculation of the cliques via the Bron-Kerbosch algorithm Bron and Kerbosch (1973) and is \(\mathcal{O}(3^{n/3})\) in the worst case for a graph with \(n\) nodes Tomita et al. (2006). This can frequently be reduced in a practical setting Eppstein et al. (2013). Since DiffPool requires the calculation of a complete additional GCN, its complexity is given by \(\mathcal{O}(LN^{2}F+LNF^{2})\), with \(L\) being the number of layers, \(N\) the number of nodes, and \(F\) the number of features Blakely et al. (2021).

Figure 1: Example graphs with edges colored according to their curvature from low (red) to high (green). Leftmost are the original graphs while the graphs in the middle represent one step of LowCurvPool and the rightmost graphs represent a step of HighCurvPool.
## 5 Experiments

To evaluate the effectiveness of our new pooling method, we compare its performance on different benchmark datasets for graph classification to well-established baselines. Our implementation is available online5. We consider datasets that cover diverse graph structures and tasks from different domains. HIV Wu et al. (2018) and Proteins Borgwardt et al. (2005) are common benchmark datasets that are rooted within biology. They consist of structures of chemical substances that are used to classify the nature or specific properties of these substances. IMDB-BINARY Yanardag and Vishwanathan (2015) is a common benchmark dataset that contains information about actors and movies. A graph indicates for a specific genre whether actors, represented by nodes, have played together in the same movie, represented by edges. The classification task is to predict this genre. In addition, we extend our experiments to a custom dataset containing artificially generated graphs. These are generated using the approach for caveman graphs Watts (1999), resulting in multiple dense clique-like areas connected with a small number of edges that represent bottlenecks between these subgraphs. This dataset is referred to as Artificial. This setup allows us to compare different CurvPool variations effectively, as the distribution of high- and low-curvature areas within the graph is smooth. The node degrees are used as input features. Footnote 5: [https://gitlab.com/Cedric_Sanders/masterarbeit](https://gitlab.com/Cedric_Sanders/masterarbeit)

### Experimental setup

The classification loss is calculated via the log-likelihood loss, and we use the Adam optimizer with a learning rate of \(0.001\) and a batch size of \(32\). We employ 10-fold cross-validation and choose the best setting based on validation accuracy. The best run across all hyperparameters is used to calculate the accuracy on held-out test data for each of the folds. All splits are consistent across models. At each scale, our models utilize three convolutional layers, each followed by a ReLU activation and batch normalization Ioffe and Szegedy (2015). The complete model consists of three of these blocks with the corresponding pooling layers in between. Since we focus on graph classification, a global mean pooling layer and two linear layers are used to calculate the final classification. To keep them as comparable as possible, the pooling approaches only differ in the pooling operation used.

### Results

Table 1 presents the overall results for the experiments. It shows the best parameter constellation per dataset and method. The different variations of CurvPool outperform the established methods on almost all of the datasets, albeit usually by only a few percentage points. Only on the Proteins dataset can DiffPool keep up with CurvPool. HighCurvPool in particular outperforms all other considered methods consistently, with the second-best result typically also going to a variation of CurvPool.

### Ablation Study

The next step is to take a closer look at how some of the parameters impact the results of CurvPool. First up are the thresholds. Figure 2 presents the accuracy scores for the different sets of thresholds, including the different CurvPool variations. To better understand the impact of the thresholds, the accompanying histogram represents the distribution of curvature values in the corresponding dataset. This kind of visualization also allows for a good comparison of LowCurvPool, HighCurvPool, and MixedCurvPool.
The thresholds aligned with the center of the histogram seem to achieve the highest scores. This implies that aggregating too many nodes and too few nodes both have detrimental effects on the achieved results. Selecting the correct threshold is a balancing act. For new datasets, we would recommend starting with thresholds that split the dataset into two halves, since those seem to present a very good baseline for initial experiments. In terms of the different CurvPool variations, there is no clear winner. While HighCurvPool tends to achieve the highest scores across the board, it seems to be more sensitive to the choice of threshold. MixedCurvPool is more consistent, with fewer outliers in both directions. MixedCurvPool probably does best on datasets where the curvature values are distributed over a larger area. This explains its weak performance on the HIV dataset. Generally speaking, all three variations seem to be competitive, with differing strengths and weaknesses corresponding to individual datasets and thresholds. A comparison of the different aggregation schemes is presented in Table 2. For CurvPool, summing all node representations in a pool is clearly on top for all the different datasets. Meanwhile, CliquePool tends to achieve the best results when averaging. Especially notable are the differences between summing and averaging on the Artificial dataset. We explain the success of the sum aggregation by its similarity to more expressive message-passing schemes with respect to the Weisfeiler-Leman test. As pools of different sizes typically occur, the sum provides information about the number of nodes utilized in each pool. In contrast, the mean is unable to determine the number of nodes utilized for pooling, similar to its inability to determine the number of neighbors in message-passing operations Xu et al. (2018). Thus, the sum aggregation provides important additional structural features and offers increased expressivity.

\begin{table} \begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{Dataset} & \multirow{2}{*}{GCN} & \multirow{2}{*}{DiffPool} & \multirow{2}{*}{CliquePool} & \multicolumn{3}{c}{CurvPool} \\ \cline{5-7} & & & & Mixed & Low & High \\ \hline HIV & 75.83 & 76.66 & 78.33 & 76.31 & 78.61 & **80.06** \\ Proteins & 65.99 & **77.81** & 74.54 & 75.27 & 75.09 & **77.81** \\ IMDB-BINARY & 63.80 & 69.60 & 68.39 & **70.80** & 69.40 & **70.80** \\ Artificial & 73.20 & 73.80 & 73.40 & 73.30 & **74.10** & 74.00 \\ \hline \hline \end{tabular} \end{table} Table 1: Test accuracies for the best setting for each dataset and method according to the validation scores. The best score for each dataset is marked in bold, and the second-best score is underlined.

Figure 2: Accuracy on the datasets for different thresholds. The histogram represents the distribution of curvature values in the dataset. MixedCurv pools all data points not within the range given via the purple line. LowCurv pools all data points to the left of the red triangle. HighCurv pools all data points to the right of the green triangle.

### Runtime

The runtimes presented in Table 3 largely align with the established considerations of Section 4.3. CliquePool and CurvPool can utilize the precalculation of the poolings to drastically reduce the runtime per epoch and outperform DiffPool. Between CliquePool and CurvPool, runtimes are very close and in some cases even faster than the basic GCN. This can be explained through the larger number of aggregated nodes and the resulting smaller adjacency matrices.
Especially meaningful is the number of edges, since it negatively impacts the calculation of the Balanced Forman curvature, though this effect should be largely limited to the precalculation and not affect the time per epoch.

## 6 Conclusion and Future Work

We introduced CurvPool, a novel pooling approach for graph neural networks that is designed to be effective against over-smoothing and over-squashing. It is based on the Balanced Forman curvature, which represents the connectivity between nodes. A high curvature value occurs for densely connected areas of the graph, which are prone to over-smoothing. Bottlenecks are located at edges with a low curvature value, leading to over-squashing. Our proposed methods HighCurvPool and LowCurvPool directly reduce these critical areas by combining exactly these nodes, resulting in a coarser graph and more effective message-passing. Simultaneously reducing both of these areas is done using our proposed MixedCurvPool. The first outstanding quality of CurvPool is its flexibility. The curvature can be calculated for any graph and always leads to a meaningful pooling metric while still being an inherent attribute of the graph itself. Other approaches like CliquePool, on the other hand, are severely limited through the need for the existence of specific structures within the graph. A graph without cliques of appropriate size will not lead to a suitable pooling. Meanwhile, pooling via DiffPool is almost completely independent of the graph structure itself, since it uses an external pooling metric in the form of a clustering. This approach also requires additional knowledge about the graph or extensive hyperparameter tuning to select fitting cluster sizes. Another important factor is the low theoretical and practical runtime complexity of CurvPool, which is linear in the number of edges. As the curvature and thus the pooling can be precomputed, the additional time during training is almost negligible. Our empirical results on several graph classification tasks show the effectiveness of our approach. CurvPool comes out slightly ahead when comparing classification accuracy across the different datasets. The experiments have also shown the viability of the different CurvPool variations, highlighting their strengths and weaknesses on specific datasets and parameter combinations. We found HighCurvPool using the sum aggregation to accomplish the strongest results consistently. We explain this as sets of nodes connected by edges with high curvature are prone to over-smoothing.

\begin{table} \begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{Accuracy} & \multicolumn{3}{c}{CliquePool} & \multicolumn{3}{c}{CurvPool} \\ \cline{2-7} & Sum & Avg & Max & Sum & Avg & Max \\ \hline HIV & 75.76 & **78.33** & 73.05 & **80.06** & 79.44 & 78.26 \\ Proteins & 74.36 & **74.54** & 72.72 & **77.81** & 75.45 & 76.54 \\ IMDB-BINARY & 67.40 & **68.39** & 63.59 & **70.80** & 69.20 & 60.20 \\ Artificial & 72.80 & 73.40 & **73.60** & **74.10** & 69.70 & 63.70 \\ \hline \hline \end{tabular} \end{table} Table 2: A comparison of different aggregation schemes for CliquePool and the best-performing CurvPool.
\begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline \multirow{2}{*}{Runtime (s)} & \multicolumn{2}{c}{HIV} & \multicolumn{2}{c}{Proteins} & \multicolumn{2}{c}{IMDB-BINARY} & \multicolumn{2}{c}{Artificial} \\ \cline{2-9} & Epoch & Pre & Epoch & Pre & Epoch & Pre & Epoch & Pre \\ \hline GCN & **1.6** & - & **2.0** & - & 0.7 & - & **1.0** & - \\ DiffPool & 9.5 & - & 47.7 & - & 2.0 & - & 4.0 & - \\ CliquePool & 2.4 & **19** & 4.0 & **15** & **0.5** & **3** & **1.0** & 114 \\ CurvPool & 3.0 & 29 & 5.0 & 28 & **0.5** & 9 & **1.0** & **100** \\ \hline \hline \end{tabular} \end{table} Table 3: Runtime per method and dataset. Pre represents the duration for the precomputation of the poolings. Epoch is the training time per epoch. All times are in seconds.

These carry redundant features, which HighCurvPool then reduces to a single node. The sum aggregation allows our method to utilize structural properties of the graph, as the resulting state is influenced by the number of nodes in each pool. In summary, this flexibility, its slightly improved classification accuracy, and its low runtime complexity make CurvPool a valuable alternative to the established pooling methods.

**Limitations** Limitations of CurvPool directly stem from limitations in the Balanced Forman curvature itself. While this strategy is very flexible, it may not perfectly align with the nodes that should be pooled in the optimal scenario. However, in case other strategies for ranking pairs of nodes emerge, these can be directly integrated into our method. CurvPool also does not consider node features, which might further enhance its effectiveness, albeit reducing its ability to precompute clusters. Additionally, while our empirical results already cover diverse datasets, CurvPool can be evaluated on additional tasks and against other methods to ensure its generalizability.

**Future work** Our work opens up several directions for future work. While this paper focused on graph classification, it could be extended to node classification tasks in order to combine distant information. During our work, we noticed that the current theory on over-smoothing and over-squashing is unfit for pooled graphs. Metrics like the Dirichlet energy are not designed for pooling operations, making it challenging to quantify whether a pooling step can reduce over-smoothing. Thus, novel metrics and theoretical investigations are needed. Similarly, the effect of pooling methods in general on the curvature needs to be better understood from a theoretical perspective.

**Acknowledgements** This research has been funded by the Federal Ministry of Education and Research of Germany and the state of North-Rhine Westphalia as part of the Lamarr-Institute for Machine Learning and Artificial Intelligence and by the Federal Ministry of Education and Research of Germany under grant no. 01IS22094E WEST-AI.
2309.08429
IHT-Inspired Neural Network for Single-Snapshot DOA Estimation with Sparse Linear Arrays
Single-snapshot direction-of-arrival (DOA) estimation using sparse linear arrays (SLAs) has gained significant attention in the field of automotive MIMO radars. This is due to the dynamic nature of automotive settings, where multiple snapshots aren't accessible, and the importance of minimizing hardware costs. Low-rank Hankel matrix completion has been proposed to interpolate the missing elements in SLAs. However, the solvers of matrix completion, such as iterative hard thresholding (IHT), heavily rely on expert knowledge of hyperparameter tuning and lack task-specificity. Besides, IHT involves truncated-singular value decomposition (t-SVD), which has high computational cost in each iteration. In this paper, we propose an IHT-inspired neural network for single-snapshot DOA estimation with SLAs, termed IHT-Net. We utilize a recurrent neural network structure to parameterize the IHT algorithm. Additionally, we integrate shallow-layer autoencoders to replace t-SVD, reducing computational overhead while generating a novel optimizer through supervised learning. IHT-Net maintains strong interpretability as its network layer operations align with the iterations of the IHT algorithm. The learned optimizer exhibits fast convergence and higher accuracy in the full array signal reconstruction followed by single-snapshot DOA estimation. Numerical results validate the effectiveness of the proposed method.
Yunqiao Hu, Shunqiao Sun
2023-09-15T14:30:38Z
http://arxiv.org/abs/2309.08429v1
# IHT-Inspired Neural Network for Single-Snapshot DOA Estimation with Sparse Linear Arrays ###### Abstract Single-snapshot direction-of-arrival (DOA) estimation using sparse linear arrays (SLAs) has gained significant attention in the field of automotive MIMO radars. This is due to the dynamic nature of automotive settings, where multiple snapshots aren't accessible, and the importance of minimizing hardware costs. Low-rank Hankel matrix completion has been proposed to interpolate the missing elements in SLAs. However, the solvers of matrix completion, such as iterative hard thresholding (IHT), heavily rely on expert knowledge of hyperparameter tuning and lack task-specificity. Besides, IHT involves truncated-singular value decomposition (t-SVD), which has high computational cost in each iteration. In this paper, we propose an IHT-inspired neural network for single-snapshot DOA estimation with SLAs, termed IHT-Net. We utilize a recurrent neural network structure to parameterize the IHT algorithm. Additionally, we integrate shallow-layer autoencoders to replace t-SVD, reducing computational overhead while generating a novel optimizer through supervised learning. IHT-Net maintains strong interpretability as its network layer operations align with the iterations of the IHT algorithm. The learned optimizer exhibits fast convergence and higher accuracy in the full array signal reconstruction followed by single-snapshot DOA estimation. Numerical results validate the effectiveness of the proposed method. Yunqiao Hu and Shunqiao Sun Department of Electrical and Computer Engineering, The University of Alabama, Tuscaloosa, AL, USA Sparse linear array, matrix completion, iterative hard thresholding, deep neural networks, single snapshot, direction-of-arrival estimation

## I Introduction

Millimeter wave (mmWave) radar is highly reliable in various weather environments, and its antennas can fit in a small form factor to provide high angular resolution, enhancing environment perception capabilities. Compared with LiDAR, mmWave radar is a more cost-effective solution, making it crucial for autonomous driving [1, 2, 3]. Benefiting from multiple-input multiple-output (MIMO) radar technology, mmWave radars can synthesize virtual arrays with large aperture sizes using a small number of transmit and receive antennas [1]. To further reduce the hardware cost, sparse arrays synthesized by MIMO radar technology have been widely adopted in automotive radar [4, 2, 5]. Direction-of-arrival (DOA) estimation is one significant task for automotive radar. Classic subspace-based DOA estimation algorithms such as MUSIC [6] and ESPRIT [7] require multiple snapshots to yield accurate DOA estimates. However, in highly dynamic automotive scenarios, only limited radar snapshots or even just a single snapshot are available for DOA estimation. Consequently, research on single-snapshot DOA methods with sparse arrays is of significant importance. The challenges associated with single-snapshot DOA estimation with sparse arrays are the high sidelobes and the reduction of signal-to-noise ratio (SNR), both of which may cause errors and ambiguity in estimation [4]. These challenges can be mitigated if the sparse arrays are designed such that the peak sidelobe level is low [1, 8, 9]. Alternatively, the missing elements in the sparse arrays can be first interpolated using techniques like matrix completion [4, 2, 10, 11], followed by standard DOA estimation algorithms like MUSIC and ESPRIT.
The matrix completion approach exploits the low-rank property of the Hankel matrix formulated from the array received signals, and completes the missing elements using iterative algorithms [10, 11]. However, typical algorithms for low-rank Hankel matrix completion, such as singular value thresholding (SVT) [12], have a high computational cost due to the compact singular value decomposition (SVD) in each iteration. In [13], an iterative hard thresholding (IHT) algorithm and its accelerated counterpart, the fast iterative hard thresholding (FIHT) algorithm, were proposed. Both IHT and FIHT feature a simple implementation that utilizes efficient methods for SVD computation and Hankel matrix multiplication during the calculations, and FIHT can converge linearly under specific conditions. However, IHT and FIHT require an appropriate initialization and careful parameter tuning to achieve satisfactory estimates. Benefiting from the rise of deep learning, deep neural networks (DNNs) with various architectures have been proposed for low-rank matrix completion and show superior performance compared with traditional algorithms [14, 15]. However, these DNNs are usually composed of many neural layers, which leads to a large number of parameters. Furthermore, these DNNs are purely data-driven, so they need a huge amount of training data to achieve desirable estimates, which is not available in scenarios where data collection is expensive. In this paper, we propose a novel deep learning-based data completion method for sparse array interpolation, termed IHT-Net, and then apply it for DOA estimation. IHT-Net is constructed following the iteration process of the IHT algorithm, but with its parameters set as learnable. In addition, autoencoder structures are introduced to substitute the truncated-SVD (t-SVD) operation in the IHT algorithm. The autoencoders with multiple linear layers can capture low-rank representations of the signal during training, which serves the same purpose as t-SVD. With extensive numerical simulations, we empirically show that the trained IHT-Net outperforms model-based methods such as the FIHT algorithm for both signal reconstruction and DOA estimation using a single snapshot.

## II System Model

A sparse linear array's antenna positions can be considered a subset of a uniform linear array (ULA) antenna positions. Without loss of generality, let the antenna positions of an \(M\)-element ULA be \(\{kd\}\), \(k=0,1,\,\cdots,M-1\), where \(d=\frac{\lambda}{2}\) is the element spacing and \(\lambda\) is the wavelength. Assume there are \(P\) uncorrelated far-field target sources in the same range-Doppler bin. The impinging signals on the ULA antennas are corrupted by additive white Gaussian noise with variance \(\sigma^{2}\). For the single-snapshot case, only the data collected at a single instance in time is available, resulting in the discrete representation of the received signal from a ULA as \[\mathbf{x}=\mathbf{A}\mathbf{s}+\mathbf{n}, \tag{1}\] where \(\mathbf{x}=[x_{1},x_{2},\ldots,x_{M}]^{T}\), \(\mathbf{A}=[\mathbf{a}\left(\theta_{1}\right),\mathbf{a}\left(\theta_{2} \right),\ldots,\mathbf{a}\left(\theta_{P}\right)]\), with \[\mathbf{a}\left(\theta_{k}\right)=\left[1,e^{j2\pi\frac{d\sin\left(\theta_{k} \right)}{\lambda}},\ldots,e^{j2\pi\frac{\left(M-1\right)d\sin\left(\theta_{k} \right)}{\lambda}}\right]^{T}, \tag{2}\] for \(k=1,\cdots,P\) and \(\mathbf{n}=\left[n_{1},n_{2},\ldots,n_{M}\right]^{T}\).
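A minimal numpy sketch of this single-snapshot model (Equations 1 and 2), with illustrative angles, amplitudes, and SNR (the function name and default values are assumptions for illustration, not part of the paper):

```python
import numpy as np

def ula_snapshot(M=21, thetas_deg=(-10.0, 25.0), snr_db=20.0, rng=None):
    """Generate one noisy single-snapshot ULA measurement x = A s + n (Eqs. 1-2).

    Half-wavelength spacing d = lambda/2, so the phase term is pi * k * sin(theta).
    """
    rng = np.random.default_rng(rng)
    thetas = np.deg2rad(np.asarray(thetas_deg))
    k = np.arange(M)[:, None]                               # element index 0..M-1
    A = np.exp(1j * np.pi * k * np.sin(thetas)[None, :])    # M x P steering matrix
    # Complex source amplitudes: magnitude in [0.5, 1], uniform random phase
    s = rng.uniform(0.5, 1.0, len(thetas)) * np.exp(1j * rng.uniform(0, 2 * np.pi, len(thetas)))
    x_clean = A @ s
    noise_var = np.mean(np.abs(x_clean) ** 2) / (10 ** (snr_db / 10))
    n = np.sqrt(noise_var / 2) * (rng.standard_normal(M) + 1j * rng.standard_normal(M))
    return x_clean + n
```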
Then, a Hankel matrix denoted as \(\mathcal{H}\left(\mathbf{x}\right)\in\mathbb{C}^{n_{1}\times n_{2}}\), where \(n_{1}+n_{2}=M+1\), can be constructed from \(\mathbf{x}\)[16]. The Hankel matrix \(\mathcal{H}\left(\mathbf{x}\right)\) admits a Vandermonde decomposition structure [2, 13, 17], i.e., \[\mathcal{H}\left(\mathbf{x}\right)=\mathbf{V}_{1}\mathbf{\Sigma}\mathbf{V}_{2 }^{T}, \tag{3}\] where \(\mathbf{V}_{1}=\left[\mathbf{v}_{1}\left(\theta_{1}\right),\cdots,\mathbf{v}_ {1}\left(\theta_{P}\right)\right]\), \(\mathbf{V}_{2}=\left[\mathbf{v}_{2}\left(\theta_{1}\right),\cdots,\mathbf{v}_ {2}\left(\theta_{P}\right)\right]\) with \[\mathbf{v}_{1}\left(\theta_{k}\right) =\left[1,e^{j2\pi\frac{d\sin\left(\theta_{k}\right)}{\lambda}},\cdots,e^{j2\pi\frac{\left(n_{1}-1\right)d\sin\left(\theta_{k}\right)}{\lambda}}\right]^{T}, \tag{4}\] \[\mathbf{v}_{2}\left(\theta_{k}\right) =\left[1,e^{j2\pi\frac{d\sin\left(\theta_{k}\right)}{\lambda}},\cdots,e^{j2\pi\frac{\left(n_{2}-1\right)d\sin\left(\theta_{k}\right)}{\lambda}}\right]^{T}, \tag{5}\] and \(\mathbf{\Sigma}=\mathrm{diag}\left(\left[\sigma_{1},\sigma_{2},\cdots,\sigma_{ P}\right]\right)\). Assuming that \(P\leq\min\left(n_{1},n_{2}\right)\) and that both \(\mathbf{V}_{1}\) and \(\mathbf{V}_{2}\) are full-rank matrices, the rank of the Hankel matrix \(\mathcal{H}\left(\mathbf{x}\right)\) is indeed \(P\), thereby indicating that \(\mathcal{H}\left(\mathbf{x}\right)\) has a low-rank property [13]. It is worth noting that a good choice for the Hankel matrix size is \(n_{1}\approx n_{2}\)[18]. This ensures that the resulting matrix \(\mathcal{H}\left(\mathbf{x}\right)\) is either a square matrix or an approximately square matrix. Specifically, in this paper, we adopt \(n_{1}=n_{2}=\left(\frac{M+1}{2}\right)\) if \(M\) is odd, and \(n_{1}=n_{2}-1=\left(\frac{M}{2}\right)\) if \(M\) is even. We utilize a 1D virtual SLA synthesized by MIMO radar techniques [1] with \(M_{t}\) transmit antennas and \(M_{r}\) receive antennas. The SLA has \(M_{t}M_{r}<M\) elements while retaining the same aperture as the ULA. Denoting the array element indices of the ULA as the complete set \(\left\{1,2,\cdots,M\right\}\), the array element indices of the SLA can be expressed as a subset \(\Omega\subset\left\{1,2,\cdots,M\right\}\). Thus, the signals received by the SLA can be viewed as partial observations of \(\mathbf{x}\), and can be expressed as \(\mathbf{x}_{s}=\mathbf{m}_{\Omega}\odot\mathbf{x}\), where \(\mathbf{m}_{\Omega}=\left[m_{1},m_{2},\cdots,m_{M}\right]^{T}\) is a masking vector with \(m_{j}=1\) if \(j\in\Omega\) and \(m_{j}=0\) if \(j\notin\Omega\), and \(\odot\) denotes the Hadamard product. Given the aforementioned statements, the Hankel matrix associated with an SLA configuration can be viewed as a subsampled version of \(\mathcal{H}\left(\mathbf{x}\right)\), wherein the anti-diagonal entries corresponding to the elements of \(\mathbf{x}_{s}\) have values, while the remaining entries are zeros. With the low-rank structure mentioned before, the missing elements can be recovered by finding the minimum rank of a Hankel matrix that aligns with the known entries [19]: \[\min_{\mathbf{x}}\;\mathrm{rank}(\mathcal{H}\left(\mathbf{x}\right))\quad \mathrm{s.t.}\;\left[\mathcal{H}\left(\mathbf{x}\right)\right]_{ij}=\left[ \mathcal{H}\left(\mathbf{x}_{S}\right)\right]_{ij},\;(i,j)\in\Theta. \tag{6}\] Here, \(\Theta\) is the set of indices of observed entries that is determined by the SLA.
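Since the IHT iterations below and IHT-Net both rely on the Hankel map \(\mathcal{H}(\cdot)\), its inverse \(\mathcal{H}^{\dagger}(\cdot)\), and the SLA mask \(\mathbf{m}_{\Omega}\), here is a small numpy sketch of the three operators. The anti-diagonal averaging is the customary choice of left inverse, and the helper names are illustrative.

```python
import numpy as np

def hankel_map(x, n1, n2):
    """H(x): map a length-M vector to the n1 x n2 Hankel matrix, with n1 + n2 = M + 1."""
    assert n1 + n2 == len(x) + 1
    return np.array([[x[i + j] for j in range(n2)] for i in range(n1)])

def hankel_inverse(X):
    """H^dagger(X): map an n1 x n2 matrix back to a length-M vector by averaging
    each anti-diagonal (the standard left inverse of the Hankel map)."""
    n1, n2 = X.shape
    M = n1 + n2 - 1
    x = np.zeros(M, dtype=X.dtype)
    counts = np.zeros(M)
    for i in range(n1):
        for j in range(n2):
            x[i + j] += X[i, j]
            counts[i + j] += 1
    return x / counts

def sla_mask(M, omega):
    """Masking vector m_Omega: ones at SLA element indices (0-based), zeros elsewhere."""
    m = np.zeros(M)
    m[list(omega)] = 1.0
    return m
```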
Note that the rank minimization in (6) is generally an NP-hard problem [19]. In [13], Cai et al. developed the iterative hard thresholding (IHT) algorithm for low-rank Hankel matrix completion. The convergence speed of the IHT algorithm is accelerated by incorporating a tangent space projection, resulting in a more expedited variant termed the fast iterative hard thresholding (FIHT) algorithm. The main steps in the \(i\)-th iteration of the IHT algorithm are as follows: \[\mathbf{X}_{i}=\mathcal{H}\left(\mathbf{x}_{i}+\beta\left(\mathbf{x}_{s}- \mathbf{x}_{i}\right)\right), \tag{7}\] \[\mathbf{x}_{i+1}=\mathcal{H}^{\dagger}\left(\mathcal{T}_{r}\left(\mathbf{X}_{i} \right)\right), \tag{8}\] where (7) is a gradient descent update of the current estimate \(\mathbf{x}_{i}\) with fixed step size \(\beta\). The operator \(\mathcal{H}\left(\cdot\right)\) then transforms the signal vector from the \(M\times 1\) Euclidean space to an \(n_{1}\times n_{2}\) Riemannian manifold. Thus, step (7) can be regarded as a one-step gradient descent on a Riemannian manifold [19]. In step (8), \(\mathcal{T}_{r}\) represents the t-SVD of \(\mathbf{X}_{i}\), which projects \(\mathbf{X}_{i}\) onto the fixed-rank manifold to derive a low-rank approximation of \(\mathbf{X}_{i}\). Specifically, it is defined as \[\mathcal{T}_{r}\left(\mathbf{X}_{i}\right)=\sum_{k=1}^{r}\sigma_{k}\mathbf{u} _{k}\mathbf{v}_{k}^{\star},\;\;\sigma_{1}\geq\sigma_{2}\geq\cdots\geq\sigma_{ r}, \tag{9}\] where \(r\) is the target rank of the approximation. The operator \(\mathcal{H}^{\dagger}\left(\cdot\right)\) in (8) is the inverse of \(\mathcal{H}\left(\cdot\right)\), which maps an \(n_{1}\times n_{2}\) Hankel matrix to an \(M\times 1\) vector. The IHT algorithm runs in an iterative way and has a fast convergence speed [13]. However, achieving optimal results with IHT requires careful parameter tuning (e.g., step size \(\beta\) and rank \(r\)) and can be computationally expensive due to the t-SVD \(\mathcal{T}_{r}\) calculations, particularly in scenarios with large Hankel matrix dimensions. Once the full array response is obtained, the DOA can be estimated using high-resolution DOA estimation algorithms that work with a single snapshot [20, 21].
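A compact numpy sketch of iterations (7)-(9), using a plain truncated SVD for \(\mathcal{T}_{r}\) and the Hankel helpers sketched above, might read as follows. The step size and iteration count are illustrative, the gradient is restricted to the observed entries (consistent with \(\mathbf{x}_{s}=\mathbf{m}_{\Omega}\odot\mathbf{x}\)), and the tangent-space acceleration of FIHT is omitted.

```python
import numpy as np

def iht_hankel_completion(x_s, mask, n1, n2, r, beta=1.0, iters=100):
    """Sketch of IHT for low-rank Hankel completion (Eqs. (7)-(9)).

    x_s:  length-M observed vector (zeros at unobserved SLA positions)
    mask: length-M 0/1 vector m_Omega marking observed entries
    r:    target rank (the number of sources P)
    """
    x = x_s.copy()  # a simple initialization
    for _ in range(iters):
        # Eq. (7): gradient step on the observed entries, lifted to Hankel form
        X = hankel_map(x + beta * mask * (x_s - x), n1, n2)
        # Eq. (9): rank-r truncated SVD, T_r(X)
        U, s, Vh = np.linalg.svd(X, full_matrices=False)
        X_r = (U[:, :r] * s[:r]) @ Vh[:r]
        # Eq. (8): back to a length-M vector by anti-diagonal averaging
        x = hankel_inverse(X_r)
    return x
```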
## III IHT-Net for Low Rank Hankel Matrix Completion

In order to take advantage of the merits of the IHT algorithm and network-based methods, IHT-Net maps the IHT update steps to a deep network architecture that consists of a fixed number of phases, each mirroring one traditional IHT iteration. As shown in Fig. 1, IHT-Net mainly contains two components: an initialization layer and unrolled layers. The first component provides an initial estimate, analogous to IHT's initialization step, while the second component comprises multiple unrolled layers, mirroring the core iterative steps of the IHT algorithm.

Fig. 1: Illustration of IHT-Net architecture.

### Initialization Layer

We replace the t-SVD in the IHT algorithm with shallow-layer autoencoder structures, avoiding the need for knowledge of the matrix rank and the SVD computation. Inspired by the impressive performance of masked autoencoders [22], we adopt the idea to implement an asymmetric structure that allows an encoder to operate only on observed values (without mask tokens) in the input Hankel vector and a decoder that reconstructs the full signal from the latent representation with mask tokens. As Fig. 1 shows, the gray squares represent mask tokens, with their values set to zeros. Since \(\mathbf{x}_{s}\) is in the complex domain, we concatenate its real part and imaginary part along one dimension and get a \(2M\times 1\) vector. This makes the corresponding Hankel vector twice the original length, resulting in a size of \(2n_{1}n_{2}\times 1\). Next, the non-zero values are extracted from the Hankel vector \(\bar{\mathbf{X}}_{s}\), denoted as \(\bar{\mathbf{x}}_{s}\) with dimension \(2N\times 1\), along with their positions in the vector, denoted as a list \(\phi\), which is shared across all layers. As mentioned in [23], inserting multiple extra linear layers in deep neural networks works as an implicit rank minimization of the latent coding. Motivated by this concept, the designed encoder combines 3 linear layers (with bias) separated by rectified linear units (ReLUs). As illustrated in Fig. 2, the three linear layers share an identical input and output dimension \(2N\), aligning with the input vector's length. In our implementation, \(2N\) is determined by the number of elements in \(\Theta\) expressed in (6). We denote the encoder in this layer as \(\mathcal{F}_{1}^{(0)}\left(\cdot\right)\), so the output of the encoder is defined as \[\bar{\mathbf{x}}_{mid}=\mathcal{F}_{1}^{(0)}\left(\bar{\mathbf{x}}_{s}\right). \tag{10}\] Then, the output \(\bar{\mathbf{x}}_{mid}\) is embedded into a \(2n_{1}n_{2}\times 1\) zero vector according to the positions of the non-zero values in the original Hankel vector \(\bar{\mathbf{X}}_{s}\). The resulting vector is denoted as \(\bar{\mathbf{X}}_{mid}\) with dimension \(2n_{1}n_{2}\times 1\). For the decoder, we follow the same design pattern as the encoder but with the input and output size set to \(2n_{1}n_{2}\). Denoting the decoder in this layer as \(\mathcal{F}_{2}^{(0)}\left(\cdot\right)\), the final output of the initialization layer is \[\hat{\mathbf{X}}_{0}=\beta_{0}\mathcal{F}_{2}^{(0)}\left(\mathcal{F}_{1}^{(0 )}\left(\bar{\mathbf{x}}_{s}\right)\right), \tag{11}\] where \(\beta_{0}\) is a learnable scalar, and the parameters of \(\mathcal{F}_{1}^{(0)}\left(\cdot\right)\) and \(\mathcal{F}_{2}^{(0)}\left(\cdot\right)\) are all learnable. Finally, the Hankel inverse mapping \(\mathcal{H}^{\dagger}\left(\cdot\right)\) operates on the output \(\hat{\mathbf{X}}_{0}\) to obtain a \(2M\times 1\) signal vector \(\hat{\mathbf{x}}_{0}\), which is expressed as \(\hat{\mathbf{x}}_{0}=\mathcal{H}^{\dagger}\left(\hat{\mathbf{X}}_{0}\right)\).

### Unrolled Layers

The \(k\)-th unrolled stage consists of two modules. Module 1 is referred to as the Gradient Descent Module, while Module 2 is termed the Low-Rank Approximation Module; both are shown in Fig. 2. The Gradient Descent Module corresponds to Eq. (7) in the IHT algorithm. With the input \(\hat{\mathbf{x}}_{k}\) from the \((k-1)\)-th stage, and \(\mathbf{x}_{s}\), which is broadcast to every unrolled stage, the intermediate recovery result in the \(k\)-th stage can be defined as \[\hat{\mathbf{X}}_{k}=\mathcal{H}\left(\hat{\mathbf{x}}_{k}+\beta_{k}\left( \mathbf{x}_{s}-\hat{\mathbf{x}}_{k}\right)\right), \tag{12}\] where the step size \(\beta_{k}\) is a learnable parameter. The Low-Rank Approximation Module keeps exactly the same architecture as the initialization layer, while introducing a skip connection within the layers. We first extract \(\hat{\mathbf{x}}_{k}\) from the output \(\hat{\mathbf{X}}_{k}\) according to the non-zero-value position list \(\phi\). Then the output of this module is derived by passing it through the autoencoders, resulting in \[\tilde{\mathbf{X}}_{k}=\mathcal{F}_{1}^{(k)}\left(\mathcal{F}_{2}^{(k)}\left( \hat{\mathbf{x}}_{k}\right)\right). \tag{13}\]
After the Hankel inverse operation, we have \(\tilde{\mathbf{x}}_{k}=\mathcal{H}^{\dagger}\left(\tilde{\mathbf{X}}_{k}\right)\). With the skip connection between the input \(\hat{\mathbf{x}}_{k}\) and the output \(\tilde{\mathbf{x}}_{k}\), the final output of the \(k\)-th unrolled stage is \[\hat{\mathbf{x}}_{k+1}=\tilde{\mathbf{x}}_{k}+\gamma_{k}\left(\tilde{\mathbf{ x}}_{k}-\hat{\mathbf{x}}_{k}\right) \tag{14}\] where \(\gamma_{k}\) is a learnable parameter weighting the residual term \((\tilde{\mathbf{x}}_{k}-\hat{\mathbf{x}}_{k})\). The final estimate \(\hat{\mathbf{x}}_{K}\) is obtained after \(K\) unrolled stages of forward inference. ### IHT-Net Training Specifics We generate \(P\) point-target sources in the same range-Doppler bin. The angles of the sources follow a uniform distribution within the field of view (FoV) spanning \(\left[-60^{\circ},60^{\circ}\right]\); their amplitudes are uniformly distributed in \([0.5,1]\), and their phases are uniformly distributed in \([0,2\pi]\). Following equation (1), we generate \(N_{b}\) noise-free training labels, denoted \(\{\mathbf{x}_{label}^{q}\}_{q=1}^{N_{b}}\), for a specific SLA configuration. We then obtain the inputs \(\left\{\mathbf{x}_{input}^{q}\right\}_{q=1}^{N_{b}}\) for network training by adding different levels of Gaussian white noise, with the SNR randomly chosen from \([10\mathrm{dB},30\mathrm{dB}]\) for each training sample. In our experiment, \(P=2\) and \(N_{b}=700000\). The learnable parameters in the \(k\)-th phase of IHT-Net are \(\left\{\beta_{k},\gamma_{k},\mathcal{F}_{1}^{(k)},\mathcal{F}_{2}^{(k)}\right\}\); hence, the learnable parameter set of IHT-Net is \(\left\{\beta_{k},\gamma_{k},\mathcal{F}_{1}^{(k)},\mathcal{F}_{2}^{(k)}\right\}_ {k=1}^{K}\), and for \(k=0\) the learnable parameters are \(\left\{\beta_{0},\mathcal{F}_{1}^{(0)},\mathcal{F}_{2}^{(0)}\right\}\). Given the training data pairs \(\left\{\mathbf{x}_{label}^{q},\mathbf{x}_{input}^{q}\right\}_{q=1}^{N_{b}}\) (note that \(\mathbf{x}_{label}^{q}\) and \(\mathbf{x}_{input}^{q}\) are data blocks, each containing \(N\) training sample pairs), IHT-Net takes \(\mathbf{x}_{input}^{q}\) as input and generates as output the reconstruction result, denoted \(\hat{\mathbf{x}}_{K}^{q}\). We aim to reduce the discrepancy between \(\hat{\mathbf{x}}_{K}^{q}\) and \(\mathbf{x}_{label}^{q}\) while satisfying the low-rank approximation constraint, which can be stated as \(\mathcal{F}_{1}\circ\mathcal{F}_{2}\approx\mathcal{I}\). Therefore, the loss function for IHT-Net is designed as follows \[Loss_{total}=Loss_{1}+\alpha Loss_{2} \tag{15}\] with \[Loss_{1} =\frac{1}{N_{b}N}\sum_{q=1}^{N_{b}}\left\|\hat{\mathbf{x}}_{K}^{ q}-\mathbf{x}_{label}^{q}\right\|_{2}^{2}, \tag{16}\] \[Loss_{2} =\frac{1}{N_{b}N}\sum_{q=1}^{N_{b}}\sum_{k=0}^{K}\left\|\mathcal{H }^{\dagger}\left(\mathcal{F}_{1}^{(k)}\left(\mathcal{F}_{2}^{(k)}\left(\hat{ \mathbf{x}}_{k}^{q}\right)\right)\right)-\hat{\mathbf{x}}_{k}^{q}\right\|_{2}^{2}, \tag{17}\] where \(K+1\) and \(\alpha\) are the total number of IHT-Net phases and the regularization parameter, respectively. In our experiments, \(\alpha\) is set to \(0.01\). To train IHT-Net, the Adam optimization algorithm [24] is employed with an initial learning rate of \(10^{-4}\), which decays by a factor of 0.5 every 10 epochs. ## IV Numerical Results In this section, we evaluate IHT-Net's performance via numerical simulations.
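To make the data-generation strategy just described concrete, the following is a minimal NumPy sketch of one label/input pair. It assumes a standard half-wavelength ULA steering model for the array response in (1), and it omits the final SLA masking and real/imaginary stacking; both simplifications are ours, for illustration only.

```python
import numpy as np

def steering(theta_deg, n_elems):
    # Steering vector of a half-wavelength ULA (assumed form of Eq. (1)).
    n = np.arange(n_elems)
    return np.exp(1j * np.pi * n * np.sin(np.deg2rad(theta_deg)))

def make_pair(n_elems=21, n_src=2, snr_db=(10.0, 30.0), rng=None):
    rng = rng or np.random.default_rng()
    thetas = rng.uniform(-60.0, 60.0, n_src)      # angles within the FoV
    amps = rng.uniform(0.5, 1.0, n_src)           # amplitudes ~ U[0.5, 1]
    phases = rng.uniform(0.0, 2 * np.pi, n_src)   # phases ~ U[0, 2*pi]
    x_label = np.zeros(n_elems, dtype=complex)    # noise-free training label
    for a, p, t in zip(amps, phases, thetas):
        x_label += a * np.exp(1j * p) * steering(t, n_elems)
    snr = rng.uniform(*snr_db)                    # per-sample SNR in dB
    npow = np.mean(np.abs(x_label) ** 2) / 10 ** (snr / 10)
    noise = np.sqrt(npow / 2) * (rng.standard_normal(n_elems)
                                 + 1j * rng.standard_normal(n_elems))
    return x_label, x_label + noise               # (label, noisy input)
```

In training, the noisy input would additionally be zeroed at the antennas removed by the chosen SLA before its real and imaginary parts are stacked, as described in Section III.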
A ULA with \(N=21\) elements is considered, and an SLA is derived from the 21-element ULA by randomly choosing a subset of its antennas. We first perform experiments using an 18-element SLA, with the training dataset generated following the strategy described in Section III-C. A total of \(100\) training epochs are conducted. Fig. 3 (a) shows the rapid initial decay of the training loss within the first 10 epochs, which indicates that the proposed IHT-Net is easily trainable. Fig. 2: (a) Illustration of the initialization layer of IHT-Net; (b) illustration of the \(k\)th unrolled layer of IHT-Net. To verify the recovery performance of IHT-Net with different numbers of layers, we trained IHT-Net with different numbers of layers using the same training datasets. We then randomly generated \(5,000\) testing samples at 20dB SNR for evaluation, with the testing loss computed as in (16). Fig. 3 (b) shows that the testing loss decreases as the number of unrolled phases increases, but this decline stabilizes after 8 phases. We therefore use 8 unrolled phases to balance reconstruction performance and computational efficiency in IHT-Net. Furthermore, we compared IHT-Net and the FIHT algorithm [13] at different SNRs, using a testing dataset of \(5,000\) samples per SNR level. In Fig. 3 (c), IHT-Net consistently outperforms FIHT in reconstruction loss, particularly at higher SNR, highlighting its superior performance. We also conducted experiments employing a 10-element SLA, and compared the recovered spectra at various SNRs with different SLAs. Fig. 4 shows the beam patterns of the full array response recovered by IHT-Net and FIHT, compared with the spectra of the full array response with and without noise. The proposed IHT-Net exhibits denoising ability at relatively low SNR, e.g. 10dB, which indicates that the modules in IHT-Net play the same role as the t-SVD operation in FIHT. In addition, both IHT-Net and FIHT obtain promising spectra at high SNR, e.g. 30dB. Fig. 4 (c) and (d) illustrate that for a sparser SLA, FIHT struggles to recover the original signal effectively. In contrast, IHT-Net consistently produces satisfactorily recovered spectra that preserve the main lobes and sidelobes, confirming its superior recovery performance, particularly with sparser SLAs. Finally, we compared the mean square errors (MSE) of DOA estimation using IHT-Net and FIHT reconstruction at different SNRs, employing beamforming (BF) for DOA estimation with \(5,000\) testing samples per SNR level. The results shown in Fig. 5 demonstrate that IHT-Net completion leads to improved DOA estimation accuracy compared with FIHT completion. Fig. 4: Beamforming spectrum examples in different SNRs with different SLAs; (a) SNR=10dB, 18-element SLA; (b) SNR=30dB, 18-element SLA; (c) SNR=10dB, 10-element SLA; (d) SNR=30dB, 10-element SLA. Fig. 5: Comparison of DOA estimation errors after IHT-Net and FIHT [13] completion under different SNRs. Fig. 3: (a) IHT-Net training loss (defined in (16)) vs. epoch for IHT-Net with 8 unrolled phases; (b) IHT-Net testing loss (defined in (16)) with various numbers of unrolled phases; (c) signal reconstruction error comparison between IHT-Net and FIHT [13] at different SNRs. ## V Conclusions We have demonstrated a novel learning-based sparse array interpolation approach for single-snapshot DOA estimation, termed IHT-Net, which holds potential for applications in automotive radar systems employing sparse arrays. Derived from the FIHT algorithm, IHT-Net incorporates learnable parameters and nonlinear layers, offering an enhanced optimizer through supervised learning with shallow unrolled layers. IHT-Net is easily trainable and interpretable, facilitating further network design and development. Numerical simulations demonstrate its superior reconstruction and DOA estimation performance compared with FIHT.
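As a compact companion to the IHT baseline of Section II, the following is a minimal NumPy sketch of one IHT iteration, Eqs. (7)-(9). Using the zero-filled observation \(\mathbf{x}_{s}\) directly in the residual and computing a dense SVD are our simplifications for illustration; practical implementations restrict the residual to the sampled entries and use faster low-rank routines.

```python
import numpy as np

def hankel(x, n1):
    # H: lift an M x 1 vector to an n1 x n2 Hankel matrix, X[i, j] = x[i + j].
    n2 = x.size - n1 + 1
    i, j = np.meshgrid(np.arange(n1), np.arange(n2), indexing="ij")
    return x[i + j]

def hankel_inv(X):
    # H^dagger: map an n1 x n2 Hankel-structured matrix back to an M x 1
    # vector by averaging along each anti-diagonal.
    n1, n2 = X.shape
    out = np.zeros(n1 + n2 - 1, dtype=X.dtype)
    cnt = np.zeros(n1 + n2 - 1)
    for a in range(n1):
        for b in range(n2):
            out[a + b] += X[a, b]
            cnt[a + b] += 1
    return out / cnt

def iht_step(x_i, x_s, beta, r, n1):
    # Eq. (7): gradient step on the residual, followed by the Hankel lift H(.).
    X_i = hankel(x_i + beta * (x_s - x_i), n1)
    # Eq. (9): T_r, the rank-r truncated SVD of X_i.
    U, s, Vh = np.linalg.svd(X_i, full_matrices=False)
    X_r = (U[:, :r] * s[:r]) @ Vh[:r]
    # Eq. (8): H^dagger maps the low-rank matrix back to signal space.
    return hankel_inv(X_r)
```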
2309.06410
Solving the Pulsar Equation using Physics-Informed Neural Networks
In this study, Physics-Informed Neural Networks (PINNs) are skilfully applied to explore a diverse range of pulsar magneto-spheric models, specifically focusing on axisymmetric cases. The study successfully reproduced various axisymmetric models found in the literature, including those with non-dipolar configurations, while effectively characterizing current sheet features. Energy losses in all studied models were found to exhibit reasonable similarity, differing by no more than a factor of three from the classical dipole case. This research lays the groundwork for a reliable elliptic Partial Differential Equation solver tailored for astrophysical problems. Based on these findings, we foresee that the utilization of PINNs will become the most efficient approach in modelling three-dimensional magnetospheres. This methodology shows significant potential and facilitates an effortless generalization, contributing to the advancement of our understanding of pulsar magnetospheres.
Petros Stefanou, Jorge F. Urbán, José A. Pons
2023-09-12T17:23:15Z
http://arxiv.org/abs/2309.06410v2
# Solving the Pulsar Equation using Physics-Informed Neural Networks ###### Abstract In this study, Physics-Informed Neural Networks (PINNs) are skilfully applied to explore a diverse range of pulsar magnetospheric models, specifically focusing on axisymmetric cases. The study successfully reproduced various axisymmetric models found in the literature, including those with non-dipolar configurations, while effectively characterizing current sheet features. Energy losses in all studied models were found to exhibit reasonable similarity, differing by no more than a factor of three from the classical dipole case. This research lays the groundwork for a reliable elliptic Partial Differential Equation solver tailored for astrophysical problems. Based on these findings, we foresee that the utilization of PINNs will become the most efficient approach in modelling three-dimensional magnetospheres. This methodology shows significant potential and facilitates an effortless generalization, contributing to the advancement of our understanding of pulsar magnetospheres. keywords: magnetic fields; pulsars; stars: neutron; ## 1 Introduction Physics-Informed Neural Networks (PINNs) (Lagaris et al. (1997); Raissi et al. (2019)) are a relatively new but very promising family of PDE solvers based on Machine Learning (ML) techniques. This method is suitable for obtaining solutions of Partial Differential Equations (PDEs) describing the physical laws of a given system by taking advantage of the very successful modern ML frameworks and incorporating physical knowledge. In recent years, PINN-based solvers have been used to solve problems in a great variety of fields: fluid dynamics (Cai et al. (2021)), turbulence in supernovae (Karpov et al. (2022)), radiative transfer (Korber et al. (2023)), black hole spectroscopy (Luna et al. (2023)), cosmology (Chantada et al. (2023)), the large-scale structure of the universe (Aragon-Calvo (2019)), galaxy model fitting (Aragon-Calvo & Carvajal (2020)), inverse problems (Pakravan et al. (2021)), and many more. In our previous work (Urban et al. (2023), Paper I hereafter) we presented a PINN solver for the Grad-Shafranov equation, which describes the magnetosphere of a slowly rotating neutron star endowed with a strong magnetic field (a magnetar) in the axisymmetric case. In that paper, we demonstrated the ability of the network to be trained for various boundary conditions and source terms simultaneously. In this work, our purpose is to extend our PINN approach to the more general - and more challenging - case of rapidly rotating neutron stars (pulsars). The rotating case presents new interesting challenges related to the presence of current sheets in the magnetosphere. Our implementation is able to deal with these pathological regions sufficiently well, demonstrating its potential for problems where classical methods struggle. Pulsar magnetospheres have been studied extensively in the last 25 years with various approaches, each with its advantages and limitations. Contopoulos et al. (1999) were the first to solve the axisymmetric, time-independent problem, a result that was later confirmed and improved by Gruzinov (2005) and Timokhin (2006). Spitkovsky (2006) used a full MHD time-dependent code and was able to acquire solutions for aligned and oblique rotators. Solutions for arbitrary inclination were also obtained by Petri (2012), who used a time-dependent pseudo-spectral code.
More recent approaches involve large-scale Particle-in-Cell simulations that include the influence of accelerated particles (Cerutti et al. (2015); Philippov & Spitkovsky (2018)). All of these, and many related, works have improved our understanding of the pulsar magnetosphere. However, some important questions still remain unanswered. The structure of the paper is the following. After a brief summary of the relevant equations and boundary conditions in §2, we will describe the PINN method in §3, with emphasis on the new relevant details with respect to the magnetar problem (Paper I). In §4 we present our results, showing that we are able to reproduce the well-established axisymmetric results encountered in the literature, but also indicating that new, possibly unexplored solutions can be encountered. We summarize our most relevant conclusions and discuss possible improvements and future extensions in §5. ## 2 Pulsar Magnetospheres Our aim is to extend our previous results from Paper I to find numerical solutions of the pulsar equation, which can be written as follows: \[\mathbf{\nabla}\times\left(\mathbf{B}-\beta^{2}\mathbf{B}_{p}\right)=\alpha\mathbf{B}, \tag{1}\] where \(\beta=v/c=\Omega\varpi/c=\varpi/R_{\rm LC}\) is the co-rotational speed in units of the speed of light, with \(\varpi\) the cylindrical radius, \(R_{\rm LC}=c/\Omega\) the light-cylinder (LC) radius, and \(\mathbf{B}_{p}=\mathbf{B}-\mathbf{B}_{\phi}\) the poloidal component of the magnetic field. Here, \(\alpha\) is a scalar function given by \[\alpha=\frac{4\pi}{c}\left(\mathbf{J}-\rho_{e}\mathbf{v}\right)\cdot\frac{\mathbf{B}}{B^{2}}, \tag{2}\] representing the ratio between the field-aligned component of the current in the corotating frame and the local magnetic field strength. We refer to the comprehensive and thorough recent review by Philippov & Kramer (2022) (and references therein) for a complete historical overview of magnetospheric physics and the mathematical derivation of the equations. As in Paper I, we focus on the axisymmetric case and, for convenience, we use compactified spherical coordinates \((q,\mu,\phi)\), where \(q=1/r\) and \(\mu=\cos\theta\). In these coordinates, any axisymmetric magnetic field can be written in terms of a poloidal and a toroidal scalar stream function, \(P\) and \(T\), as \[\mathbf{B}=\frac{q}{\sqrt{1-\mu^{2}}}\left(\mathbf{\nabla}P\times\hat{\phi}+T\hat{\phi }\right). \tag{3}\] Here \(P\) is related to the magnetic flux and is constant along magnetic field lines. Plugging this expression into Eq. (1) and taking the toroidal component, we get \[\mathbf{\nabla}P\times\mathbf{\nabla}T=0, \tag{4}\] which means that \(T\) is only a function of \(P\) (\(T=T(P)\)) and, therefore, is also constant along magnetic field lines. The poloidal component of Eq. (1) gives the well-known Pulsar Equation (Michel (1973); Scharlemann & Wagoner (1973)), which in our coordinates reads \[\left(1-\beta^{2}\right)\Delta_{\text{GS}}P+2\beta^{2}q^{2}\left(q\partial_{q }P+\mu\partial_{\mu}P\right)+G(P)=0, \tag{5}\] where \(G(P)=TT^{\prime}\) and \(\Delta_{\text{GS}}\) is the Grad-Shafranov operator, given by \[\Delta_{\text{GS}}\equiv q^{2}\partial_{q}\left(q^{2}\partial_{q}\right)+ \left(1-\mu^{2}\right)q^{2}\partial_{\mu\mu}. \tag{6}\] Notice that in the limiting case where \(\beta=0\), we ignore the rotationally induced electric field and Eq. (5) reduces to the Grad-Shafranov equation.
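As an implementation aside, the operator (6) is straightforward to evaluate with automatic differentiation, which is how PINN-based solvers typically form the PDE residual. The following is a minimal PyTorch sketch; the framework and coding choices are ours, for illustration. As a sanity check, for the vacuum dipole \(P=(1-\mu^{2})q\) the two terms cancel and the operator returns zero.

```python
import torch

def gs_operator(P_func, q, mu):
    # Grad-Shafranov operator (6) via autograd:
    #   q^2 d/dq ( q^2 dP/dq ) + (1 - mu^2) q^2 d^2P/dmu^2
    q = q.clone().requires_grad_(True)
    mu = mu.clone().requires_grad_(True)
    P = P_func(q, mu)
    Pq, Pmu = torch.autograd.grad(P.sum(), (q, mu), create_graph=True)
    dq_term = torch.autograd.grad((q**2 * Pq).sum(), q, create_graph=True)[0]
    Pmumu = torch.autograd.grad(Pmu.sum(), mu, create_graph=True)[0]
    return q**2 * dq_term + (1.0 - mu**2) * q**2 * Pmumu

# Vacuum dipole: the residual is zero up to floating-point error.
P_dipole = lambda q, mu: (1.0 - mu**2) * q
q, mu = torch.rand(8) * 0.9 + 0.1, torch.rand(8) * 2.0 - 1.0
print(gs_operator(P_dipole, q, mu))  # ~ zeros
```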
Hereafter we will use the following shorthand notation for the extended operator \[\Delta_{GS\beta}\equiv\left(1-\beta^{2}\right)\Delta_{\text{GS}}+2\beta^{2}q^ {2}\left(q\partial_{q}+\mu\partial_{\mu}\right). \tag{7}\] A crucial difference in the modelling of magnetar and pulsar magnetospheres is the source of poloidal currents \(TT^{\prime}\). In magnetars, currents are assumed to be injected into the magnetosphere due to the magnetic field evolution in the NS's crust, which twists the magnetosphere (Akgun et al., 2018). Therefore, the source term in Eq. (5) is given either as a user-parametrised model (as for example in Akgun et al. (2016) for the 2D problem or in Stefanou et al. (2023) for the 3D problem) or in a self-consistent manner by coupling the interior evolution with the magnetosphere. We adopted the latter approach in Paper I (see the astrophysical application in section 5 of that paper), where a series of magnetospheric steady-state solutions were obtained for each time-step of the internal magneto-thermal evolution. In the pulsar magnetosphere, however, the source of current is the LC. Lines that cross the LC have to open up and bend, giving rise to azimuthal fields and currents. We refer to the region with open field lines as the open region. On the other hand, lines that do not cross the LC, but turn back to the surface, co-rotate rigidly with the star. We refer to this region as the closed region. The source term \(TT^{\prime}\) must be determined self-consistently to ensure smooth crossing of the field lines at the LC. In particular, at the LC, where \(\beta=1\), Eq. (5) takes the simple form \[-q^{2}\left(q\partial_{q}P+\mu\partial_{\mu}P\right)=2B_{z}=TT^{\prime}, \tag{8}\] which places a constraint on the possible functions \(T(P)\). This makes Eq. (5) complicated to solve, as two functions have to be determined simultaneously. In order to do so, additional physical boundary conditions have to be imposed (see the next subsection). Throughout this work, we measure distances in units of the radius of the star \(R\) and magnetic fields in units of the surface magnetic field strength at the equator \(B_{0}\) (note that the surface magnetic field strength at the poles is \(2B_{0}\)). In these units, the magnetic flux is measured in units of the total poloidal flux \(P_{0}=B_{0}R^{2}\). It is convenient to additionally define the total magnetic flux carried by field lines that cross the LC in a non-rotating dipole, \(P_{1}=P_{0}(R/R_{\text{LC}})\), which will be useful in what follows. Finally, the toroidal stream function is measured in units of \(B_{0}R\). ### Boundary Conditions We assume that the magnetic field at the surface of the star \(P(q=1)\) is known. For example, in the case of a dipole \[P(q=1,\mu)=B_{0}(1-\mu^{2}) \tag{9}\] but we will also explore other options. The last closed field line, called the separatrix, will be labelled by \(P=P_{c}\). It marks the border between open and closed (current-free) regions. The value \(P_{c}\) corresponds to the total magnetic flux that crosses the light cylinder. The point where this line meets the equator is called the Y-point, and it should lie at a radius \(r_{c}\), somewhere between the surface and the LC (see e.g. Timokhin (2006) for a detailed discussion). Far away from the surface, field lines should become radial, resembling a split monopole configuration (Michel, 1973). Inside the closed region, the magnetosphere is current-free and purely poloidal.
No toroidal fields are developed, so that \[T(P>P_{c})=0 \tag{10}\] by definition. In the classical model, a current sheet should develop along the separatrix, supported by the discontinuity of the toroidal magnetic field between the closed (\(B_{\phi}=0\)) and open (\(B_{\phi}\neq 0\)) regions. Another current sheet should exist along the equator and beyond the Y-point, supported by the opposed directions of the magnetic field lines between the two hemispheres. The current sheet carries the return current, which balances the current supported by out-flowing particles along the open field lines and closes the current circuit. All this motivated the seminal works to impose equatorial symmetry by defining a numerical domain on only one hemisphere and imposing another boundary condition \(P=P_{c}\) at the equator. To reproduce these models, we will impose that along the equator and beyond the Y-point \[P(q<q_{c},\mu=0)=P_{c}. \tag{11}\] Here \(q_{c}=1/r_{c}\) is the corresponding location of the Y-point in compactified coordinates. However, later works have relaxed this assumption (Contopoulos et al., 2014), proposing a more general solution without a separatrix and with a transition region close to the equator. We will explore both cases in our results section. ### Total Energy and Radiated Power The total energy of the electromagnetic field in the magnetosphere is given by \[\mathcal{E}=\frac{1}{8\pi}\int\left(B^{2}+E^{2}\right)dV. \tag{12}\] It is convenient to describe the electromagnetic energy content of a given model in terms of the excess energy of a particular magnetospheric solution with respect to the non-rotating dipole, \[\Delta\mathcal{E}=\frac{\mathcal{E}-\mathcal{E}_{d}}{\mathcal{E}_{d}}, \tag{13}\] where \[\mathcal{E}_{d}=\frac{1}{3}B_{0}^{2}R^{3} \tag{14}\] is the total magnetic energy of a non-rotating dipole. The total power radiated away by a rotating magnetosphere can be calculated by integrating the Poynting flux over a sphere far away from the star \[\dot{\mathcal{E}}=\frac{c}{4\pi}\int\left(\mathbf{E}\times\mathbf{B}\right)\cdot\hat{ \mathbf{r}}\,r^{2}\,d\omega=-\frac{c}{R_{\mathrm{LC}}}\int_{0}^{P_{c}}TdP, \tag{15}\] where \(\omega\) is the solid angle. Again, it is convenient to measure the relative difference of the radiated power with respect to the classical order-of-magnitude estimate \[\dot{\mathcal{E}}_{d}=\frac{B_{0}^{2}R^{6}}{R_{\mathrm{LC}}^{4}}c, \tag{16}\] and hereafter we will express \(\dot{\mathcal{E}}\) in units of \(\dot{\mathcal{E}}_{d}\).1 Footnote 1: We should stress that the (unfortunately) often used \(\sin^{2}\theta\) dependence of \(\dot{\mathcal{E}}_{d}\) on the inclination angle is unphysical. It misleadingly suggests that an aligned rotator does not emit, whereas it actually radiates a comparable amount (within a factor of about 2) to the orthogonal rotator. ## 3 Network Structure and Training Algorithm In Paper I, we used a PINN to calculate approximate solutions of the axisymmetric Grad-Shafranov equation. We refer the reader to that paper for a more detailed description of the generic implementation. In this section, we briefly summarise the main points and highlight the novelties introduced to adapt our solver to the pulsar case. The principal changes aim at enforcing the physical constraints (boundary conditions, or the \(T(P)\) requirement) by construction, instead of leaving that job to the minimization of additional terms in the loss function, which usually does not reach the required accuracy.
We consider solutions at points \((q,\mu)\in\mathcal{D}\) in a 2-dimensional domain \(\mathcal{D}\). We denote by \(\partial\mathcal{D}\) the boundary of this domain. To account for the equatorial constraint in Eq. (11), we will consider the equatorial line beyond \(r_{c}\) as part of \(\partial\mathcal{D}\). In order to ensure that \(P\) depends on the coordinates while \(T\) is solely a function of \(P\), we design a network structure consisting of two sub-networks that are trained simultaneously. The output of each sub-network depends only on its corresponding input. The first sub-network takes the coordinates \((q,\mu)\) as input and returns as output a function that we denote by \(N_{P}\). Then, \(P\) is calculated using \[P(q,\mu)=f_{b}(q,\mu)+h_{b}(q,\mu)N_{P}(q,\mu;\Theta), \tag{17}\] where \(f_{b}\) can be any function in \(\mathcal{D}\) that satisfies the corresponding boundary conditions at \(\partial\mathcal{D}\), and \(h_{b}\) is an arbitrary function representing some measure of the distance to the boundary, which must vanish at \(\partial\mathcal{D}\). Both user-supplied functions \(f_{b}\) and \(h_{b}\) depend only on the coordinates and are unaffected by the PINN. There is some freedom to decide their specific form, as long as they retain certain properties (e.g. they are sufficiently smooth) and they have the desired behaviour at the boundary. The only part that is adapted during training is \(N_{P}\), through its dependence on the trainable parameters \(\Theta\). With this _parametrisation_ (or _hard enforcement_) approach we ensure that the boundary conditions are exactly satisfied by construction. This differs from the other usual approach, which consists of adding more terms related to the boundary conditions to the loss function. We will specify the particular form of the functions \(f_{b}(q,\mu)\) and \(h_{b}(q,\mu)\) in the next section when we discuss different cases. Next, the second sub-network takes \(P\) as input and returns \(N_{T}\) as output. This automatically enforces that \(T=T(P)\), as required by Eq. (4). The new network output \(N_{T}(P;\Theta)\) is used to construct the function \(T(P)\) as follows: \[T(P)=g(P)N_{T}(P;\Theta)\, \tag{18}\] where \(g\) is another user-supplied function to include other physical restrictions. For example, we can use a step function to treat a possible discontinuity of \(T\) at the border between open and closed regions (Eq. (10)). Another advantage of this approach is that \(T^{\prime}\) is calculated from \(T\) via automatic differentiation in the second sub-network. A subtle point in the pulsar problem is the determination of the critical value \(P_{c}\) that separates the open and closed regions. We have explored two different approaches: **(a)**\(P_{c}\) is self-determined by the network and **(b)**\(P_{c}\) is fixed to some value. In the first case, \(P_{c}\) is considered an extra output of the first sub-network, closely connected to \(P\), resembling the approach taken by classical solvers for this problem. In contrast, the second method involves fixing \(P_{c}\) as a hyperparameter to a predetermined value, leaving the network to discover a solution that aligns with this fixed value. This introduces the necessity to loosen some of the other imposed constraints, like the position of the Y-point, and encourages the exploration of solution properties from a fresh standpoint.
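To fix ideas, the construction in Eqs. (17) and (18) can be sketched in a few lines of code. The following is a minimal PyTorch illustration under our own assumptions (plain fully-connected layers with Tanh activations, sized as described below); the user-supplied functions \(f_{b}\), \(h_{b}\) and \(g\) enter as callables, and only \(N_{P}\) and \(N_{T}\) carry the trainable parameters \(\Theta\).

```python
import torch
import torch.nn as nn

def mlp(d_in, d_out, width=40, hidden=4):
    # Plain fully-connected network; Tanh is our assumed activation.
    layers, d = [], d_in
    for _ in range(hidden):
        layers += [nn.Linear(d, width), nn.Tanh()]
        d = width
    return nn.Sequential(*layers, nn.Linear(d, d_out))

class PulsarPINN(nn.Module):
    # Two sub-networks trained simultaneously: N_P(q, mu) and N_T(P).
    # Boundary conditions and T = T(P) hold by construction (Eqs. (17)-(18)).
    def __init__(self, f_b, h_b, g):
        super().__init__()
        self.n_p = mlp(2, 1, hidden=4)   # first sub-network
        self.n_t = mlp(1, 1, hidden=2)   # second sub-network
        self.f_b, self.h_b, self.g = f_b, h_b, g

    def forward(self, q, mu):
        N_P = self.n_p(torch.stack([q, mu], dim=-1)).squeeze(-1)
        P = self.f_b(q, mu) + self.h_b(q, mu) * N_P      # Eq. (17)
        N_T = self.n_t(P.unsqueeze(-1)).squeeze(-1)
        T = self.g(P) * N_T                              # Eq. (18)
        return P, T
```

Because \(T\) is produced from \(P\) inside the computational graph, \(T^{\prime}\) (and hence \(G=TT^{\prime}\)) follows from one additional automatic-differentiation call, as noted above.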
During the network training process, both \(P\) and \(T\) (and possibly \(P_{c}\), if we follow case **(a)** above) are obtained simultaneously by minimising the following loss function \[\mathcal{L}=\omega_{1}\mathcal{L}_{PDE}+\omega_{2}\sigma(P_{c}), \tag{19}\] where \[\mathcal{L}_{PDE}=\frac{1}{N}\sum_{(q,\mu)\in\mathcal{D}}\left[\Delta_{GS \beta}P(q,\mu)+G(P(q,\mu))\right]^{2}. \tag{20}\] Here, \(N\) represents the size of the training set, and \(\omega_{1}\) and \(\omega_{2}\) are adjustable parameters. In approach **(b)**, \(\omega_{2}=0\), whereas in approach **(a)** it is a tunable weight. Additionally, \(\sigma\) denotes the standard deviation of the \(P_{c}\) values over the points in the training set. \(P_{c}\) is a global magnetospheric value that should not depend on the coordinates. However, during training, there is no guarantee that this value will be the same for all the points that are considered. To ensure that the values of \(P_{c}\) at arbitrary points are as close to each other as possible, we minimise their standard deviation \(\sigma(P_{c})\) over all the points of the training set by including the additional term in the loss function (19). This inclusion guarantees that the network's output for \(P_{c}\) remains constant and independent of the coordinates \((q,\mu)\). A schematic representation of our network's structure can be found in Fig. 1. We consider a fully-connected architecture for both sub-networks, consisting of 4 hidden layers for the first one and 2 hidden layers for the second one. All layers have 40 neurons. Our training set consists of \(N=5000\) random points \((q,\mu)\in\mathcal{D}\), which are periodically changed every 500 epochs in order to feed the network with as many distinct points as possible. The total number of epochs is 35000. This number may seem large at first sight, but it is necessary because of the number of points considered. In order to minimise the loss function, we use the Adam optimisation algorithm (Kingma & Ba, 2014) with an exponential learning rate decay. We have also considered in this work the idea of introducing trainable activation functions, originally suggested for PINNs by Jagtap et al. (2020). For each hidden layer \(k\), the linear transformation performed at that layer is multiplied by a trainable parameter \(c_{k}\). We found that this practice can accelerate convergence, but a more rigorous study is beyond the scope of this paper. ## 4 Magnetospheric Models In this section, we present the results obtained for various cases and under different physical assumptions. Our analysis encompasses the successful reproduction of all distinct axisymmetric models found in the literature, which include: 1. The classical solutions (Contopoulos et al., 1999; Gruzinov, 2005), 2. The family of solutions with varying locations of the Y point (Timokhin, 2006), 3. The improved solution where the separatrix current sheet is smoothed out (Contopoulos et al., 2014), 4. The non-dipolar solutions (Gralla et al., 2016). Each of these models presents unique challenges and characteristics, and we will delve into the outcomes achieved for each one. In particular, the overall magnetospheric configuration, the poloidal flux at the separatrix \(P_{c}\), the functions \(T(P)\) and \(G(P)\), and the energy losses \(\dot{\mathcal{E}}\) agree with the previous works. Figure 1: A sketch of the network structure. Two sub-networks are employed to ensure that \(P=P(q,\mu)\) and \(T=T(P)\).
Figure 3: Same as figure 2 but with the Y-point positioned at \(r_{c}=0.4R_{\rm LC}\). The region where the return current flows through open field lines is negligible. Figure 2: The classical axisymmetric pulsar magnetosphere. Faded black lines: magnetic field lines as contours of \(P\). Thick pink line: separatrix and equatorial current sheet, where \(P=P_{c}\). Vertical green line: light cylinder. Colourmap: the source current \(G(P)\). Labels on the field lines are in units of \(P_{c}\). Colourbar is in symmetrical logarithmic scale. z and x axes are in units of the stellar radius \(R\). The bulk of the return current flows along the current sheet, with a small percentage flowing along open field lines. ### Hard enforcement of boundary conditions As discussed earlier, we construct \(P\) according to Eq. (17) to fulfill the boundary conditions. For the cases (i) and (ii) we have employed an \(f_{b}\) with the following form: \[f_{b}(q,\mu)=(1-\mu^{2})\left[P_{c}+q(1-P_{c})\frac{\text{ReLU}^{3}\left(1- \frac{q_{c}}{q}\right)}{(1-q_{c})^{3}}\right], \tag{21}\] where ReLU(x) is the _Rectified Linear Unit_ function, which returns zero if \(x<0\) and \(x\) if \(x>0\)2, \(q_{c}=\nicefrac{{1}}{{r_{c}}}\) is the position of the Y-point, and the exponent \(n\) (here \(n=3\)) is a free positive parameter. Footnote 2: Alternative definitions of \(\text{ReLU}(x)\) are \(x\mathcal{H}(x)\) (\(\mathcal{H}\) being the Heaviside step function) and \(\text{max}(0,x)\). Furthermore, we have chosen the following \(h_{b}\) function: \[h_{b}(q,\mu)=(1-\mu^{2})(1-q)\sqrt{\text{ReLU}^{3}\left(1-\frac{q_{c}}{q} \right)+\mu^{2}}, \tag{22}\] where the reader can check that \(h_{b}\) is zero at the boundary. The terms with the ReLU functions enforce that \(P=P_{c}\) at the equator when \(q<q_{c}\), and the third power ensures that \(P\) and its first and second derivatives are all continuous. The parametrisation for \(T(P)\) requires the additional function \(g\). We use a Gaussian3 transition beginning at \(P=P_{c}\): Footnote 3: Any other function that asymptotically approaches the Heaviside step function could be used instead. \[g(P)=\begin{cases}1&P\leq P_{c}\\ e^{-\frac{(P-P_{c})^{2}}{2(\delta P)^{2}}}&P>P_{c}\end{cases} \tag{23}\] where \(\delta P\) is a small number that controls the width of the current sheet, where the transition of \(T\) from a finite value to zero takes place. Note that \(g\) and its first derivative are both continuous at the separatrix, so \(T\) and \(T^{\prime}\) are well defined. For the cases (iii) and (iv), the condition \(P=P_{c}\) at the equator is lifted and, consequently, the parametrisation functions must be modified. Our choice for these cases is: \[f_{b}(q,\mu)=f_{1}(q)\sum_{l=1}^{l_{\text{max}}}\frac{b_{l}}{l}P _{l}^{\prime}(\mu)+f_{2}(q)(1-|\mu|) \tag{24}\] \[h_{b}(q,\mu)=q(1-q)(1-\mu^{2}) \tag{25}\] \[g(P)=-P\ \left(\text{ReLU}\left(1-\frac{|P|}{P_{c}}\right) \right). \tag{26}\] Adjustable weighting functions, denoted \(f_{1}\) and \(f_{2}\), are utilized to control the relative importance of the two terms. To be specific, \(f_{1}\) should be significant near the surface but diminish at considerable distances from the star, while \(f_{2}\) should vanish close to the surface but be dominant at \(q\ll 1\). At intermediate distances, approximately around the LC, both \(f_{1}\) and \(f_{2}\) should be much smaller than \(h_{b}\) to ensure that the neural network contribution in Eq. (17) dominates.
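For concreteness, Eqs. (21)-(23) translate almost line by line into code. The following is a minimal PyTorch transcription (ours, for illustration), with \(q_{c}\), \(P_{c}\) and \(\delta P\) entering as parameters:

```python
import torch

def relu3(x):
    # ReLU(x)^3: zero for x < 0, x^3 for x >= 0.
    return torch.clamp(x, min=0.0) ** 3

def f_b(q, mu, P_c, q_c):
    # Eq. (21): dipole at the surface q = 1; P = P_c on the equator
    # beyond the Y-point (q < q_c), where the ReLU term switches off.
    return (1 - mu**2) * (P_c + q * (1 - P_c) * relu3(1 - q_c / q) / (1 - q_c) ** 3)

def h_b(q, mu, q_c):
    # Eq. (22): vanishes on the surface, on the axis, and on the equator
    # beyond the Y-point, so the network cannot alter the boundary values.
    return (1 - mu**2) * (1 - q) * torch.sqrt(relu3(1 - q_c / q) + mu**2)

def g(P, P_c, dP):
    # Eq. (23): Gaussian cut-off of T beyond P = P_c (sheet width dP).
    gauss = torch.exp(-((P - P_c) ** 2) / (2 * dP**2))
    return torch.where(P <= P_c, torch.ones_like(P), gauss)
```

One can verify the stated properties directly: at \(q=1\) the bracket in \(f_{b}\) reduces to unity, recovering the surface dipole (9), while on the equator beyond the Y-point the ReLU factors vanish, giving \(f_{b}=P_{c}\) and \(h_{b}=0\).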
In order to allow for more versatile configurations beyond the standard dipole representation, the surface boundary condition is presented as a linear combination of magnetic multipoles. To impose a current-free region, the function \(g\) is employed, and while it does not necessarily indicate a current sheet, it is chosen to be at least quadratic. This ensures that \(T^{\prime}\) and consequently \(G\) experience at least one change of sign, allowing the current circuit to close. ### Classical solutions The solutions obtained in Contopoulos et al. (1999); Gruzinov (2005) will be referred to as the _classical solutions_. Fig. 2 illustrates contour lines of the poloidal flux function \(P\), with the colormap representing the source current \(G=TT^{\prime}\). All the well-known characteristics of the pulsar magnetosphere are observed in these solutions: The open field lines extend beyond the light cylinder and stretch towards infinity, eventually adopting a split monopole configuration at considerable distances, and a current sheet forms at the equator due to the magnetic field's reversal between the north and south hemispheres. Figure 4: **(a)** Colourmap of the residuals of the pulsar equation. Colourbar is in log scale. **(b)** Evolution of the loss function with the training epochs. Only the value of the loss every 100 epochs is plotted for clarity. Big spikes correspond to changes in the training set. Small fluctuations can be interpreted as the variance of the PDE residual. In the region where field lines have \(P>P_{c}\), both the toroidal magnetic field and poloidal current are zero, except for the narrow transition zone \([P_{c},P_{c}+\delta P]\) located just inside the separatrix. Beyond the separatrix, the blue region illustrates the small portion of the return current that flows back to the star along open field lines. In contrast, the nearly white region just inside the separatrix represents the substantial portion of the return current that flows along the current sheet. We summarise in Tab. 1 the model parameters of different solutions. The first line corresponds to the classical model depicted in Fig. 2. To showcase the accuracy of our findings, we present in Fig. 4a how well our solution aligns with the pulsar equation (5). The color map illustrates the absolute error of the pulsar equation for our model, revealing remarkably low values within the bulk of the domain \((\lesssim 10^{-5})\). As expected, the error is slightly larger in proximity to the separatrix, a region of discontinuity. Nonetheless, this discrepancy does not impede our solver from effectively approximating the solution throughout the rest of the domain, nor does it significantly affect the solution far from these regions. In general, we anticipate the maximum error to be of the order of \(\sim\sqrt{\mathcal{L}}\) since it corresponds to the error of the PDE for a considerable set of random points. Indeed, as depicted in Fig. 4b, at the conclusion of the training process, the loss reaches values around \(\sim 10^{-7}\), confirming this assumption. The prominent spikes observed in the figure correspond to the periodic changes of the training set of points. However, it is noticeable that as the training epochs progress, the spikes diminish, indicating that the network has learned to generalize to new points without compromising accuracy. The small fluctuations between the spikes, should be interpreted as the variance in the approximation to the solution. 
As the network adapts its parameters to acquire the solution, it cannot simultaneously reconcile all the training points. Consequently, it fluctuates around a mean instead of finding a single minimum value. With the same PINN, it is straightforward to produce solutions with different positions of the Y-point, as in Timokhin (2006), simply by varying the parameter \(q_{c}\) in Eq. (21). Fig. 3 shows an example of such a solution, where the Y-point lies at a distance \(r_{c}=0.4R_{\text{LC}}\). In this case, the totality of the return current flows through the current sheet. Interestingly, we observe that the luminosity for \(r_{c}=0.4R_{\text{LC}}\) is an order of magnitude larger than for the classical solution (see Table 1). As discussed in Timokhin (2006), both the luminosity and the total energy stored in the magnetosphere increase with decreasing \(r_{c}\), so the magnetosphere will generally try to achieve the configuration with the minimum energy, that is, with \(r_{c}\) as close as possible to the light cylinder. Therefore, although the configurations with a small \(r_{c}\) are mathematically sound and very interesting from the astrophysical point of view (much larger luminosity), they are probably short-lived and less frequent in nature than the standard configuration. ### Non-fixed boundary condition at the equator In most previous studies that used classical methods to tackle this problem, it was common practice to solve it in just one hemisphere and to set boundary conditions at the equator (Eq. (11)), where a kind of "jump" is expected to occur. This approach is reasonable, but it limits the range of magnetospheric solutions that can be obtained. The solutions have equatorial symmetry (in addition to axisymmetry) and end up having a specific configuration: a dipole magnetic field at the surface and a Y-point configuration where the equatorial and separatrix current sheets meet. However, by utilizing PINNs and taking advantage of their local and flexible nature for imposing constraints and boundaries, we can create solutions with fewer (only physically relevant, rather than mathematical) requirements. In this section, we introduce solutions where boundary conditions are applied only at the surface and at infinity, without restricting the equator as part of the solution domain. At infinity, we simply demand that the solution approaches a split monopole configuration (last term in Eq. (24)) with a specific value (denoted by \(P=P_{\infty}\)) at the equator: \[P(\mu,q=0)=P_{\infty}(1-|\mu|).\] Since the equator is free from any boundary conditions, some other constraint must be imposed to select among a possibly infinite class of different solutions. We decide to set the value of \(P_{c}\) beforehand. This value separates the regions with and without electric currents. This approach is just as valid and perhaps more versatile than other constraints, like pinning down the position of the Y-point. As for \(P_{\infty}\), we do not fix its value; instead we let the network figure it out. The value of \(P_{\infty}\) separates the regions with currents of different sign. In Fig. 5 we present a typical solution for these boundary conditions. This magnetospheric configuration bears resemblance to the one obtained in Contopoulos et al. (2014), where a substantial number of field lines that cross the light cylinder close inside the equatorial current sheet. However, we have arrived at this result by following a different prescription.
They enforced specific boundary conditions at the equator to ensure that the perpendicular component of the Lorentz force applied to the equatorial current sheet becomes zero. On the other hand, our approach involved imposing that the solution becomes a split monopole with a certain magnetic flux at infinity while leaving the equator unrestricted. Figure 5: Same as figure 2 but for the case of a non-fixed equatorial boundary condition. Field lines that cross the light cylinder close through the current sheet. The solid pink line indicates the line \(P=P_{c}=1.4P_{1}\), whereas the dashed pink line corresponds to \(P=P_{\infty}=0.77P_{1}\). An interesting generalisation of this set of solutions involves applying non-dipolar surface boundary conditions. Instead of sticking to the basic dipole case, the surface magnetic field can be a combination of various magnetic multipoles (Gralla et al., 2016). This option is not feasible in the classical solution, because contributions from even multipoles break the equatorial symmetry. Nevertheless, our solver can handle this situation without any issues, because it is
In contrast, the models with no restrictions at the equator smoothly reach the value \(P=P_{c}\) through a continuous transition from positive (orange area in Figs. 5, 6) to negative current (blue area in Figs. 5, 6). Importantly, these models do not develop a current sheet to close the current circuit. The area under each of the curves represents the luminosity (see Eq. (15)), which explains why the model with a lower \(r_{c}\) exhibits a significantly higher energy loss rate, as it has more open lines carrying Poynting flux to infinity. Additionally, the presence or absence of equatorial restrictions leads to distinct behaviours in the models. In the models without equatorial restrictions, not all the Poynting flux carried by lines crossing the light cylinder (LC) escapes to infinity. Instead, up to 50% of the total pulsar spindown energy flux (the area between \(P_{\infty}\) and \(P_{c}\)) remains confined in the equatorial current sheet. This trapped energy could potentially be dissipated in particle acceleration and high-energy electromagnetic radiation within a few times the light cylinder radius, as was first pointed out in Contopoulos et al. (2014). However, the exact fraction of power that can be locally dissipated and reabsorbed, as opposed to the fraction that is genuinely lost and contributes to the spin-down, remains unclear. In the last column of Table 1 we have included the expected range of values for such models. In this study, extending the results of our previous paper for slowly-rotating magnetar magnetospheres, our success lies in the proficient application of Physics-Informed Neural Networks (PINNs) to obtain numerical results for a diverse range of pulsar magnetospheric models, meticulously exploring various cases under different physical assumptions. Our extensive analysis encompasses the accurate reproduction of several distinct axisymmetric models found in the scientific literature. The decision to focus solely on axisymmetry was a deliberate one, aiming at affirming the remarkable ability of PINNs to effectively capture and characterize the intricate peculiarities exhibited by pulsar magnetospheres, including the features of the current sheet. Throughout our investigation, we purposefully accounted for a rich diversity of models, incorporating non-dipolar configurations. This work serves as a stepping stone towards the development of a robust and trustworthy general elliptic Partial Differential Equation solver, specifically tailored to address the challenging complexities of this and related astrophysical problems. Looking ahead, our research can be naturally extended to the three-dimensional magnetospheric case, a promising prospect that holds the potential for reaching a deeper understanding of the underlying physics governing pulsar magnetospheres. ## Acknowledgements We acknowledge the support through the grant PID2021-127495NB-I00 funded by MCIN/AEI/10.13039/501100011033 and by the European Union, the Astrophysics and High Energy Physics programme of the Generalitat Valenciana ASFAE/2022/026 funded by MCIN and the European Union NextGenerationEU (PRTRC-C17.I1) and the Prometeo excellence programme grant CIPROM/2022/13. JFU is supported by the predoctoral fellowship UAFPU21-103 funded by the University of Alicante. ## Data availability All data produced in this work will be shared on reasonable request to the corresponding author.
2309.16826
An Attentional Recurrent Neural Network for Occlusion-Aware Proactive Anomaly Detection in Field Robot Navigation
The use of mobile robots in unstructured environments like the agricultural field is becoming increasingly common. The ability for such field robots to proactively identify and avoid failures is thus crucial for ensuring efficiency and avoiding damage. However, the cluttered field environment introduces various sources of noise (such as sensor occlusions) that make proactive anomaly detection difficult. Existing approaches can show poor performance in sensor occlusion scenarios as they typically do not explicitly model occlusions and only leverage current sensory inputs. In this work, we present an attention-based recurrent neural network architecture for proactive anomaly detection that fuses current sensory inputs and planned control actions with a latent representation of prior robot state. We enhance our model with an explicitly-learned model of sensor occlusion that is used to modulate the use of our latent representation of prior robot state. Our method shows improved anomaly detection performance and enables mobile field robots to display increased resilience to predicting false positives regarding navigation failure during periods of sensor occlusion, particularly in cases where all sensors are briefly occluded. Our code is available at: https://github.com/andreschreiber/roar
Andre Schreiber, Tianchen Ji, D. Livingston McPherson, Katherine Driggs-Campbell
2023-09-28T20:15:53Z
http://arxiv.org/abs/2309.16826v1
# An Attentional Recurrent Neural Network for Occlusion-Aware Proactive Anomaly Detection in Field Robot Navigation ###### Abstract The use of mobile robots in unstructured environments like the agricultural field is becoming increasingly common. The ability for such field robots to proactively identify and avoid failures is thus crucial for ensuring efficiency and avoiding damage. However, the cluttered field environment introduces various sources of noise (such as sensor occlusions) that make proactive anomaly detection difficult. Existing approaches can show poor performance in sensor occlusion scenarios as they typically do not explicitly model occlusions and only leverage current sensory inputs. In this work, we present an attention-based recurrent neural network architecture for proactive anomaly detection that fuses current sensory inputs and planned control actions with a latent representation of prior robot state. We enhance our model with an explicitly-learned model of sensor occlusion that is used to modulate the use of our latent representation of prior robot state. Our method shows improved anomaly detection performance and enables mobile field robots to display increased resilience to predicting false positives regarding navigation failure during periods of sensor occlusion, particularly in cases where all sensors are briefly occluded. Our code is available at: [https://github.com/andreschreiber/roar](https://github.com/andreschreiber/roar). ## I Introduction Throughout various domains, mobile robots are becoming increasingly prevalent as technological advancements enable such robots to autonomously execute a greater number of tasks. In agriculture, for example, compact mobile robots can move between crop rows and have been used to perform tasks such as corn stand counting [1] and plant phenotyping [2]. However, the agricultural field environment presents numerous challenges for such robots, as this unstructured environment exhibits cluttered foliage, varying lighting conditions, and uneven terrain. These challenging conditions require algorithms that are robust to noise and sensor occlusions in order for the robots to remain autonomous, especially as the difficult nature of the environment increases the possibility that robots enter failure modes. Entering such failure modes may lead the robot to require external intervention to accomplish its task or may involve damage to the robot [3]. Thus, detecting potential navigation failures ahead of time becomes increasingly important in order to prevent damage and ensure optimal efficiency. However, the difficulty of developing algorithms to proactively detect such failure modes is exacerbated by the unstructured nature of the field environment, as such algorithms must be able to differentiate between scenarios representing genuine navigation failures (e.g., colliding with rigid obstacles or prematurely leaving the crop row) and the frequent but ultimately non-catastrophic noise (e.g., occlusions) created by the environment. An example of the difference between an occlusion that does not lead to navigation failure and a true failure mode is shown in Fig. 1. Detecting such failure modes is commonly approached from the perspective of anomaly detection (AD) [4, 5, 6, 7, 8], with failure modes being treated as anomalies. Many works on AD [5, 6, 7] view the problem from a reactive perspective, in which anomalies are detected as they occur; however, with reactive AD, potential failure conditions cannot be detected before they occur in order to avoid them.
Due to this limitation, recent work has focused on proactive AD in which the robot predicts the probability of failure within a time horizon using both current sensory inputs and planned future actions [4, 8, 9, 10]. In unstructured environments like those seen by the field robots, AD models may additionally utilize multi-sensor fusion of different inputs like RGB cameras and LiDAR to increase robustness to noise and occlusions [4, 8]. In these multi-sensor approaches, occlusion conditions can be implicitly learned without supervision [4] or explicitly modeled [8]. Fig. 1: Example sequences from the field environment dataset introduced by Ji _et al._[4], with each of the two sequences displayed top to bottom. A blue line indicates the planned trajectory, and the LiDAR map is shown in the top right of the images. (a) shows a brief occlusion caused by low-hanging vegetation which does not lead to immediate navigation failure, while (b) shows an obstruction that leads to navigation failure. However, such approaches only make use of current sensory inputs and do not maintain a history of prior sensory state. Thus, using multi-sensor fusion to address the possible occlusion of sensors can still fail if all sensors are briefly occluded. Such failures during periods of occlusion typically manifest as false positives, where the AD algorithm falsely predicts that the robot has entered a failure mode. As a prediction of failure can lead to interruption of normal robot operation, reducing these false positives minimizes the number of spurious interruptions and improves operating efficiency. Due to these limitations, we introduce a new proactive AD neural network. Our proposed network, termed Recurrent Occlusion-AwaRe (ROAR) anomaly detection, learns when sensors are occluded and incorporates an attention-based mechanism to fuse sensor data, predicted future actions, and a summary of prior robot state (informed by occlusion of the sensors) to provide improved AD performance when sensor occlusion occurs. Our architecture can reduce false positives in brief periods of total sensor occlusion, and more effectively makes use of its summarization of the prior robot state by explicitly learning when sensors are occluded. Furthermore, as multi-sensor approaches may not always be possible or economical due to their increased hardware requirements, we demonstrate that a variant of our model using only one input sensor can still provide an attractive alternative to existing multi-sensor fusion approaches. We summarize our contributions as follows: 1. We propose an attention-based recurrent neural network architecture that fuses planned future actions and multiple sensory inputs with a latent representation of robot state (which summarizes prior sensory inputs and prior planned control actions) to improve AD performance in unstructured environments, particularly when total sensor occlusion occurs. 2. We leverage an explicitly learned model of sensor occlusions to provide enhanced utilization of the latent representation of robot state and improved AD performance with our recurrent neural network architecture. 3. We show that our network demonstrates improved performance over existing methods, and displays increased robustness against false positives in brief periods of total sensor occlusion.
We also demonstrate that even when only one sensor modality is used, our model significantly outperforms other single-sensor networks and provides an attractive alternative to multi-sensor fusion models in cases where multiple sensors may not be available. ## II Related Work AD is studied and applied in a variety of contexts. In the context of robotics and autonomous systems, AD (also called outlier detection or novelty detection) is frequently used to detect failures and often draws upon additional areas such as deep learning and multi-sensor fusion to provide improved performance. ### _Multi-Sensor Fusion using Neural Networks_ Contemporary robots and autonomous systems typically feature numerous sensors. Thus, there has been considerable study on how to effectively fuse such sensor data, with many approaches utilizing neural networks for such fusion. For example, Nguyen _et al._[11] propose a neural network architecture that fuses multi-modal signals from images, LiDAR, and a laser distance map in order to learn to navigate in complex environments like collapsed cities. Similarly, Liu _et al._[12] present a method for learning navigation policies that makes use of multi-modal sensor inputs, improving robustness to sensor failure by introducing an auxiliary loss to reduce variance of multi- and uni-sensor policies and by introducing sensor dropout. Likewise, Neverova _et al._[13] present ModDrop, which introduces a modality-wise dropout mechanism similar to sensor dropout for multi-modal gesture recognition. ModDrop randomly drops sensor modality components during training, improving model prediction stability when inputs are corrupted. Numerous deep learning-based methods of multi-sensor fusion integrate attention mechanisms to achieve more effective fusion of sensory inputs. Such attention-based fusion mechanisms have seen significant use in fields such as human activity recognition. For example, [14, 15, 16] all describe attentional fusion architectures that combine data collected from multiple sensors that are affixed to a subject's body. ### _Occlusion Modeling_ In robotics and autonomous systems, sensors can experience occlusions, which may require a fusion mechanism with special provisions to ensure stable predictions. Occlusions may manifest as faulty sensor readings, and sensor- or modality-wise dropout could be used to account for such occlusions [12, 13]. Other works have also attempted to specifically devise strategies for fusion under sensor occlusion rather than treating an occlusion as a sensor failure. For example, Palffy _et al._[17] introduce an occlusion-aware fusion mechanism for pedestrian detection by using an occlusion-aware Bayesian filter. Ryu _et al._[18] describe a method for robot navigation that is intended to work specifically in cases of prolonged sensor occlusion of 2D LiDAR caused by issues like dust or smudges. However, the assumption of prolonged occlusions does not entirely suit the agricultural field environment that we target, which typically features brief dynamic occlusions (e.g., a leaf briefly covering the camera as the robot drives down a crop row). Similarly, we seek to design a multi-modal anomaly detector, whereas the method presented by Ryu _et al._[18] considers navigation using only a 2D LiDAR sensor. ### _Anomaly Detection using Machine Learning_ In addition to sensor fusion and occlusion modeling, AD using machine learning is directly related to our work.
One frequently used approach to AD with machine learning involves analyzing the reconstruction error of autoencoders trained on non-anomalous data. For example, Malhotra _et al._[5] employ an encoder-decoder architecture that learns to reconstruct non-anomalous time-series and flags samples having high reconstruction error as anomalies. An and Cho [19] leverage the variational autoencoder (VAE) to detect anomalies using a more theoretically-principled reconstruction probability instead of the reconstruction error of a generic autoencoder. Lin _et al._[20] combine elements seen in other works [5, 19], proposing a VAE-LSTM model for time-series AD that leverages a VAE to generate features for short time windows and an LSTM to capture longer-term correlations relevant to AD. (A minimal sketch of this reconstruction-error scheme appears at the end of this section.) Other machine learning techniques for AD have also been studied, such as recent works [21, 22] that utilize contrastive learning as a method of AD by detecting out-of-distribution samples. Much research has also investigated applying AD specifically to robotics and autonomous systems. Wyk _et al._[23] describe a method for AD of sensors in autonomous vehicles by combining a convolutional neural network (CNN) with an adaptive Kalman filter that is paired with a failure detector. He _et al._[24] present an approach that detects anomalies in autonomous vehicles by exploiting redundancy among heterogeneous sensors to identify faulty sensor readings. In robotics, Yoo _et al._[25] describe a multi-modal autoencoder for AD applied to object slippage, whereas Park _et al._[6] use an LSTM-based VAE to detect anomalies in robot-assisted feeding. In the agricultural field, prior work [7] introduced a supervised VAE-based approach operating on LiDAR and proprioceptive measurements to predict a variety of anomalies. However, these approaches focus on predicting anomalies only as or after they have already occurred and cannot be directly used for forecasting future anomalies. ### _Proactive Anomaly Detection in Robotics_ Several works have proposed proactive AD methods, which, in contrast to reactive AD methods, can enable prediction of anomalies before they occur in order to take corrective actions to avoid failure entirely or to reduce the damage caused by such a failure. LaND [9] and BADGR [10] utilize a CNN architecture operating on input images from a robot's camera. The features extracted from the CNN are used as an initial state for an LSTM which predicts future events using planned control actions as input. Most similar to our proposed method are multi-modal proactive anomaly detection methods, such as PAAD [4] and GrASPE [8]. PAAD fuses camera images, 2D LiDAR data, and a predicted future trajectory using an attention-based multi-modal fusion architecture. GrASPE predicts navigation success probabilities for future trajectories using a multi-modal fusion architecture (fusing 3D LiDAR, RGB camera, and odometry data). The fusion mechanism in GrASPE uses graph neural networks (GNNs), forming a graph with sensor features as nodes. Sensor reliability information in GrASPE is also provided via the graph adjacency matrix (with sensor reliability computed through hand-designed, non-learning-based algorithms). However, neither GrASPE nor PAAD explicitly captures prior sensor state, with PAAD using only the sensor data from the current time step to make predictions and GrASPE relying only on a history of velocity measurements to capture the prior state of the robot.
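As promised above, the following is a minimal sketch of the reconstruction-error family of methods discussed in Sec. II-C: an autoencoder trained only on non-anomalous data flags inputs whose reconstruction error exceeds a threshold. This is our own illustration rather than the implementation of any cited work; the window size, architecture, and threshold are assumptions.

```python
import torch
import torch.nn as nn

class WindowAutoencoder(nn.Module):
    """Minimal autoencoder for fixed-length sensor windows."""

    def __init__(self, in_dim=64, latent_dim=8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU(),
                                     nn.Linear(32, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(),
                                     nn.Linear(32, in_dim))

    def forward(self, x):
        return self.decoder(self.encoder(x))

def detect_anomalies(model, windows, threshold):
    """Flag windows whose mean squared reconstruction error exceeds a threshold.

    The model is assumed to have been trained (e.g., with an MSE loss) on
    non-anomalous windows only, so anomalous inputs reconstruct poorly.
    """
    model.eval()
    with torch.no_grad():
        recon = model(windows)
        errors = ((windows - recon) ** 2).mean(dim=1)
    return errors > threshold  # boolean anomaly flag per window

# Example usage with random data standing in for sensor windows.
model = WindowAutoencoder()
flags = detect_anomalies(model, torch.randn(16, 64), threshold=1.5)
```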
## III Method Our goal is to design a method to predict future failures of an autonomous field robot during operation that is robust even in cases of brief total sensor occlusion. The model we propose accepts multi-modal sensory inputs from two sensors: a 2D LiDAR unit and an RGB camera. The 2D LiDAR produces a vector of range measurements \(\mathbf{x}_{l}^{(t)}\in\mathbb{R}^{L}\), and the RGB camera produces images \(\mathbf{x}_{c}^{(t)}\in\mathbb{R}^{H\times W\times 3}\). The model predicts future probabilities of failure for the next \(T\) time steps based on knowledge of future controls generated by a predictive controller used by the robot. As a result, the model also requires inputs specifying such planned control actions. Following the approach of PAAD [4], we provide the planned control actions as a grayscale image \(\mathbf{x}_{p}^{(t)}\in\mathbb{R}^{H\times W\times 1}\), in which the planned path from the predictive controller is projected from the camera's point of view as a curve onto a blank image. To incorporate historical information that aids in prediction during periods of total sensor occlusion, our proposed model leverages a latent representation of state \(\mathbf{h}^{(t)}\in\mathbb{R}^{D}\) that is used as input and evolved in each prediction step as new sensory and control inputs are provided. At each prediction step, the network outputs \(T\) probabilities of future failure \(\hat{\mathbf{y}}^{(t:t+T)}\coloneqq(y^{(t)},y^{(t+1)},...,y^{(t+T-1)})\in[0,1]^{T}\). For each time step, the model also predicts the probability of occlusion for the LiDAR and camera inputs: \(y_{\text{lidar}}^{(t)}\in[0,1]\) and \(y_{\text{camera}}^{(t)}\in[0,1]\). Similar to existing works [4, 8, 9, 10], the proactive nature of our proposed model is beneficial because it allows prediction of future failures. In addition, like PAAD [4] and GrASPE [8], our model uses a variety of sensor modalities to provide improved prediction robustness. Our proposed model also explicitly models sensor occlusion. However, as compared with GrASPE, our mechanism for occlusion prediction is directly learned within the neural network model, while GrASPE uses classical (non-learning-based) algorithms to determine sensor reliability. Learning occlusion via a neural network grants greater flexibility by enabling the model to learn more nuanced representations of occlusion (such as those produced by intermediate layers of an occlusion prediction network) and does not require a hand-crafted algorithm for detecting occlusion. Unlike the prior models [4, 8, 9, 10], our proposed inclusion of a latent representation of robot state (which summarizes prior control and sensory inputs) allows our model to show increased resilience to false positives in cases of brief total sensor occlusion. For example, if the robot traverses a corn row with no obstructing ground-based obstacles and briefly experiences occlusion of both LiDAR and camera from leaves in the crop canopy, prior models may raise an anomaly, whereas our proposed model can utilize knowledge of the lack of obstacles captured by the latent state representation to avoid falsely reporting failures. We also incorporate the learning-based sensor occlusion estimation into the attention mechanism that fuses the sensory inputs, control inputs, and latent representation of state.
The inclusion of occlusion estimation in the attention mechanism enables our model to learn how to combine information about predicted sensor occlusion with the latent robot state to provide improved predictions of future failure during sensor occlusion. ### _Data_ We utilize the dataset collected in a prior work [4] to verify our model. This data was collected using the 4-wheeled, skid-steer TerraSentia mobile robot. The TerraSentia features a forward-facing RGB camera (OV2710) producing images with a resolution of \(240\times 320\), and a LiDAR (Hokuyo UST-10LX) with \(270^{\circ}\) range at an angular resolution of \(0.25^{\circ}\) that yields 1081 range measurements. The predictive path is generated using the robot's predictive controller, and is projected onto a front-facing plane using the camera's known intrinsic parameters. In addition to the failure labels provided in the dataset, we add labels specifying camera occlusion and LiDAR occlusion. LiDAR occlusion was labeled automatically: samples with a median range measurement of less than \(0.3\) m over the center \(215^{\circ}\) of the LiDAR sweep were labeled as occluded. Images were automatically labeled as occluded using thresholds on image sharpness and variance of pixel values. These image occlusion labels were then inspected and refined. Such refinement ensured correct occlusion labels even when conditions like high levels of glare from the sun led the automated labeling to predict the camera as occluded (even though the path ahead could still be seen). ### _Model Architecture_ The architecture for our model (shown in Fig. 2) consists of three feature extractors, a multi-head attention fusion module, a recurrent state feature, a fully-connected occlusion prediction head for each sensory input, and a fully-connected proactive anomaly detection prediction head. The three feature extractors are adopted from PAAD [4] as they have been shown to perform well in the agricultural field environment. The planned trajectory feature extractor accepts a \(240\times 320\) grayscale image as input and applies a region-of-interest (ROI) pooling layer followed by a convolutional neural network to produce a 64-dimensional feature vector \(\mathbf{f}_{\text{path}}^{(t)}\in\mathbb{R}^{64}\). The RGB image feature extractor is a convolutional neural network based on a ResNet-18 [26] backbone, with the convolutional layers pretrained on a visual navigation task [27]. This image feature extractor accepts a \(240\times 320\) RGB image and outputs a feature vector \(\mathbf{f}_{\text{camera}}^{(t)}\in\mathbb{R}^{64}\). Finally, LiDAR features are extracted with a supervised variational autoencoder (SVAE) as in prior works [4, 7]; this LiDAR feature extractor uses 1081-dimensional LiDAR input measurements to produce an output feature vector \(\mathbf{f}_{\text{lidar}}^{(t)}\in\mathbb{R}^{64}\), where the features are the concatenated means and log-variances produced by the VAE. The features from the LiDAR and camera feature extractors are provided as inputs to occlusion prediction head networks, which feature two layers (the first having 32 outputs with ReLU activation and the second having 1 output with sigmoid activation). These prediction heads can be viewed as functions \(g_{\text{lidar,occ}}:\mathbf{f}_{\text{lidar}}^{(t)}\mapsto y_{\text{lidar}}^{(t)}\) and \(g_{\text{camera,occ}}:\mathbf{f}_{\text{camera}}^{(t)}\mapsto y_{\text{camera}}^{(t)}\). The data from the feature extractors is fused using a multi-head attention module.
The multi-head attention mechanism can be viewed as computing attention for elements in a sequence, where the sequence elements are the state features, camera features, LiDAR features, and trajectory features. The multi-head attention module utilizes 8 attention heads. The keys and values for the attention module are formed by concatenating the state vector with the features computed by the feature extractors: \[K=V=[\mathbf{h}^{(t)},\mathbf{f}_{\text{path}}^{(t)},\mathbf{f}_{\text{camera }}^{(t)},\mathbf{f}_{\text{lidar}}^{(t)}] \tag{1}\] For the queries, the final three elements are the same as for the keys and values. The query for the first element (corresponding to the latent state representation) is formed by concatenating occlusion-biased features from outputs of the first fully connected layer of \(g_{\text{camera,occ}}\) and the first fully connected layer of \(g_{\text{lidar,occ}}\), which are denoted \(\mathbf{o}_{\text{camera}}^{(t)}\in\mathbb{R}^{32}\) and \(\mathbf{o}_{\text{lidar}}^{(t)}\in\mathbb{R}^{32}\), respectively. Letting \(\mathbf{f}_{\text{occ}}^{(t)}=[\mathbf{o}_{\text{camera}}^{(t)},\ \mathbf{o}_{\text{lidar}}^{(t)}]\), the queries are thus given by: \[Q=[\mathbf{f}_{\text{occ}}^{(t)},\mathbf{f}_{\text{path}}^{(t)},\mathbf{f}_{ \text{camera}}^{(t)},\mathbf{f}_{\text{lidar}}^{(t)}] \tag{2}\] The use of occlusion-biased features for the attention query vector corresponding to the latent state is informed by the assumption that the prior history of the robot is largely irrelevant when sensors are not occluded, since anomalies can be detected using the non-occluded sensor measurements. As a result, the latent state is primarily important when sensors are occluded. Thus, we query the latent state using sensory features that are biased towards features relevant in predicting occlusion to incorporate this assumption into our model. The recurrent latent state for the network, \(\mathbf{h}^{(t)}\), is initialized to a zero vector at the beginning of an inference sequence. The value of this latent state is evolved by setting it equal to the first attention output (the first sequence element in the attention mechanism corresponds to the latent state). This construction allows the latent state for the next time step to incorporate information from the sensory inputs, planned control actions, and current latent state. In addition, at each time step, a hardtanh activation (with minimum and maximum limits of -10 and 10, respectively) is applied to the new latent state in order to prevent uncontrolled growth in the magnitude of the latent state for longer sequences. The final prediction of the future anomaly scores is done by concatenating the features output from the attention module and feeding them through 2 fully connected layers. The first of the fully connected layers has a 128-dimensional output, upon which ReLU activation and dropout [28] are applied (with a dropout probability of 0.5). The second fully-connected layer outputs a \(T\)-dimensional vector with a sigmoid activation function applied; the outputs of this fully connected layer are the final anomaly prediction probabilities for the next \(T\) time steps. 
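The fusion and state-update logic described above can be condensed into the following sketch. This is our own illustration rather than the released implementation: the three feature extractors are replaced by placeholder 64-dimensional tensors and the variable names are ours, but the dimensions, 8 attention heads, occlusion-biased state query (Eqs. (1) and (2)), hardtanh limits of \(\pm 10\), and prediction head sizes follow the text.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RoarFusion(nn.Module):
    """Condensed sketch of ROAR's attention fusion and state update."""

    def __init__(self, d=64, horizon=10):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim=d, num_heads=8,
                                          batch_first=True)
        # Occlusion heads: 64 -> 32 (ReLU) -> 1 (sigmoid), one per sensor.
        self.occ_cam1, self.occ_cam2 = nn.Linear(d, 32), nn.Linear(32, 1)
        self.occ_lid1, self.occ_lid2 = nn.Linear(d, 32), nn.Linear(32, 1)
        # Failure head: concatenation of 4 attention outputs -> T probs.
        self.fc1, self.fc2 = nn.Linear(4 * d, 128), nn.Linear(128, horizon)
        self.drop = nn.Dropout(p=0.5)

    def forward(self, h, f_path, f_cam, f_lid):
        # Occlusion-biased features (outputs of the first occlusion layers).
        o_cam = F.relu(self.occ_cam1(f_cam))
        o_lid = F.relu(self.occ_lid1(f_lid))
        y_cam = torch.sigmoid(self.occ_cam2(o_cam))  # camera occlusion prob.
        y_lid = torch.sigmoid(self.occ_lid2(o_lid))  # LiDAR occlusion prob.
        # Keys/values include the latent state; the state element's query is
        # the concatenation of the occlusion-biased features.
        kv = torch.stack([h, f_path, f_cam, f_lid], dim=1)
        q = torch.stack([torch.cat([o_cam, o_lid], dim=-1),
                         f_path, f_cam, f_lid], dim=1)
        out, _ = self.attn(q, kv, kv)
        # Evolve the latent state from the first attention output.
        h_next = F.hardtanh(out[:, 0], min_val=-10.0, max_val=10.0)
        # Concatenate all attention outputs for the failure prediction head.
        fused = self.drop(F.relu(self.fc1(out.flatten(start_dim=1))))
        y = torch.sigmoid(self.fc2(fused))  # T future failure probabilities
        return y, h_next, y_cam, y_lid

# One prediction step with placeholder features (batch size 2).
model = RoarFusion()
h = torch.zeros(2, 64)  # latent state initialized to zero
y, h, y_cam, y_lid = model(h, torch.randn(2, 64),
                           torch.randn(2, 64), torch.randn(2, 64))
```

Running the module step by step with the returned `h` fed back in mirrors how the latent state is evolved across a sequence.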
### _Model Training_ The model is trained with a loss function that is composed of four components, corresponding to the SVAE [7] feature extractor loss (\(\mathcal{L}_{\text{SVAE}}\), which is composed of a KL divergence term and a reconstruction term), the anomaly classification loss (\(\mathcal{L}_{\text{anomaly}}\)), the camera occlusion classification loss (\(\mathcal{L}_{\text{camera,occ}}\)), and the LiDAR occlusion classification loss (\(\mathcal{L}_{\text{lidar,occ}}\)). The total loss is given by: \[\mathcal{L}=\mathcal{L}_{\text{SVAE}}+\alpha\mathcal{L}_{\text{anomaly}}+ \beta\mathcal{L}_{\text{camera,occ}}+\gamma\mathcal{L}_{\text{lidar,occ}} \tag{3}\] where \(\alpha\), \(\beta\), and \(\gamma\) are coefficients that specify the relative weighting of the individual loss terms. Based on prior work [4], we use a value of \(\alpha=6.21\). As we are primarily interested in the anomaly detection output, we set \(\beta=0.1\alpha\) and \(\gamma=0.1\alpha\) to prevent the training from focusing too heavily on occlusion outputs at the expense of the true output of interest. ## IV Experimental Results Our experiments involve a modified version of the dataset from PAAD, where we add labels indicating occlusion of the LiDAR and camera, as described in Section III. This dataset features 4.1 km of navigation with the TerraSentia, where the robot moves with a reference speed of 0.6 m/s and data is logged at 3 Hz. For the experiments, the network predicts failures for the next 10 time steps (i.e., \(T=10\)). We utilize the same training and test split as in PAAD, which features 29284 training samples (2262 of which involve anomalies) and 6869 test samples (of which 696 involve anomalies). Due to the small number of anomalies, we re-balance the training dataset by under-sampling non-anomalous samples and over-sampling anomalous samples. To train the sequential aspect of our model, we split the training dataset into contiguous sequences of length 8 to help with batching. The initial latent state is set to zero for the first prediction step, with the remaining 7 steps using the latent state from the prior prediction. At test time, the sequences are not split into these fixed-length sequences; instead, prediction at test time operates on entire temporally-coherent sequences, with the first prediction step using a latent state initialized to a zero vector. The model is trained using the Adam optimizer [29] with a learning rate of 0.0005 and a weight decay coefficient of 0.00015. ### _Baselines_ We compare the accuracy of ROAR against the following baselines on the test set: * _CNN-LSTM_[9, 10]: a model for intervention and future event prediction proposed in LaND and BADGR. This network features a convolutional feature extractor that generates features from the input RGB image, and then uses the image features as an initial hidden state in an action-conditioned LSTM that predicts the probability of future failure. * _NMFNet_[11]: an anomaly detection adaptation of a multi-modal fusion network that was devised for robot navigation in difficult environments. Like in prior work [4], we maintain the branch operating on 2D laser data and the branch operating on image data, and replace the 3D point cloud branch with a fully-connected network that processes future actions from the predictive controller. * _PAAD_[4]: the proactive anomaly detection network featured in our prior work. 
This network features an SVAE LiDAR feature extractor [7], a path feature extractor CNN, a ResNet-based camera image feature extractor [26], a multi-head attention sensor fusion module, and a fully-connected fusion layer that combines path image features with the fused observation features. * _Graph Fusion_: a graph fusion network inspired by GrASPE [8], where nodes correspond to features extracted from sensor and control inputs, along with an additional state node with a self-loop that is added to inject state information (rather than using prior velocity measurements as in GrASPE). Two GCN [30] layers (with edge weights computed as reliability measurements from the automatically-generated occlusion labels) and a GATv2 [31] layer are followed by a fully-connected failure prediction network. Fig. 2: Proposed network architecture for ROAR. Neural networks are shown as blue boxes, intermediate features as gray boxes, non-learned operations as pink boxes, the attention mechanism as an orange box, and outputs as green boxes. For clarity and conciseness, the SVAE [7] decoder is not shown. We benchmark against Graph Fusion rather than GrASPE as GrASPE uses data sources unavailable in our dataset (e.g., 3D LiDAR), and we are particularly interested in comparing the fusion mechanism of ROAR with the graph-based fusion seen in GrASPE. To ensure fair comparison, the CNN used for computing RGB camera features in each baseline model is the same pretrained ResNet-18 used by ROAR. In addition, we show results using an image-only version of our model (_IO-ROAR_) that removes the LiDAR feature extractor to highlight the ability for a variant of our network to provide results comparable to or exceeding multi-sensor alternatives even in cases where additional sensors may not be available. ### _Quantitative Results_ We evaluate the models using two quantitative metrics: * _F1-score_: the harmonic mean of precision and recall, given by \(F1=2PR/(P+R)\). This metric quantifies performance of the model in a threshold-dependent manner, where we select a threshold of 0.5 (i.e., we flag failures when the predicted probability of failure exceeds 0.5). The F1-score varies from 0 to 1, with higher values being better. * _PR-AUC_: a metric calculating the area under the precision-recall curve. This is a threshold-independent metric that quantifies anomaly prediction performance.1 This metric varies from 0 to 1, with higher values being better. Footnote 1: Due to the highly skewed nature of the dataset, PR-AUC is used instead of ROC-AUC [32]. (A minimal computation sketch for both metrics appears below.) The results are presented in Table I. To account for different initializations potentially yielding better results, the values shown in Table I are the averages for each model over 5 training runs with different random seeds. We also present the results of PR-AUC and F1-score for the best model of the 5 trainings, where the best model is selected as the model having the highest value for the threshold-independent PR-AUC metric. As seen in Table I, our method (ROAR) outperforms all baselines in terms of both F1-score and PR-AUC. The superior performance of ROAR compared to the baselines demonstrates how incorporating prior robot state and learned occlusion information leads to improved anomaly detection performance.
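For reference, both metrics above can be computed directly with scikit-learn; the sketch below uses illustrative arrays in place of real model outputs.

```python
import numpy as np
from sklearn.metrics import auc, f1_score, precision_recall_curve

# Ground-truth failure labels and predicted failure probabilities
# (illustrative values standing in for real model outputs).
y_true = np.array([0, 0, 1, 1, 0, 1])
y_prob = np.array([0.1, 0.4, 0.3, 0.8, 0.2, 0.9])

# Threshold-dependent F1-score at the 0.5 threshold used in the text.
f1 = f1_score(y_true, y_prob > 0.5)

# Threshold-independent area under the precision-recall curve.
precision, recall, _ = precision_recall_curve(y_true, y_prob)
pr_auc = auc(recall, precision)
```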
Furthermore, we see that even the image-only variant of our network outperforms one of the multi-modal baselines that leverages an additional sensor (NMFNet), and performs only slightly worse than Graph Fusion and PAAD (which both leverage an additional sensor modality). Table II shows the number of parameters and neural network inference time of Graph Fusion, PAAD, and ROAR (collected on a machine with an i9-9900K and an RTX 2070 Super). The results in Table II demonstrate that ROAR has comparable inference speed to PAAD, despite providing improved anomaly detection accuracy. ROAR also displays faster inference than Graph Fusion due to the additional overhead introduced by using graph neural networks. Graph Fusion also requires occlusion labels to be provided as inputs even at test time (which adds additional latency beyond the neural network inference time shown in Table II by requiring computation of occlusion label inputs), while ROAR does not need the occlusion labels to be provided during test time. ### _Qualitative Results for Total Sensor Occlusion_ In Fig. 3, we demonstrate qualitative predictions of our model in the case of total sensor occlusion. The prediction probabilities are shown as a blue curve in the graphs and an anomaly probability threshold of 0.5 is shown as a red line. The predictions shown in Fig. 3 demonstrate the robustness of our model to producing false positives in cases of brief sensor occlusion in an otherwise obstacle-free environment. In Fig. 3, we also show predictions using PAAD and reset-state ROAR (a variant of ROAR in which we set the state vector to zero to highlight the effect of removing the state information). While both PAAD and reset-state ROAR produce false positives in this case, ROAR does not produce a false positive in this example occlusion scenario. Furthermore, in Fig. 4, we demonstrate the effect of prolonged synthetic total sensor occlusion for both PAAD and ROAR. In Fig. 4, the occlusion of all sensors causes PAAD to predict failures in the near future (even though the path was clear), whereas ROAR is robust to the occlusion of all sensors. However, as the period of occlusion lengthens, ROAR becomes increasingly likely to predict a failure. This demonstrates how our model captures the intuitive insight that brief occlusions in otherwise normal scenarios are not necessarily failures, but as the duration of occlusion grows, the probability of failure increases. ### _Ablation Study_ To study how different design considerations affect our model, we conducted an ablation study. We specifically analyze four variants of our model: * _No State_: a version of our model where the latent state is removed (always set to zero), preventing the model from capturing information about the history of sensor and control inputs. * _No Occlusion_: a version of our model where the occlusion modeling is removed, and the query vector for the attention module equals the key and value vectors. * _Fixed Occlusion_: a version of our model where we provide a vector of repeated occlusion labels produced by the automated labeling algorithm as the state query vector instead of using the occlusion-biased features (i.e., this variant includes occlusion predictions but the predictions are not learned using a neural network). * _ROAR_: the complete ROAR model. The results from the ablation study are shown in Table III. 
These results show averages over 5 trainings on different random seeds to account for different initializations, as well as the metrics for the best model (the model of the 5 training runs that displays the highest value of the threshold-independent PR-AUC metric). The ablation study shows that, compared to other variants, ROAR on average displays higher performance in terms of the threshold-independent PR-AUC, and the best ROAR model outperforms the other models on both PR-AUC and F1-score. Such results demonstrate the importance of both occlusion modeling and the use of state. These results also show that learning occlusion with a neural network and using occlusion-biased features for querying state outperforms the approach of using non-learning-based algorithms to classify occlusions and providing the resulting labels as neural network inputs. Furthermore, while the final ROAR model displays slightly lower average F1-score than the no occlusion and fixed occlusion models, the higher average PR-AUC is more advantageous due to its threshold-independent nature. Specifically, PR-AUC provides a general, threshold-independent picture of anomaly detection performance, whereas the F1-score could be improved for a fixed model by tuning the detection threshold. Fig. 4: Predictions using PAAD (left) and ROAR (right) on a sequence featuring synthetic total sensor occlusion for the final three frames. ROAR shows greater robustness to simultaneous LiDAR and camera occlusion when compared to PAAD. Fig. 3: Image (with predicted path drawn as a blue curve) and LiDAR readings for three sequential frames, along with predictions for the last frame using PAAD, reset-state ROAR, and ROAR. ## V Conclusion We have presented a novel occlusion-aware recurrent neural network architecture for proactive anomaly detection in field environments that is particularly well-suited for cases when brief periods in which all sensors are occluded are possible. Our network fuses sensory input data, a planned trajectory, and a latent representation of state to predict probabilities of future failure over a given time horizon. We further enhanced our network by explicitly learning when sensors are occluded, and using this learned information to moderate the use of our latent representation of robot state. Our experimental results validate our approach by demonstrating superior quantitative performance over prior methods, while also qualitatively showing robustness to false positives during brief periods when all sensors are occluded. Although our method outperforms the baselines, it has the limitation of requiring explicit labels of failures due to the use of supervised learning. One possible direction for future work could be to adopt a semi-supervised or unsupervised approach, such as one based on reconstruction errors.
2309.07056
Deep Quantum Graph Dreaming: Deciphering Neural Network Insights into Quantum Experiments
Despite their promise to facilitate new scientific discoveries, the opaqueness of neural networks presents a challenge in interpreting the logic behind their findings. Here, we use an eXplainable-AI (XAI) technique called $inception$ or $deep$ $dreaming$, which was invented in machine learning for computer vision. We use this technique to explore what neural networks learn about quantum optics experiments. Our story begins by training deep neural networks on the properties of quantum systems. Once trained, we "invert" the neural network -- effectively asking how it imagines a quantum system with a specific property, and how it would continuously modify the quantum system to change a property. We find that the network can shift the initial distribution of properties of the quantum system, and we can conceptualize the learned strategies of the neural network. Interestingly, we find that, in the first layers, the neural network identifies simple properties, while in the deeper ones, it can identify complex quantum structures and even quantum entanglement. This is reminiscent of long-understood properties in computer vision, which we now identify in a complex natural science task. Our approach could be useful for developing new, more interpretable AI-based scientific discovery techniques in quantum physics.
Tareq Jaouni, Sören Arlt, Carlos Ruiz-Gonzalez, Ebrahim Karimi, Xuemei Gu, Mario Krenn
2023-09-13T16:13:54Z
http://arxiv.org/abs/2309.07056v2
# Deep Quantum Graph Dreaming: Deciphering Neural Network Insights into Quantum Experiments ###### Abstract Despite their promise to facilitate new scientific discoveries, the opaqueness of neural networks presents a challenge in interpreting the logic behind their findings. Here, we use an eXplainable-AI (XAI) technique called _inception_ or _deep dreaming_, which was invented in machine learning for computer vision. We use this technique to explore what neural networks learn about quantum optics experiments. Our story begins by training deep neural networks on the properties of quantum systems. Once trained, we "invert" the neural network - effectively asking how it imagines a quantum system with a specific property, and how it would continuously modify the quantum system to change a property. We find that the network can shift the initial distribution of properties of the quantum system, and we can conceptualize the learned strategies of the neural network. Interestingly, we find that, in the first layers, the neural network identifies simple properties, while in the deeper ones, it can identify complex quantum structures and even quantum entanglement. This is reminiscent of long-understood properties in computer vision, which we now identify in a complex natural science task. Our approach could be useful for developing new, more interpretable AI-based scientific discovery techniques in quantum physics. ## I Introduction Neural networks have been demonstrably promising towards solving various tasks in quantum science [1, 2, 3]. One notorious frustration concerning neural networks, however, lies in their inscrutability: modern architectures often contain millions of trainable parameters, and it is not readily apparent what role they each play in the network's prediction. We may, therefore, ask which concepts learned from the data the network utilizes to formulate its prediction, an important prerequisite in achieving scientific understanding [4]. This has since motivated the development of eXplainable-AI (XAI), which interprets how the network comes up with its solutions [5, 6, 7, 8]. These developments have spurred physicists to address the problem of interpretability, resulting in the rediscovery of long-standing physics concepts [9, 10], the identification of phase transitions in quantum many-body physics [11, 12, 13, 14], the compression of many-body quantum systems [15], and the study of the relationship between quantum systems and their entanglement properties [16, 17]. Here, we apply neural networks in the design of quantum optical experiments. The growing complexity of quantum information tasks has motivated the design of computational methods capable of navigating the vast combinatorial space of possible experimental designs that involve unintuitive phenomena [18]. To this end, scientists have developed automated design and machine learning routines [19], including some that leverage genetic algorithms [20, 21], active learning approaches [22] and the optimization of parameterized quantum circuits [23, 24, 25]. One may ask whether we can learn new physics from the discoveries made by such algorithms. For instance, the computer algorithm Melvin[19], which topologically searches for arrangements of optical elements, has led to the discovery of new concepts such as the generation of entanglement by path identity [26] and the creation of multipartite quantum gates [27].
However, the interpretability of these solutions is obfuscated by the stochasticity of the processes that create them as well as the unintuitiveness of their representations. The recent invention of Theseus[24] and its successor PyTheus[25] addresses this through the topological optimization of a highly interpretable, graph-based representation of quantum optical experiments. This has already enabled new scientific discoveries, such as a new form of multi-photon interference [28], and novel experimental schemes for high-dimensional quantum measurement [29]. To this point, the extraction and generalization of new concepts have largely been confined to analyzing the optimal solutions discovered by these algorithms. However, we may ask whether we can learn more physics by probing the rationale behind the computer's discoveries. Little attention has hitherto been given to the application of XAI techniques to neural networks trained on quantum experiments, which may allow us to conceptualize what our algorithm has learned. In so doing, we may guide the creation of AI-based design techniques for quantum experiments that are more reliable and interpretable. In this work, we present an interpretability tool based on the inceptionism technique in computer vision, better known as Deep Dreaming [30]. This technique has been applied to iteratively guide the automated design of quantum circuits [31] and molecules [32] towards optimizing a target property; it has also been applied in [33] to verify the reliability of a network trained to classify the entanglement spectra of many-body quantum systems. More importantly, it also lets us visualize what physical insights the neural network has gained from the training data. This lets us better discern the strategies applied throughout automated design processes, as well as verify physical concepts rediscovered by the network, such as the thermodynamic arrow of time [34]. Here, we adapt this approach to quantum graphs. We train a deep neural network to predict properties of quantum systems, then invert the training to optimize for a target property. We observe that the inverse training dramatically shifts the initial distribution of properties. We also show that, by visualizing the evolution of quantum graphs during inverse training, we are able to conceptualize the learned strategies applied by the neural network. We probe the network's rationale further by inverse training on the intermediate layers of the network. We find that the network learns to recognize simple features in the first layers and then builds up more complicated structures in later layers. Altogether, we synthesize a complete picture of what the trained neural network sees. We, therefore, posit that our tool may aid the design of more interpretable and reliable computer-inspired schemes to design quantum optics experiments. ## II Methodology ### Graphs and Quantum Experiments As developed in [24; 35; 36; 37; 25], we may represent quantum optical experiments in terms of colored, weighted, undirected multigraphs. This representation can be extended to integrated photonics [38; 39; 40; 41] and entanglement by path identity [42; 43; 26]. The vertices of the graph represent photon paths to detectors, whereas edges between any two vertices, \(a\) and \(b\), indicate correlation between two photon paths. We may assign an amplitude to them by introducing edge weights \(\omega_{a,b}\), and we may assign the photons' internal mode number through different edge colorings.
We also permit multiple edges between the vertices to indicate the superposition of states. Here, we consider graph representations of four-qubit, two-dimensional experiments dealing with state creation. Specifically, we consider graphs with vertices \(V=\{0,1,2,3\}\) and mode numbers 0 and 1. Each graph, therefore, consists of 24 possible edges with real-valued edge weights between \(-1\) and 1. We may determine the particular quantum state \(|\Phi(\omega)\rangle\), where \(\Phi(\omega)\) is the graph's weight function defined according to Eq. (2) in [25]. We condition the creation of each term in the state on subsets of edges which contain every vertex in the graph exactly once, otherwise known as the perfect matchings (PMs) of the graph. For each term, we can define three possible PMs, each distinguished by their 'directionality', which we show in Figure 1. We obtain the amplitude of the term through the sum of weights of the three perfect matchings, which are themselves determined by the product of edge weights. Applying this procedure for every possible ket in the joint Hilbert space \(\mathcal{H}=\mathbb{H}_{2}\otimes\mathbb{H}_{2}\otimes\mathbb{H}_{2}\otimes \mathbb{H}_{2}\), we may obtain the state \(|\Phi(\omega)\rangle\). Figure 1: **Brief overview of quantum graphs**. In this work, we consider _complete_ graph representations of two-dimensional, quadripartite quantum graphs. We let \(\omega_{a,b}\) denote the weight of the edge connecting vertex \(a\) to vertex \(b\). The weight's magnitude is indicated by the transparency of the edge and the presence of a diamond signifies a negative edge weight. The creation of every possible state is conditioned on three possible types of perfect matchings, which are distinguished in terms of their direction. ### Training Figure 2 illustrates the basic workflow behind the dreaming process. A feed-forward neural network is first trained on the edge weights \(\omega\) of a complete, quadripartite, two-dimensional quantum graph in order to make predictions on certain properties of the corresponding quantum state \(|\Phi(\omega)\rangle\). We randomly initialize \(\omega\) over a uniform distribution \([-1,1]\). The neural network's own weights and biases are optimized for this task via mini-batch gradient descent and the mean squared error (MSE) loss function. We consider the state fidelity \(|\langle\Phi(\omega)|\psi\rangle|^{2}\) with respect to two well-known classes of multipartite entangled states within the joint Hilbert space \(\mathcal{H}\). First, the Greenberger-Horne-Zeilinger (GHZ) state [44], \(|\psi\rangle=|\text{GHZ}\rangle\), where \[|\text{GHZ}\rangle=\frac{1}{\sqrt{2}}(|0000\rangle+|1111\rangle), \tag{1}\] and, second, the W-state [45], \(|\psi\rangle=|W\rangle\), where \[|\text{W}\rangle=\frac{1}{2}(|1000\rangle+|0100\rangle+|0010\rangle+|0001\rangle). \tag{2}\] In addition, we also consider a measure of quantum state entanglement resulting from a graph, the concurrence [46]. Let \(A_{1},A_{2},A_{3},A_{4}\) each denote the subsystems of the joint quadripartite Hilbert space on which \(|\Phi(\omega)\rangle\) is defined.
Then assuming the pure state \(\rho=\left|\Phi(\omega)\right\rangle\left\langle\Phi(\omega)\right|\), we may write \[C(\rho)=\sum_{\mathcal{M}}C_{\mathcal{M}}(\rho)=\sum_{\mathcal{M}}\sqrt{2\left(1-\text{tr}(\rho_{\mathcal{M}}^{2})\right)} \tag{3}\] where \(\mathcal{M}\) refers to a bipartition of the subsystem and \(\rho_{\mathcal{M}}\) is the reduced density matrix obtained by tracing out \(\mathcal{M}\). In this work, we train our networks to make predictions on \(\text{tr}(\rho_{\mathcal{M}}^{2})\). Furthermore, for all cases considered, the network is trained on examples with a property value below a threshold of 0.5 to ensure that the network is not memorizing the best solutions in each case. Once convergence in the training has been achieved, we then execute the deep dreaming protocol to extract insights on what the neural network has learned. Given an arbitrary input graph, we select a neuron in the trained neural network. Then, we maximize the neuron's activation by updating the input graph via gradient ascent. In this stage, the weights and biases of the neural network are frozen, and we instead optimize the edge weights of the input graph. At the end of the process, the graph mutates into a configuration which most excites the neuron. However, this may not entirely represent all that the neuron over-interprets from the input graph, as it has been shown in [47] that individual neurons can be trained to recognize various possible features of the input. Therefore, to uncover all that the neuron sees, we repeat this procedure over multiple different initializations. (A minimal sketch of this gradient-ascent loop appears below.) ## III Results ### Dreaming on the Output Layer Towards attaining a general idea of what the neural network has learned about select properties of the quantum state \(\left|\Phi(\omega)\right\rangle\), we first apply the deep dreaming approach on the output layer. Figure 3(a) illustrates the mutation of an input graph by applying the deep dreaming approach on a [400\({}^{3}\),10] (three hidden layers of 400 neurons, one hidden layer of 10 neurons) neural network, which has been trained to predict either the GHZ-state or the W-state fidelity. We also apply this approach on a [800\({}^{7}\)] neural network architecture, which has been trained to predict the mean value of \(\text{tr}(\rho_{\mathcal{M}}^{2})\). While dreaming, we task our network to find configurations which maximize the property value. It should be stressed that, in particular, the optimal configuration that maximizes \(\overline{\text{tr}(\rho_{\mathcal{M}}^{2})}\) _minimizes_ the concurrence; we, therefore, anticipate the dreamed graph to correspond to a maximally separable state. We obtain \(|\Phi(\omega)\rangle\) from the reconstructed, mutated graph and recompute its true property value in each step. In all cases, we find that the graph evolves steadily towards the maximum property value. We repeat this procedure for 1000 different quantum graphs and plot the distribution of each graph's initial versus dreamed fidelities in Figure 3(b). In all three cases, we observe that the network consistently finds distinct examples with a property value outside the initial distribution's upper bounds. This demonstrates our approach's potential to discover novel quantum graphs which optimize a specific quantum state property. The intermediate steps of the dreaming process allow us to discern what strategies the neural networks are applying to a given optimization task.
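As promised above, the dreaming loop reduces to gradient ascent on the input. The following minimal sketch is our own simplification: the predictor below is an untrained stand-in for a trained fidelity network, and the step count and learning rate are illustrative assumptions. The network's weights are frozen and only the 24 edge weights are optimized.

```python
import torch
import torch.nn as nn

# Stand-in for a trained property predictor: 24 edge weights -> property.
model = nn.Sequential(nn.Linear(24, 400), nn.ReLU(),
                      nn.Linear(400, 400), nn.ReLU(),
                      nn.Linear(400, 1))

# Freeze the network; only the input graph is optimized.
for p in model.parameters():
    p.requires_grad_(False)

# Random initial graph with edge weights drawn uniformly from [-1, 1].
w = (2 * torch.rand(24) - 1).requires_grad_(True)
opt = torch.optim.Adam([w], lr=0.01)

for step in range(500):
    opt.zero_grad()
    loss = -model(w).squeeze()  # negate: gradient *ascent* on the property
    loss.backward()
    opt.step()
    with torch.no_grad():
        w.clamp_(-1.0, 1.0)     # keep edge weights in the valid range

dreamed_graph = w.detach()
```

Dreaming on a hidden neuron rather than the output follows the same loop, with the objective replaced by that neuron's activation (read out, e.g., via a forward hook).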
Figure 2: **Quantum Graph Deep Dreaming.** (a) The weights and biases of a feed-forward neural network are continually updated during training to predict a property, such as fidelity, of a given input random quantum experiment represented by a graph. (b) In the deep dreaming process, the weights and biases of the network are frozen. The weights of an initial input graph are updated iteratively to maximize the output of the feed-forward network, which gives the network's prediction on the aforementioned property. In Figure 4, we summarize the evolution of different initial graphs during inverse training for different targets. In Figure 4(a), we observe that the neural network tries to activate the \(\left|0000\right\rangle\) and \(\left|1111\right\rangle\) states either by creating perfect matchings (PMs) of these terms in unused directions (the input graph previously had no PM in that direction) or by completing them with the assistance of an existing PM in some direction, as is seen in particular with the \(\left|\Phi(\omega)\right\rangle=\left|0011\right\rangle+\left|0101\right\rangle\) initialization. We note that the dreaming process creates these PMs such that their weights add up to 1. In circumstances where the initial graph starts with unwanted terms, or when the network unavoidably creates these terms while dreaming, the network attempts to eliminate them either by directly lowering the edge weights' magnitudes or by introducing negative-weight PMs in different directions. We see this trend continue when the network is tasked with maximizing the W-state fidelity, as shown in Figure 4(b), albeit instead favouring the activation of the \(\left|1000\right\rangle,\left|0100\right\rangle,\left|0010\right\rangle,\) and \(\left|0001\right\rangle\) states. In Figure 4(c), the network attempts to maximize the separability of the initial, maximally entangled state by first eliminating one term from the initial state via edge-weight minimization or through negative PMs, then creating PMs of additional terms which are separable with respect to the intermediate graph state across two or more bipartitions. Through our deep dreaming approach, we have shown that the network learns about creating states through the graph representation in order to consistently achieve optimal values for select properties of the quantum state. We remark that, for each state property, the network was able to ascertain the configurations which maximize them while only seeing configurations having property values below 0.50. This strongly suggests that the network is achieving its tasks from physical insights, rather than by memorizing the best examples. ### Interpretability of Neural Network Structure We apply the deep dreaming approach on the neurons of the network's hidden layers to gain insight into the neural network's internal model, which generalizes well beyond the training data. We summarize the insights that we extract through our routine in Figure 5. To showcase the universality of our approach, we consider several different neural network architectures, the \([400^{4}]\), \([49^{10}]\) and \([36^{26}]\) networks, that have each been trained to predict the GHZ-state fidelity. For each network, we dream on the \(i^{th}\) neuron in the \(j^{th}\) hidden layer with 20 input graphs to best capture all of the possible structures exciting the neuron. We take particular interest in how the complexity of
We take particular interest in how the complexity of Figure 3: **Dreaming results for on the output layer of different neural network architectures.** (a) Evolution of an input graph’s fidelity with respect to (i) the GHZ state and (ii) the W state when dreaming on the \([400^{3},10]\) neural network; we also observe (iii) the evolution of an input graph’s concurrence when dreaming on the \([800^{7}]\) network. For each case, we show the intermediate steps of the input graphs’ evolution to its dreamed counterpart and only show edges whose weights are above a threshold of 0.4. These intermediate steps reveal that, in inverse-training, edges of perfect matchings which do not positively contribute to the target property are mitigated. (b) Distribution of initial vs. dreamed fidelities with respect to (i) the GHZ state and (ii) the W-state, as well as (iii) the mean value of \(\mathrm{tr}(\rho_{\mathcal{M}}^{2})\). We observe that most dreamed examples exceed the upper bound of the original dataset, attesting to our tool’s ability to find quantum graphs that are novel to the original dataset. the dreamed graphs evolves with the network depth. We obtain the greatest amount of information about our quantum graphs by considering all of the different ways, as seen through the graphs' PMs, that a ket is realized. We, therefore, attribute to each dreamed graph a \(3\times 16\) array, \(p_{i,j}\), consisting of the probabilities of all possible PMs; through this, we gain insight into the state created by the graph, as well as all PM directions being used to that purpose. As we go deeper into the neural network, we observe that the dreamed graphs activate a greater number of PM directions and kets, which reflects the increasing complexity of structures the neural network has learned to recognize. We also verify the multifaceted nature of the neurons: different input graphs are observed to result in dreamed graphs that recreate different input states. As we see in the third inset of Figure 5 (a), the neuron may over-interpret parts of the graph that best creates the \(\ket{0000}\) term, or it may either over-interpret different possible PM directions for \(\ket{0000}\), or parts of the graph which instead realize the \(\ket{1111}\) term. We may quantify the complexity of structures recognized throughout the network with the information entropy \(H_{i,j}\). We take the mean value of \(p_{i,j}\) across all of the dreamed graphs, then use it to compute \(H_{i,j}\) through the procedure outlined in Appendix V.3. Repeating this procedure across all hidden layer neurons, we may then determine the average entropy observed across the \(j^{th}\) layer, which gives us a general metric of the complexity of structures being recognized. We plot the trend of \(\overline{H_{i,j}}\) observed across all three neural network architectures in Figure 5(b). Intuitively, we expect that a deep neural network first learns to recognize simple structures, then more abstract features with network depth. Indeed, we observe consistently that, from an initial peak, the information entropy drops to its lowest values at the earlier layers, before gradually increasing near the end of the neural network. This certifies the universal assertion that the network identifies simple features of the input graph, such as edges that form one or two PMs to states, before forming more complicated graphical structures in the deeper layers that features a greater set of PMs. 
Figure 4: **Extracted strategies from the evolution of certain states when dreaming on the output layer of the neural network.** We discern the strategies employed by the inverse training routine when applied to a network tasked to optimize (a) the GHZ-State Fidelity, (b) the W-State Fidelity, and (c) the mean value of \(\text{tr}(\rho_{M}^{2})\) by considering several initialisations for each case. For each graph, we only show edges with weights greater than 0.3. We find that the network attempts to construct perfect matchings (PMs) of terms which positively contribute to the property value and whose weights add up to 1. Conversely, we find that the network eliminates unwanted terms by either directly reducing the edge weights of the PM corresponding to that term, or by introducing negative, disjoint perfect matchings of that term. For (c), we observe that the network ‘selects’ a term in the initial state to be minimized, then creates terms that are separable across two or more bipartitions with respect to the remaining states. ## IV Outlook In this article, we showcase preliminary results for adapting the deep dreaming approach to quantum optical graphs for deep neural networks on different target quantities. We apply our routine to ascertain the strategies employed by the neural network on its predictive task by dreaming on the output layer and throughout the network. Crucially, we demonstrate that the trained neural network builds a non-trivial model of the quantum state properties produced by a quantum experiment, and we find that the deep dreaming approach does remarkably well in finding novel examples outside of the initial dataset. Lastly, in applying our approach to the hidden layers of the neural network, we find that the network gradually learns to recognize increasingly complicated structures, and that the individual neurons are multifaceted in the possible structures that excites them. In future work, further transparency of the learned rep Figure 5: **Information entropy throughout each different neural network architecture.** (a) Workflow behind computing the mean information entropy for each layer of the trained neural network. We dream with multiple input graphs on each neuron in the neural network. To account for the diversity of structures that a neuron is interested in seeing, We compute the mean probability amplitudes for every possible perfect matching corresponding to each ket. We thereby observe the overall graph, which the neuron sees best. We may then compute the information entropy of each neuron, \(H_{i,j}(p)\), and the mean information entropy of the layer, \(\overline{H_{i,j}(p)}\). This gives us a measure of the complexity of structures seen by the neural network. As conveyed in the different \(p_{i,j}\) for each dreamed graph, we note the variety of structures which the network over-interprets; this illustrates the multifacetedness of the neurons. (b) Mean information entropy plots for the (i) [\(400^{4}\)] (ii) [\(49^{10}\)] and (iii) [\(36^{26}\)] neural network architectures. A general trend that we may discern in all three cases is that the mean information entropy converges to a minimum in the lower layers and then gradually increases as we go deeper. We may attribute this to the intuition that the network initially learns to recognize simpler structures, then learns increasingly complicated ones as we go deeper within the network. 
representations can possibly be attained by applying regularization techniques such as the \(\alpha\)-norm [48] or jitter [30], or by dreaming on the mean of a set of input graphs [47] to converge towards more interpretable solutions. Furthermore, we may also find simpler networks on which to dream by applying pruning strategies based on the Lottery Ticket Hypothesis [49]. Above all, we may also apply these tools to larger graphs with more dimensions and explore different applications beyond state creation, such as quantum measurements and quantum communication. Thanks to their relative simplicity, the quadripartite graphs have been a good testing case for our inception approach, and the knowledge we extract from them can be used in other systems. Larger graphs and new targets will provide a novel and deeper understanding of quantum optics experiments as well as inspire new research. We foresee that our approach can be used to extend frameworks for automated setup design [4, 19, 25] as well as generative molecular algorithms [32, 50] which adapt a surrogate neural network model. Through our approach, we can better decipher what these frameworks have learned about the underlying science, and understand the intermediate strategies taken towards a target configuration. ## Code availability The data featured in this work, as well as the code that executes the deep dreaming protocol, can be found in this GitHub repository. ## Acknowledgements T.J. and E.K. acknowledge the support of the Canada Research Chairs (CRC) and Max Planck-University of Ottawa Centre for Extreme and Quantum Photonics.
2301.03412
Neighbor Auto-Grouping Graph Neural Networks for Handover Parameter Configuration in Cellular Network
The mobile communication enabled by cellular networks is one of the main foundations of our modern society. Optimizing the performance of cellular networks and providing massive connectivity with improved coverage and user experience has a considerable social and economic impact on our daily life. This performance relies heavily on the configuration of the network parameters. However, with the massive increase in both the size and complexity of cellular networks, network management, especially parameter configuration, is becoming complicated. The current practice, which relies largely on experts' prior knowledge, is not adequate: it requires many domain experts and incurs high maintenance costs. In this work, we propose a learning-based framework for handover parameter configuration. The key challenge, in this case, is to tackle the complicated dependencies between neighboring cells and jointly optimize the whole network. Our framework addresses this challenge in two ways. First, we introduce a novel approach to imitate how the network responds to different network states and parameter values, called auto-grouping graph convolutional network (AG-GCN). During the parameter configuration stage, instead of solving the global optimization problem, we design a local multi-objective optimization strategy where each cell considers several local performance metrics to balance its own performance and that of its neighbors. We evaluate our proposed algorithm via a simulator constructed using real network data. We demonstrate that the handover parameters found by our model achieve better average network throughput compared to those recommended by experts as well as alternative baselines, which can bring better network quality and stability. It has the potential to massively reduce costs arising from human expert intervention and maintenance.
Mehrtash Mehrabi, Walid Masoudimansour, Yingxue Zhang, Jie Chuai, Zhitang Chen, Mark Coates, Jianye Hao, Yanhui Geng
2022-12-29T18:51:36Z
http://arxiv.org/abs/2301.03412v2
Neighbor Auto-Grouping Graph Neural Networks for Handover Parameter Configuration in Cellular Network ###### Abstract The mobile communication enabled by cellular networks is one of the main foundations of our modern society. Optimizing the performance of cellular networks and providing massive connectivity with improved coverage and user experience has a considerable social and economic impact on our daily life. This performance relies heavily on the configuration of the network parameters. However, with the massive increase in both the size and complexity of cellular networks, network management, especially parameter configuration, is becoming complicated. The current practice, which relies largely on experts' prior knowledge, is not adequate: it requires many domain experts and incurs high maintenance costs. In this work, we propose a learning-based framework for handover parameter configuration. The key challenge, in this case, is to tackle the complicated dependencies between neighboring cells and jointly optimize the whole network. Our framework addresses this challenge in two ways. First, we introduce a novel approach to imitate how the network responds to different network states and parameter values, called auto-grouping graph convolutional network (AG-GCN). During the parameter configuration stage, instead of solving the global optimization problem, we design a local multi-objective optimization strategy where each cell considers several local performance metrics to balance its own performance and that of its neighbors. We evaluate our proposed algorithm via a simulator constructed using real network data. We demonstrate that the handover parameters found by our model achieve better average network throughput compared to those recommended by experts as well as alternative baselines, which can bring better network quality and stability. It has the potential to massively reduce costs arising from human expert intervention and maintenance. \({}^{1}\) Huawei Noah's Ark Lab, \({}^{2}\) University of Alberta, \({}^{3}\) McGill University, \({}^{4}\) Tianjin University {mehrtash.mehrabi, walid.masoudimansour, yingxue.zhang, chuaijie, chenzhitang2, haojianye, geng.yanhui}@huawei.com, mark.coates@mcgill.ca ## 1 Introduction The rapid growth in the number of devices that need real-time, high-quality connection to the internet (e.g., internet of things (IoT) devices, health monitoring equipment, devices used for online education and remote working, autonomous vehicles, etc.) makes it essential to improve cellular network performance. Unsatisfactory user experience and network interruption have negative impacts on our modern society. Thus, improving the cellular network has both economic and social impact towards achieving the United Nations Sustainable Development Goals (UNSDGs) [22, 13]. Moreover, it can contribute substantially to enhancing infrastructure, promoting sustainable industrialization, fostering innovation and responsible consumption, enabling sustainable cities and communities, and promoting decent work and economic growth [1, 1, 16]. The performance of a cellular network relies heavily on its parameter configurations, and this is becoming more crucial as the number of mobile users continues to grow rapidly [2]. These parameters govern access control, handover, and resource management [1, 1]. One of the factors that has a significant impact on the quality of service (QoS) in such networks is the handover parameter [10].
We provide more details concerning this parameter and its effects in the supplementary materials, Sec. A.1. Optimizing handover parameters is one of the most common approaches to guarantee minimum service delay or interruption and improve coverage and throughput [15]. However, with the massive increase in both the size and complexity of cellular networks, parameter configuration is becoming complicated. The current practice, which relies largely on experts' prior knowledge, is inadequate, requiring many domain experts and leading to high maintenance costs.

One of the key challenges in the network parameter optimization problem is the complex spatial and temporal dependencies in the cellular network. Any employed algorithm should be capable of tracking the non-stationary changes in the environment, i.e., the fluctuations in user numbers, network load, etc. [1]. Also, due to the diverse characteristics of cells across the network, the best parameter configuration for one cell may not be optimal for another, and the parameter configuration of one cell affects not only its own performance but also its neighbors' [1]. Therefore, there are strong interactions between neighboring cells, which become extremely complicated in heterogeneous networks. Consequently, developing an algorithm that can adapt to the temporal dynamics and cell diversity in real networks is essential for parameter configuration [11].

Current cellular network deployments are highly dependent on human-designed rules or analytical models based on domain knowledge and assumptions about the network dynamics, which is far from optimal. They only consider a limited number of network states (e.g., user distribution, channel quality, etc.) and parameters, and cannot capture the complex relationships between network states, parameter configurations, and network performance. Also, the assumptions about the network dynamics, based on which the rules/models are developed, are often simplified without considering the non-stationary changes in real environments, which degrades their performance. Finally, these rules/models may not be able to deal with the cell diversity in the network, which makes them sub-optimal [12].

Recently, data-driven approaches based on machine learning (ML) have been extensively used for parameter configuration and network management in cellular networks [20, 14, 15, 16]. It has been shown that the multi-layer perceptron (MLP) can be considered a universal function approximator [1]. Thus, in environments such as cellular networks, where there is a lack of an accurate analytical model and the network is highly dynamic, neural-network-based methods can be used to achieve high-accuracy prediction. ML models can utilize high-dimensional information and approximate complex functions to fully describe the relationship between network states, parameter configurations, and network performance metrics, which cannot be achieved by human experience.

In order to address the above challenges, we investigate two important questions: 1) _Modeling_: how to model the spatial and temporal dependencies of the cellular network? 2) _Decision-making_: how to choose the parameter values to jointly optimize the overall performance of interconnected and interacting cells? We first propose an ML-based model to precisely imitate the cellular network environment and then use it to configure the parameters.
We demonstrate that the handover parameters recommended by our model can achieve better average network throughput than the existing methods, and that our approach can massively reduce costs from human expert intervention and maintenance. It opens up the potential for high-quality internet access in geographical areas that are currently under-served by the cellular network. Besides, by improving real-time connectivity, this framework can bring new possibilities for important applications in under-developed regions, including online education and health monitoring [1]. Our main contributions are summarized as follows.

* We propose a novel method to model the impact from the neighbors of each cell in a distinguishable way to capture the complex spatial dependencies of the network.
* We consider the changing dynamics of the network in our reward model to better reflect the temporal dependencies.
* We introduce a multi-objective optimization strategy based on the model to consider several performance metrics and improve the overall network throughput, which has the potential for high social impact applications.

## 2 Background and Related Work

The adjustment of handover parameters helps to balance the traffic load in the network, and it can dramatically affect the network throughput. During the handover process in cellular networks, in order to guarantee an acceptable service quality, a user equipment (UE) must monitor the reference signal received power (RSRP) of the serving cell (3GPP TS36.331 2016). As soon as the RSRP drops below a pre-defined threshold (called the A2-threshold), the UE starts to report measurements to its serving cell and prepares for handover. Increasing the value of the A2-threshold decreases the number of UEs in the serving cell in which the handover is triggered, and this spreads the serving cell's load to its neighbors, resulting in a significant change of throughput for the serving cell and its neighbors. While improving the load balance of the network, this can have adverse effects on the network performance, since it forces frequent handovers, which require a considerable amount of bandwidth for measurement reporting and cause a drop in network throughput. Decreasing the value of the A2-threshold, on the other hand, may cause a poor experience for edge UEs and lead to repeated connection loss due to weak signal.

In an attempt to solve the problem of optimizing the parameters of a wireless network, different techniques such as fuzzy systems, deep reinforcement learning (DRL), and contextual bandits have been used in the literature (see Sec. A.2 for details). The use of graph convolutional networks (GCNs) [16, 17, 18] has also yielded well-performing models to predict the network traffic and optimize the corresponding parameters. For example, in [15], the authors introduce a novel handover strategy based on GCNs. The handover process is modeled as a directed graph by which the user tries to predict its future signal strength. Other works such as [15] introduce novel methods of network traffic prediction combined with a greedy search or action configuration method to optimize handover parameters. However, these works fail to consider the heterogeneous aspect of cellular networks. Despite being effective, none of the above-mentioned methods exploits the neighbors' information to fully tailor the model to the spatial characteristics of a cellular network, where the interactions are complex and the network is heterogeneous.
Also, despite the fact that these techniques consider some important measures of optimization, none of them approaches the problem at hand by considering two of the most important measures simultaneously (especially from the users' perspective): load balancing and throughput. In this article, we propose an effective and efficient framework that models the network as a heterogeneous graph where we learn an implicit interaction type for each neighboring cell. It then incorporates the impact of neighboring cells from each interaction group in a unique way. Moreover, in contrast to the available methods in the literature, we exploit two important measures in the network simultaneously to configure the parameters effectively: throughput and load balancing, which are directly related to the user experience in the network.

## 3 Problem Formulation

Let us consider a network with \(N\) cells, and form \(N\) clusters, each composed of one of the network cells as its center cell along with its neighboring cells. As an example, we choose the optimization of the A2-threshold to investigate the performance of our algorithm. According to the 3GPP standard (3GPP TS36.331 2016), an A2 event is triggered when the received power at user \(u\) from cell \(n\), \(P^{u,n}\), satisfies

\[P^{u,n}+H_{ys}<Thresh, \tag{1}\]

where \(H_{ys}\) is the hysteresis parameter to avoid frequent handovers and \(Thresh\) is the A2-threshold we are optimizing. We consider an online optimization process. In real practice, network operators are often conservative and only allow a limited number of experiments. The optimization period spans \(L\) days, and the A2-threshold can be adjusted once for each cell at the beginning of each day. For day \(t\), let \(D_{t}\) be the total bits transmitted by all the cells, and \(T_{t}\) be the total transmission time. We would like to maximize the accumulated network throughput of the optimization period, i.e., \(\max\sum_{t=1}^{L}\frac{D_{t}}{T_{t}}\).

Maximizing the overall network throughput by jointly optimizing the A2-threshold of all cells is difficult. The problem becomes even more complicated as the network size increases, which makes a centralized solution unscalable. The adjustment of the A2-threshold of one cell only affects its local neighborhood; thus, we convert the centralized problem into a local decision problem. That is, each cell only examines its local performance metrics and chooses its own parameter configuration value. The adjustment of the A2-threshold affects the network throughput via two means: better resource utilization by load balancing, and improved cell throughput with less connection loss and measurement reporting. Consequently, in order to configure it, these two metrics must be considered in the local decision problem. The throughput of cell \(i\) on day \(t\) is highly dependent on its A2-threshold action, formulated as \(a_{t}^{i}\), and is denoted as \(\alpha_{t}^{i}(a_{t}^{i})\).
The load balancing factor in the \(i\)-th cluster with center cell \(i\) on day \(t\) with \(a_{t}^{i}\) is defined as the ratio of the center cell throughput to the average throughput of its neighboring cells, denoted by \(\beta_{t}^{i}(a_{t}^{i})\) and formulated as \(\beta_{t}^{i}(a_{t}^{i})=\alpha_{t}^{i}(a_{t}^{i})/\bar{\alpha}_{t}^{i}\), where \(\bar{\alpha}_{t}^{i}\) is the average throughput of the neighbors of cell \(i\) with action \(a_{t}^{i}\); denoting by \(\mathcal{N}_{t}(i)\) the set of all neighbors of cell \(i\) on day \(t\), it can be formulated as \(\bar{\alpha}_{t}^{i}=\frac{1}{|\mathcal{N}_{t}(i)|}\sum_{j\in\mathcal{N}_{t}(i)}\alpha_{t}^{j}(a_{t}^{j})\). The throughput ratio (rather than the traffic/user ratio) is used since different cells have different capacities. This value approaches \(1\) when the loads of different cells match their capacities. Our goal is to maximize the overall network throughput by simultaneously optimizing two important network performance metrics, namely, the throughput ratio \(\beta_{t}^{i}(a_{t}^{i})\) and the cell throughput \(\alpha_{t}^{i}(a_{t}^{i})\), for each cell \(i\in[1,N_{t}]\), where \(N_{t}\) is the total number of cells on day \(t\). Therefore, we propose the following optimization problem for tuning the A2-threshold for cell \(i\):

\[\operatorname*{arg\,max}_{a_{t}^{i}\in\mathcal{A}}\Big{(}-\sqrt{\big{|}1-\beta_{t}^{i}(a_{t}^{i})\big{|}},\ \alpha_{t}^{i}(a_{t}^{i})\Big{)}, \tag{2}\]

where \(\mathcal{A}\) is the set of all possible values for the A2-threshold in the cellular network. The challenge of solving the above problem is severalfold. _First_, since the network performance function is complex, dynamic, and unknown, obtaining accurate \(\beta_{t}^{i}(a_{t}^{i})\) and \(\alpha_{t}^{i}(a_{t}^{i})\) is difficult. Instead, in this work, we adopt a data-driven approach to learn reward models and estimate the performance metrics. _Second_, in real-world cases, only a limited experimental budget is allowed by network operators, leading to insufficiently diverse historical data (state-action pairs) to train a data-driven learning model. In our design, we use a data augmentation technique in the form of neighbor cell augmentation to enrich the features of each cell. _Third_, the handover parameter configuration is affected by adjacent cells. Thus, it is essential to model the information coming from the adjacent cells to achieve accurate reward modeling. _Lastly_, optimizing one performance metric greedily might hinder another; thus, how to jointly optimize different performance metrics needs careful consideration.
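As a concrete illustration of the per-cell quantities in this section, the following minimal Python sketch computes the load-balancing factor \(\beta_{t}^{i}\) and the two objective terms of Eq. (2) from (predicted) throughputs; the function name and array-based interface are ours, not from the paper:

```python
import numpy as np

def local_objectives(alpha_center, alpha_neighbors):
    """Load-balance factor beta_t^i = alpha_t^i / mean(neighbor throughputs),
    and the two objective terms of Eq. (2): (-sqrt|1 - beta|, alpha)."""
    beta = alpha_center / np.mean(alpha_neighbors)
    return -np.sqrt(abs(1.0 - beta)), alpha_center
```

How the two terms are traded off against each other is resolved by the sequential strategy of Sec. 5.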
## 4 Temporal Auto-Grouping GCN for Reward Modeling

In order to better capture the dependency between each cell and its neighboring cells, we _first_ introduce our novel method for neighboring cell feature aggregation. _Second_, we propose a temporal feature aggregation step with recurrent neural networks (RNNs) to model the temporal correlation in the historical sequence of network states. _Third_, we elaborate on the overall training process, considering the impact of the neighboring cells, the temporal correlation in the network, and the action we aim to optimize.

### Spatial Feature Modeling

The handover parameters heavily impact the learning problem on the graph of the center cells as well as the neighboring cells; hence, we aim to capture the neighboring cells' information during our modeling process. Recently, message-passing neural networks (MPNNs) in the form of graph neural networks (GNNs) have been introduced and shown to be effective in modeling real-world applications with structural information. The dependencies in the dataset are modeled using a graph (Hamilton et al., 2017; Ying et al., 2018; Wang et al., 2019). In each layer of a GNN, each node's representation includes the features from itself as well as the features from its neighboring nodes (messages sent from the neighborhood). We believe the GNN framework is suitable for handling the dependencies between the center cell and the neighboring cells in cellular networks. We present more details on GNNs and recent works on homogeneous and heterogeneous graphs in Sec. A.3.

**Graph-Based Cellular Network Modeling.** We construct a graph \(\mathcal{G}_{t}\!=\!(\mathcal{V}_{t},\mathcal{E}_{t},\mathbf{X}_{t})\) for day \(t\), where each node \(v\!\in\!\mathcal{V}_{t}\) represents one cell and is associated with a feature vector \(\mathbf{x}_{t}^{v}\!\in\!\mathbb{R}^{d}\) (the \(v\)-th column of \(\mathbf{X}_{t}\!\in\!\mathbb{R}^{d\times|\mathcal{V}_{t}|}\)), including the statistical properties of node \(v\) measured on day \(t\). The statistical properties could include several features such as the antenna transmission power, physical resource block (PRB) usage ratio, the amount of data traffic, and the transmission bandwidth. These features serve as the node attributes. The edge set \(\mathcal{E}_{t}\) encodes the interactions between cells based on the handover events between pairs of cells. Based on historical data, if any pair of cells has an average number of handover events above a threshold \(\tau\), we assume an edge between those two cells. The neighboring set for node \(v\) is denoted as \(\mathcal{N}_{t}^{g}(v)=\{u|u\in\mathcal{V}_{t},(u,v)\in\mathcal{E}_{t}\}\).

Due to the heterogeneous nature of the cellular network, the relationships between the neighboring cells can be complex. Concretely, there might be \(M\) implicit latent relationship types \(\mathcal{R}=\{r_{1},r_{2},\cdots,r_{M}\}\) that can be learned to better handle the complex interactions in cellular networks. Assuming each cell is represented by its states, such as PRB usage, traffic, etc., in the network graph, we aim at dividing the neighboring cells into different groups, each of which will provide some information that is shared between the neighbors in that group and helps to better capture the rich information from neighboring cells in a distinguishable way. Thus, inspired by the above motivation and a recent work [20], we propose a novel GCN approach called auto-grouping GCN (AG-GCN) to characterize this special property of cellular networks when handling the interactions between neighboring cells. In the following, we elaborate upon the detailed steps to realize our design.

**Neighborhood Augmentation.** In cellular network modeling, since the experiment budget is limited, the historical data (state-action pairs) is not diverse enough to train our data-driven model. Besides, since we construct the graph based on the handover events, there are cells that have a very limited number of neighboring cells. Thus, in our design, we use a data augmentation technique in the form of neighbor cell augmentation, based on the similarity between cells in a latent space, to enrich the features of each cell.
We define a feature transformation function \(f(\cdot):\mathbb{R}^{d}\rightarrow\mathbb{R}^{l}\) which maps the input node feature \(\mathbf{x}_{t}^{v}\in\mathbb{R}^{d}\) to a latent space, \(\mathbf{y}_{t}^{v}=f(\mathbf{x}_{t}^{v})\in\mathbb{R}^{l}\). In order to capture the long-range dependencies and similarity in the cellular network, we design an additional neighborhood in the latent representation space based on Euclidean distance. For each node \(v\in\mathcal{V}_{t}\), we form the augmented neighborhood \(\mathcal{N}_{t}(v)=\mathcal{N}_{t}^{g}(v)\cup\mathcal{N}_{t}^{s}(v)\), where \(\mathcal{N}_{t}^{g}(v)\) and \(\mathcal{N}_{t}^{s}(v)\) are the neighbors of node \(v\) in the original graph and in the latent space, respectively. The neighbors in the latent space are selected based on their Euclidean distance to the center cell. The \(n\) nearest nodes in the latent space are selected to create \(\mathcal{N}_{t}^{s}(v)\) for cell \(v\), where the number of nodes we select based on feature similarity is equal to the neighborhood size in the original graph, \(|\mathcal{N}_{t}^{g}(v)|=|\mathcal{N}_{t}^{s}(v)|=n\). The neighbor augmentation module in Fig. 1 illustrates this process.

**Neighborhood Auto-Grouping.** Once we have obtained the augmented neighborhood set, the neighbors in the augmented neighborhood \(\mathcal{N}_{t}(v)\) are divided into different groups by a geometric operator \(\gamma\). Consider node \(v\) and its neighbor node \(u\in\mathcal{N}_{t}(v)\). The relation between them on day \(t\) is denoted as \(\gamma(\mathbf{y}_{t}^{v},\mathbf{y}_{t}^{u}):(\mathbb{R}^{l},\mathbb{R}^{l})\rightarrow\mathcal{R}=\{r_{1},r_{2},\cdots,r_{M}\}\). This grouping aims at combining neighbors' information in groups with similar inter-group features. For each group \(r_{i}\in\mathcal{R}\), the neighborhood feature set on day \(t\) is defined as \(\mathcal{N}_{t}^{r_{i}}(v)=\{u|u\in\mathcal{N}_{t}(v),\gamma(\mathbf{y}_{t}^{v},\mathbf{y}_{t}^{u})=r_{i}\}\). The auto-grouping module in Fig. 1 demonstrates this process. Note that the yellow neighbors (marked with *) are the projected counterparts of the neighbors in the graph space, while the green neighbors (marked with +) correspond to the augmented neighbors from the latent space.

**Conditional Message Passing.** Since the order within each neighbor group should not impact the output of the representation, we apply a permutation-invariant function \(\pi(\cdot)\) to the neighbors within each group (mean pooling across each feature dimension) and aggregate them separately. Fig. 1 shows an example of the AG-GCN, where \(l=2\) and \(|\mathcal{R}|=4\), and the representation after the permutation-invariant function \(\pi(\cdot)\) is shown by black dashed arrows ending at nodes 1, 2, 3, and 4. Then, for each group \(r_{i}\in\mathcal{R}\), a non-linear transform is further applied as:

\[\mathbf{z}_{t}^{v,r_{i}}=\sigma\Big{(}\mathbf{W}_{t}^{v,r_{i}}\cdot\pi\big{(}\{\mathbf{x}_{t}^{u}|u\in\mathcal{N}_{t}^{r_{i}}(v)\}\big{)}\Big{)}, \tag{3}\]

where \(\mathbf{W}_{t}^{v,r_{i}}\) is a learnable weight matrix for the neighbors in group \(r_{i}\) of node \(v\) on day \(t\), and \(\sigma(\cdot)\) is a non-linear function, e.g., \(\tanh\). Then, for each node, we aggregate the transformed neighborhood features from the different groups of neighbors in a distinguishable way. The vectors \(\mathbf{z}_{t}^{v,r_{i}}\) for \(r_{i}\in\mathcal{R}\) are further aggregated as \(\mathbf{h}_{t}^{v}=[\mathbf{z}_{t}^{v,r_{1}};\cdots;\mathbf{z}_{t}^{v,r_{M}}]\), where \([\ ;\ ]\) represents concatenation.
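The following Python sketch illustrates the three AG-GCN steps above (neighborhood augmentation, quadrant-based auto-grouping with \(l=2\) and \(M=4\) as in Fig. 1, and the grouped mean-pool aggregation of Eq. (3)). It is a simplified single-layer, single-cell version under our own naming; the per-group weight matrices `W` would be learned in practice:

```python
import numpy as np

def augment_neighbors(y, graph_nbrs, v):
    """N_t(v): graph neighbors plus an equal number of nearest cells in the
    latent space (Euclidean distance), as in the neighborhood augmentation step."""
    n = len(graph_nbrs)
    d = np.linalg.norm(y - y[v], axis=1)
    d[v] = np.inf
    latent_nbrs = np.argsort(d)[:n].tolist()
    return sorted(set(graph_nbrs) | set(latent_nbrs))

def group_id(y_v, y_u):
    """Quadrant operator gamma of Table 1 (M = 4 implicit relation types)."""
    if y_v[0] > y_u[0]:
        return 2 if y_v[1] <= y_u[1] else 3
    return 1 if y_v[1] <= y_u[1] else 4

def grouped_message(x, y, nbrs, v, W):
    """Eq. (3) plus concatenation: mean-pool each group, apply a per-group
    transform (sigma = tanh), and concatenate into h_t^v. Empty groups are
    filled with the average of the other groups, as in Fig. 1 (assumes at
    least one group is non-empty)."""
    pooled = {}
    for r in (1, 2, 3, 4):
        members = [u for u in nbrs if group_id(y[v], y[u]) == r]
        if members:
            pooled[r] = x[members].mean(axis=0)
    fill = np.mean(list(pooled.values()), axis=0)
    z = [np.tanh(W[r] @ pooled.get(r, fill)) for r in (1, 2, 3, 4)]
    return np.concatenate(z)
```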
### Temporal Feature Modeling

Properly capturing the trend in the evolution of the states of each cell within a day can benefit the prediction of the performance metrics for the following day. We propose to use additional temporal features for each center cell to extract the changing dynamic pattern of its states within each day and further improve the reward model performance. We assume the samples of the center cell \(v\) on day \(t\) can be divided into \(K\) groups by their temporal order. For all the samples in each group \(k\), we take the average network state and denote it as \(\mathbf{x}_{t,k}^{v}\). We use an RNN layer to capture this temporal dependency of the features from different groups by feeding all the network states as an input sequence \(\mathbf{P}_{t}^{v}=\big{[}\mathbf{x}_{t,1}^{v};\mathbf{x}_{t,2}^{v};\cdots;\mathbf{x}_{t,K}^{v}\big{]}^{T}\in\mathbb{R}^{K\times d}\), to obtain

\[\mathbf{c}_{t}^{v}=\mathrm{RNN}(\mathbf{P}_{t}^{v},\delta)\in\mathbb{R}^{d^{\prime}}, \tag{4}\]

where \(\delta\) and \(d^{\prime}\) are the set of trainable parameters and the output dimension of the RNN layer, respectively.

### Overall Training Pipeline

The main purpose of the model is to estimate the real network's response and predict the throughput ratio and throughput of the center cell for the next day based on the observed network states of the current day. These performance metrics are not only affected by the current day's states but are also highly correlated with the action we choose to configure for the next day. Thus, we also consider the actions of the next day. Furthermore, the throughput ratio and throughput of the next day are highly dependent on the previous performance metrics. Hence, we include the current throughput ratio, i.e., \(\beta_{t}^{v}\), in the prediction process. To make the final prediction, the learned representation of the neighborhood by the AG-GCN aggregation, the temporal features of the center cell, and the throughput ratio of the current day, i.e., \(\beta^{v}_{t}\), are concatenated to form the state vector of cell \(v\) as \(\mathbf{s}^{v}_{t}=\Psi(\mathbf{W}^{v}_{t}\cdot\left[\beta^{v}_{t};\mathbf{c}^{v}_{t};\mathbf{h}^{v}_{t}\right])\), where \(\mathbf{W}^{v}_{t}\) is a learnable weight matrix for node \(v\) on day \(t\), and \(\Psi(\cdot)\) is a non-linear function, e.g., \(\tanh\). Since the final representation should be sensitive to the chosen input action (whose decision-making process is elaborated in Sec. 5), the throughput ratio and throughput of the next day for cell \(v\) are formulated as the output of a non-linear transformation \(\Lambda(\cdot)\) of the state and action:

\[\hat{\beta}^{v}_{t+1} =\Lambda\big{(}\mathbf{W}^{v}_{\beta}\cdot([\mathbf{s}^{v}_{t};a^{v}_{t}])\big{)}, \tag{5}\]
\[\hat{\alpha}^{v}_{t+1} =\Lambda\big{(}\mathbf{W}^{v}_{\alpha}\cdot([\mathbf{s}^{v}_{t};a^{v}_{t}])\big{)}, \tag{6}\]

where \(\mathbf{W}^{v}_{\beta}\) and \(\mathbf{W}^{v}_{\alpha}\) are trainable matrices of node \(v\) for the throughput ratio and throughput models, respectively. The overall flow of data from the graph structure to the final prediction is represented in Fig. 1. Note that we train two separate models for predicting the throughput and the throughput ratio simultaneously.
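A compact PyTorch sketch of one reward head, combining the temporal summary of Eq. (4) with the state fusion and prediction of Eqs. (5)-(6), is given below. Layer sizes and the class name are illustrative assumptions; the paper trains two such heads, one for \(\hat{\alpha}\) and one for \(\hat{\beta}\):

```python
import torch
import torch.nn as nn

class RewardHead(nn.Module):
    """One reward model: an RNN over the K intra-day state groups (Eq. 4),
    fused with the AG-GCN embedding h and the current ratio beta into s_t^v,
    followed by a prediction from [s_t^v; a_t^v] (Eq. 5 or 6)."""
    def __init__(self, d, d_rnn, d_h, d_s=64):
        super().__init__()
        self.rnn = nn.RNN(d, d_rnn, batch_first=True)
        self.fuse = nn.Linear(1 + d_rnn + d_h, d_s)
        self.out = nn.Linear(d_s + 1, 1)

    def forward(self, P, beta, h, a):
        # P: (B, K, d) grouped daily states; beta, a: (B, 1); h: (B, d_h)
        _, c = self.rnn(P)                                  # Eq. (4)
        s = torch.tanh(self.fuse(torch.cat([beta, c.squeeze(0), h], dim=-1)))
        return self.out(torch.cat([s, a], dim=-1))          # Eq. (5) / (6)
```

Both heads would be trained by minimizing squared errors, as formalized in Eqs. (7)-(8) below.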
In order to properly use the A2-threshold for the prediction, we use the change in this parameter compared to the previous day as the action, \(a^{v}_{t}=A2^{v}_{t+1}-A2^{v}_{t}\), where \(A2^{v}_{t+1}\) and \(A2^{v}_{t}\) are the A2-thresholds for cell \(v\) on days \(t+1\) and \(t\), respectively. The reason for this design choice is twofold. First, the original action space of A2 is large, but the range of the change of action can be made smaller by controlling the adjustment steps, making it easier for the model to learn and conduct the decision-making step. Besides, the delta action directly reflects the change in the cell coverage/load, so the performance metrics are more sensitive to it. To form the training objective, we consider data of \(T+1\) consecutive days and form the pairs \((t,t+1)\), \(t\in\{1,2,\cdots,T\}\), to predict the throughput ratio and throughput of the center cell on day \(T+1\), trained by minimizing the following loss functions, respectively:

\[\frac{1}{T}\sum_{t=1}^{T}\frac{1}{N_{t}}\sum_{v=1}^{N_{t}}(\hat{\beta}^{v}_{t+1}-\beta^{v}_{t+1})^{2}+\lambda_{1}||\Theta_{1}||^{2}, \tag{7}\]
\[\frac{1}{T}\sum_{t=1}^{T}\frac{1}{N_{t}}\sum_{v=1}^{N_{t}}(\hat{\alpha}^{v}_{t+1}-\alpha^{v}_{t+1})^{2}+\lambda_{2}||\Theta_{2}||^{2}, \tag{8}\]

where \(\lambda_{1}\) and \(\lambda_{2}\) are hyperparameters chosen for regularization, and \(\Theta_{1}\) and \(\Theta_{2}\) represent all the trainable parameters in the models. The trained reward model is now able to mimic the real network and predict both the throughput ratio and the throughput of each center cell for the coming day, and it can be used to check the impact of actions on the performance metrics we are considering.

## 5 Action Configuration

As discussed in the earlier sections, the main objectives to consider in the action configuration process are load balancing, identified by the throughput ratio, and the cell throughput. Hence, the best action for cell \(v\) on day \(t\), i.e., \(a^{v}_{t}\in\mathcal{A}\), is the one that optimizes the problem in (2). In general, when dealing with a multi-objective problem, different objectives are often conflicting, and we may not be able to optimize them simultaneously. One common way to tackle this problem is to give the different objectives weights and optimize the weighted objective value. However, in our scenario, it is difficult to determine the weights, and different clusters may require cluster-specific weights. Here we break the problem in (2) into two sub-problems and solve them sequentially. We first optimize the action with respect to the predicted throughput ratio, i.e., \(\hat{\beta}^{v}_{t+1}(a^{v}_{t})\) for cell \(v\) on day \(t\), where \(a^{v}_{t}\in\mathcal{A}\), and then optimize the throughput \(\hat{\alpha}^{v}_{t+1}(a^{v}_{t})\). Specifically, the throughput ratio is optimized first, and we find the set of the best \(c\) values for \(a^{v}_{t}\), denoted \(\mathcal{A}^{v}_{c}\), such that

\[\min_{a^{v}_{t}\in\mathcal{A}^{v}_{c}}-\sqrt{\left|1-\hat{\beta}^{v}_{t+1}(a^{v}_{t})\right|}\geq\max_{a^{v}_{t}\in\mathcal{A}-\mathcal{A}^{v}_{c}}-\sqrt{\left|1-\hat{\beta}^{v}_{t+1}(a^{v}_{t})\right|}. \tag{9}\]

Then, our goal is to achieve the maximum possible throughput for cell \(v\) on day \(t\); this is done through

\[\hat{a}^{v}_{t}=\underset{a^{v}_{t}\in\mathcal{A}^{v}_{c}}{\arg\max}\ \hat{\alpha}^{v}_{t+1}(a^{v}_{t}). \tag{10}\]

\(\hat{a}^{v}_{t}\) is then the final recommended action for cell \(v\) on day \(t\). This procedure, applied to all \(N_{t}\) cells of the network on day \(t\), is presented in Algorithm 1.
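The sequential decision rule of Eqs. (9)-(10) reduces, per cell, to a shortlist-then-maximize procedure over the candidate action set; a minimal sketch (our own function signature) is:

```python
import numpy as np

def recommend_action(actions, beta_pred, alpha_pred, c):
    """Eqs. (9)-(10): keep the c actions with the best predicted load balance,
    then return the one with the highest predicted throughput.
    actions, beta_pred, alpha_pred are aligned arrays over the candidate set A."""
    balance = -np.sqrt(np.abs(1.0 - np.asarray(beta_pred)))   # Eq. (9) objective
    shortlist = np.argsort(balance)[-c:]                      # the set A_c^v
    best = shortlist[int(np.argmax(np.asarray(alpha_pred)[shortlist]))]
    return actions[best]
```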
## 6 Experimental Results

The experiments are conducted on a large-scale cellular network simulator constructed from real-world data, which is presented in Sec. B.1. We use principal component analysis (PCA) as the mapping function \(f(\cdot)\), as defined in Sec. 4, for obtaining the latent representation in the AG-GCN step. It transforms the original features into a 2-dimensional space to perform the neighborhood augmentation and the neighbor group assignment process. After this transformation, the relationship operator \(\gamma\) for the auto-grouping assigns a group to each subset of points in each quadrant of this two-dimensional space, as presented in Table 1. The permutation-invariant function \(\pi\) applied to each group of neighbors is the average in our experiments.

Figure 1: The flow of information from graph structure to final prediction, used to form the training pipeline of two models for predicting the throughput \(\hat{\alpha}\) and the throughput ratio \(\hat{\beta}\) for \(\mathcal{T}_{in}=\{1,2,\cdots,t-1\}\). In this demo example, the auto-grouping module constructs \(M=4\) groups of neighbors, where \(l=2\). Empty groups are filled with the average of other groups.

### Datasets

To perform our experiments and evaluate the proposed model, two datasets are used in this study (see Sec. B.2 for more details):

**Dataset-A:** A real metropolitan cellular network containing around 1500 cells, sampled hourly and collected from Oct. 17 to Oct. 31, 2019. Each data sample contains information such as the cell ID, sample time, configuration of cell parameters, and measurements of the cell states.

**Dataset-B:** Also a real metropolitan cellular network. The network contains 1459 cells, and the data is collected from Sep. 1 to Sep. 29, 2021. Each data sample contains similar information as above.

### Reward Model Accuracy Evaluation

**Dataset Generation.** In order to evaluate the prediction accuracy of our model, we use a simulator to modify Dataset-A with a random policy to diversify the network configuration. On each day, the A2-threshold for each cell is randomly selected around the default action of -100 dBm, within the range \([-105,-95]\). This approach provides us a fixed data buffer with a diverse action dataset to train all models and enables a fair comparison of their accuracy. For Dataset-B, there exists a reasonable amount of diversity in the handover parameter configuration; thus, we directly use the raw dataset from the live network to perform the training and evaluation.

**Training Process and Metrics.** As samples are generated hourly, we aggregate them within each day as described in Sec. 4. To evaluate the model accuracy in predicting cell throughput and throughput ratio, we train the model with the generated pairs \(\{(1,2),\cdots,(t-1,t)\}\) for \(t=9\) and 12 days for Dataset-A and B, respectively. At each day \(t>2\), data pairs \(\{(1,2),\cdots,(t-2,t-1)\}\) are used as training and validation sets, and \((t-1,t)\) serves as the testing set for evaluation across different models. We report the mean square error (MSE) to measure the reward model performance.
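The expanding-window protocol just described can be stated compactly; a small sketch (hypothetical helper, not from the paper) of how the day pairs are split:

```python
def split_pairs(t):
    """On day t, pairs (1,2),...,(t-2,t-1) train/validate and (t-1,t) tests."""
    pairs = [(d, d + 1) for d in range(1, t)]
    return pairs[:-1], pairs[-1]
```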
**Comparison with Benchmark Models.** It is important to note that the social impact of this work has not been addressed by ML approaches in the way we propose. Due to the uniqueness of our problem, existing solutions for optimizing handover parameters are either not appropriate for solving it or there is no apparent way to directly adapt them to our problem. For instance, traditional handover optimization methods rely on designing fuzzy rules based on different measures of QoS in the network [20]; however, designing proper rules is complex and cannot handle the changes in highly dynamic systems well. Instead, we hope to use a data-driven approach, among which (deep) RL methods gain the most attention [1, 23]. However, in this type of problem, the network provider only allows limited exploration of the parameter values (e.g., changing the A2 value once a day) to ensure the stability of the network. Thus, we only have limited days for exploring the best action. RL models, nevertheless, usually need longer episodes to optimize the accumulated return. Despite this fact, we made a preliminary attempt, presented in Sec. B.3, to adapt the RL paradigm from the literature to our problem, which did not show any advantage over our simpler design.

In order to show the effectiveness of our proposed reward model, we compare it with alternative designs for the prediction model. It should be mentioned that all of these models are our contributions. In Sec. B.4, we summarize our benchmarks and the properties of each model. The first model is MLP, where we only use the features of the center cells and ignore the neighboring cells' features. In the GCN model, we follow the typical GCN formulation [1] and process the network as a homogeneous graph, where the neighbor information is aggregated jointly without distinction. The AG-GCN model ignores the temporal dependencies of the data, which we consider in the TAG-GCN model.

\begin{table}
\begin{tabular}{|c|c|c|}
\hline
\(\gamma(\mathbf{y}_{t}^{v},\mathbf{y}_{t}^{u})\) & \(\mathbf{y}_{t}^{v}[0]>\mathbf{y}_{t}^{u}[0]\) & \(\mathbf{y}_{t}^{v}[0]\leq\mathbf{y}_{t}^{u}[0]\) \\
\hline
\(\mathbf{y}_{t}^{v}[1]\leq\mathbf{y}_{t}^{u}[1]\) & 2 & 1 \\
\(\mathbf{y}_{t}^{v}[1]>\mathbf{y}_{t}^{u}[1]\) & 3 & 4 \\
\hline
\end{tabular}
\end{table}
Table 1: The relationship operator \(\gamma\)

Figure 2: Achieved MSE of the throughput for test data of (a) Dataset-A and (b) Dataset-B for different methods.

In Fig. 2, we compare the prediction accuracy of these models for throughput on Dataset-A and B. We observe that, on average, the best accuracy on the test set is achieved by AG-GCN and TAG-GCN, with TAG-GCN performing marginally better on the average rank metric across the evaluation days, indicating that our neighbor aggregation and temporal feature extraction have a considerable impact on reward modeling for cellular networks. The same results are also achieved for the throughput ratio model; the corresponding test is included in the supplementary materials in Fig. 10.

### Overall Parameter Optimization Performance

**The Action Recommendation Process.** In the following experiments, we use the presented models to recommend actions for Dataset-A. The actions on day 1, i.e., Oct. 17, are set to the default action, which is -100 dBm. Unless otherwise stated, the action for the second day, i.e., Oct. 18, is initialized by a set of random actions around the default action in the range of \([-105,-95]\). The model is trained iteratively on each day and used to recommend actions for the next day. The process is depicted in Fig. 3, where the states of the cells on day \(t\) are given to the trained model to predict the performance metrics of the network on day \(t+1\), and the action \(a_{t}^{v}\) is adjusted for each cell based on the predictions. Finally, the network states and performance measurements for day \(t+1\) are computed by the cellular network simulator according to the newly selected actions and used for model training and action recommendation on the following day.
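This daily loop can be summarized as follows. This is a hypothetical harness: `fit`, `predict`, `apply_action`, and `observe` are placeholders for the reward-model training (Eqs. (7)-(8)), the per-cell metric prediction, the simulator's parameter update, and its feedback, respectively; `recommend_action` is the sketch from Sec. 5 above:

```python
def run_online_optimization(num_days, cells, action_set,
                            fit, predict, apply_action, observe, c=3):
    """Iterate the process of Fig. 3: retrain daily, recommend a delta-A2
    per cell, then collect the simulator feedback for the next day."""
    buffer = [observe()]                        # initial network states
    for t in range(1, num_days):
        fit(buffer)                             # minimize Eqs. (7)-(8)
        for v in cells:
            beta_hat, alpha_hat = predict(v, action_set)
            apply_action(v, recommend_action(action_set, beta_hat, alpha_hat, c))
        buffer.append(observe())                # states/metrics for day t+1
    return buffer
```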
**Baseline Performance Bounds.** In addition to the results achieved by the actions recommended by the models, we use three baseline performance bounds, achieved by the _default A2-threshold_, the _expert rule_, and the _optimal actions_ of the simulator. As stated before, the default A2-threshold value is \(-100\) dBm, and this is used as the lower bound in the following experiments. The optimal actions in the simulator are obtained by brute-force search, which provides the upper performance bound. The expert rule-based method, provided by an experienced network operator, is a simple rule presented in Sec. B.6. The performance achieved by the expert rule is better than the default action. We hope to use our proposed learning-based framework to further close the gap to the reward achieved by the optimal actions.

**Results.** In the following experiments, we compare the performance of the network under the actions recommended by different models in terms of the network throughput, as defined in Sec. 3. We plot the trajectory of the throughput difference to the default A2-threshold baseline (dashed black line) in Fig. 4. We repeat all the experiments 20 times for all models, where each run uses the same set of random actions on the first action exploration day (Oct. 18) for all the models. We also show the performance achieved through the expert rule action recommendation, the default action, and the optimal actions of the simulator (random actions are also used on Oct. 18 for the curve of the optimal action). The quantitative results for Fig. 4 are summarized in Table 6 in the supplementary materials. TAG-GCN achieves better average throughput in the final days, which indicates the importance of our auto-grouping GCN design tailored to the heterogeneous nature of cellular networks. Besides, as expected, all the learning-based models can beat the expert rule algorithm, which is highly dependent on human experience and is unable to recover from the performance degradation caused by bad random initialization on the first day. Furthermore, to show the effectiveness of our proposed model in terms of load balancing and enhancing the cluster throughput ratio, we illustrate the progress of this ratio achieved by TAG-GCN for some selected, severely unbalanced cells in Fig. 5. As can be seen, the throughput ratios of the clusters form trajectories that converge to 1, which is the ideal target value. More ablation studies are presented in Sec. B.7.

Based on the above experiments, our proposed ML-based solutions can improve the network performance and optimize the handover process compared to conventional methods such as using the default action or human experts' rule-based methods. Moreover, the automation of the parameter optimization process achieved by our ML-based solutions reduces the domain experts' intervention and, hence, the management costs of network operators, and improves the maintenance efficiency of cellular networks.
Consequently, the proposed solutions open up the possibility of providing reliable and high-quality network access even to geographical areas that are currently underserved by the cellular network. This can bring exciting new opportunities to these regions, such as remote education, remote working, health monitoring, video streaming, etc.

Figure 3: The process of action recommendation by the trained model and the simulator.

Figure 4: Performance comparison of different models along with the optimal action curve, initialized with random actions.

## 7 Conclusion

In this paper, we study the handover parameter configuration problem in cellular networks. We propose a reward prediction model to accurately imitate the cellular network and estimate the performance metrics. Our proposed model, i.e., TAG-GCN, investigates the impact of the adjacent cells and differentiates their impact on the center cell of each cluster. We also consider the network's changing dynamics in our model to learn the temporal dependencies in the data. Based on the reward model, a novel multi-objective parameter configuration strategy is proposed to perform the optimization for each cluster and balance the performance metrics in each neighborhood. The conducted simulations show the superiority of TAG-GCN, which has a huge potential social impact by improving the cellular network parameters and providing massive connectivity and high coverage with balanced traffic across the network. Hence, this can help the widespread adoption of new technologies to benefit many sectors such as health and education.
2305.19546
Prediction of Born effective charges using neural network to study ion migration under electric fields: applications to crystalline and amorphous Li$_3$PO$_4$
Understanding ionic behaviour under external electric fields is crucial to develop electronic and energy-related devices using ion transport. In this study, we propose a neural network (NN) model to predict the Born effective charges of ions along an axis parallel to an applied electric field from atomic structures. The proposed NN model is applied to Li$_3$PO$_4$ as a prototype. The prediction error of the constructed NN model is 0.0376 $e$/atom. In combination with an NN interatomic potential, molecular dynamics (MD) simulations are performed under a uniform electric field of 0.1 V/angstrom, whereby an enhanced mean square displacement of Li along the electric field is obtained, which seems physically reasonable. In addition, the external forces along the direction perpendicular to the electric field, originating from the off-diagonal terms of the Born effective charges, are found to have a nonnegligible effect on Li migration. Finally, additional MD simulations are performed to examine the Li motion in an amorphous structure. The results reveal that Li migration occurs in various areas despite the absence of explicitly introduced defects, which may be attributed to the susceptibility of the Li ions in the local minima to the electric field. We expect that the proposed NN method can be applied to any ionic material, thereby leading to atomic-scale elucidation of ion behaviour under electric fields.
Koji Shimizu, Ryuji Otsuka, Masahiro Hara, Emi Minamitani, Satoshi Watanabe
2023-05-31T04:24:01Z
http://arxiv.org/abs/2305.19546v1
Prediction of Born effective charges using neural network to study ion migration under electric fields: applications to crystalline and amorphous Li\({}_{3}\)PO\({}_{4}\)

###### Abstract

Understanding ionic behaviour under external electric fields is crucial to develop electronic and energy-related devices using ion transport. In this study, we propose a neural network (NN) model to predict the Born effective charges of ions along an axis parallel to an applied electric field from atomic structures. The proposed NN model is applied to Li\({}_{3}\)PO\({}_{4}\) as a prototype. The prediction error of the constructed NN model is 0.0376 \(e\)/atom. In combination with an NN interatomic potential, molecular dynamics (MD) simulations are performed under a uniform electric field of 0.1 V/Å, whereby an enhanced mean square displacement of Li along the electric field is obtained, which seems physically reasonable. In addition, the external forces along the direction perpendicular to the electric field, originating from the off-diagonal terms of the Born effective charges, are found to have a nonnegligible effect on Li migration. Finally, additional MD simulations are performed to examine the Li motion in an amorphous structure. The results reveal that Li migration occurs in various areas despite the absence of explicitly introduced defects, which may be attributed to the susceptibility of the Li ions in the local minima to the electric field. We expect that the proposed NN method can be applied to any ionic material, thereby leading to atomic-scale elucidation of ion behaviour under electric fields.

## I Introduction

Ion migration inside various devices, such as all-solid-state batteries and atomic switches [1; 2; 3], is achieved by applying external forces from applied electric fields. Numerous studies elaborating the stability of materials and the mobility of ions have been conducted using theoretical calculations because these aspects are directly related to device performance. To further advance our understanding of the operating mechanisms of such ion-conducting devices, atomic-scale analyses of ionic motion in device operating circumstances, that is, under electric fields, are crucial.

Assuming a linear response, the external forces arising from applied electric fields can be estimated simply by multiplying the electric field vector by the valence states of the ions. In electronic structure calculations, such as density functional theory (DFT) calculations, the valence states are often evaluated, for instance, using Mulliken charges from the coefficients of atomic orbitals [4] or Bader charges from charge density distributions [5]. By contrast, the Born effective charges are defined from the polarisation induced in a periodic system by atomic displacements (see Fig. 1(a)), or, equivalently, from the atomic forces induced by applied electric fields. As our current interest lies in analysing ion motion under applied electric fields, and the latter definition precisely corresponds to the target situation, Born effective charges, rather than static valence states, are the suitable physical quantities to evaluate the external forces acting on the ions. In addition, the Born effective charges can be quantified for each atom without the arbitrariness of decomposing the total charge. In most cases, these per-atom quantities are compatible with the computational processes of dynamic calculations using the methods described below.
Recently, atomistic simulations using interatomic potentials constructed via machine learning (ML) techniques have gained increasing attention. Representative methods include the high-dimensional neural network potential (NNP) [6], Gaussian approximation potential [7], moment tensor potential [8], and spectral neighbour analysis potential [9]. Numerous studies have demonstrated that ML potentials optimised using DFT calculation data can predict various physical quantities comparably to DFT calculations at low computational costs [10; 11; 12; 13]. Notably, in their applications to solid electrolyte materials, the predicted ionic conductivities agree well with both DFT and experimental results [14; 15]. To apply these ML potentials to dynamic calculations under applied electric fields, predictive models of ion charges are necessary to evaluate the external forces, as stated earlier. Prior studies have proposed neural network (NN) models to predict the charges of ions [16; 17]; however, these models evaluate long-range electrostatic interactions or nonlocal charge transfer through the predicted charge states of ions. Therefore, in this study, we propose an NN-based model to predict the Born effective charges of ions in given structures. In combination with the conventional NNP, the proposed NN model is applied to dynamic simulations to evaluate the external forces under a uniform electric field. Herein, we employ Li\({}_{3}\)PO\({}_{4}\) as the prototype material, which is commonly used in research on all-solid-state Li batteries [18; 19; 20]. We verify our scheme of dynamic calculations based on the proposed NN model by evaluating ion behaviour under applied electric fields.

## II Methodology

Herein, we explain the computational details of the proposed NN model for the Born effective charge predictor. The Born effective charge is defined as follows:

\[Z_{ij}^{*}=\frac{\Omega}{e}\frac{\partial P_{i}}{\partial u_{j}}=\frac{1}{e}\frac{\partial F_{i}}{\partial E_{j}}, \tag{1}\]

where \(\Omega\) and \(e\) are the cell volume and the elementary charge, respectively; \(P_{i}\) and \(u_{j}\) are the macroscopic polarisation and atomic coordinates, respectively; \(F_{i}\) and \(E_{j}\) are the atomic forces and the electric field, respectively; and the subscripts \(i\) and \(j\) represent the \(x\), \(y\), or \(z\) directions. In the formalism of the NNP, atomic forces can be obtained analytically by applying the chain rule in the following relation:

\[F_{j}=-\frac{\partial U}{\partial u_{j}}=-\frac{\partial U}{\partial G_{\nu}}\frac{\partial G_{\nu}}{\partial u_{j}}, \tag{2}\]

where \(U\) is the total energy and \(G_{\nu}\) represents the symmetry functions (SFs) [6]. Based on the similarities between Eqs. (1) and (2), the framework of the NNP appears to be modifiable into a Born effective charge predictor for a specific direction \(i\) (one direction of the \(3\times 3\) tensor) by replacing \(U\) and \(F_{j}\) with \(-\frac{\Omega}{e}P_{i}\) and \(Z_{ij}^{*}\), respectively. Indeed, we confirmed that the above modifications achieved some prediction performance; however, the obtained accuracy was not satisfactory. This inaccuracy can be rationalised by the use of scalar quantities from the SFs as the inputs of an NN that predicts the vector quantity of macroscopic polarisation.
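Although the paper obtains the Born effective charges from DFPT, the force-based definition in Eq. (1) also suggests a simple finite-difference check from forces computed under small fields; a minimal sketch under our own naming (with forces in eV/Å and the field in V/Å, the result is directly in units of \(e\)):

```python
import numpy as np

def born_charge_column_fd(forces_plus, forces_minus, dE):
    """Central-difference estimate of Z*_iz = (1/e) dF_i/dE_z from Eq. (1).
    forces_plus/minus: (n_atoms, 3) forces under fields +dE and -dE along z."""
    return (np.asarray(forces_plus) - np.asarray(forces_minus)) / (2.0 * dE)
```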
Hence, to preserve directional information in the inputs, we employed vector atomic fingerprints (VAFs) [21; 22], described below:

\[V_{i}^{1,\alpha}=\sum_{j}\frac{R_{ij}^{\alpha}}{R_{ij}}e^{-\eta(R_{ij}-R_{s})^{2}}f_{c}(R_{ij}), \tag{3}\]

\[V_{i}^{2,\alpha}=2^{1-\zeta}\sum_{j}\sum_{k}(\mathbf{R}_{ij}+\mathbf{R}_{ik})^{\alpha}\{1+\cos(\theta_{ijk}-\theta_{s})\}^{\zeta}e^{-\eta(\frac{R_{ij}+R_{ik}}{2}-R_{s})^{2}}f_{c}(R_{ij})f_{c}(R_{ik}), \tag{4}\]

where \(\eta\) and \(\zeta\) are width parameters; \(\mathbf{R}_{ij}\) and \(\mathbf{R}_{ik}\) are the atomic vectors from atom \(i\) to atoms \(j\) and \(k\), respectively; \(\theta_{ijk}\) is the angle between atoms \(i\), \(j\), and \(k\) at vertex \(i\); \(\alpha\) represents either the \(x\)-, \(y}\)-, or \(z\)-coordinate; \(R_{s}\) and \(\theta_{s}\) determine peak positions; and \(f_{c}\) is the cutoff function, which is expressed as

\[f_{c}(R_{ij})=\begin{cases}\frac{\cos(\frac{\pi R_{ij}}{R_{c}})+1}{2}&\text{if }R_{ij}\leq R_{c},\\ 0&\text{if }R_{ij}>R_{c},\end{cases} \tag{5}\]

where \(R_{c}\) is the cutoff distance. These functions are invariant to rotations about the \(\alpha\)-axis. In a simplified model, predictions of the Born effective charges are needed only for the specific direction along the electric field (e.g., \(zx\), \(zy\), and \(zz\) for \(E_{z}\)), as in the following expression (see Fig. 1(b)):

\[Z_{zj}^{*}=\frac{\Omega}{e}\frac{\partial P_{z}}{\partial V_{\nu}^{z}}\frac{\partial V_{\nu}^{z}}{\partial u_{j}}. \tag{6}\]

Thus, the proposed NN model requires only the VAFs with \(\alpha=z\) as its input, which can be achieved by minimal modifications of the original NNP architecture. Note that we may extend the model to predict Born effective charges in the form of full tensors in future work. Assuming that the forces vary linearly with the electric field, the external forces acting on the ions can be calculated as

\[\Delta F_{j}^{\text{NN}}=Z_{zj}^{*}E_{z}. \tag{7}\]

The total forces are considered as the sum of the external forces and the values obtained by the conventional NNP:

\[F_{j}^{\text{Total}}=F_{j}^{\text{NNP}}+\Delta F_{j}^{\text{NN}}. \tag{8}\]

Thus, simulations of ion dynamics under an electric field can be performed. The loss function of the proposed NN model includes the errors in the macroscopic polarisation and the Born effective charges, in a manner similar to that of the NNP:

\[\Gamma=\alpha\sum_{n=1}^{N_{\text{Train}}}\frac{(P_{z,n}^{\text{NN}}-P_{z,n}^{\text{DFPT}})^{2}}{N_{\text{Train}}}+\beta\sum_{n=1}^{N_{\text{Train}}}\frac{\sum_{m=1}^{M_{n}}\sum_{j\in\{x,y,z\}}(Z_{zj,m}^{*,\text{NN}}-Z_{zj,m}^{*,\text{DFPT}})^{2}}{3M_{\text{Train}}}, \tag{9}\]

where \(N_{\text{Train}}\), \(M_{\text{Train}}\), and \(M_{n}\) indicate the total amount of data, the total number of atoms, and the number of atoms in structure \(n\), respectively. Here, training was executed by focusing on the errors in the Born effective charges (\(\alpha=0\) and \(\beta=1\)).

Figure 1: (a) Schematic of electronic polarisation in Li\({}_{3}\)PO\({}_{4}\) induced by an electric field along the \(z\)-axis. The yellow (green) clouds depict the increased (decreased) parts of the charge density differences, as visualised by the VESTA software [29]. (b) Schematic of the proposed NN model to predict Born effective charges.

We performed atomistic simulations based on the proposed NN model using the LAMMPS software [23] with our homemade interfaces.
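For illustration, a minimal Python sketch of the radial VAF of Eq. (3) with the cutoff of Eq. (5), and of the force correction of Eqs. (7)-(8), is given below. This is a simplified non-periodic version with our own function names; a production implementation (e.g., the LAMMPS interface mentioned above) would also handle periodic images and the angular term of Eq. (4):

```python
import numpy as np

def cutoff(r, rc):
    """Cosine cutoff f_c of Eq. (5)."""
    return np.where(r <= rc, 0.5 * (np.cos(np.pi * r / rc) + 1.0), 0.0)

def radial_vaf(pos, i, eta, rs, rc, axis=2):
    """Radial vector fingerprint V_i^{1,alpha} of Eq. (3) for atom i;
    axis=2 selects the z component used when the field is along z."""
    rij = np.delete(pos, i, axis=0) - pos[i]
    d = np.linalg.norm(rij, axis=1)
    g = np.exp(-eta * (d - rs) ** 2) * cutoff(d, rc)
    return float(np.sum(rij[:, axis] / d * g))

def total_forces(f_nnp, z_star_zrow, e_z):
    """Eqs. (7)-(8): F_total = F_NNP + Z*_{zj} E_z. With Z* in units of e and
    E_z in V/Angstrom, the added term is in eV/Angstrom."""
    return np.asarray(f_nnp) + np.asarray(z_star_zrow) * e_z
```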
Next, we describe the training dataset of Li\({}_{3}\)PO\({}_{4}\) used to construct the NNP. Note that we used parts of the structures and the corresponding total energies and atomic forces of the DFT calculations generated in Ref. [14]. The dataset includes pristine (Li\({}_{12}\)P\({}_{4}\)O\({}_{16}\)), Li vacancy (Li\({}_{11}\)P\({}_{4}\)O\({}_{16}\)), and Li\({}_{2}\)O vacancy (Li\({}_{22}\)P\({}_{8}\)O\({}_{31}\)) structures. For the pristine dataset, we used 14,001 snapshot structures from ab initio molecular dynamics (AIMD) at temperatures of 300, 2000, and 4000 K. The Li vacancy structures comprise 1,656 images from nudged elastic band (NEB) calculations. The Li\({}_{2}\)O vacancy structures comprise 5,000 snapshots of AIMD at 2000 K. In total, 20,657 structures were used to construct the NNP.

As the training dataset for the proposed NN model, we used 8,000 Li\({}_{12}\)P\({}_{4}\)O\({}_{16}\) snapshots from AIMD calculations at temperatures of 300 and 2000 K, 5,000 Li\({}_{11}\)P\({}_{4}\)O\({}_{16}\) images from NEB calculations, and 4,900 Li\({}_{22}\)P\({}_{8}\)O\({}_{31}\) snapshots from AIMD calculations at 2000 K. For these 17,900 structures, we performed density functional perturbation theory (DFPT) calculations [24] to obtain the Born effective charge tensors. In the DFPT calculations, we used the generalized gradient approximation with the Perdew-Burke-Ernzerhof functional [25], a plane-wave basis set with a 500 eV cutoff energy, a self-consistent field convergence criterion of \(10^{-6}\) eV, and \(k\)-point sampling meshes of \(6\times 6\times 4\) for Li\({}_{11}\)P\({}_{4}\)O\({}_{16}\) and \(4\times 4\times 2\) for Li\({}_{12}\)P\({}_{4}\)O\({}_{16}\) and Li\({}_{22}\)P\({}_{8}\)O\({}_{31}\). All calculations were performed using the Vienna Ab initio Simulation Package [26; 27]. We used different structural datasets for the two models because the calculated Born effective charges of strongly distorted structures resulted in unreasonably large values. Additionally, as described above, the proposed NN model predicts three components (one direction of the \(3\times 3\) tensor) of the Born effective charge for simplicity. Thus, we trained the proposed NN model separately using the Born effective charges in each direction, applying rotational manipulations to the structures and their tensor values to enhance the variety of the training dataset without performing additional DFPT calculations.

## III Results & Discussion

First, we constructed the NNP using a network architecture of 125 input nodes, two hidden layers with 15 nodes each, and one output node, [125-15-15-1], for each elemental species. The root-mean-square errors (RMSEs) of the total energies and atomic forces were 3.34 (2.91) meV/atom and 86.1 (87.9) meV/Å, respectively, for the randomly chosen 90% (10%) of the data used for training (testing). The obtained RMSE values are sufficiently small compared with those of other studies using NNPs [14; 15]. Please refer to Fig. S1 for a comparison between the NNP predictions and the DFT reference values. The hyperparameters used in the SFs are listed in Tables S1 and S2. Next, we constructed the proposed NN model for the Born effective charge predictor. We used the NN architecture [180-10-10-1], where the RMSEs of the training (randomly chosen 90%) and test (remaining 10%) data were 0.0378 \(e\)/atom and 0.0376 \(e\)/atom, respectively. Tables S3 and S4 present the hyperparameters used in the VAFs.

Figure 2: Comparison between DFPT and NN on the Born effective charges of (a)-(c) training and (d)-(f) test sets. The comparisons are shown separately for each elemental species: (a, d) Li, (b, e) P, and (c, f) O. The light (dark) colours indicate the (off-)diagonal components.
Figure 2 compares the predicted Born effective charges with their DFPT values. Evidently, all the points, including both the diagonal and off-diagonal components, were located near the diagonal lines, thus suggesting fairly good predictions. In addition, the distributions show that the diagonal components of the Born effective charges varied considerably from their formal charges, that is, Li: \(+1\), P: \(+3\), and O: \(-2\), owing to the structural changes. Moreover, the charge states of oxygen underwent the largest variation, despite the fact that Li is the mobile species in Li\({}_{3}\)PO\({}_{4}\). Furthermore, such variations can be observed in the off-diagonal components of the Born effective charges, although these values were typically small in the crystalline structure: the averages of \(\sqrt{\sum_{i\neq j}Z_{ij}^{*2}/6}\) for Li, P, and O were \(3.87\times 10^{-2}\), \(2.47\times 10^{-3}\), and \(1.59\times 10^{-1}\), respectively. In particular, the off-diagonal values of oxygen varied the most, between \(-1\) and \(+1.5\).

In the following dynamic calculations, we applied a uniform electric field of \(E_{z}=0.1\) V/Å in the \(z\) direction. Although the magnitude of this electric field may be excessively large compared with actual device operating conditions, we chose this value to magnify its effect within a feasible computational time. In addition, the errors in the external forces at this magnitude of the electric field can be estimated to be on the order of meV/Å, suggesting that the constructed NN model is sufficiently accurate. Using both the constructed NNP and the proposed NN model, we performed canonical ensemble (\(NVT\)) MD simulations to investigate ion motion under an electric field. For these MD simulations, the temperature and computation time were set to 800 K and 300 ps, respectively. Based on the prior knowledge that Li moves through vacancy sites, we used a crystalline Li\({}_{47}\)P\({}_{16}\)O\({}_{64}\) model, which contains one Li vacancy (V\({}_{\rm Li}\)) in the supercell. Note that the size of the simulation model is larger than that of the training dataset, that is, it has a lower V\({}_{\rm Li}\) concentration. In fact, Li ions seldom moved in MD simulations using the pristine model with the above settings, or using the Li vacancy model at temperatures lower than 800 K.

Figure 3(a) shows the mean square displacement (MSD) of each elemental species, calculated from the trajectories of the MD simulations without an electric field. The MSD of Li increased slightly with MD time, whereas those of P and O remained nearly zero, indicating that these elemental species were immobile. In the calculated MSDs under the electric field, shown in Fig. 3(b), the MSD of Li increased rapidly, indicating Li migration. By contrast, the MSDs of P and O remained small (\(<1\) Å\({}^{2}\)), as in the case without an electric field. Figures 3(c) and (d) show the MSDs of Li in each direction. We attribute the rapid growth of the total MSD under the electric field to the contributions from the \(z\)-direction, which is a physically reasonable result, as it is consistent with the direction of the electric field. In addition, the Li migration paths in crystalline Li\({}_{3}\)PO\({}_{4}\) are not always straight along the \(z\) direction, considering that Li moves along the lattice sites via the vacancy hopping mechanism. Hence, the MSDs along the \(x\)- and \(y\)-directions under the electric field exhibited more fluctuating behaviour compared with the case without an electric field.
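The MSDs in Fig. 3 follow from the standard definition; a minimal sketch (our own helper, with no averaging over time origins and unwrapped coordinates assumed) for one species:

```python
import numpy as np

def mean_square_displacement(traj):
    """MSD(t) = <|r_i(t) - r_i(0)|^2> averaged over atoms.
    traj: (n_frames, n_atoms, 3) array in Angstrom; returns (n_frames,) in Angstrom^2."""
    disp = traj - traj[0]
    return (disp ** 2).sum(axis=2).mean(axis=1)
```

Per-direction MSDs, as in Figs. 3(c) and (d), follow by restricting the sum to the corresponding Cartesian components.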
We performed NEB calculations to examine the changes in the potential energy profiles in the presence of the electric field. Here, the electrostatic energy that the moving V\({}_{\rm Li}\) acquires from the electric field, \(Z_{zz}^{*}E_{z}\Delta z\), is added to the total energy. For all the NEB calculations, we set the number of intermediate images to six. Figure 4(a) shows two migration paths of Li, in which Li moves primarily along the \(z\)-direction. The potential energy profiles obtained for the two paths are shown in Figs. 4(b) and (c). For both paths, we set the potential energy of the initial structure in Path-1 as the reference. In the case of Path-1, the potential energy barrier of Li migration (from images 1 to 8) was reduced from 0.157 to 0.0242 eV by the electric field. By contrast, the potential energy barrier in the opposite direction (images 8 to 1) increased from 0.444 to 0.485 eV. Similarly, in the case of Path-2, the potential energy barrier of Li migration from images 1 to 8 (8 to 1) decreased from 0.702 (0.410) to 0.579 (0.402) eV in the presence of the electric field. This directionality shows that the electric field tilts the potential energies and facilitates Li migration along its direction. In addition, we confirmed that this directionality was almost invisible along a path nearly perpendicular to the electric field (please refer to Fig. S2). The observed decrease in the potential energy barrier and the directionality of ion migration agree with previous NEB calculations of O defects in MgO using the modern theory of polarisation [28].

Figure 3: Calculated MSDs of the Li vacancy model (Li\({}_{47}\)P\({}_{16}\)O\({}_{64}\)). The MD simulations were performed at a temperature of 800 K (a) without an electric field and (b) with \(E_{z}=0.1\) V/Å, where the MSDs are shown separately for each element. The MSDs of Li are shown separately for the (c) \(x\) and \(y\) and (d) \(z\) components.

The atomic coordinates of the migrating Li and the surrounding P and O atoms with and without the electric field in Paths-1 and -2 are shown in Figs. 4(d) and (e), respectively. Note that, for comparison, we set the initial atomic positions of the migrating Li in the two cases to be identical. We found that the intervals between intermediate images varied according to the presence of the electric field, whereas the paths were similar. Figure 4(f) shows the atomic forces acting on the migrating Li in each NEB image as a function of its \(z\) coordinate. Evidently, the absolute values of the atomic forces along the \(z\)-direction decreased and increased in front of and behind the potential energy barrier, respectively. This indicates a decrease in the barrier height and, subsequently, an acceleration of the Li motion along the \(z\) direction. Moreover, we found that the atomic forces along the \(y\)-direction shifted negatively and positively in the presence of the electric field over Paths-1 and -2, respectively. Because these shifts correspond to the direction of Li motion along the \(y\)-axis, we consider that the external forces facilitated the movement of Li toward the final positions, although their contribution was not as large as that in the \(z\) direction. By contrast, the changes in the atomic forces along the \(x\) direction were minor because the migrating Li in these paths moved primarily along the \(z\) direction, followed by the \(y\) direction.
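The field correction to the NEB profiles amounts to adding the electrostatic work term \(Z_{zz}^{*}E_{z}\Delta z\) image by image. A minimal sketch, assuming the field-free image energies and the \(z\) coordinates of the migrating species are already available (function and variable names are ours):

```python
import numpy as np

def tilted_profile(energies, z_coords, z_star_zz, e_z=0.1):
    """Add the electrostatic work term Z*_zz * E_z * Δz to a field-free NEB
    energy profile. energies: (n_images,) in eV; z_coords: (n_images,) in Å;
    e_z: field in V/Å (0.1 V/Å as in the text); z_star_zz in units of e."""
    dz = np.asarray(z_coords) - z_coords[0]   # Δz of the migrating defect
    return np.asarray(energies) + z_star_zz * e_z * dz   # e * V/Å * Å -> eV

def forward_barrier(profile):
    """Potential energy barrier of the forward (image 1 -> 8) migration."""
    return float(profile.max() - profile[0])
```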
Subsequently, we performed additional MD simulations to further examine the effects of the external forces along the \(x\)- and \(y\)-directions on Li migration, that is, the contribution from the off-diagonal components of the Born effective charges with respect to the electric field in the \(z\)-direction. In these simulations, we considered only \(Z_{zz}^{*}\), while \(Z_{zx}^{*}\) and \(Z_{zy}^{*}\) were set to 0 to exclude their contributions. Thus, we obtained a considerably smaller MSD for Li than that shown in Fig. 3(b) (see Fig. S3). A comparison of the MD trajectory lines shown in Fig. S4 indicates pronounced Li migratory behaviour when all three terms are considered. These results suggest that the off-diagonal components slightly but effectively confined the potential energy surface of the Li migration paths and consequently enhanced Li motion. This enhancement did not appear when the charges were treated as scalar quantities because of the absence of external forces along the \(x\)- and \(y\)-directions. This also demonstrates the significance of using the Born effective charges to study ion behaviour under electric fields.

Figure 4: NEB calculations of the Li vacancy model (Li\({}_{47}\)P\({}_{16}\)O\({}_{64}\)). (a) Schematic of the Li vacancy migration paths. The potential energy profiles with \(E_{z}=0\) and \(E_{z}=0.1\) V/Å for (b) Path-1 and (c) Path-2. The atomic coordinates of the migrating Li and the neighbouring P and O atoms with \(E_{z}=0\) and \(E_{z}=0.1\) V/Å for (d) Path-1 and (e) Path-2. (f) Circles, triangles, and squares show the atomic forces on the migrating Li in the \(x\)-, \(y\)-, and \(z\)-directions, respectively, at each image as a function of its \(z\)-coordinate. The open and filled marks correspond to \(E_{z}=0\) and \(E_{z}=0.1\) V/Å, respectively.

Finally, we performed MD simulations of amorphous Li\({}_{3}\)PO\({}_{4}\) under the electric field using the proposed scheme. An amorphous structure was generated using the melt-quench approach, as described in Ref. [14], and the model contained 384 Li, 128 P, and 512 O atoms without specific defects. Please refer to Fig. S5(a) for the structural image. Here, we set the temperature to 600 K, which is lower than that used in the above cases. Without an electric field, the ions were displaced only to their optimal positions at the early stage of MD, as indicated by the MSDs shown in Fig. S5(b). By contrast, we observed considerably high mobility of Li under the electric field. The MSD value shown in Fig. S5(c) is higher than that shown in Fig. 3(b). Figure 5 shows the trajectory lines of each ion in these two cases. The results clearly show that the ions were immobile without an electric field, whereas the Li ions moved extensively along the \(z\) direction in the presence of the electric field. Notably, the P and O ions became relatively mobile in the amorphous structure compared with crystalline Li\({}_{3}\)PO\({}_{4}\). Furthermore, we found that Li ions moved across the entire region in the amorphous model, whereas Li hopping was restricted to the vacancy sites in the crystal. We also note that the Li ions moved more pronouncedly at 800 K, as shown in Fig. S6. These results suggest that the Li ions located at the local minima of the metastable amorphous structure are susceptible to the electric field and readily overcome their migration barriers.
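In the MD simulations, the field coupling enters as an additional external force obtained by contracting each atom's Born effective charge tensor with the field vector. The sketch below (ours; the index convention is illustrative, not the authors' code) also reproduces the comparison run in which the off-diagonal contributions are excluded.

```python
import numpy as np

def external_forces(z_star, e_field, diagonal_only=False):
    """z_star: (n_atoms, 3, 3) Born effective charge tensors (units of e);
    e_field: (3,) electric field in V/Å. Returns (n_atoms, 3) forces in eV/Å.
    diagonal_only=True keeps only the diagonal components, i.e. the setting
    with Z*_zx = Z*_zy = 0 used for the comparison MD run in the text."""
    zs = np.array(z_star, dtype=float)
    if diagonal_only:
        zs = np.where(np.eye(3, dtype=bool), zs, 0.0)
    return np.einsum('nij,j->ni', zs, np.asarray(e_field, dtype=float))

# Example: four formal-charge-like atoms under E_z = 0.1 V/Å
forces = external_forces(np.tile(np.eye(3), (4, 1, 1)), [0.0, 0.0, 0.1])
```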
## IV Conclusion

We proposed an NN model to predict the Born effective charges of ions based on their local atomic structures. We demonstrated the performance of the proposed NN model using Li\({}_{3}\)PO\({}_{4}\) as a prototype ion-conducting material, where the error of the constructed model reached 0.0376 \(e\)/atom. In combination with a conventional NN potential, MD simulations were performed under a uniformly applied electric field. The obtained results indicated an enhanced displacement of Li along the electric field, which is physically reasonable. In addition, we confirmed the lowering of the potential energy barriers in NEB calculations under an electric field. Furthermore, we found that the external forces arising from the off-diagonal terms of the Born effective charges slightly but effectively confined the potential energy surface of the Li migration paths and consequently enhanced Li motion. Finally, we examined the Li behaviour in the amorphous Li\({}_{3}\)PO\({}_{4}\) structure. We found that Li ions located at the local minima were susceptible to electric field effects and readily overcame the mobility barrier. These results suggest that the Born effective charge tensors, which depend on the local atomic structures, may be a suitable quantity for a detailed analysis of ion behaviour under external electric fields.

Acknowledgements. We thank Mr. Takanori Moriya for his contributions in the early stage of this study, and Editage (www.editage.com) for English language editing. This study was supported by JST CREST Programs "Novel electronic devices based on nanospaces near interfaces" and "Strong field nanodynamics at grain boundaries and interfaces in ceramics" and JSPS KAKENHI Grant Numbers 19H02544, 20K15013, 21H05552, 22H04607, 23H04100. Some of the calculations used in this study were performed using the computer facilities at the ISSP Supercomputer Center and Information Technology Center, The University of Tokyo, and the Institute for Materials Research, Tohoku University.

Figure 5: Calculated MD trajectory lines of (a, d) Li, (b, e) P, and (c, f) O in the amorphous Li\({}_{3}\)PO\({}_{4}\) model. The upward direction in the figure indicates the \(z\)-direction. The MD simulations are performed for 300 ps at a temperature of 600 K (a)-(c) without and (d)-(f) with the electric field.
2307.00134
Generalization Limits of Graph Neural Networks in Identity Effects Learning
Graph Neural Networks (GNNs) have emerged as a powerful tool for data-driven learning on various graph domains. They are usually based on a message-passing mechanism and have gained increasing popularity for their intuitive formulation, which is closely linked to the Weisfeiler-Lehman (WL) test for graph isomorphism, to which they have been proven equivalent in terms of expressive power. In this work, we establish new generalization properties and fundamental limits of GNNs in the context of learning so-called identity effects, i.e., the task of determining whether an object is composed of two identical components or not. Our study is motivated by the need to understand the capabilities of GNNs when performing simple cognitive tasks, with potential applications in computational linguistics and chemistry. We analyze two case studies: (i) two-letter words, for which we show that GNNs trained via stochastic gradient descent are unable to generalize to unseen letters when utilizing orthogonal encodings like one-hot representations; (ii) dicyclic graphs, i.e., graphs composed of two cycles, for which we present positive existence results leveraging the connection between GNNs and the WL test. Our theoretical analysis is supported by an extensive numerical study.
Giuseppe Alessio D'Inverno, Simone Brugiapaglia, Mirco Ravanelli
2023-06-30T20:56:38Z
http://arxiv.org/abs/2307.00134v3
# Generalization Limits of Graph Neural Networks in Identity Effects Learning

###### Abstract

Graph Neural Networks (GNNs) have emerged as a powerful tool for data-driven learning on various graph domains. They are usually based on a message-passing mechanism and have gained increasing popularity for their intuitive formulation, which is closely linked to the Weisfeiler-Lehman (WL) test for graph isomorphism, to which they have been proven equivalent in terms of expressive power. In this work, we establish new generalization properties and fundamental limits of GNNs in the context of learning so-called identity effects, i.e., the task of determining whether an object is composed of two identical components or not. Our study is motivated by the need to understand the capabilities of GNNs when performing simple cognitive tasks, with potential applications in computational linguistics and chemistry. We analyze two case studies: (i) two-letter words, for which we show that GNNs trained via stochastic gradient descent are unable to generalize to unseen letters when utilizing orthogonal encodings like one-hot representations; (ii) dicyclic graphs, i.e., graphs composed of two cycles, for which we present positive existence results leveraging the connection between GNNs and the WL test. Our theoretical analysis is supported by an extensive numerical study.

Graph neural networks, identity effects, generalization, encodings, dicyclic graphs, gradient descent.

## I Introduction

Graph Neural Networks (GNNs) [1] have emerged as prominent models for handling structured data, quickly becoming dominant in data-driven learning over several scenarios such as network analysis [2], molecule prediction [3] and generation [4], text classification [5], and traffic forecasting [6]. Since the appearance of the earliest GNN model [1], many variants have been developed to improve their prediction accuracy and generalization power. Notable examples include GraphSAGE [7], Graph Attention Networks [8], Graph Convolutional Networks (GCNs) [9], Graph Isomorphism Networks [10], and Graph Neural Diffusion (GRAND) [11]. Furthermore, as the original model was designed specifically for labeled undirected graphs [12], more complex neural architectures have been designed to handle different types of graph structures, such as directed graphs [13], temporal graphs [14], and hypergraphs [15]. For a comprehensive review see, e.g., [16].

Over the last decade, there has been growing attention to the theoretical analysis of GNNs. While approximation properties have been examined in different flavors [17, 18, 1, 19], most of the theoretical works in the literature have focused on the _expressive power_ of GNNs. From this perspective, the pioneering work of [10] and [20] laid the foundation for the standard analysis of GNN expressivity, linking the message-passing iterative algorithm (common to most GNN architectures) to the _first-order Weisfeiler-Lehman (1-WL) test_ [21], a popular coloring algorithm used to determine whether two graphs are (possibly) isomorphic or not. Since then, the expressive power of GNNs has been evaluated with respect to the 1-WL test or its higher-order variants (called _\(k\)-WL tests_) [20], as well as other variants suited to detect particular substructures [22, 23].

The assessment of the _generalization capabilities_ of neural networks has always been crucial for the development of efficient learning algorithms.
Several complexity measures have been proposed over the past few decades to establish reliable generalization bounds, such as the Vapnik-Chervonenkis (VC) dimension [24], Rademacher complexity [25, 26], and Betti numbers [27]. The generalization properties of GNNs have been investigated using these measures. In [28], the VC dimension of the original GNN model was established; this result was later extended to message-passing GNNs with piecewise polynomial activation functions in [29]. Other generalization bounds for GNNs were derived using the Rademacher complexity [30], through a Probably Approximately Correct (PAC) Bayesian approach [31], or using random sampling on the graph nodes [32].

An alternative approach for assessing the generalization capabilities of neural networks is based on investigating their ability to learn specific _cognitive tasks_ [33, 34, 35], which have long been of primary interest, as neural networks were originally designed to emulate functional brain activities. Among the various cognitive tasks, the linguistics community has shown particular interest in investigating so-called _identity effects_, i.e., the task of determining whether objects are formed by two identical components or not [36, 37]. To provide a simple and illustrative example, we can consider an experiment in which the words \(\mathsf{AA},\mathsf{BB},\mathsf{CC}\) are assigned the label "good", while \(\mathsf{AB},\mathsf{BC},\mathsf{AC}\) are labelled as "bad". Now, imagine a scenario where a subject is presented with new test words, such as \(\mathsf{XX}\) or \(\mathsf{XY}\). Thanks to the human capacity for abstraction, the subject will immediately be able to classify the new words correctly, even though the letters \(\mathsf{X}\) and \(\mathsf{Y}\) were not part of the training set.

Besides its relevance in linguistics, the analysis of identity effects can serve as an intuitive and effective tool to evaluate the generalization capabilities of neural networks in a variety of specific tasks. These tasks encompass the identification of equal patterns in natural language processing [38] as well as molecule classification or regression [3]. In the context of molecule analysis, the exploitation of molecular symmetries plays a crucial role, as it can be exploited to retrieve molecular orientations [39] or to determine properties of molecular positioning [40]. Furthermore, the existence of different symmetries in interacting molecules can lead to different reactions. Recently, it has been shown in [41] that Multilayer Perceptrons (MLPs) and Recurrent Neural Networks (RNNs) cannot learn identity effects via Stochastic Gradient Descent (SGD), under certain conditions on the encoding utilized to represent the components of objects. This finding raises a fundamental question that forms the core focus of our paper: "_Do GNNs possess the capability to learn identity effects?_"

Motivated by this research question, this work investigates the generalization limits and capabilities of GNNs when learning identity effects. Our contributions are the following:

1. extending the analysis of [41], GNNs are shown to be _incapable_ of learning identity effects via SGD training under sufficient conditions determined by the existence of a suitable transformation \(\tau\) of the input space (Theorem III.1); an application to the problem of classifying identical two-letter words is provided by Theorem III.3 and supported by numerical experiments in §IV-B;
2. on the other hand, GNNs are shown to be _capable_ of learning identity effects in terms of binary classification of _dicyclic graphs_, i.e., graphs composed of two cycles of different or equal length (Corollary III.6); a numerical investigation of the gap between our theoretical results and the practical performance of GNNs is provided in §IV-C.

The paper is structured as follows. §II begins by providing a brief overview of fundamental graph theory notation. We then introduce the specific GNN formulation we focus on in our analysis and the Weisfeiler-Lehman test, and revisit the framework of rating impossibility theorems for invariant learners. In §III, we present and prove our main theoretical results. §IV showcases the numerical experiments conducted to validate our findings. Finally, in §V, we provide concluding remarks and outline potential avenues for future research.

## II Notation and background

We start by introducing the notation and background concepts that will be used throughout the paper.

### _Graph theory basics_

A node-attributed graph \(G\) is an object defined by a triplet \(G=(V,E,\alpha)\). \(V\) is the set of _nodes_ or _vertices_ \(v\), where \(v\) can be identified with an element of \(\mathbb{N}:=\{0,1,2,\ldots\}\). \(E\) is the set of edges \(e_{u,v}\), where \(e_{u,v}=(u,v)\in V\times V\). The term \(\alpha:V\rightarrow\mathbb{R}^{k}\) is the function assigning a _node feature_ (or _vertex feature_) \(\alpha_{v}\) to every node \(v\) in the graph. The _number of nodes_ of a graph \(G\) is denoted by \(N:=|V|\). All node features can be stacked in a _feature matrix_ \(\mathbf{X}_{G}\in\mathbb{R}^{N\times k}\). The _adjacency matrix_ \(\mathbf{A}\) is defined by \(A_{ij}=1\) if \(e_{ij}\in E\) and \(A_{ij}=0\) otherwise. The _neighborhood_ of a node \(v\) is denoted by \(\mathcal{N}_{v}=\{u\mid e_{u,v}\in E\}\).

### _Graph Neural Networks_

Graph Neural Networks (GNNs) are a class of connectionist models that aim to learn functions on graphs, or on graph/node pairs. Intuitively, a GNN learns how to represent the nodes of a graph by vectorial representations (called _hidden states_), giving an encoding of the information stored in the graph. In its general form [1, 42], for each graph \(G=(V,E,\alpha)\in\mathcal{G}\), where \(\mathcal{G}\) is a node-attributed graph domain, a GNN is defined by the following recursive _updating scheme_:

\[h_{v}^{(t+1)}=\text{UPDATE}\big{(}h_{v}^{(t)},\text{AGGREGATE}(\{\!\!\{h_{u}^{(t)}\,|\,u\in\mathcal{N}_{v}\}\!\!\})\big{)}, \tag{1}\]

for all \(v\in V\) and \(t=1,\ldots,T\), where \(T\) is the number of layers of the GNN and \(\{\!\!\{\cdot\}\!\!\}\) denotes a multiset. Here UPDATE and AGGREGATE are functions that can be defined by learnable or non-learnable schemes. Popular GNN models like GraphSAGE [7], GCN [9], and Graph Isomorphism Networks [10] are based on this updating scheme. The model terminates with a READOUT function, chosen according to the nature of the task; for instance, global average, min, or sum pooling, followed by a trainable multilayer perceptron, are typical choices in the case of graph-focused tasks. At a high level, we can formalize a GNN as a function \(g:\mathcal{G}\rightarrow\mathbb{R}^{r}\), where \(\mathcal{G}\) is a set of node-attributed graphs and \(r\) is the dimension of the output, which depends on the type of task at hand.

The updating scheme we choose as a reference for our analysis follows [20]. This model has been proven to match the expressive power of the Weisfeiler-Lehman test [20] (see also Theorem II.1 below), and can therefore be considered a good representative of the message-passing GNN class. The hidden state \(h_{v}^{(t+1)}\in\mathbb{R}^{h}\) of a node \(v\) at message-passing iteration \(t+1\), for \(t=1,\ldots,T-1\), is defined as

\[h_{v}^{(t+1)}=\sigma\big{(}W_{\text{upd}}^{(t+1)}h_{v}^{(t)}+W_{\text{agg}}^{(t+1)}h_{\mathcal{N}_{v}}^{(t)}+b^{(t+1)}\big{)}, \tag{2}\]

where \(h_{\mathcal{N}_{v}}^{(t)}=\text{POOL}\{\!\!\{h_{u}^{(t)}\,|\,u\in\mathcal{N}_{v}\}\!\!\}\), \(\sigma:\mathbb{R}^{h}\rightarrow\mathbb{R}^{h}\) is an element-wise activation function, and POOL is the aggregating operator on the neighboring nodes' features. The aggregating operator can be defined as a non-learnable function, such as the sum, the mean, or the minimum, across the hidden features of the neighbors. For each node, the initial hidden state is initialized as \(h_{v}^{(0)}=\alpha_{v}\in\mathbb{R}^{k}\). The learnable parameters of the GNN can be summarized as \(\Theta:=(W_{\text{upd}}^{(0)},W_{\text{agg}}^{(0)},b^{(0)},W_{\text{upd}}^{(1)},W_{\text{agg}}^{(1)},b^{(1)},\ldots,W_{\text{upd}}^{(T)},W_{\text{agg}}^{(T)},b^{(T)})\), with \(W_{\text{upd}}^{(0)},W_{\text{agg}}^{(0)}\in\mathbb{R}^{h\times k}\), \(W_{\text{upd}}^{(t)},W_{\text{agg}}^{(t)}\in\mathbb{R}^{h\times h}\) for \(t=1,\ldots,T\), and \(b^{(t)}\in\mathbb{R}^{h}\) for \(t=0,\ldots,T\).
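A minimal NumPy sketch of the update rule (2) (our own illustration, with sum or mean as the non-learnable POOL operator and ReLU as the activation later used in the experiments):

```python
import numpy as np

def gnn_layer(h, adj, w_upd, w_agg, b, pool="sum"):
    """One iteration of Eq. (2). h: (n_nodes, d_in) hidden states;
    adj: (n_nodes, n_nodes) 0/1 adjacency matrix; w_upd, w_agg: (d_out, d_in);
    b: (d_out,). Returns the (n_nodes, d_out) updated hidden states."""
    h_nbr = adj @ h                                   # sum over the neighbours
    if pool == "mean":
        deg = np.maximum(adj.sum(axis=1, keepdims=True), 1)
        h_nbr = h_nbr / deg                           # mean over the neighbours
    return np.maximum(h @ w_upd.T + h_nbr @ w_agg.T + b, 0.0)   # ReLU
```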
### _The Weisfeiler-Lehman test_

The _first-order Weisfeiler-Lehman test_ (in short, _1-WL test_) [21] is one of the most popular isomorphism tests for graphs, based on an iterative coloring scheme. The coloring algorithm is applied in parallel to two input graphs, giving a color partition of the nodes as output. If the partitions match, then the graphs are possibly isomorphic, while if they do not match, then the graphs are certainly non-isomorphic. Note that the test is not conclusive in the case of a positive answer, as the graphs may still be non-isomorphic; nevertheless, the 1-WL test provides an accurate isomorphism test for a large class of graphs [43]. The coloring is carried out by an iterative algorithm which takes as input a graph \(G=(V,E,\alpha)\) and, at each iteration, computes a _node coloring_ \(c^{(t)}(v)\in\mathcal{C}\) for each node \(v\in V\), where \(\mathcal{C}\subseteq\mathbb{N}\) is a subset of the natural numbers representing colors. The algorithm is sketched in the following.

1. At iteration 0, in the case of labeled graphs, the node color initialization is based on the vertex features according to a specific hash function \(\operatorname{HASH}_{0}:\mathbb{R}^{k}\to\mathcal{C}\); namely, \(c^{(0)}(v)=\operatorname{HASH}_{0}(\alpha(v))\), for all \(v\in V\). For unlabeled graphs, a node color initialization is provided, usually setting every color equal to a given initial color \(c^{(0)}\in\mathcal{C}\).
2. For any iteration \(t>0\), we set \[c^{(t)}(v)=\operatorname{HASH}((c^{(t-1)}(v),\{\!\!\{c^{(t-1)}(n)\,|\,n\in\mathcal{N}_{v}\}\!\!\})),\] for all \(v\in V\), where \(\operatorname{HASH}\) injectively maps the above color-multiset pair to a unique value in \(\mathcal{C}\).

The algorithm terminates when the number of colors between two consecutive iterations does not change, i.e., when the cardinalities of the color sets \(\{c^{(t-1)}(v)\,|\,v\in V\}\) and \(\{c^{(t)}(v)\,|\,v\in V\}\) are equal.
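The 1-WL iteration can be sketched in a few lines of Python (our own implementation of the scheme above, with the injective HASH realized by interning each color-multiset pair):

```python
def wl_refine(adj_list, init_colors, max_iter=100):
    """adj_list[v]: list of neighbours of node v; init_colors[v]: c^(0)(v).
    Returns the stable coloring once the number of colors stops changing."""
    colors = list(init_colors)
    for _ in range(max_iter):
        keys = [(colors[v], tuple(sorted(colors[u] for u in adj_list[v])))
                for v in range(len(adj_list))]
        table = {k: i for i, k in enumerate(sorted(set(keys)))}  # injective HASH
        new_colors = [table[k] for k in keys]
        if len(set(new_colors)) == len(set(colors)):   # partition not refined
            return new_colors
        colors = new_colors
    return colors
```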
We conclude by recalling two results establishing the equivalence between the expressive power of GNNs and that of the 1-WL test, which will be instrumental for our analysis. A first result, proved in [10], characterizes the equivalence on graph-level tasks for GNNs with generic message-passing layers satisfying suitable conditions. Another characterization, reported below, is due to [20] and states the equivalence at the node coloring level, referring to the particular model defined in (2).

**Theorem II.1** (See [20, Theorem 2]).: _Let \(G=(V,E,\alpha)\) be a graph with initial coloring \(c^{(0)}(v)\in\mathbb{R}\) for each node \(v\in V\) (so that \(c^{(0)}\in\mathbb{R}^{|V(G)|}\)). Then, for all \(t\geq 0\) there exists a GNN of the form (2) such that the hidden feature vector \(h^{(t)}\in\mathbb{R}^{|V(G)|}\) produced by the GNN at layer \(t\) coincides with the color vector \(c^{(t)}\in\mathbb{R}^{|V(G)|}\) produced by the 1-WL test at iteration \(t\), i.e., \(c^{(t)}\equiv h^{(t)}\)._

### _Rating impossibility for invariant learners_

We now recall the framework of rating impossibility from [41], which we will then apply to the case of identity effects learning. In general, we assume to train a _learning algorithm_ to perform a rating assignment task, where the rating \(r\) is a real number. Let \(\mathcal{I}\) be the set of all possible inputs \(x\) (which could be, for instance, elements of \(\mathbb{R}^{d}\)). Our learning algorithm is trained on a dataset \(D\subseteq\mathcal{I}\times\mathbb{R}\) consisting of a finite set of input-rating pairs \((x,r)\). Let \(\mathcal{D}\) be the set of all possible datasets with inputs in \(\mathcal{I}\). The learning algorithm is trained via a suitable optimization method, such as Stochastic Gradient Descent (SGD) or Adaptive Moment Estimation (Adam) [44], which for any given training dataset \(D\) outputs an optimized set of parameters \(\Theta=\Theta(D)\in\mathbb{R}^{p}\); this, in turn, defines a model \(f=f(\Theta,\cdot)\). The rating prediction on a novel input \(x\in\mathcal{I}\) is then given by \(r=f(\Theta,x)\). In summary, a learning algorithm can be thought of as a map \(L:\mathcal{D}\times\mathcal{I}\to\mathbb{R}\), defined as \(L(D,x)=f(\Theta(D),x)\).

Given the stochastic nature of neural network training, we adopt a nondeterministic point of view. Hence, we require the notion of _equality in distribution_. Two random variables \(X,Y\) taking values in \(\mathbb{R}^{k}\) are said to be _equal in distribution_ (denoted by \(X\stackrel{d}{=}Y\)) if \(\mathbb{P}(X\leq x)=\mathbb{P}(Y\leq x)\) for all \(x\in\mathbb{R}^{k}\), where the inequalities hold componentwise. With this notation, rating impossibility means that \(L(D,x_{1})\stackrel{d}{=}L(D,x_{2})\) for two inputs \(x_{1}\neq x_{2}\) drawn from \(\mathcal{I}\setminus D\). Sufficient conditions for rating impossibility are identified by the following theorem from [41] (here slightly adapted using equality in distribution), which involves the existence of an auxiliary transformation \(\tau\) of the inputs.

**Theorem II.2** (Rating impossibility for invariant learners, [41, Theorem 1]).: _Consider a dataset \(D\subseteq\mathcal{I}\times\mathbb{R}\) and a transformation \(\tau:\mathcal{I}\to\mathcal{I}\) such that_

(i) \(\tau(D)\stackrel{d}{=}D\) _(invariance of the data)._1

Footnote 1: By definition, \(\tau(D):=\{(\tau(x),r):(x,r)\in D\}\).

_Then, for any learning algorithm \(L:\mathcal{D}\times\mathcal{I}\to\mathbb{R}\) and any input \(x\in\mathcal{I}\) such that_

(ii) \(L(\tau(D),\tau(x))\stackrel{d}{=}L(D,x)\) _(invariance of the algorithm),_

_we have \(L(D,\tau(x))\stackrel{d}{=}L(D,x)\)._
This theorem states that, under the invariance of the data and of the algorithm, the learner cannot assign different ratings to an input \(x\) and its transformed version \(\tau(x)\). This leads to rating impossibility when \(\tau(x)\neq x\) and \(x,\tau(x)\in\mathcal{I}\setminus D\).

We conclude by recalling some basic notions on SGD training. Given a dataset \(D\), we aim to find parameters \(\Theta\) that minimize an objective function of the form

\[F(\Theta)=\mathcal{L}((f(\Theta,x),r):(x,r)\in D),\quad\Theta\in\mathbb{R}^{p},\]

where \(\mathcal{L}\) is a (possibly regularized) loss function. We assume \(F\) to be differentiable over \(\mathbb{R}^{p}\) in order for its gradients to be well defined. Given a collection of subsets \((D_{i})_{i=0}^{k-1}\) with \(D_{i}\subseteq D\) (usually referred to as training batches, which can be either deterministically or randomly generated), we define \(F_{D_{i}}\) as the function \(F\) where the loss is evaluated only on the data in \(D_{i}\). In SGD-based training, we randomly initialize \(\Theta_{0}\) and iteratively compute

\[\Theta_{i+1}=\Theta_{i}-\eta_{i}\frac{\partial F_{D_{i}}}{\partial\Theta}(\Theta_{i}), \tag{3}\]

for \(i=0,1,\ldots,k-1\), where the sequence of step sizes \((\eta_{i})_{i=0}^{k-1}\) is assumed to be either deterministic or random and independent of \((D_{i})_{i=0}^{k-1}\). Note that, since each \(\Theta_{i}\) is a random vector, the output of the learning algorithm \(L(D,x)=f(\Theta_{k},x)\) is a random variable.

## III Theoretical analysis

In this section we present our theoretical analysis. More specifically, in §III-A we establish a rating impossibility theorem for GNNs under certain technical assumptions related to the invariance of the training data under a suitable transformation \(\tau\) of the inputs; we then illustrate an application to the case study of identity effects learning on a two-letter word dataset in §III-A1. In §III-B we prove that symmetric dicyclic graphs can be distinguished from asymmetric ones by the 1-WL test, and consequently by a GNN.

### _What GNNs cannot learn: rating impossibility theorem_

We assume the input space to be of the form \(\mathcal{I}=\mathbb{R}^{d}\times\mathbb{R}^{d}\) and the learning algorithm to be of the form

\[L(D,x)=f(B,Gu+Hv,Hu+Gv),\quad\forall x=(u,v)\in\mathcal{I}, \tag{4}\]

where \(\Theta=(B,G,H)\) are trainable parameters and \(G,H\in\mathbb{R}^{d\times d}\). This class of learning algorithms perfectly fits the formulation given in [20], where the updating scheme is the one defined by (2). In this case,

\[G=W_{\text{upd}}^{(1)},\quad H=W_{\text{agg}}^{(1)},\]
\[B=\left(b^{(1)},W_{\text{upd}}^{(2)},W_{\text{agg}}^{(2)},b^{(2)},\ldots,W_{\text{upd}}^{(N)},W_{\text{agg}}^{(N)},b^{(N)}\right).\]

The learner defined by equation (4) mimics the behaviour of several GNN architectures, GCN included. In fact, when the graph is composed of only two nodes, the convolution reduces to a weighted sum of the hidden states of the two nodes, as the sketch below illustrates. This property will have practical relevance in Theorem III.3 and in its experimental realization in §IV-B.
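For a concrete view of why a two-node graph yields the form (4), consider the first message-passing layer of (2) before the activation (a sketch with our own naming):

```python
import numpy as np

def two_node_preactivations(u, v, G, H):
    """First-layer pre-activations for a two-node graph with features u, v:
    each node keeps its own state through W_upd = G and receives its single
    neighbour's state through W_agg = H, exactly the pair appearing in (4)."""
    return G @ u + H @ v, H @ u + G @ v
```

All deeper layers and the readout are absorbed into the remaining parameters \(B\).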
In the following result we identify sufficient conditions on the dataset \(D\) and on the training procedure that guarantee the invariance of GNN-type models of the form (4) trained via SGD under a suitable class of transformations \(\tau\) (hence verifying condition (ii) of Theorem II.2).

**Theorem III.1** (Invariance of GNN-type models trained via SGD).: _Assume the input space to be of the form \(\mathcal{I}=\mathbb{R}^{d}\times\mathbb{R}^{d}\). Let \(\tau:\mathcal{I}\rightarrow\mathcal{I}\) be a linear transformation defined by \(\tau(x)=(u,\tau_{2}(v))\) for any \(x=(u,v)\in\mathcal{I}\), where \(\tau_{2}:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}\) is also linear. Moreover, assume that_

* _the matrix_ \(T_{2}\in\mathbb{R}^{d\times d}\) _associated with the transformation_ \(\tau_{2}\) _is orthogonal and symmetric;_
* _the dataset_ \(D=\{((u_{i},v_{i}),r_{i})\}_{i=1}^{n}\) _is invariant under the transformation_ \(\tau_{2}\otimes\tau_{2}\)_, i.e.,_ \[(u_{i},v_{i})=\big{(}\tau_{2}(u_{i}),\tau_{2}(v_{i})\big{)},\quad\forall i=1,\ldots,n.\tag{5}\]

_Suppose \(k\) iterations of SGD as defined in (3) are used to determine parameters \(\Theta_{k}=(B_{k},G_{k},H_{k})\) with objective function_

\[F(\Theta)=\sum_{i=1}^{n}\ell\big{(}f(B,Gu_{i}+Hv_{i},Hu_{i}+Gv_{i}),r_{i}\big{)}+\lambda\mathcal{R}(B),\]

_for some \(\lambda\geq 0\), with \(\Theta=(B,G,H)\), and where \(\ell\), \(f\) and \(\mathcal{R}\) are real-valued functions such that \(F\) is differentiable. Suppose that the random initializations of the parameters \(B\), \(G\) and \(H\) are independent and that the distributions of \(G_{0}\) and \(H_{0}\) are invariant with respect to right-multiplication by \(T_{2}\). Then, the learner \(L\) defined by \(L(D,x)=f(B_{k},G_{k}u+H_{k}v,H_{k}u+G_{k}v)\), for \(x=(u,v)\), satisfies \(L(D,x)\stackrel{d}{=}L(\tau(D),\tau(x))\)._

Proof.: Given a batch \(D_{i}\subseteq D\), define \(J_{i}:=\{j\in\{1,\ldots,n\}:((u_{j},v_{j}),r_{j})\in D_{i}\}\) and

\[F_{D_{i}}(\Theta)=\sum_{j\in J_{i}}\ell(f(B,Gu_{j}+Hv_{j},Hu_{j}+Gv_{j}),r_{j})+\lambda\mathcal{R}(B).\]

Moreover, consider an _auxiliary objective function_, defined by

\[\tilde{F}_{D_{i}}(B,G_{1},H_{1},H_{2},G_{2})=\sum_{j\in J_{i}}\ell(f(B,G_{1}u_{j}+H_{1}v_{j},H_{2}u_{j}+G_{2}v_{j}),r_{j})+\lambda\mathcal{R}(B).\]

Observe that \(F_{D_{i}}(\Theta)=\tilde{F}_{D_{i}}(B,G,H,H,G)\). Moreover,

\[\frac{\partial F_{D_{i}}}{\partial B}(\Theta)=\frac{\partial\tilde{F}_{D_{i}}}{\partial B}(B,G,H,H,G) \tag{6}\]
\[\frac{\partial F_{D_{i}}}{\partial G}(\Theta)=\frac{\partial\tilde{F}_{D_{i}}}{\partial G_{1}}(B,G,H,H,G)+\frac{\partial\tilde{F}_{D_{i}}}{\partial G_{2}}(B,G,H,H,G) \tag{7}\]
\[\frac{\partial F_{D_{i}}}{\partial H}(\Theta)=\frac{\partial\tilde{F}_{D_{i}}}{\partial H_{1}}(B,G,H,H,G)+\frac{\partial\tilde{F}_{D_{i}}}{\partial H_{2}}(B,G,H,H,G) \tag{8}\]

Moreover, replacing \(D_{i}\) with its transformed version \(\tau(D_{i})=\{((u_{j},\tau_{2}(v_{j})),r_{j})\}_{j\in J_{i}}\), we see that \(F_{\tau(D_{i})}(\Theta)=\tilde{F}_{D_{i}}(B,G,HT_{2},H,GT_{2})\). This leads to

\[\frac{\partial F_{\tau(D_{i})}}{\partial B}(\Theta)=\frac{\partial\tilde{F}_{D_{i}}}{\partial B}(B,G,HT_{2},H,GT_{2}) \tag{9}\]
\[\frac{\partial F_{\tau(D_{i})}}{\partial G}(\Theta)=\frac{\partial\tilde{F}_{D_{i}}}{\partial G_{1}}(B,G,HT_{2},H,GT_{2})+\frac{\partial\tilde{F}_{D_{i}}}{\partial G_{2}}(B,G,HT_{2},H,GT_{2})T_{2}^{T} \tag{10}\]
\[\frac{\partial F_{\tau(D_{i})}}{\partial H}(\Theta)=\frac{\partial\tilde{F}_{D_{i}}}{\partial H_{1}}(B,G,HT_{2},H,GT_{2})T_{2}^{T}+\frac{\partial\tilde{F}_{D_{i}}}{\partial H_{2}}(B,G,HT_{2},H,GT_{2}) \tag{11}\]
Now, denoting \(\ell=\ell(f,r)\) and \(f=f(B,u,v)\), we have

\[\frac{\partial\tilde{F}_{D_{i}}}{\partial G_{1}}=\sum_{j\in J_{i}}\frac{\partial\ell}{\partial f}\frac{\partial f}{\partial u}u_{j}^{T},\qquad\frac{\partial\tilde{F}_{D_{i}}}{\partial H_{1}}=\sum_{j\in J_{i}}\frac{\partial\ell}{\partial f}\frac{\partial f}{\partial u}v_{j}^{T},\]
\[\frac{\partial\tilde{F}_{D_{i}}}{\partial H_{2}}=\sum_{j\in J_{i}}\frac{\partial\ell}{\partial f}\frac{\partial f}{\partial v}u_{j}^{T},\qquad\frac{\partial\tilde{F}_{D_{i}}}{\partial G_{2}}=\sum_{j\in J_{i}}\frac{\partial\ell}{\partial f}\frac{\partial f}{\partial v}v_{j}^{T}.\]

In addition, thanks to assumption (5), we have \(u_{j}^{T}T_{2}^{T}=u_{j}^{T}\) and \(v_{j}^{T}T_{2}^{T}=v_{j}^{T}\) for all \(j\in J_{i}\). Thus, we obtain

\[\frac{\partial\tilde{F}_{D_{i}}}{\partial G_{1}}T_{2}^{T}=\frac{\partial\tilde{F}_{D_{i}}}{\partial G_{1}},\qquad\frac{\partial\tilde{F}_{D_{i}}}{\partial H_{1}}T_{2}^{T}=\frac{\partial\tilde{F}_{D_{i}}}{\partial H_{1}}, \tag{12}\]
\[\frac{\partial\tilde{F}_{D_{i}}}{\partial H_{2}}T_{2}^{T}=\frac{\partial\tilde{F}_{D_{i}}}{\partial H_{2}},\qquad\frac{\partial\tilde{F}_{D_{i}}}{\partial G_{2}}T_{2}^{T}=\frac{\partial\tilde{F}_{D_{i}}}{\partial G_{2}}. \tag{13}\]

Now, let \((B_{0}^{\prime},G_{0}^{\prime},H_{0}^{\prime})\stackrel{d}{=}(B_{0},G_{0},H_{0})\) and let \((B_{i}^{\prime},G_{i}^{\prime},H_{i}^{\prime})\), for \(i=1,\ldots,k\), be the sequence generated by SGD applied to the transformed data \(\tau(D)\). By assumption, we have \(B_{0}^{\prime}\stackrel{d}{=}B_{0}\), \(G_{0}\stackrel{d}{=}G_{0}^{\prime}\stackrel{d}{=}G_{0}^{\prime}T_{2}\) and \(H_{0}\stackrel{d}{=}H_{0}^{\prime}T_{2}\). We now show by induction that \(B_{i}^{\prime}\stackrel{d}{=}B_{i}\), \(G_{i}^{\prime}\stackrel{d}{=}G_{i}\stackrel{d}{=}G_{i}^{\prime}T_{2}\), and \(H_{i}\stackrel{d}{=}H_{i}^{\prime}T_{2}\) for all indices \(i=1,\ldots,k\).
Using equations (6) and (9) and the inductive hypothesis, we have

\[B_{i+1}^{\prime}=B_{i}^{\prime}-\eta_{i}\frac{\partial F_{\tau(D_{i})}}{\partial B}(B_{i}^{\prime},G_{i}^{\prime},H_{i}^{\prime})=B_{i}^{\prime}-\eta_{i}\frac{\partial\tilde{F}_{D_{i}}}{\partial B}(B_{i}^{\prime},G_{i}^{\prime},H_{i}^{\prime}T_{2},H_{i}^{\prime},G_{i}^{\prime}T_{2})\stackrel{d}{=}B_{i}-\eta_{i}\frac{\partial\tilde{F}_{D_{i}}}{\partial B}(B_{i},G_{i},H_{i},H_{i},G_{i})=B_{i}-\eta_{i}\frac{\partial F_{D_{i}}}{\partial B}(B_{i},G_{i},H_{i})=B_{i+1}.\]

Similarly, using equations (7), (10) and (13) and the inductive hypothesis, we see that

\[G_{i+1}^{\prime}=G_{i}^{\prime}-\eta_{i}\frac{\partial F_{\tau(D_{i})}}{\partial G}(B_{i}^{\prime},G_{i}^{\prime},H_{i}^{\prime})=G_{i}^{\prime}-\eta_{i}\left(\frac{\partial\tilde{F}_{D_{i}}}{\partial G_{1}}(B_{i}^{\prime},G_{i}^{\prime},H_{i}^{\prime}T_{2},H_{i}^{\prime},G_{i}^{\prime}T_{2})+\frac{\partial\tilde{F}_{D_{i}}}{\partial G_{2}}(B_{i}^{\prime},G_{i}^{\prime},H_{i}^{\prime}T_{2},H_{i}^{\prime},G_{i}^{\prime}T_{2})T_{2}^{T}\right)\]
\[=G_{i}^{\prime}-\eta_{i}\left(\frac{\partial\tilde{F}_{D_{i}}}{\partial G_{1}}(B_{i}^{\prime},G_{i}^{\prime},H_{i}^{\prime}T_{2},H_{i}^{\prime},G_{i}^{\prime}T_{2})+\frac{\partial\tilde{F}_{D_{i}}}{\partial G_{2}}(B_{i}^{\prime},G_{i}^{\prime},H_{i}^{\prime}T_{2},H_{i}^{\prime},G_{i}^{\prime}T_{2})\right)\]
\[\stackrel{d}{=}G_{i}-\eta_{i}\left(\frac{\partial\tilde{F}_{D_{i}}}{\partial G_{1}}(B_{i},G_{i},H_{i},H_{i},G_{i})+\frac{\partial\tilde{F}_{D_{i}}}{\partial G_{2}}(B_{i},G_{i},H_{i},H_{i},G_{i})\right)=G_{i}-\eta_{i}\frac{\partial F_{D_{i}}}{\partial G}(B_{i},G_{i},H_{i})=G_{i+1}.\]

One proceeds analogously for \(H_{i+1}^{\prime}\) using equations (8), (11) and (12). Similarly, one also sees that \(G_{i+1}^{\prime}T_{2}\stackrel{d}{=}G_{i+1}\) and \(H_{i+1}^{\prime}T_{2}\stackrel{d}{=}H_{i+1}\) by combining the previous equations with the symmetry and orthogonality of \(T_{2}\). In summary, we have

\[L(D,x)=f(B_{k},G_{k}u+H_{k}v,H_{k}u+G_{k}v)\stackrel{d}{=}f(B_{k}^{\prime},G_{k}^{\prime}u+H_{k}^{\prime}T_{2}v,H_{k}^{\prime}u+G_{k}^{\prime}T_{2}v)=L(\tau(D),\tau(x)),\]

which concludes the proof.

**Remark III.2** (On the assumptions of Theorem III.1).: At first glance, the assumptions of Theorem III.1 might seem quite restrictive, especially the assumption on the invariance of the distributions of \(G_{0}\) and \(H_{0}\) with respect to right-multiplication by the symmetric orthogonal matrix \(T_{2}\). Yet, this hypothesis holds, e.g., when the entries of \(G_{0}\) and \(H_{0}\) are independently and identically distributed according to a centered normal distribution, thanks to the rotational invariance of isotropic random Gaussian vectors (see, e.g., [45, Proposition 3.3.2]). This is the case in common initialization strategies such as Xavier initialization [46]. In addition, the numerical results presented in §IV suggest that rating impossibility might hold in more general settings, such as when the model \(f\) includes ReLU activations (hence, when \(F\) has points of nondifferentiability) or when models are trained via Adam as opposed to SGD.
#### III-A1 Application to identity effects

As a practical application of Theorem III.1 to identity effects, we consider the problem of classifying identical two-letter words of the English alphabet \(\mathcal{A}:=\{\mathsf{A},\mathsf{B},\ldots,\mathsf{Z}\}\), already mentioned in §I and following [41]. Consider a training set \(D\) formed by two-letter words that contain neither \(\mathsf{Y}\) nor \(\mathsf{Z}\). Words are assigned the label 1 if they are composed of identical letters and 0 otherwise. Our goal is to verify whether a learning algorithm is capable of generalizing this pattern correctly to words containing the letters \(\mathsf{Y}\) or \(\mathsf{Z}\). The transformation \(\tau\) of Theorem III.1 is defined by

\[\tau(\mathsf{x}\mathsf{Y})=\mathsf{x}\mathsf{Z},\;\tau(\mathsf{x}\mathsf{Z})=\mathsf{x}\mathsf{Y},\;\text{and}\;\tau(\mathsf{x}\mathsf{y})=\mathsf{x}\mathsf{y}, \tag{14}\]

for all letters \(\mathsf{x},\mathsf{y}\in\mathcal{A}\), with \(\mathsf{y}\neq\mathsf{Y},\mathsf{Z}\). Note that this transformation is of the form \(\tau=I\otimes\tau_{2}\), where \(I\) is the identity map. Hence, it fits the setting of Theorem III.1. Moreover, since \(D\) contains neither \(\mathsf{Y}\) nor \(\mathsf{Z}\), we have \(\tau(D)=D\). Hence, condition (i) of Theorem II.2 is satisfied.

In order to represent letters as vectors of \(\mathbb{R}^{d}\), we need to use a suitable _encoding_. Its choice is crucial in determining the properties of the transformation matrix \(T_{2}\) associated with \(\tau_{2}\), needed to apply Theorem III.1. Formally, an encoding of an alphabet \(\mathcal{A}\) is a set of vectors \(\mathcal{E}\subseteq\mathbb{R}^{d}\), of the same cardinality as \(\mathcal{A}\), with which letters can be associated. In our case, \(|\mathcal{A}|=26=|\mathcal{E}|\). We say that an encoding is _orthogonal_ if it is an orthonormal set of \(\mathbb{R}^{d}\). For example, the popular one-hot encoding \(\mathcal{E}=\{e_{i}\}_{i=1}^{26}\subseteq\mathbb{R}^{26}\), i.e., the canonical basis of \(\mathbb{R}^{26}\), is an orthogonal encoding. In this setting, every word is modeled as a graph consisting of two nodes connected by a single unweighted and undirected edge. Each node \(v\) is labeled with a node feature \(\alpha(v)\in\mathbb{R}^{d}\), corresponding to a letter's encoding. An example is depicted in Figure 1.

Fig. 1: Graph modeling of a two-letter word: a vertex feature \(\alpha(v)\in\mathbb{R}^{d}\) is attached to each node \(v\) of a two-node undirected graph, according to a given encoding \(\mathcal{E}\) of the English alphabet \(\mathcal{A}\). In this figure, \(\mathcal{E}\) is the one-hot encoding.

**Theorem III.3** (Inability of GNNs to classify identical two-letter words outside the training set).: _Let \(\mathcal{E}\subseteq\mathbb{R}^{26}\) be an orthogonal encoding of the English alphabet \(\mathcal{A}\) and let \(L\) be a learner obtained by training a GNN of the form (2) via SGD to classify identical two-letter words. Assume that the words in the training set \(D\) contain neither the letter \(\mathsf{Y}\) nor the letter \(\mathsf{Z}\). Then, \(L\) assigns the same rating (in distribution) to any word of the form \(\mathsf{xy}\) where \(\mathsf{y}\in\{\mathsf{Y},\mathsf{Z}\}\), i.e., \(L(D,\mathsf{x}\mathsf{Y})\stackrel{d}{=}L(D,\mathsf{x}\mathsf{Z})\) for any \(\mathsf{x}\in\mathcal{A}\). Hence, it is unable to generalize identity effects outside the training set._

Proof.: As discussed above, the transformation \(\tau\) defined by (14) is of the form \(\tau=I\otimes\tau_{2}\).
Moreover, the matrix associated with the linear transformation \(\tau_{2}\) is of the form \(T_{2}=B^{-1}PB\), where \(B\) is the change-of-basis matrix from the orthonormal basis associated with the encoding \(\mathcal{E}\) to the canonical basis of \(\mathbb{R}^{26}\) (in particular, \(B\) is orthogonal and \(B^{-1}=B^{T}\)) and \(P\) is the permutation matrix that switches the last two entries of a vector, i.e., using block-matrix notation,

\[P=\begin{bmatrix}I&0&0\\ 0&0&1\\ 0&1&0\end{bmatrix},\quad I\in\mathbb{R}^{24\times 24}.\]

Hence, \(T_{2}\) is orthogonal and symmetric, and therefore fits the framework of Theorem III.1. On the other hand, as discussed in §III-A, every GNN of the form (2) is a model of the form (4). Thus, Theorem III.1 yields \(L(D,\mathsf{xy})\stackrel{d}{=}L(\tau(D),\tau(\mathsf{xy}))\) for all letters \(\mathsf{x},\mathsf{y}\in\mathcal{A}\). In particular, \(L(D,\mathsf{x}\mathsf{Y})\stackrel{d}{=}L(\tau(D),\mathsf{x}\mathsf{Z})\), which corresponds to condition (ii) of Theorem II.2. Recalling that \(\tau(D)=D\), condition (i) also holds. Hence, we can apply Theorem II.2 and conclude the proof.
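The orthogonality and symmetry of \(T_{2}\) are easy to check numerically for the one-hot encoding, where \(B\) is the identity and \(T_{2}\) reduces to the permutation \(P\) swapping the coordinates of \(\mathsf{Y}\) and \(\mathsf{Z}\) (our own sketch):

```python
import numpy as np

# One-hot encoding of the English alphabet: letter -> canonical basis vector
ALPHABET = [chr(ord('A') + i) for i in range(26)]
ENC = {c: np.eye(26)[i] for i, c in enumerate(ALPHABET)}

# T2 for the one-hot encoding: the permutation swapping Y and Z
T2 = np.eye(26)
T2[[24, 25]] = T2[[25, 24]]          # Y has index 24, Z has index 25

assert np.allclose(T2, T2.T)                  # symmetric
assert np.allclose(T2 @ T2.T, np.eye(26))     # orthogonal
assert np.allclose(T2 @ ENC['Y'], ENC['Z'])   # tau_2 maps Y to Z
```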
### _What GNNs can learn: identity effects on dicyclic graphs_

We now analyze the expressivity of GNNs in learning identity effects related to the _topology_ of the graphs in the dataset. This novel setting requires us to design the formulation of our problem _ex novo_. In fact, we no longer focus on the feature matrix \(\mathbf{X}_{G}\) of a graph, but on its adjacency matrix \(\mathbf{A}\), which contains all the topological information. Here we focus on a particular class of graphs, which we call _dicyclic graphs_. A dicyclic graph is a graph composed of an \(m\)-cycle and an \(n\)-cycle linked by a single edge. Since a dicyclic graph is uniquely determined by the lengths of the two cycles, we can identify it with the equivalence class \([m,n]\) over the set of pairs \((a,b)\), \(a,b\in\mathbb{N}\), defined as \([m,n]:=\{(m,n),(n,m)\}\). A dicyclic graph \([m,n]\) is _symmetric_ if \(m=n\) and _asymmetric_ otherwise. In this section we provide an analysis of the expressive power of GNNs when learning identity effects on dicyclic graphs (i.e., classifying whether a dicyclic graph is symmetric or not).

We start by proving a lemma that shows how information propagates through the nodes of a cycle during the 1-WL test iterations when one of the nodes has a different initial color with respect to all the other nodes.

**Lemma III.4** (1-WL test on \(m\)-cycles).: _Consider an \(m\)-cycle in which the vertices are numbered from \(0\) to \(m-1\) clockwise, an initial coloring \(c^{(0)}=[0,1,\ldots,1]^{T}\in\mathbb{N}^{m}\) (vector indexing begins from 0, and the vector is meant to be circular, i.e., \(c^{(0)}(m)=c^{(0)}(0)\)), and define the function \(\operatorname{HASH}\) as_

\[\begin{cases}\operatorname{HASH}(0,\{\!\!\{j,k\}\!\!\})=0\\ \operatorname{HASH}(i,\{\!\!\{j,k\}\!\!\})=i&\text{if }j\neq k\\ \operatorname{HASH}(i,\{\!\!\{j,k\}\!\!\})=i+1&\text{if }j=k\end{cases}\]

_for \(i\geq 1\). Then, the color partition stabilizes after \(\lfloor\frac{m}{2}\rfloor\) iterations, producing \(\lfloor\frac{m}{2}\rfloor+1\) colors._

Proof.: The color 0 of node 0 acts as a wavefront propagating symmetrically in both directions along the cycle. By induction on \(t\), the coloring satisfies \(c^{(t)}(i)=\min(i,m-i)\) for \(i\leq t\) or \(i\geq m-t\), while all the remaining nodes share the color \(t+1\): a node whose two neighbours carry distinct colors keeps its color, whereas a node whose two neighbours carry equal colors receives a new color. Hence, the region of uniformly colored nodes shrinks by one node from each side at every iteration.

Termination of the 1-WL test. At iteration \(\lfloor\frac{m}{2}\rfloor-1\) we have

\[\begin{cases}c^{(\lfloor\frac{m}{2}\rfloor-1)}(i)=i&\text{if }0\leq i\leq\lfloor\frac{m}{2}\rfloor-1\\ c^{(\lfloor\frac{m}{2}\rfloor-1)}(i)=\lfloor\frac{m}{2}\rfloor&\text{if }i=\lfloor\frac{m}{2}\rfloor\text{ or }i=\lceil\frac{m}{2}\rceil\\ c^{(\lfloor\frac{m}{2}\rfloor-1)}(i)=m-i&\text{if }\lceil\frac{m}{2}\rceil+1\leq i<m\end{cases}\]

and one further iteration leaves the number of colors unchanged. This concludes the proof.

A graphical representation of Lemma III.4 can be found in Figure 2. This lemma represents the core of the next theorem's proof, which establishes the ability of the 1-WL test to classify dicyclic graphs with identical cycles. Intuitively, if we have a dicyclic graph where the node colors are uniformly initialized, one step of the 1-WL test yields a coloring depending entirely on the number of neighbours of each node. In a dicyclic graph \([m,n]\) we always have \(m+n-2\) nodes of degree two and \(2\) nodes of degree three, so \(c^{(1)}(i)=1\) for all 2-degree nodes \(i\), and \(c^{(1)}(j)=0\) for the two 3-degree nodes \(j\). Hence, each cycle of the dicyclic graph satisfies the initial coloring hypothesis of Lemma III.4.

**Theorem III.5** (1-WL test on dicyclic graphs).: _The 1-WL test gives the same color to the 3-degree nodes of a uniformly colored dicyclic graph \([m,n]\) (i.e., \(c^{(0)}=0\in\mathbb{N}^{m+n}\)) if and only if \(m=n\). Therefore, the 1-WL test can classify symmetric dicyclic graphs._

Proof.: After one iteration of the 1-WL test, regardless of the symmetry of the dicyclic graph, we obtain a coloring in which only the 3-degree nodes have a different color, whose value we set to 0. We can therefore split the coloring vector \(c^{(1)}\in\mathbb{N}^{m+n}\) into two subvectors, namely \(c^{(1)}=[(c^{(1)}_{1})^{T},(c^{(1)}_{2})^{T}]^{T}\), corresponding to the two cycles, where \(c^{(1)}_{1}(0)\) and \(c^{(1)}_{2}(0)\) correspond to the 3-degree nodes. We treat the symmetric and the asymmetric cases separately.

The symmetric case. We let \(c^{(0)}_{1}=c^{(0)}_{2}=c^{(0)}_{0}\), with \(c^{(0)}_{0}=[0,1,\ldots,1]\). In this case, we run the 1-WL test in parallel on both vectors \(c^{(t)}_{1}\) and \(c^{(t)}_{2}\), where the HASH function of Lemma III.4 is extended to the 3-degree nodes as \(\operatorname{HASH}(0,\{\!\!\{0,j,k\}\!\!\})=0\). Therefore, for each \(t\geq 0\),

\[c^{(t+1)}_{0}(0)=\operatorname{HASH}(c^{(t)}_{0}(0),\{\!\!\{c^{(t)}_{0}(0),c^{(t)}_{0}(1),c^{(t)}_{0}(m-1)\}\!\!\})=0.\]

Thanks to Lemma III.4 we obtain \(c^{(\lfloor\frac{m}{2}\rfloor)}_{1}=c^{(\lfloor\frac{m}{2}\rfloor)}_{2}\), which is a stable coloring for the whole graph, as the color partition is not refined anymore.

The asymmetric case. Without loss of generality, we can assume \(m=\text{length}(c^{(t)}_{1})\neq\text{length}(c^{(t)}_{2})=m+h\) for some \(h>0\). We also assume for now that \(m\) is odd (the case of \(m\) even will be briefly discussed later). We extend the HASH function from Lemma III.4 to colors \(j,k>\lfloor\frac{m}{2}\rfloor\).
For \(j>\lfloor\frac{m}{2}\rfloor\) or \(k>\lfloor\frac{m}{2}\rfloor\) we define

\[\begin{cases}\operatorname{HASH}(0,\{\!\!\{j,k\}\!\!\})=\infty\\ \operatorname{HASH}(i,\{\!\!\{j,k\}\!\!\})=\lfloor\frac{m}{2}\rfloor+i&\text{if }j\neq k,\ i\leq\lfloor\frac{m}{2}\rfloor\\ \operatorname{HASH}(i,\{\!\!\{j,k\}\!\!\})=\lfloor\frac{m}{2}\rfloor+i+1&\text{if }j=k,\ i\leq\lfloor\frac{m}{2}\rfloor\end{cases}.\]

Running the 1-WL test in parallel on the two cycles and computing the coloring vectors \(c^{(\lfloor\frac{m}{2}\rfloor+1)}_{1}\) and \(c^{(\lfloor\frac{m}{2}\rfloor+1)}_{2}\) up to iteration \(\lfloor\frac{m}{2}\rfloor+1\), for \(i=\lfloor\frac{m}{2}\rfloor+1\) we have \(c_{2}(i)=\lfloor\frac{m}{2}\rfloor+1\). Therefore, given the extension of the HASH function just provided, this new color starts to backpropagate over the indices \(i<\lfloor\frac{m}{2}\rfloor+1\), \(i>m-h-\lfloor\frac{m}{2}\rfloor-1\), until it reaches the index \(0\). As a consequence, there exists an iteration index \(T\) such that \(c^{(T)}_{2}(0)=\operatorname{HASH}(0,\{\!\!\{j,k^{*}\}\!\!\})\) with \(k^{*}>\lfloor\frac{m}{2}\rfloor\) and, finally, \(c^{(T)}_{2}(0)=\infty\), giving \(c^{(T)}_{1}(0)\neq c^{(T)}_{2}(0)\), as claimed. The case in which \(m\) is even works analogously, but we have to modify the HASH function in a different way to preserve injectivity. In particular, for \(j,k\leq m/2\), we define

\[\begin{cases}\operatorname{HASH}(i,\{\!\!\{j,k\}\!\!\})=\frac{m}{2}&\text{if }j=k,\ i=\frac{m}{2}\\ \operatorname{HASH}(i,\{\!\!\{j,k\}\!\!\})=\frac{m}{2}+1&\text{if }j\neq k,\ i=\frac{m}{2}\end{cases}.\]

This concludes the proof.

Theorem III.5 establishes in a deterministic way the power of the 1-WL test in distinguishing between symmetric and asymmetric dicyclic graphs, given a sufficient number of iterations, which is directly linked to the maximum cycle length in the considered domain. Examples of 1-WL stable colorings on dicyclic graphs are presented in Figure 3. Employing well-known results in the literature concerning the expressive power of GNNs (see [10, 20] and in particular Theorem II.1), we can prove the main result of this subsection on the classification power of GNNs on the domain of dicyclic graphs.

Fig. 2: Graphical illustration of Lemma III.4: a 6-cycle reaches a stable coloring in \(\lfloor\frac{6}{2}\rfloor=3\) steps with \(\lfloor\frac{6}{2}\rfloor+1=4\) colors. Numbers are used to identify nodes.

**Corollary III.6** (GNNs can classify symmetric dicyclic graphs).: _There exist a GNN of the form (2) and a READOUT function able to classify symmetric dicyclic graphs._

Proof.: Let \([m,n]\) be a dicyclic graph and \(c^{(T)}\) be the stable coloring of \([m,n]\) produced by the 1-WL test with initial uniform coloring. By Theorem III.5 the graph can be correctly classified by the 1-WL test, i.e., by its stable coloring. By Theorem II.1, there exists a GNN \(f_{\Theta}\) that can reproduce the stable coloring of each input graph at each iteration step \(t\). Let \(c^{(T)}\) be the stable coloring computed by such a GNN for a dicyclic graph \([m,n]\), and let \((u,v)\) be the 3-degree nodes of the dicyclic graph. Then, the READOUT can be modeled as

\[\text{READOUT}(c^{(T)})=\begin{cases}1&\text{if }c^{(T)}(u)=c^{(T)}(v)\\ 0&\text{otherwise}\end{cases}.\]

With such a READOUT, the GNN assigns the correct rating to the dicyclic graph (i.e., 1 if the graph is symmetric and 0 otherwise).
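Combining a dicyclic-graph builder with the 1-WL routine sketched in §II-C makes Theorem III.5 and the READOUT of Corollary III.6 directly testable (our own sketch; `wl_refine` is the function sketched earlier):

```python
def dicyclic_adj_list(m, n):
    """Adjacency list of [m, n]: an m-cycle (nodes 0..m-1) and an n-cycle
    (nodes m..m+n-1) joined by one edge between the 3-degree nodes 0 and m."""
    adj = [set() for _ in range(m + n)]
    for i in range(m):
        adj[i].add((i + 1) % m); adj[(i + 1) % m].add(i)
    for i in range(n):
        a, b = m + i, m + (i + 1) % n
        adj[a].add(b); adj[b].add(a)
    adj[0].add(m); adj[m].add(0)            # bridge edge
    return [sorted(s) for s in adj]

def is_symmetric(m, n):
    """READOUT of Corollary III.6: compare stable colors of the 3-degree nodes."""
    adj = dicyclic_adj_list(m, n)
    colors = wl_refine(adj, [0] * (m + n))  # uniform initial coloring
    return colors[0] == colors[m]
```

By Theorem III.5, `is_symmetric(m, n)` returns `True` exactly when \(m=n\).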
**Remark III.7** (The gap between theory and practice in Corollary III.6).: Corollary III.6 shows that GNNs are powerful enough to match the expressive power of the 1-WL test for the classification of symmetric dicyclic graphs (as established by Theorem III.5). However, it is worth underlining that this result only proves the _existence_ of a GNN model able to perform this task. In contrast to the results presented in §III-A, this corollary does not mention any training procedure. Nevertheless, the numerical experiments in §IV-C show that GNNs able to classify symmetric dicyclic graphs _can_ be trained in practice, although achieving generalization outside the training set is not straightforward and depends on the GNN architecture.

## IV Numerical results

This section presents the results of experimental tasks designed to validate our theorems. We analyze the consistency between theoretical and numerical findings, highlighting the significance of specific hypotheses and addressing potential limitations of the theoretical results.

### _Experimental Setup_

We consider two different models for our analysis:

* The Global Additive Pooling GNN (_Gconv-glob_) applies sum pooling at the end of the message-passing convolutional layers [7]. In the two-letter words setting, the resulting vector \(h_{\text{glob}}\in\mathbb{R}^{h}\) is processed by a linear layer, while in the dicyclic graphs setting an MLP is employed. A sigmoid activation function is applied at the end.
* The Difference GNN (_Gconv-diff_) takes the difference between the hidden states of the two nodes in the graph (in the two-letter words setting) or between the hidden states of the 3-degree nodes (in the dicyclic graphs setting) after the message-passing convolutional layers. The resulting vector \(h_{\text{diff}}\in\mathbb{R}^{h}\) is then fed into a final linear layer, followed by the application of a sigmoid activation function.

The choice of the final READOUT part is driven by empirical observations of its effectiveness on the two different tasks; the two heads are sketched below. Training is performed on an Intel(R) Core(TM) i7-9800X processor running at 3.80GHz with 31GB of RAM, along with a GeForce GTX 1080 Ti GPU unit. The Python code is available at [https://github.com/AleDinve/gmn_identity_effects.git](https://github.com/AleDinve/gmn_identity_effects.git).
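In NumPy terms, the two readout heads differ only in how the node states are collapsed into a single vector (a sketch with our own naming; `h` collects the final hidden states of all nodes):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def readout_glob(h, w, b):
    """Gconv-glob head: global additive pooling, then linear layer + sigmoid."""
    return sigmoid(w @ h.sum(axis=0) + b)

def readout_diff(h, pair, w, b):
    """Gconv-diff head: difference of the states of the two designated nodes
    (the two letters, or the two 3-degree nodes), then linear layer + sigmoid."""
    u, v = pair
    return sigmoid(w @ (h[u] - h[v]) + b)
```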
#### IV-B2 Vertex feature encodings

In our experiments, we consider four different encodings of the English alphabet, following the framework outlined in §III-A. Each encoding consists of a set of vectors drawn from \(\mathbb{R}^{26}\).

* _One-hot encoding_: This encoding assigns a vector from the canonical basis to each letter: A is encoded as \(e_{1}\), B as \(e_{2}\), ..., and Z as \(e_{26}\).
* _Haar encoding_: This encoding assigns to each letter the columns of a \(26\times 26\) orthogonal matrix drawn from the orthogonal group \(\text{O}(26)\) according to the Haar distribution [47].
* _Distributed encoding_: This encoding assigns a random combination of 26 bits to each letter. In this binary encoding, only \(j\) bits are set to 1, while the remaining \(26-j\) bits are set to 0. In our experiments, we set \(j=6\).
* _Gaussian encoding_: This encoding assigns samples from the multivariate normal distribution \(\mathcal{N}(0,I)\), where \(0\in\mathbb{R}^{n}\) and \(I\in\mathbb{R}^{n\times n}\). In our experiments, we set \(n=16\).

Fig. 3: Stable 1-WL coloring for different types of dicyclic graphs: as stated in Theorem III.5, 3-degree nodes have the same color in symmetric dicyclic graphs, and different colors in the asymmetric ones.

Observe that only the one-hot and the Haar encodings are orthogonal (see §III-A1) and hence satisfy the assumption of Theorem III.3. On the other hand, the distributed and the Gaussian encodings do not fall within the setting of Theorem III.3.

We run 40 trials for each model (i.e., Gconv-glob or Gconv-diff, defined in §IV-A) with \(l\) layers (ranging from 1 to 3). In each trial, a different training set is randomly generated. The models are trained for 5000 epochs using the Adam optimizer with a learning rate of \(\lambda=0.0025\), minimizing the binary cross-entropy loss. The hidden state dimension is set to \(d=64\), and Rectified Linear Units (ReLUs) are used as activation functions. The numerical results are shown in Figures 4-5, where we propose two different types of plots:

* On the top row, we compare the ratings obtained using the four adopted encodings. The first two words, AA and a randomly generated word with nonidentical letters, denoted xy, are selected from the training set to showcase the training accuracy. The remaining words are taken from \(D_{\text{test}}\), allowing assessment of the generalization capabilities of the encoding scheme outside the training set. The bars represent the mean across trials, while the segments at the center of each bar represent the standard deviation.
* On the bottom row, we show the loss functions with respect to the test set over the training epochs for each encoding. The lines represent the average, while the shaded areas represent the standard deviation.

Our numerical findings indicate that the rating impossibility theorem holds true for the one-hot encoding and the Haar encoding. However, notable differences in behavior emerge for the other two encodings. The 6-bit distributed encoding exhibits superior performance across all experiments, demonstrating higher rating accuracy and better loss convergence. The Gaussian encoding yields slightly inferior results, yet still showcases some generalization capability. It is important to note that, despite variations in experimental settings such as architecture and optimizer (specifically, the use of ReLU activations and the Adam optimizer), the divergent behavior among the considered encodings remains consistent. This highlights the critical role of the transformation matrix \(T_{2}\) within the hypothesis outlined in Theorem III.3. 
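For reference, the four encodings described above can be generated, for instance, as in the following sketch. The sign-corrected QR factorization of a Gaussian matrix is one standard recipe for sampling from the Haar distribution on \(\text{O}(26)\) (cf. [47]); the remaining constructions follow the definitions above, with columns playing the role of letter vectors.

```python
import numpy as np

rng = np.random.default_rng(0)

def one_hot_encoding():
    # Columns of the identity matrix: A -> e_1, ..., Z -> e_26.
    return np.eye(26)

def haar_encoding():
    # Columns of an orthogonal matrix sampled from the Haar distribution
    # on O(26), via sign-corrected QR of a Gaussian matrix.
    q, r = np.linalg.qr(rng.standard_normal((26, 26)))
    return q * np.sign(np.diag(r))

def distributed_encoding(j=6):
    # Each letter is a random binary vector with exactly j ones.
    enc = np.zeros((26, 26))
    for col in range(26):
        enc[rng.choice(26, size=j, replace=False), col] = 1.0
    return enc

def gaussian_encoding(n=16):
    # Each letter is an i.i.d. sample from N(0, I_n).
    return rng.standard_normal((n, 26))
```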
### _Case study #2: dicyclic graphs_

We now consider the problem of classifying symmetric dicyclic graphs, introduced in §III-B. In Corollary III.6 we proved the existence of GNNs able to classify symmetric dicyclic graphs. In this section, we assess whether such GNNs can be computed via training (see also Remark III.7). With this aim, we consider two experimental settings based on different choices of training and test sets: an _extraction task_ and an _extrapolation task_, summarized in Figures 8 and 10, respectively, and described in detail below. Each task involves running 25 trials for the Gconv-glob and Gconv-diff models defined in §IV-A. The number of layers in each model is determined by the specific task. The models are trained over 5000 epochs using a learning rate of \(\lambda=0.001\). We employ the Adam optimizer, minimizing the binary cross-entropy, and incorporate the AMSGrad variant [48] to enhance training stability given the large number of layers. The hidden state dimension is set to \(d=100\), and ReLU activation functions are utilized. The results presented in Figures 6, 8, and 10 should be interpreted as follows: each circle represents a dicyclic graph \([m,n]\); the color of the circle corresponds to the rating, while the circle's radius represents the standard deviation.

#### IV-C1 1-WL test performance

In Theorem III.5 we showed that the 1-WL test can classify symmetric dicyclic graphs. This holds true regardless of the length of the longer cycle, provided that a sufficient number of iterations is performed. The results in Figure 6 show that the 1-WL test indeed achieves perfect classification accuracy in \(n_{\max}\) iterations, where \(n_{\max}\) is the maximum length of a cycle in the dataset, in accordance with Theorem III.5.

#### IV-C2 Extraction task

In this task, we evaluate the capability of GNNs to generalize to unseen data, specifically when the minimum length of cycles in the test dataset is smaller than the maximum length of those in the training dataset. More specifically, the training set \(D_{\text{train}}\) consists of pairs \([m,n]\) where \(3\leq m,n\leq n_{\max}\) and \(m,n\neq k\) with \(3\leq k\leq n_{\max}\), while the test set \(D_{\text{test}}\) comprises pairs \([k,a]\) with \(3\leq a\leq n_{\max}\). Figure 7 illustrates this setting. In our experiments, we set \(n_{\max}=8\) and consider \(k\) values of 7, 6, and 5. The number of GNN layers is \(l=n_{\max}\). The numerical results are presented in Figure 8. We observe that the Gconv-diff model achieves perfect performance in our experiments (standard deviation values are not reported because they are negligibly small), showing consistency with the theoretical setting. On the other hand, the Gconv-glob model demonstrates good, but not perfect, performance on the test set. A critical point in our numerical examples seems to be \(k=5\), which falls in the middle range between the minimum and maximum cycle lengths in the training set (3 and 8, respectively).

Fig. 4: Numerical results for the rating task on the two-letter words dataset using Gconv-glob with \(l=1,2,3\) layers. The distributed and Gaussian encodings, which deviate from the framework outlined in Theorem III.1, exhibit superior performance compared to the other encodings, which are themselves orthogonal and hence make the transformation matrix orthogonal and symmetric.

Fig. 5: Numerical results for the rating task on the two-letter words dataset using Gconv-diff with \(l=1,2,3\) layers. The same observations as in Figure 4 apply here as well.

Fig. 6: Perfect classification of symmetric dicyclic graphs by \(n_{\max}\) iterations of the 1-WL test.

Fig. 7: Graphical illustration of the extraction task. In this example, \(n_{\max}=6\) and \(k=5\). 
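In code, the extraction split just described can be generated as follows; this is a sketch in the \([m,n]\) pair notation, with graphs then materialized as in the dicyclic-graph sketch given after Corollary III.6.

```python
def extraction_split(n_max=8, k=7):
    # Extraction task: hold out every dicyclic graph containing a k-cycle.
    D_train = [(m, n) for m in range(3, n_max + 1)
                      for n in range(3, n_max + 1)
                      if m != k and n != k]
    D_test = [(k, a) for a in range(3, n_max + 1)]
    return D_train, D_test
```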
This particular value is closer to the minimum length, indicating a relatively unbalanced scenario. Overall, the different performance of Gconv-diff and Gconv-glob on the extraction task shows that, despite the theoretical existence result proved in Corollary III.6, the choice of architecture is crucial for achieving successful generalization.

#### IV-C3 Extrapolation task

In this task, we assess GNNs' ability to generalize to unseen data with cycle lengths exceeding the maximum length in the training dataset. Specifically, the training set \(D_{\text{train}}\) comprises pairs \([m,n]\) where \(3\leq m,n\leq n_{\max}\), while the test set \(D_{\text{test}}\) consists of pairs \([n_{\max}+k,n^{\prime}]\) with \(0<k\leq g\) and \(3\leq n^{\prime}\leq n_{\max}+g\). Figure 9 illustrates the extrapolation task. In our experiments, we set \(n_{\max}=8\) and consider \(g\) values of 1, 2, and 3. The number of GNN layers is \(l=n_{\max}+g\). Numerical results are presented in Figure 10.

In the extraction task, both models achieved perfect training accuracy. Conversely, in the extrapolation task, the Gconv-glob model struggles to classify the training set accurately, especially when the number of layers is equal to 9. This behavior may be attributed to the homogeneous nature of sum pooling at the end of the message passing, as it does not take into account the role of the 3-degree nodes (which play a key role in our theory, as illustrated by Theorem III.5 and Corollary III.6). On the other hand, the Gconv-diff model consistently achieves perfect accuracy over the training set and perfect generalization for \(g=1\), showing once again the importance of architecture choice in practice. However, when \(g\geq 2\) there is a noticeable region of misclassification for pairs \([m,n]\) where \(m,n\geq n_{\max}\). This behavior could be explained by the limited capacity of the hidden states, but the optimization process might also play a significant role.

Moreover, for \(g\geq 2\) the numerical results of the extrapolation task resemble the rating impossibility phenomenon observed in the two-letter words framework. However, it is important to note that, at least for the Gconv-diff model, we observe significantly different ratings between graphs \([m,n_{\max}+g]\) with \(m<n_{\max}\) and graphs \([n_{\max}+i,n_{\max}+j]\) with \(i,j>0\). In contrast, in the two-letter words framework ratings typically do not exhibit such a consistent and distinguishable pattern.

Fig. 8: Extraction task performed by different GNN models, namely Gconv-glob (left) and Gconv-diff (right). We set \(n_{\max}=8\), \(l=8\) and, from top to bottom, \(k=7,6,5\).

Fig. 9: Graphical illustration of the extrapolation task. In this example, \(n_{\max}=5\) and \(g=2\).

Fig. 10: Extrapolation task performed by different GNN models, namely Gconv-glob (left) and Gconv-diff (right). We set \(n_{\max}=8\) and, from top to bottom, \((l,g)=(9,1),(10,2),(11,3)\). 
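Analogously, the extrapolation split can be generated as in the following sketch, consistent with the set definitions above.

```python
def extrapolation_split(n_max=8, g=2):
    # Extrapolation task: train on all pairs with cycle lengths up to n_max;
    # test on graphs whose longer cycle exceeds n_max by at most g.
    D_train = [(m, n) for m in range(3, n_max + 1)
                      for n in range(3, n_max + 1)]
    D_test = [(n_max + k, a) for k in range(1, g + 1)
                             for a in range(3, n_max + g + 1)]
    return D_train, D_test
```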
## V Conclusions

This work extensively investigates the generalization capabilities of GNNs when learning identity effects, through a combination of theoretical and experimental analysis. From the theoretical perspective, in Theorem III.3 we established that, under mild assumptions, GNNs cannot learn identity effects when orthogonal encodings are used in a specific two-letter word classification task. On the positive side, in Corollary III.6 we showed the existence of GNNs able to successfully learn identity effects on dicyclic graphs, thanks to the expressive power of the Weisfeiler-Lehman test (see Theorem III.5). The experimental results strongly support these theoretical findings and provide valuable insights into the problem. In the case of two-letter words, our experiments highlight the key influence of encoding orthogonality on misclassification behavior. Our experiments on dicyclic graphs demonstrate the importance of architecture choice in achieving generalization.

Several directions of future research naturally stem from our work. First, while Theorem III.3 identifies sufficient conditions for rating impossibility, it is not known whether (any of) these conditions are also necessary. Moreover, numerical experiments on two-letter words show that generalization outside the training set is possible when using nonorthogonal encodings; justifying this phenomenon from a theoretical perspective is an open problem. On the other hand, our numerical experiments on dicyclic graphs show that achieving generalization depends on the choice of architecture; this suggests that rating impossibility theorems might hold under suitable conditions on the GNN architecture in that setting. Another interesting open problem is the evaluation of GNNs' expressive power on more complex graph domains. In particular, conducting extensive experiments on the molecule analyses mentioned in §I, which naturally involve intricate structures, could provide valuable insights for modern chemistry and drug discovery applications.

## Acknowledgments

GAD is partially supported by the Gruppo Nazionale per il Calcolo Scientifico (GNCS) of the Istituto Nazionale di Alta Matematica (INdAM). SB acknowledges the support of the Natural Sciences and Engineering Research Council of Canada (NSERC) through grant RGPIN-2020-06766 and the Fonds de Recherche du Quebec Nature et Technologies (FRQNT) through grant 313276. MR acknowledges the support of the Natural Sciences and Engineering Research Council of Canada (NSERC) through grant 263344. The authors are grateful to Aaron Berk for providing feedback on a preliminary version of this manuscript and to Kara Hughes for fruitful discussions on applications to the molecule domain.